Science.gov

Sample records for ontology lookup service

  1. Simple Lookup Service

    Energy Science and Technology Software Center (ESTSC)

    2013-05-01

    Simple Lookup Service (sLS) is a REST/JSON-based lookup service that allows users to publish information in the form of key-value pairs and to search for the published information. The lookup service supports both pull and push models. This software can be used to create a distributed architecture/cloud.
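
    The publish/search interaction described above maps naturally onto two HTTP calls. The sketch below is a minimal Python illustration, assuming a hypothetical sLS instance at http://localhost:8090/lookup/records; the path, record fields and response format are illustrative assumptions, not taken from the sLS documentation.

```python
import requests

# Hypothetical sLS endpoint and record schema (illustrative; check the actual sLS docs).
BASE = "http://localhost:8090/lookup/records"

# "Push" model: publish a record as a set of key-value pairs.
record = {
    "type": ["service"],
    "service-name": ["example-measurement-archive"],
    "service-locator": ["https://archive.example.org:8085"],
}
resp = requests.post(BASE, json=record, timeout=10)
resp.raise_for_status()

# "Pull" model: search for published records by key-value match.
hits = requests.get(BASE, params={"type": "service"}, timeout=10)
for rec in hits.json():                     # assumes the service returns a JSON list of records
    print(rec.get("service-name"), rec.get("service-locator"))
```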

  2. The ontology-based answers (OBA) service: a connector for embedded usage of ontologies in applications.

    PubMed

    Dönitz, Jürgen; Wingender, Edgar

    2012-01-01

    The semantic web depends on the use of ontologies to let electronic systems interpret contextual information. Optimally, the handling and access of ontologies should be completely transparent to the user. As a means to this end, we have developed a service that attempts to bridge the gap between experts in a certain knowledge domain, ontologists, and application developers. The ontology-based answers (OBA) service introduced here can be embedded into custom applications to grant access to the classes of ontologies and their relations as the most important structural features, as well as to information encoded in the relations between ontology classes. Thus, computational biologists can benefit from ontologies without detailed knowledge about the respective ontology. The content of ontologies is mapped to a graph of connected objects which is compatible with the object-oriented programming style in Java. Semantic functions implement knowledge about the complex semantics of an ontology beyond the class hierarchy and "partOf" relations. By using these OBA functions an application can, for example, provide a semantic search function, or (in the examples outlined) map an anatomical structure to the organs it belongs to. The semantic functions relieve the application developer from the necessity of acquiring in-depth knowledge about the semantics and curation guidelines of the used ontologies by implementing the required knowledge. The architecture of the OBA service encapsulates the logic to process ontologies in order to achieve a separation from the application logic. A public server with the current plugins is available and can be used with the provided connector in a custom application in scenarios analogous to the presented use cases. The server and the client are freely available if a project requires the use of custom plugins or non-public ontologies. The OBA service and further documentation are available at http://www.bioinf.med.uni-goettingen.de/projects/oba. PMID

  3. Research on e-learning services based on ontology theory

    NASA Astrophysics Data System (ADS)

    Liu, Rui

    2013-07-01

    E-learning services can realize network learning resource sharing and interoperability, but they cannot realize automatic discovery, implementation and integration of services. This paper proposes a framework of e-learning services based on ontology: ontology technology is applied to the publication and discovery processes of e-learning services in order to realize accurate and efficient retrieval and utilization of e-learning services.

  4. The construction and practice of GIS ontology service mechanism

    NASA Astrophysics Data System (ADS)

    Yang, Kun; Wang, Jun; Peng, Shuang-yun; Cheng, Hong-ping

    2005-10-01

    With the development of Semantic Web technology, ontology-based spatial information services are an effective way to share and interoperate heterogeneous information resources in a distributed network environment. Based on a deep analysis of the spatial information service mechanism of geo-ontology, the system construction strategy and the service workflow, and drawing on the present mainstream commercial GIS software packages, this paper proposes three system construction solutions for spatial information sharing and interoperation. Geographic information application systems distributed on the internet may be integrated dynamically and openly by using one of the three solutions, realizing the sharing and interoperation of heterogeneous spatial information resources in a distributed environment. In order to realize practical applications of spatial information sharing and interoperation in different branches of the police system, a prototype system for crime case information sharing based on geo-ontology has also been developed using the methods described above.

  5. BioPortal: enhanced functionality via new Web services from the National Center for Biomedical Ontology to access and use ontologies in software applications

    PubMed Central

    Whetzel, Patricia L.; Noy, Natalya F.; Shah, Nigam H.; Alexander, Paul R.; Nyulas, Csongor; Tudorache, Tania; Musen, Mark A.

    2011-01-01

    The National Center for Biomedical Ontology (NCBO) is one of the National Centers for Biomedical Computing funded under the NIH Roadmap Initiative. Contributing to the national computing infrastructure, NCBO has developed BioPortal, a web portal that provides access to a library of biomedical ontologies and terminologies (http://bioportal.bioontology.org) via the NCBO Web services. BioPortal enables community participation in the evaluation and evolution of ontology content by providing features to add mappings between terms, to add comments linked to specific ontology terms and to provide ontology reviews. The NCBO Web services (http://www.bioontology.org/wiki/index.php/NCBO_REST_services) enable this functionality and provide a uniform mechanism to access ontologies from a variety of knowledge representation formats, such as Web Ontology Language (OWL) and Open Biological and Biomedical Ontologies (OBO) format. The Web services provide multi-layered access to the ontology content, from getting all terms in an ontology to retrieving metadata about a term. Users can easily incorporate the NCBO Web services into software applications to generate semantically aware applications and to facilitate structured data collection. PMID:21672956
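
    As a concrete illustration of incorporating the NCBO Web services into software applications, the sketch below queries a BioPortal-style term search endpoint from Python. The host and parameters follow the current data.bioontology.org REST conventions rather than the legacy wiki URL quoted in the abstract, and YOUR_API_KEY is a placeholder; treat the exact paths and response fields as assumptions to verify against the live documentation.

```python
import requests

API_ROOT = "https://data.bioontology.org"   # current BioPortal REST root (assumed here)
API_KEY = "YOUR_API_KEY"                    # placeholder; BioPortal issues per-user API keys

def search_terms(query, ontologies=None):
    """Search ontology terms by label, optionally restricted to specific ontology acronyms."""
    params = {"q": query, "apikey": API_KEY}
    if ontologies:
        params["ontologies"] = ontologies   # e.g. "NCIT" or "SNOMEDCT,GO"
    resp = requests.get(f"{API_ROOT}/search", params=params, timeout=30)
    resp.raise_for_status()
    return resp.json().get("collection", [])

for term in search_terms("melanoma", ontologies="NCIT")[:5]:
    print(term.get("prefLabel"), term.get("@id"))
```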

  6. Global polar geospatial information service retrieval based on search engine and ontology reasoning

    USGS Publications Warehouse

    Chen, Nengcheng; E, Dongcheng; Di, Liping; Gong, Jianya; Chen, Zeqiang

    2007-01-01

    In order to improve the access precision of polar geospatial information services on the web, a new methodology for retrieving global spatial information services based on geospatial service search and ontology reasoning is proposed: geospatial service search is used to find coarse candidate services on the web, and ontology reasoning is designed to refine those candidates into the required services. The proposed framework includes standardized distributed geospatial web services, a geospatial service search engine, an extended UDDI registry, and a multi-protocol geospatial information service client. Key technologies addressed include search-engine-based service discovery and service ontology modeling and reasoning in the Antarctic geospatial context. Finally, an Antarctic multi-protocol OWS portal prototype based on the proposed methodology is introduced.

  7. Towards a Cross-domain Infrastructure to Support Electronic Identification and Capability Lookup for Cross-border ePrescription/Patient Summary Services.

    PubMed

    Katehakis, Dimitrios G; Masi, Massimiliano; Wisniewski, Francois; Bittins, Sören

    2016-01-01

    Seamless patient identification, as well as locating capabilities of remote services, are considered to be key enablers for large scale deployment of facilities to support the delivery of cross-border healthcare. This work highlights challenges investigated within the context of the Electronic Simple European Networked Services (e-SENS) large scale pilot (LSP) project, aiming to assist the deployment of cross-border, digital, public services through generic, re-usable technical components or Building Blocks (BBs). Through the case for the cross-border ePrescription/Patient Summary (eP/PS) service the paper demonstrates how experience coming from other domains, in regard to electronic identification (eID) and capability lookup, can be utilized in trying to raise technology readiness levels in disease diagnosis and treatment. The need for consolidating the existing outcomes of non-health specific BBs is examined, together with related issues that need to be resolved, for improving technical certainty and making it easier for citizens who travel to use innovative eHealth services, and potentially share personal health records (PHRs) with other providers abroad, in a regulated manner. PMID:27225571

  8. Towards a Formal Representation of Processes and Objects Regarding the Delivery of Telehealth Services: The Telehealth Ontology (TEON).

    PubMed

    Santana, Filipe; Schulz, Stefan; Campos, Amadeu; Novaes, Magdala A

    2015-01-01

    This study introduces ontological aspects concerning the Telehealth Ontology (TEON), an ontology that represents formal-ontological content concerning the delivery of telehealth services. TEON formally represents the main services, actors and other entity types relevant to telehealth service delivery. TEON uses the upper level ontology BioTopLite2 and reuses content from the Ontology for Biomedical Investigations (OBI). The services embedded in telehealth services are considered as essential as the common services provided by the health-related practices. We envision TEON as a service to support the development of telehealth systems. TEON might also enable the integration of heterogeneous telehealth systems, and provide a base to automatize the processing of telehealth-related content. PMID:26262407

  9. The Design and Engineering of Mobile Data Services: Developing an Ontology Based on Business Model Thinking

    NASA Astrophysics Data System (ADS)

    Al-Debei, Mutaz M.; Fitzgerald, Guy

    This paper addresses the design and engineering problem related to mobile data services. The aim of the research is to inform and advise mobile service design and engineering by looking at this issue from a rigorous and holistic perspective. To this aim, this paper develops an ontology based on business model thinking. The developed ontology identifies four primary dimensions in designing business models of mobile data services: value proposition, value network, value architecture, and value finance. Within these dimensions, 15 key design concepts are identified along with their interrelationships and rules in the telecommunication service business model domain and unambiguous semantics are produced. The developed ontology is of value to academics and practitioners alike, particularly those interested in strategic-oriented IS/IT and business developments in telecommunications. Employing the developed ontology would systemize mobile service engineering functions and make them more manageable, effective, and creative. The research approach to building the mobile service business model ontology essentially follows the design science paradigm. Within this paradigm, we incorporate a number of different research methods, so the employed methodology might be better characterized as a pluralist approach.

  10. The Semantic Retrieval of Spatial Data Service Based on Ontology in SIG

    NASA Astrophysics Data System (ADS)

    Sun, S.; Liu, D.; Li, G.; Yu, W.

    2011-08-01

    Research on SIG (Spatial Information Grid) mainly addresses the problem of how to connect different computing resources so that users can use all the resources in the Grid transparently and seamlessly. In SIG, spatial data services are described by several kinds of specifications, each of which uses different meta-information for each kind of service. This kind of standardization cannot resolve the problem of semantic heterogeneity, which may prevent users from obtaining the required resources. This paper tries to solve two kinds of semantic heterogeneity (name heterogeneity and structure heterogeneity) in ontology-based spatial data service retrieval; in addition, based on the hierarchical subsumption relationships among concepts in the ontology, query words can be expanded so that more resources can be matched and found for the user. These applications of ontology in spatial data resource retrieval help to improve the capability of keyword matching and to find more related resources.
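
    The query-expansion step described above is easy to make concrete: expand the user's term with everything the ontology says it subsumes, then match resources against the expanded set. The toy hierarchy and keywords below are invented for illustration; a real system would read them from the ontology.

```python
# Toy subsumption hierarchy: concept -> direct sub-concepts (invented for illustration).
SUBSUMES = {
    "spatial data service": ["web map service", "web feature service", "web coverage service"],
    "web map service": ["tiled map service"],
}

def expand(term, hierarchy):
    """Return the term plus every concept it (transitively) subsumes."""
    result, stack = set(), [term]
    while stack:
        current = stack.pop()
        if current not in result:
            result.add(current)
            stack.extend(hierarchy.get(current, []))
    return result

def matches(query_term, resource_keywords, hierarchy):
    """A resource matches if any of its keywords falls under the expanded query term."""
    return bool(expand(query_term, hierarchy) & set(resource_keywords))

print(matches("spatial data service", ["tiled map service"], SUBSUMES))   # True
print(matches("web map service", ["web feature service"], SUBSUMES))      # False
```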

  11. An Ontological Consideration on Essential Properties of the Notion of "Service"

    NASA Astrophysics Data System (ADS)

    Sumita, Kouhei; Kitamura, Yoshinobu; Sasajima, Munehiko; Takfuji, Sunao; Mizoguchi, Riichiro

    Although many definitions of services have been proposed in Service Science and Service Engineering, the essential characteristics of the notion of "service" remain unclear. In particular, some existing definitions of service are similar to the definition of the function of artifacts, and there is no clear distinction between them. Thus, aiming at an ontological conceptualization of service, we have made an ontological investigation into the distinction between service and artifact function. In this article, we reveal essential properties of service and propose a model and a definition of service. Firstly, we extract 42 properties of service from 15 articles in different disciplines in order to find out the fundamental concepts of service. Then we show that the notion of function shares the extracted foundational concepts of service and thus point out the necessity of distinguishing between them. Secondly, we propose a multi-layered model of services, which is based on the conceptualization of goal-oriented effects at the base level and at the upper level. Thirdly, based on the model, we clarify essential properties of service which distinguish it from artifact function. The conceptualization of upper-effects (upper-service) enables us to show that upper-services include various effects such as sales and manufacturing. Lastly, we propose a definition of the notion of service based on the essential properties and show its validity using some examples.

  12. Research of three level match method about semantic web service based on ontology

    NASA Astrophysics Data System (ADS)

    Xiao, Jie; Cai, Fang

    2011-10-01

    An important step in Web service application is the discovery of useful services. Keywords are used for service discovery in traditional technologies like UDDI and WSDL, with the disadvantages of required user intervention, lack of semantic description and low accuracy. To cope with these problems, OWL-S is introduced and extended with QoS attributes to describe the attributes and functions of Web services. A three-level service matching algorithm based on ontology and QoS is proposed in this paper. Our algorithm can match web services by utilizing the service profile and QoS parameters together with the inputs and outputs of the service. Simulation results show that it greatly enhances the speed of service matching while also guaranteeing high accuracy.
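
    The abstract does not give the algorithm itself, so the following self-contained sketch only illustrates the general shape of a layered match: a profile/category check, an input/output signature check and a QoS score combined into one ranking value. The weights, attribute names and scoring formulas are invented for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class Service:
    category: str
    inputs: set
    outputs: set
    qos: dict = field(default_factory=dict)   # e.g. {"latency_ms": 120, "availability": 0.99}

def profile_match(request, candidate):
    return 1.0 if request.category == candidate.category else 0.0

def io_match(request, candidate):
    needed = len(request.inputs) + len(request.outputs)
    covered = len(request.inputs & candidate.inputs) + len(request.outputs & candidate.outputs)
    return covered / needed if needed else 1.0

def qos_score(request, candidate):
    # Reward lower latency and higher availability than requested (illustrative formula).
    latency = min(1.0, request.qos["latency_ms"] / max(candidate.qos["latency_ms"], 1))
    availability = min(1.0, candidate.qos["availability"] / request.qos["availability"])
    return 0.5 * latency + 0.5 * availability

def rank(request, candidates, weights=(0.4, 0.4, 0.2)):
    w_p, w_io, w_q = weights
    scored = [(w_p * profile_match(request, c) + w_io * io_match(request, c)
               + w_q * qos_score(request, c), c) for c in candidates]
    return sorted(scored, key=lambda pair: pair[0], reverse=True)

request = Service("weather", {"city"}, {"forecast"}, {"latency_ms": 200, "availability": 0.95})
candidate = Service("weather", {"city"}, {"forecast"}, {"latency_ms": 120, "availability": 0.99})
print(rank(request, [candidate])[0][0])    # 1.0: perfect match on all three levels
```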

  13. Using Ontologies to Formalize Services Specifications in Multi-Agent Systems

    NASA Technical Reports Server (NTRS)

    Breitman, Karin Koogan; Filho, Aluizio Haendchen; Haeusler, Edward Hermann

    2004-01-01

    One key issue in multi-agent systems (MAS) is their ability to interact and exchange information autonomously across applications. To secure agent interoperability, designers must rely on a communication protocol that allows software agents to exchange meaningful information. In this paper we propose using ontologies as such a communication protocol. Ontologies capture the semantics of the operations and services provided by agents, allowing interoperability and information exchange in a MAS. Ontologies are a formal, machine-processable representation that captures the semantics of a domain and allows meaningful information to be derived by way of logical inference. In our proposal we use a formal knowledge representation language (OWL) that translates into Description Logics (a subset of first-order logic), thus eliminating ambiguities and providing a solid base for machine-based inference. The main contribution of this approach is to make the requirements explicit and to centralize the specification in a single document (the ontology itself), while at the same time providing a formal, unambiguous representation that can be processed by automated inference machines.
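
    A minimal rdflib sketch of the kind of shared, machine-processable vocabulary the paper argues agents can exchange is shown below. The classes, instance and namespace are invented examples, and plain RDFS is used instead of full OWL/Description Logics to keep the example short.

```python
from rdflib import Graph, Namespace, Literal, RDF, RDFS

EX = Namespace("http://example.org/mas#")   # invented namespace for the shared vocabulary
g = Graph()
g.bind("ex", EX)

# A tiny shared vocabulary: a Service class, a subclass, and one agent-provided instance.
g.add((EX.Service, RDF.type, RDFS.Class))
g.add((EX.WeatherService, RDFS.subClassOf, EX.Service))
g.add((EX.forecastAgent, RDF.type, EX.WeatherService))
g.add((EX.forecastAgent, RDFS.label, Literal("forecast agent")))

# Any agent that loads this graph can ask which individuals provide a WeatherService.
for agent in g.subjects(RDF.type, EX.WeatherService):
    print(agent)

print(g.serialize(format="turtle"))
```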

  14. OLSVis: an animated, interactive visual browser for bio-ontologies

    PubMed Central

    2012-01-01

    Background: More than one million terms from biomedical ontologies and controlled vocabularies are available through the Ontology Lookup Service (OLS). Although OLS provides ample possibility for querying and browsing terms, the visualization of parts of the ontology graphs is rather limited and inflexible. Results: We created the OLSVis web application, a visualiser for browsing all ontologies available in the OLS database. OLSVis shows customisable subgraphs of the OLS ontologies. Subgraphs are animated via a real-time force-based layout algorithm which is fully interactive: each time the user makes a change, e.g. browsing to a new term, hiding, adding, or dragging terms, the algorithm performs smooth and only essential reorganisations of the graph. This assures an optimal viewing experience, because subsequent screen layouts are not grossly altered, and users can easily navigate through the graph. URL: http://ols.wordvis.com Conclusions: The OLSVis web application provides a user-friendly tool to visualise ontologies from the OLS repository. It broadens the possibilities to investigate and select ontology subgraphs through a smooth visualisation method. PMID:22646023
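
    Programmatic access to OLS terms, which a browser such as OLSVis builds on, looks roughly like the sketch below. The abstract predates the current EMBL-EBI OLS interface, so the endpoint and response fields follow today's public API and should be treated as assumptions to verify.

```python
import requests

# EMBL-EBI OLS search endpoint (assumed current path; adjust to the deployed API version).
OLS_SEARCH = "https://www.ebi.ac.uk/ols4/api/search"

def find_terms(query, ontology="go", rows=5):
    resp = requests.get(OLS_SEARCH,
                        params={"q": query, "ontology": ontology, "rows": rows},
                        timeout=30)
    resp.raise_for_status()
    return resp.json()["response"]["docs"]   # Solr-style response (assumed field layout)

for doc in find_terms("apoptosis"):
    print(doc.get("obo_id"), doc.get("label"), doc.get("ontology_name"))
```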

  15. Reliability Prediction of Ontology-Based Service Compositions Using Petri Net and Time Series Models

    PubMed Central

    Li, Jia; Xia, Yunni; Luo, Xin

    2014-01-01

    OWL-S, one of the most important Semantic Web service ontologies proposed to date, provides a core ontological framework and guidelines for describing the properties and capabilities of Web services in an unambiguous, computer-interpretable form. Predicting the reliability of composite service processes specified in OWL-S allows service users to decide whether the process meets the quantitative quality requirement. In this study, we consider the runtime quality of services to be fluctuating and introduce a dynamic framework to predict the runtime reliability of services specified in OWL-S, employing the non-Markovian stochastic Petri net (NMSPN) and the time series model. The framework includes the following steps: obtaining the historical response time series of individual service components; fitting these series with an autoregressive moving average (ARMA) model and predicting the future firing rates of service components; mapping the OWL-S process into an NMSPN model; employing the predicted firing rates as the model input of the NMSPN and calculating the normal completion probability as the reliability estimate. In the case study, a comparison between the static model and our approach based on experimental data is presented and it is shown that our approach achieves higher prediction accuracy. PMID:24688429
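
    Of the steps listed above, the time-series one is easy to sketch with statsmodels: fit an ARMA model to a historical response-time series and convert the forecast into predicted firing rates. The data are synthetic, and the NMSPN construction and completion-probability calculation are not shown.

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(0)
# Synthetic historical response times (seconds) for one service component.
response_times = 0.8 + 0.05 * np.sin(np.arange(200) / 10.0) + rng.normal(0.0, 0.02, 200)

# An ARMA(p, q) model is ARIMA with differencing order d = 0.
fitted = ARIMA(response_times, order=(2, 0, 1)).fit()

predicted_times = fitted.forecast(steps=10)   # forecast future response times
predicted_rates = 1.0 / predicted_times       # firing rates that would feed the NMSPN model
print(predicted_rates)
```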

  16. Case-based classification alternatives to ontologies for automated web service discovery and integration

    NASA Astrophysics Data System (ADS)

    Ladner, Roy; Warner, Elizabeth; Petry, Fred; Gupta, Kalyan Moy; Moore, Philip; Aha, David W.; Shaw, Kevin

    2006-05-01

    Web Services are becoming the standard technology used to share data for many Navy and other DoD operations. Since Web Services technologies provide for discoverable, self-describing services that conform to common standards, this paradigm holds the promise of an automated capability to obtain and integrate data. However, automated integration of applications to access and retrieve data from heterogeneous sources in a distributed system such as the Internet poses many difficulties. Assimilation of data from Web-based sources means that differences in schema and terminology prevent simple querying and retrieval of data. Thus, machine understanding of the Web Services interface is necessary for automated selection and invocation of the correct service. Service availability is also an issue that needs to be resolved. There have been many advances on ontologies to help resolve these difficulties to support the goal of sharing knowledge for various domains of interest. In this paper we examine the use of case-based classification as an alternative/supplement to using ontologies for resolving several questions related to knowledge sharing. While ontologies encompass a formal definition of a domain of interest, case-based reasoning is a problem solving methodology that retrieves and reuses decisions from stored cases to solve new problems, and case-based classification involves applying this methodology to classification tasks. Our approach generalizes well in sparse data, which characterizes our Web Services application. We present our study as it relates to our work on development of the Advanced MetOc Broker, whose objective is the automated application integration of meteorological and oceanographic (MetOc) Web Services.

  17. An ontology-based collaborative service framework for agricultural information

    Technology Transfer Automated Retrieval System (TEKTRAN)

    In recent years, China has developed modern agriculture energetically. An effective information framework is an important way to provide farms with agricultural information services and improve farmer's production technology and their income. The mountain areas in central China are dominated by agri...

  18. Process model-based atomic service discovery and composition of composite semantic web services using web ontology language for services (OWL-S)

    NASA Astrophysics Data System (ADS)

    Paulraj, D.; Swamynathan, S.; Madhaiyan, M.

    2012-11-01

    Web Service composition has become indispensable as a single web service cannot satisfy complex functional requirements. Composition of services has received much interest to support business-to-business (B2B) or enterprise application integration. An important component of service composition is the discovery of relevant services. In Semantic Web Services (SWS), service discovery is generally achieved by using the service profile of the Web Ontology Language for Services (OWL-S). The profile of the service is a derived and concise description but not a functional part of the service. The information contained in the service profile is sufficient for atomic service discovery, but it is not sufficient for the discovery of composite semantic web services (CSWS). The purpose of this article is two-fold: first, to prove that the process model is a better choice than the service profile for service discovery; second, to facilitate the composition of inter-organisational CSWS by proposing a new composition method which uses process ontology. The proposed service composition approach uses an algorithm which performs a fine-grained match at the level of the atomic process rather than at the level of the entire service in a composite semantic web service. Many works carried out in this area have proposed solutions only for the composition of atomic services, and this article proposes a solution for the composition of composite semantic web services.

  19. Design of Ontology-Based Sharing Mechanism for Web Services Recommendation Learning Environment

    NASA Astrophysics Data System (ADS)

    Chen, Hong-Ren

    The number of digital learning websites is growing as a result of advances in computer technology and new techniques in web page creation. These sites contain a wide variety of information but may be a source of confusion to learners who fail to find the information they are seeking. This has led to the concept of recommendation services to help learners acquire information and learning resources that suit their requirements. Learning content like this cannot be reused by other digital learning websites. A successful recommendation service that satisfies a certain learner must cooperate with many other digital learning objects so that it can achieve the required relevance. The study proposes using the theory of knowledge construction in ontology to make the sharing and reuse of digital learning resources possible. The learning recommendation system is accompanied by the recommendation of appropriate teaching materials to help learners enhance their learning abilities. A variety of diverse learning components scattered across the Internet can be organized through an ontological process so that learners can use information by storing, sharing, and reusing it.

  20. An ontology-based semantic configuration approach to constructing Data as a Service for enterprises

    NASA Astrophysics Data System (ADS)

    Cai, Hongming; Xie, Cheng; Jiang, Lihong; Fang, Lu; Huang, Chenxi

    2016-03-01

    To align business strategies with IT systems, enterprises should rapidly implement new applications based on existing information with complex associations to adapt to the continually changing external business environment. Thus, Data as a Service (DaaS) has become an enabling technology for enterprises through information integration and the configuration of existing distributed enterprise systems and heterogeneous data sources. However, business modelling, system configuration and model alignment face challenges at the design and execution stages. To provide a comprehensive solution to facilitate data-centric application design in a highly complex and large-scale situation, a configurable ontology-based service integrated platform (COSIP) is proposed to support business modelling, system configuration and execution management. First, a meta-resource model is constructed and used to describe and encapsulate information resources by way of multi-view business modelling. Then, based on ontologies, three semantic configuration patterns, namely composite resource configuration, business scene configuration and runtime environment configuration, are designed to systematically connect business goals with executable applications. Finally, a software architecture based on model-view-controller (MVC) is provided and used to assemble components for software implementation. The result of the case study demonstrates that the proposed approach provides a flexible method of implementing data-centric applications.

  1. Persistent identifiers for web service requests relying on a provenance ontology design pattern

    NASA Astrophysics Data System (ADS)

    Car, Nicholas; Wang, Jingbo; Wyborn, Lesley; Si, Wei

    2016-04-01

    Delivering provenance information for datasets produced from static inputs is relatively straightforward: we represent the processing actions and data flow using provenance ontologies and link to copies of the inputs stored in repositories. If appropriate detail is given, the provenance information can then describe what actions have occurred (transparency) and enable reproducibility. When web service-generated data is used by a process to create a dataset instead of static inputs, we need to use sophisticated provenance representations of the web service request, as we can no longer just link to data stored in a repository. A graph-based provenance representation, such as the W3C's PROV standard, can be used to model the web service request as a single conceptual dataset and also as a small workflow with a number of components within the same provenance report. This dual representation does more than just allow simplified or detailed views of a dataset's production to be used where appropriate. It also allows persistent identifiers to be assigned to instances of web service requests, thus enabling one form of dynamic data citation, and allows those identifiers to resolve to whatever level of detail implementers think appropriate in order for that web service request to be reproduced. In this presentation we detail our reasoning in representing web service requests as small workflows. In outline, this stems from the idea that web service requests are perdurant things and, in order to most easily persist knowledge of them for provenance, we should represent them as a nexus of relationships between endurant things, such as datasets and knowledge of particular system types, as these endurant things are far easier to persist. We also describe the ontology design pattern that we use to represent workflows in general and how we apply it to different types of web service requests. We give examples of specific web service request instances that were made by systems
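
    A minimal rdflib sketch of the dual representation described above: the same web service request modelled as a single PROV activity, identified by a persistent identifier, that used a service endpoint and generated a dataset. The identifiers and namespaces are invented; only the PROV-O terms are standard.

```python
from rdflib import Graph, Namespace, Literal, RDF
from rdflib.namespace import XSD

PROV = Namespace("http://www.w3.org/ns/prov#")
EX = Namespace("https://pid.example.org/")          # hypothetical persistent-identifier namespace

g = Graph()
g.bind("prov", PROV)
g.bind("ex", EX)

request = EX["request/abc123"]     # persistent identifier minted for one web service request
dataset = EX["dataset/xyz789"]
service = EX["service/wfs-endpoint"]

# Simplified view: the request is one activity that used a service and generated a dataset.
g.add((request, RDF.type, PROV.Activity))
g.add((service, RDF.type, PROV.Entity))
g.add((dataset, RDF.type, PROV.Entity))
g.add((request, PROV.used, service))
g.add((dataset, PROV.wasGeneratedBy, request))
g.add((request, PROV.endedAtTime, Literal("2016-04-01T12:00:00Z", datatype=XSD.dateTime)))

print(g.serialize(format="turtle"))
```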

  2. Ontology-aided annotation, visualization, and generalization of geological time-scale information from online geological map services

    NASA Astrophysics Data System (ADS)

    Ma, Xiaogang; Carranza, Emmanuel John M.; Wu, Chonglong; van der Meer, Freek D.

    2012-03-01

    Geological maps are increasingly published and shared online, whereas tools and services supporting information retrieval and knowledge discovery are underdeveloped. In this study, we developed an ontology of geological time scale by using a Resource Description Framework model to represent the ordinal hierarchical structure of the geological time scale and to encode collected annotations of geological time scale concepts. We also developed an animated graphical view of the developed ontology, and functions for interactions between the ontology, the animation and online geological maps published as layers of OGC Web Map Service. The featured functions include automatic annotations for geological time concepts recognized from a geological map, changing layouts in the animation to highlight a concept, showing legends of geological time contents in an online map with the animation, and filtering out and generalizing geological time features in an online map by operating the map legend shown in the animation. We set up a pilot system and carried out a user survey to test and evaluate the usability and usefulness of the developed ontology, animation and interactive functions. Results of the pilot system and the user survey demonstrate that our works enhance features of online geological map services and they are helpful for users to understand and to explore geological time contents and features, respectively, of a geological map.

  3. The @neurIST ontology of intracranial aneurysms: providing terminological services for an integrated IT infrastructure.

    PubMed

    Boeker, Martin; Stenzhorn, Holger; Kumpf, Kai; Bijlenga, Philippe; Schulz, Stefan; Hanser, Susanne

    2007-01-01

    The @neurIST ontology is currently under development within the scope of the European project @neurIST intended to serve as a module in a complex architecture aiming at providing a better understanding and management of intracranial aneurysms and subarachnoid hemorrhages. Due to the integrative structure of the project the ontology needs to represent entities from various disciplines on a large spatial and temporal scale. Initial term acquisition was performed by exploiting a database scaffold, literature analysis and communications with domain experts. The ontology design is based on the DOLCE upper ontology and other existing domain ontologies were linked or partly included whenever appropriate (e.g., the FMA for anatomical entities and the UMLS for definitions and lexical information). About 2300 predominantly medical entities were represented but also a multitude of biomolecular, epidemiological, and hemodynamic entities. The usage of the ontology in the project comprises terminological control, text mining, annotation, and data mediation. PMID:18693797

  4. "You Call This Service?": A Civic Ontology Approach to Evaluating Service-Learning in Diverse Communities

    ERIC Educational Resources Information Center

    Marichal, Jose

    2010-01-01

    This article considers the impact of service-learning in diverse communities on student civic development. A key debate in the literature is whether service-learning in diverse communities fosters student moral/cognitive development or reinforces preexisting stereotypes. This debate has significant implications for student's future civic…

  5. Performing ontology.

    PubMed

    Aspers, Patrik

    2015-06-01

    Ontology, and in particular, the so-called ontological turn, is the topic of a recent themed issue of Social Studies of Science (Volume 43, Issue 3, 2013). Ontology, or metaphysics, is in philosophy concerned with what there is, how it is, and forms of being. But to what is the science and technology studies researcher turning when he or she talks of ontology? It is argued that it is unclear what is gained by arguing that ontology also refers to constructed elements. The 'ontological turn' comes with the risk of creating a pseudo-debate or pseudo-activity, in which energy is used for no end, at the expense of empirical studies. This text rebuts the idea of an ontological turn as foreshadowed in the texts of the themed issue. It argues that there is no fundamental qualitative difference between the ontological turn and what we know as constructivism. PMID:26477201

  6. Quantum ontologies

    SciTech Connect

    Stapp, H.P.

    1988-12-01

    Quantum ontologies are conceptions of the constitution of the universe that are compatible with quantum theory. The ontological orientation is contrasted to the pragmatic orientation of science, and reasons are given for considering quantum ontologies both within science, and in broader contexts. The principal quantum ontologies are described and evaluated. Invited paper at conference: Bell's Theorem, Quantum Theory, and Conceptions of the Universe, George Mason University, October 20-21, 1988. 16 refs.

  7. DEDUCE Clinical Text: An Ontology-based Module to Support Self-Service Clinical Notes Exploration and Cohort Development.

    PubMed

    Roth, Christopher; Rusincovitch, Shelley A; Horvath, Monica M; Brinson, Stephanie; Evans, Steve; Shang, Howard C; Ferranti, Jeffrey M

    2013-01-01

    Large amounts of information, as well as opportunities for informing research, education, and operations, are contained within clinical text such as radiology reports and pathology reports. However, this content is less accessible and harder to leverage than structured, discrete data. We report on an extension to the Duke Enterprise Data Unified Content Explorer (DEDUCE), a self-service query tool developed to provide clinicians and researchers with access to data within the Duke Medicine Enterprise Data Warehouse (EDW). The DEDUCE Clinical Text module supports ontology-based text searching, enhanced filtering capabilities based on document attributes, and integration of clinical text with structured data and cohort development. The module is implemented with open-source tools extensible to other institutions, including a Java-based search engine (Apache Solr) with a complementary full-text indexing library (Lucene), employed with a negation engine (NegEx) modified by clinical users to include local domain-specific negation phrases. PMID:24303270
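
    The kind of query such a module issues against the Solr index can be sketched as below: a full-text search over the note text, filtered by a document attribute. The core and field names are invented; only the /select handler and its q/fq/rows parameters are standard Solr, and negation handling (NegEx) is not shown.

```python
import requests

SOLR_SELECT = "http://localhost:8983/solr/clinical_notes/select"   # hypothetical core name

def search_notes(text_query, doc_type=None, rows=20):
    params = {"q": f"note_text:({text_query})", "rows": rows, "wt": "json"}
    if doc_type:
        params["fq"] = f"doc_type:{doc_type}"    # attribute filter, e.g. radiology vs. pathology
    resp = requests.get(SOLR_SELECT, params=params, timeout=30)
    resp.raise_for_status()
    return resp.json()["response"]["docs"]

# Example: radiology reports mentioning a pulmonary embolism term.
for doc in search_notes('"pulmonary embolism"', doc_type="radiology"):
    print(doc.get("id"))
```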

  8. Leveraging biomedical ontologies and annotation services to organize microbiome data from Mammalian hosts.

    PubMed

    Sarkar, Indra Neil

    2010-01-01

    A better understanding of commensal microbiotic communities ("microbiomes") may provide valuable insights to human health. Towards this goal, an essential step may be the development of approaches to organize data that can enable comparative hypotheses across mammalian microbiomes. The present study explores the feasibility of using existing biomedical informatics resources - especially focusing on those available at the National Center for Biomedical Ontology - to organize microbiome data contained within large sequence repositories, such as GenBank. The results indicate that the Foundational Model of Anatomy and SNOMED CT can be used to organize greater than 90% of the bacterial organisms associated with 10 domesticated mammalian species. The promising findings suggest that the current biomedical informatics infrastructure may be used towards the organizing of microbiome data beyond humans. Furthermore, the results identify key concepts that might be organized into a semantic structure for incorporation into subsequent annotations that could facilitate comparative biomedical hypotheses pertaining to human health. PMID:21347072

  9. Design of a Golf Swing Injury Detection and Evaluation open service platform with Ontology-oriented clustering case-based reasoning mechanism.

    PubMed

    Ku, Hao-Hsiang

    2015-01-01

    Nowadays, people can easily use a smartphone to get the information and services they want. Hence, this study designs and proposes a Golf Swing Injury Detection and Evaluation open service platform with an Ontology-oriented clustering case-based reasoning mechanism, called GoSIDE, based on Arduino and the Open Service Gateway initiative (OSGi). GoSIDE is a three-tier architecture composed of Mobile Users, Application Servers and a Cloud-based Digital Convergence Server. A mobile user has a smartphone and Kinect sensors to detect the user's Golf swing actions and to interact with iDTV. An application server runs the Intelligent Golf Swing Posture Analysis Model (iGoSPAM) to check a user's Golf swing actions and to alert the user when his actions are erroneous. The Cloud-based Digital Convergence Server provides Ontology-oriented Clustering Case-based Reasoning (CBR) for Quality of Experience (OCC4QoE), which is designed to provide QoE services through QoE-based ontology strategies, rules and events for this user. Furthermore, GoSIDE will automatically trigger OCC4QoE and deliver popular rules for a new user. Experimental results illustrate that GoSIDE can provide appropriate detection for golfers. Finally, GoSIDE can serve as a reference model for researchers and engineers. PMID:26444809

  10. A piecewise lookup table for calculating nonbonded pairwise atomic interactions.

    PubMed

    Luo, Jinping; Liu, Lijun; Su, Peng; Duan, Pengbo; Lu, Daihui

    2015-11-01

    A critical challenge for molecular dynamics simulations of chemical or biological systems is to improve the calculation efficiency while retaining sufficient accuracy. The main bottleneck in improving the efficiency is the evaluation of nonbonded pairwise interactions. We propose a new piecewise lookup table method for rapid and accurate calculation of interatomic nonbonded pairwise interactions. The piecewise lookup table allows nonuniform assignment of table nodes according to the slope of the potential function and the pair interaction distribution. The proposed method assigns the nodes more reasonably than in general lookup tables, and thus improves the accuracy while requiring fewer nodes. To obtain the same level of accuracy, our piecewise lookup table accelerates the calculation via the efficient usage of cache memory. This new method is straightforward to implement and should be broadly applicable. Graphical Abstract: Illustration of the piecewise lookup table method. PMID:26481475
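
    The core idea, placing more table nodes where the potential changes fastest and interpolating between them, can be illustrated in a few lines of numpy. The Lennard-Jones potential and the node-allocation rule below are illustrative choices, not the authors' exact scheme.

```python
import numpy as np

def lj(r, epsilon=1.0, sigma=1.0):
    """Lennard-Jones pair potential, a typical nonbonded interaction (illustrative choice)."""
    sr6 = (sigma / r) ** 6
    return 4.0 * epsilon * (sr6 ** 2 - sr6)

r_min, r_max, n_nodes = 0.9, 3.0, 256

# Nonuniform node placement: dense where |dV/dr| is large (small r), sparse in the flat tail.
dense = np.geomspace(r_min, 1.5, int(0.7 * n_nodes), endpoint=False)
sparse = np.linspace(1.5, r_max, n_nodes - dense.size)
nodes = np.concatenate([dense, sparse])
table = lj(nodes)

def lj_lookup(r):
    """Piecewise-linear interpolation into the nonuniform table."""
    return np.interp(r, nodes, table)

r_test = np.random.default_rng(1).uniform(r_min, r_max, 10_000)
print("max abs error:", np.max(np.abs(lj_lookup(r_test) - lj(r_test))))
```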

  11. Tool support for software lookup table optimization

    PubMed Central

    Strout, Michelle Mills; Bieman, James M.

    2012-01-01

    A number of scientific applications are performance-limited by expressions that repeatedly call costly elementary functions. Lookup table (LUT) optimization accelerates the evaluation of such functions by reusing previously computed results. LUT methods can speed up applications that tolerate an approximation of function results, thereby achieving a high level of fuzzy reuse. One problem with LUT optimization is the difficulty of controlling the tradeoff between performance and accuracy. The current practice of manual LUT optimization adds programming effort by requiring extensive experimentation to make this tradeoff, and such hand tuning can obfuscate algorithms. In this paper we describe a methodology and tool implementation to improve the application of software LUT optimization. Our Mesa tool implements source-to-source transformations for C or C++ code to automate the tedious and error-prone aspects of LUT generation such as domain profiling, error analysis, and code generation. We evaluate Mesa with five scientific applications. Our results show a performance improvement of 3.0 × and 6.9 × for two molecular biology algorithms, 1.4 × for a molecular dynamics program, 2.1 × to 2.8 × for a neural network application, and 4.6 × for a hydrology calculation. We find that Mesa enables LUT optimization with more control over accuracy and less effort than manual approaches. PMID:24532963
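
    The performance/accuracy tradeoff that Mesa automates can be seen with a self-contained toy: a uniform lookup table with linear interpolation for an elementary function, evaluated at several table sizes. This only illustrates the tradeoff; Mesa's source-to-source transformation and error analysis are not reproduced here.

```python
import numpy as np

def build_lut(func, lo, hi, size):
    xs = np.linspace(lo, hi, size)
    return xs, func(xs)

def lut_eval(x, xs, ys):
    # "Fuzzy reuse": approximate the function from previously computed samples.
    return np.interp(x, xs, ys)

x = np.linspace(0.0, 2.0 * np.pi, 1_000_000)
exact = np.sin(x)

for size in (64, 256, 1024, 4096):
    xs, ys = build_lut(np.sin, 0.0, 2.0 * np.pi, size)
    max_err = np.max(np.abs(lut_eval(x, xs, ys) - exact))
    print(f"table size {size:5d} -> max error {max_err:.2e}")
```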

  12. Tool support for software lookup table optimization.

    PubMed

    Wilcox, Chris; Strout, Michelle Mills; Bieman, James M

    2011-12-01

    A number of scientific applications are performance-limited by expressions that repeatedly call costly elementary functions. Lookup table (LUT) optimization accelerates the evaluation of such functions by reusing previously computed results. LUT methods can speed up applications that tolerate an approximation of function results, thereby achieving a high level of fuzzy reuse. One problem with LUT optimization is the difficulty of controlling the tradeoff between performance and accuracy. The current practice of manual LUT optimization adds programming effort by requiring extensive experimentation to make this tradeoff, and such hand tuning can obfuscate algorithms. In this paper we describe a methodology and tool implementation to improve the application of software LUT optimization. Our Mesa tool implements source-to-source transformations for C or C++ code to automate the tedious and error-prone aspects of LUT generation such as domain profiling, error analysis, and code generation. We evaluate Mesa with five scientific applications. Our results show a performance improvement of 3.0 × and 6.9 × for two molecular biology algorithms, 1.4 × for a molecular dynamics program, 2.1 × to 2.8 × for a neural network application, and 4.6 × for a hydrology calculation. We find that Mesa enables LUT optimization with more control over accuracy and less effort than manual approaches. PMID:24532963

  13. Tool Support for Software Lookup Table Optimization

    DOE PAGESBeta

    Wilcox, Chris; Strout, Michelle Mills; Bieman, James M.

    2011-01-01

    A number of scientific applications are performance-limited by expressions that repeatedly call costly elementary functions. Lookup table (LUT) optimization accelerates the evaluation of such functions by reusing previously computed results. LUT methods can speed up applications that tolerate an approximation of function results, thereby achieving a high level of fuzzy reuse. One problem with LUT optimization is the difficulty of controlling the tradeoff between performance and accuracy. The current practice of manual LUT optimization adds programming effort by requiring extensive experimentation to make this tradeoff, and such hand tuning can obfuscate algorithms. In this paper we describe a methodology and tool implementation to improve the application of software LUT optimization. Our Mesa tool implements source-to-source transformations for C or C++ code to automate the tedious and error-prone aspects of LUT generation such as domain profiling, error analysis, and code generation. We evaluate Mesa with five scientific applications. Our results show a performance improvement of 3.0× and 6.9× for two molecular biology algorithms, 1.4× for a molecular dynamics program, 2.1× to 2.8× for a neural network application, and 4.6× for a hydrology calculation. We find that Mesa enables LUT optimization with more control over accuracy and less effort than manual approaches.

  14. Extending netCDF and CF conventions to support enhanced Earth Observation Ontology services: the Prod-Trees project

    NASA Astrophysics Data System (ADS)

    Mazzetti, Paolo; Valentin, Bernard; Koubarakis, Manolis; Nativi, Stefano

    2013-04-01

    Access to Earth Observation products remains not at all straightforward for end users in most domains. Semantically-enabled search engines, generally accessible through Web portals, have been developed. They allow searching for products by selecting application-specific terms and specifying basic geographical and temporal filtering criteria. Although this mostly suits the needs of the general public, the scientific communities require more advanced and controlled means to find products. Ranges of validity, traceability (e.g. origin, applied algorithms), accuracy, uncertainty, are concepts that are typically taken into account in research activities. The Prod-Trees (Enriching Earth Observation Ontology Services using Product Trees) project will enhance the CF-netCDF product format and vocabulary to allow storing metadata that better describe the products, and in particular EO products. The project will bring a standardized solution that permits annotating EO products in such a manner that official and third-party software libraries and tools will be able to search for products using advanced tags and controlled parameter names. Annotated EO products will be automatically supported by all the compatible software. Because the entire product information will come from the annotations and the standards, there will be no need for integrating extra components and data structures that have not been standardized. In the course of the project, the most important and popular open-source software libraries and tools will be extended to support the proposed extensions of CF-netCDF. The result will be provided back to the respective owners and maintainers for ensuring the best dissemination and adoption of the extended format. The project, funded by ESA, has started in December 2012 and will end in May 2014. It is coordinated by Space Applications Services, and the Consortium includes CNR-IIA and the National and Kapodistrian University of Athens. The first activities included
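
    The mechanism the project extends, CF-style attributes attached to a netCDF product so that tools can search on them, can be sketched with netCDF4-python. The Conventions, standard_name and units attributes are standard CF; the processing_algorithm attribute is an invented placeholder standing in for the richer EO annotations the project proposes.

```python
import numpy as np
from netCDF4 import Dataset

with Dataset("eo_product.nc", "w", format="NETCDF4") as ds:
    ds.Conventions = "CF-1.8"                          # standard CF convention tag
    ds.title = "Example EO product"
    ds.processing_algorithm = "example-algorithm-v1"   # invented annotation (not a CF attribute)

    ds.createDimension("time", 3)
    sst = ds.createVariable("sst", "f4", ("time",))
    sst.standard_name = "sea_surface_temperature"      # CF standard name
    sst.units = "K"
    sst[:] = np.array([287.1, 287.4, 286.9], dtype="f4")
```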

  15. The Ontology for Biomedical Investigations

    PubMed Central

    Bandrowski, Anita; Brinkman, Ryan; Brochhausen, Mathias; Brush, Matthew H.; Chibucos, Marcus C.; Clancy, Kevin; Courtot, Mélanie; Derom, Dirk; Dumontier, Michel; Fan, Liju; Fostel, Jennifer; Fragoso, Gilberto; Gibson, Frank; Gonzalez-Beltran, Alejandra; Haendel, Melissa A.; He, Yongqun; Heiskanen, Mervi; Hernandez-Boussard, Tina; Jensen, Mark; Lin, Yu; Lister, Allyson L.; Lord, Phillip; Malone, James; Manduchi, Elisabetta; McGee, Monnie; Morrison, Norman; Overton, James A.; Parkinson, Helen; Peters, Bjoern; Rocca-Serra, Philippe; Ruttenberg, Alan; Sansone, Susanna-Assunta; Scheuermann, Richard H.; Schober, Daniel; Smith, Barry; Soldatova, Larisa N.; Stoeckert, Christian J.; Taylor, Chris F.; Torniai, Carlo; Turner, Jessica A.; Vita, Randi; Whetzel, Patricia L.; Zheng, Jie

    2016-01-01

    The Ontology for Biomedical Investigations (OBI) is an ontology that provides terms with precisely defined meanings to describe all aspects of how investigations in the biological and medical domains are conducted. OBI re-uses ontologies that provide a representation of biomedical knowledge from the Open Biological and Biomedical Ontologies (OBO) project and adds the ability to describe how this knowledge was derived. We here describe the state of OBI and several applications that are using it, such as adding semantic expressivity to existing databases, building data entry forms, and enabling interoperability between knowledge resources. OBI covers all phases of the investigation process, such as planning, execution and reporting. It represents information and material entities that participate in these processes, as well as roles and functions. Prior to OBI, it was not possible to use a single internally consistent resource that could be applied to multiple types of experiments for these applications. OBI has made this possible by creating terms for entities involved in biological and medical investigations and by importing parts of other biomedical ontologies such as GO, Chemical Entities of Biological Interest (ChEBI) and Phenotype Attribute and Trait Ontology (PATO) without altering their meaning. OBI is being used in a wide range of projects covering genomics, multi-omics, immunology, and catalogs of services. OBI has also spawned other ontologies (Information Artifact Ontology) and methods for importing parts of ontologies (Minimum information to reference an external ontology term (MIREOT)). The OBI project is an open cross-disciplinary collaborative effort, encompassing multiple research communities from around the globe. To date, OBI has created 2366 classes and 40 relations along with textual and formal definitions. The OBI Consortium maintains a web resource (http://obi-ontology.org) providing details on the people, policies, and issues being addressed

  16. The Ontology for Biomedical Investigations.

    PubMed

    Bandrowski, Anita; Brinkman, Ryan; Brochhausen, Mathias; Brush, Matthew H; Bug, Bill; Chibucos, Marcus C; Clancy, Kevin; Courtot, Mélanie; Derom, Dirk; Dumontier, Michel; Fan, Liju; Fostel, Jennifer; Fragoso, Gilberto; Gibson, Frank; Gonzalez-Beltran, Alejandra; Haendel, Melissa A; He, Yongqun; Heiskanen, Mervi; Hernandez-Boussard, Tina; Jensen, Mark; Lin, Yu; Lister, Allyson L; Lord, Phillip; Malone, James; Manduchi, Elisabetta; McGee, Monnie; Morrison, Norman; Overton, James A; Parkinson, Helen; Peters, Bjoern; Rocca-Serra, Philippe; Ruttenberg, Alan; Sansone, Susanna-Assunta; Scheuermann, Richard H; Schober, Daniel; Smith, Barry; Soldatova, Larisa N; Stoeckert, Christian J; Taylor, Chris F; Torniai, Carlo; Turner, Jessica A; Vita, Randi; Whetzel, Patricia L; Zheng, Jie

    2016-01-01

    The Ontology for Biomedical Investigations (OBI) is an ontology that provides terms with precisely defined meanings to describe all aspects of how investigations in the biological and medical domains are conducted. OBI re-uses ontologies that provide a representation of biomedical knowledge from the Open Biological and Biomedical Ontologies (OBO) project and adds the ability to describe how this knowledge was derived. We here describe the state of OBI and several applications that are using it, such as adding semantic expressivity to existing databases, building data entry forms, and enabling interoperability between knowledge resources. OBI covers all phases of the investigation process, such as planning, execution and reporting. It represents information and material entities that participate in these processes, as well as roles and functions. Prior to OBI, it was not possible to use a single internally consistent resource that could be applied to multiple types of experiments for these applications. OBI has made this possible by creating terms for entities involved in biological and medical investigations and by importing parts of other biomedical ontologies such as GO, Chemical Entities of Biological Interest (ChEBI) and Phenotype Attribute and Trait Ontology (PATO) without altering their meaning. OBI is being used in a wide range of projects covering genomics, multi-omics, immunology, and catalogs of services. OBI has also spawned other ontologies (Information Artifact Ontology) and methods for importing parts of ontologies (Minimum information to reference an external ontology term (MIREOT)). The OBI project is an open cross-disciplinary collaborative effort, encompassing multiple research communities from around the globe. To date, OBI has created 2366 classes and 40 relations along with textual and formal definitions. The OBI Consortium maintains a web resource (http://obi-ontology.org) providing details on the people, policies, and issues being addressed

  17. Ontology Research and Development. Part 1-A Review of Ontology Generation.

    ERIC Educational Resources Information Center

    Ding, Ying; Foo, Schubert

    2002-01-01

    Discusses the role of ontology in knowledge representation, including enabling content-based access, interoperability, communications, and new levels of service on the Semantic Web; reviews current ontology generation studies and projects as well as problems facing such research; and discusses ontology mapping, information extraction, natural…

  18. A Pilot Ontology for Healthcare Quality Indicators.

    PubMed

    White, Pam; Roudsari, Abdul

    2015-01-01

    Computerisation of quality indicators for the English National Health Service currently relies primarily on queries and clinical coding, with little use of ontologies. We created a searchable ontology for a diverse set of healthcare quality indicators. We investigated attributes and relationships in a set of 222 quality indicators, categorised by clinical pathway, inclusion and exclusion criteria and US Institute of Medicine purpose. Our pilot ontology could reduce duplication of effort in healthcare quality monitoring. PMID:26262409

  19. A Table Look-Up Parser in Online ILTS Applications

    ERIC Educational Resources Information Center

    Chen, Liang; Tokuda, Naoyuki; Hou, Pingkui

    2005-01-01

    A simple table look-up parser (TLUP) has been developed for parsing and consequently diagnosing syntactic errors in semi-free formatted learners' input sentences of an intelligent language tutoring system (ILTS). The TLUP finds a parse tree for a correct version of an input sentence, diagnoses syntactic errors of the learner by tracing and…

  1. Marine Planning and Service Platform: specific ontology based semantic search engine serving data management and sustainable development

    NASA Astrophysics Data System (ADS)

    Manzella, Giuseppe M. R.; Bartolini, Andrea; Bustaffa, Franco; D'Angelo, Paolo; De Mattei, Maurizio; Frontini, Francesca; Maltese, Maurizio; Medone, Daniele; Monachini, Monica; Novellino, Antonio; Spada, Andrea

    2016-04-01

    The MAPS (Marine Planning and Service Platform) project aims at building a computer platform supporting a Marine Information and Knowledge System. One of the main objectives of the project is to develop a repository that gathers, classifies and structures marine scientific literature and data, thus guaranteeing their accessibility to researchers and institutions by means of standard protocols. In oceanography the cost related to data collection is very high, and the new paradigm is based on the concept of collecting once and re-using many times (for re-analysis, marine environment assessment, studies on trends, etc.). This concept requires access to quality-controlled data and to information that is provided in reports (grey literature) and/or in the relevant scientific literature. Hence, new technology needs to be created by integrating several disciplines such as data management, information systems and knowledge management. In one of the most important EC projects on data management, namely SeaDataNet (www.seadatanet.org), an initial example of knowledge management is provided through the Common Data Index, which provides links to data and (eventually) to papers. There are efforts to develop search engines to find authors' contributions to scientific literature or publications. This implies the use of persistent identifiers (such as DOIs), as is done in ORCID. However, very few efforts are dedicated to linking publications to the data cited or used, or to data that can be of importance for the published studies. This is the objective of MAPS. Full-text technologies are often unsuccessful since they assume the presence of specific keywords in the text; in order to fix this problem, the MAPS project suggests using different semantic technologies for retrieving text and data and thus obtaining much more relevant results. The main parts of our design of the search engine are: • Syntactic parser - This module is responsible for the extraction of "rich words" from the text

  2. Fast Pixel Buffer For Processing With Lookup Tables

    NASA Technical Reports Server (NTRS)

    Fisher, Timothy E.

    1992-01-01

    Proposed scheme for buffering data on intensities of picture elements (pixels) of image increases rate of processing beyond that attainable when data read, one pixel at a time, from main image memory. Scheme applied in design of specialized image-processing circuitry. Intended to optimize performance of processor in which electronic equivalent of address-lookup table used to address those pixels in main image memory required for processing.
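    As a software analogue of the hardware scheme, the sketch below shows why per-pixel table addressing is fast: the transformation is computed once for all 256 possible 8-bit values, and each pixel then costs a single table read. The gamma transform and array sizes are illustrative choices, not details from the Tech Brief.
```python
import numpy as np

def make_gamma_lut(gamma=0.5):
    """Precompute a 256-entry lookup table mapping raw 8-bit pixel values
    to gamma-corrected output values."""
    levels = np.arange(256, dtype=np.float64) / 255.0
    return np.clip(np.rint(255.0 * levels**gamma), 0, 255).astype(np.uint8)

def apply_lut(image, lut):
    # Fancy indexing performs one table lookup per pixel in a single pass,
    # avoiding per-pixel arithmetic at application time.
    return lut[image]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    img = rng.integers(0, 256, size=(4, 4), dtype=np.uint8)
    print(apply_lut(img, make_gamma_lut()))
```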

  3. Use of the CIM Ontology

    SciTech Connect

    Neumann, Scott; Britton, Jay; Devos, Arnold N.; Widergren, Steven E.

    2006-02-08

    There are many uses for the Common Information Model (CIM), an ontology that is being standardized through Technical Committee 57 of the International Electrotechnical Commission (IEC TC57). The most common uses to date have included application modeling, information exchanges, information management and systems integration. As one should expect, there are many issues that become apparent when the CIM ontology is applied to any one use. Some of these issues are shortcomings within the current draft of the CIM, and others are a consequence of the different ways in which the CIM can be applied using different technologies. As the CIM ontology will and should evolve, there are several dangers that need to be recognized. One is maintaining overall consistency, and managing the impact upon applications, when extending the CIM for a specific need. Another is that a tight coupling of the CIM to specific technologies could limit the value of the CIM in the longer term as an ontology, which becomes a larger issue over time as new technologies emerge. The integration of systems is one specific area of interest for application of the CIM ontology. This is an area dominated by the use of XML for the definition of messages. While this is certainly true when using Enterprise Application Integration (EAI) products, it is even truer with the movement towards the use of Web Services (WS), Service-Oriented Architectures (SOA) and Enterprise Service Buses (ESB) for integration. This general IT industry trend is consistent with trends seen within the IEC TC57 scope of power system management and associated information exchange. The challenge for TC57 is how to best leverage the CIM ontology using the various XML technologies and standards for integration. This paper will provide examples of how the CIM ontology is used and describe some specific issues that should be addressed within the CIM in order to increase its usefulness as an ontology. It will also describe some of the issues and challenges that will

  4. Simple Ontology Format (SOFT)

    SciTech Connect

    Sorokine, Alexandre

    2011-10-01

    Simple Ontology Format (SOFT) library and file format specification provides a set of simple tools for developing and maintaining ontologies. The library, implemented as a Perl module, supports parsing and verification of files in SOFT format, operations on ontologies (adding, removing, or filtering of entities), and conversion of ontologies into other formats. SOFT allows users to quickly create an ontology using only a basic text editor, verify it, and portray it in a graph layout system using customized styles.
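    The abstract does not reproduce the SOFT syntax, so the parser below works on a hypothetical, simplified line format ("child < parent") purely to illustrate how a text-editor-friendly ontology format might be parsed and verified; none of it should be read as the actual SOFT specification.
```python
# Hypothetical, minimal line-based ontology format (NOT the actual SOFT syntax):
# each non-comment line declares "child < parent" to assert an is-a relation.
from collections import defaultdict

def parse_simple_ontology(text):
    parents = defaultdict(set)
    for lineno, line in enumerate(text.splitlines(), 1):
        line = line.split("#", 1)[0].strip()   # strip comments and whitespace
        if not line:
            continue
        try:
            child, parent = (part.strip() for part in line.split("<"))
        except ValueError:
            raise SyntaxError(f"line {lineno}: expected 'child < parent'")
        parents[child].add(parent)
    return dict(parents)

example = """
# toy taxonomy
dog < mammal
mammal < animal
"""
print(parse_simple_ontology(example))  # {'dog': {'mammal'}, 'mammal': {'animal'}}
```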

  5. Datamining with Ontologies.

    PubMed

    Hoehndorf, Robert; Gkoutos, Georgios V; Schofield, Paul N

    2016-01-01

    The use of ontologies has increased rapidly over the past decade and they now provide a key component of most major databases in biology and biomedicine. Consequently, datamining over these databases benefits from considering the specific structure and content of ontologies, and several methods have been developed to use ontologies in datamining applications. Here, we discuss the principles of ontology structure, and datamining methods that rely on ontologies. The impact of these methods in the biological and biomedical sciences has been profound and is likely to increase as more datasets are becoming available using common, shared ontologies. PMID:27115643

  6. Research on the complex network of the UNSPSC ontology

    NASA Astrophysics Data System (ADS)

    Xu, Yingying; Zou, Shengrong; Gu, Aihua; Wei, Li; Zhou, Ta

    The UNSPSC ontology is mainly applied to the classification of products and services purchased worldwide by e-business and government buyers, and it supports the logical structure of that classification. In this paper, complex network techniques are applied to analyze the structure of the ontology: each ontology concept corresponds to a node of the network, and each relationship between concepts corresponds to an edge. Using existing complex-network analysis methods and performance indicators, the degree distribution and community structure of the ontology are analyzed; this research helps to evaluate and classify the ontology's concepts and to improve the efficiency of semantic matching.
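    A minimal sketch of the concept-to-node, relation-to-edge mapping described above, using networkx; the concept names and edges are invented for illustration, and the community routine is one standard choice rather than necessarily the method used in the paper.
```python
import networkx as nx
from networkx.algorithms import community

# Toy concept hierarchy: each tuple is an edge between two ontology concepts.
edges = [
    ("Products", "Office Supplies"), ("Products", "IT Equipment"),
    ("Office Supplies", "Paper"), ("Office Supplies", "Pens"),
    ("IT Equipment", "Laptops"), ("IT Equipment", "Monitors"),
    ("Services", "Consulting"), ("Services", "Maintenance"),
]
G = nx.Graph(edges)

# Degree distribution: how many nodes have each degree.
print("degree histogram:", nx.degree_histogram(G))

# Community detection via greedy modularity maximization.
for i, group in enumerate(community.greedy_modularity_communities(G)):
    print(f"community {i}:", sorted(group))
```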

  7. Multiple Lookup Table-Based AES Encryption Algorithm Implementation

    NASA Astrophysics Data System (ADS)

    Gong, Jin; Liu, Wenyi; Zhang, Huixin

    A new AES (Advanced Encryption Standard) encryption algorithm implementation is proposed in this paper. It is based on five lookup tables, which are generated from the S-box (the substitution table in AES). The obvious advantages are reduced code size, improved implementation efficiency, and helping new learners understand the AES encryption algorithm and the GF(2^8) multiplication necessary to correctly implement AES [1]. This method can be applied on processors with a word length of 32 bits or more, on FPGAs, and elsewhere, and correspondingly it can be implemented in VHDL, Verilog, VB and other languages.
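    The sketch below illustrates the underlying table-lookup idea only: it precomputes GF(2^8) multiplication by the MixColumns constants 2 and 3 so that each field multiplication becomes a single table read. It is not the paper's five-table construction.
```python
# Precompute GF(2^8) multiplication tables for the AES field so that run-time
# field multiplications reduce to table lookups.

def xtime(a):
    """Multiply a byte by x (i.e., by 2) in GF(2^8) with the AES polynomial."""
    a <<= 1
    if a & 0x100:          # overflow past 8 bits: reduce modulo x^8+x^4+x^3+x+1
        a ^= 0x11B
    return a

MUL2 = [xtime(a) for a in range(256)]
MUL3 = [xtime(a) ^ a for a in range(256)]   # 3*a = (2*a) XOR a in GF(2^8)

# Classic FIPS-197 check values: {57}*{02} = {ae}, {57}*{03} = {f9}.
assert MUL2[0x57] == 0xAE and MUL3[0x57] == 0xF9
print(hex(MUL2[0x57]), hex(MUL3[0x57]))
```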

  8. THE PLANT ONTOLOGY CONSORTIUM AND PLANT ONTOLOGIES

    Technology Transfer Automated Retrieval System (TEKTRAN)

    The goal of the Plant Ontology™ Consortium is to produce structured controlled vocabularies, arranged in ontologies, that can be applied to plant-based database information even as knowledge of the biology of the relevant plant taxa (e.g., development, anatomy, morphology, genomics, proteomics) is ...

  9. Assessment Applications of Ontologies.

    ERIC Educational Resources Information Center

    Chung, Gregory K. W. K.; Niemi, David; Bewley, William L.

    This paper discusses the use of ontologies and their applications to assessment. An ontology provides a shared and common understanding of a domain that can be communicated among people and computational systems. The ontology captures one or more experts' conceptual representation of a domain expressed in terms of concepts and the relationships…

  10. SWEET- An Upper Level Ontology for Earth System Science

    NASA Astrophysics Data System (ADS)

    Raskin, R.

    2005-12-01

    The Semantic Web for Earth and Environmental Terminology (SWEET) provides a set of upper-level ontologies constituting a concept space of Earth system science. These ontologies can be used, mapped, or extended by developers of specialized domain ontologies. SWEET components are being adopted within a diverse range of applications, including: the Geosciences Network (GEON), the Marine Metadata Initiative (MMI), the Virtual Solar Terrestrial Observatory (VSTO), and the Earth Science Markup Language (ESML). SWEET includes 12 ontologies, decomposed into component parts that can be reassembled to meet the needs of user communities. For example, the Property ontology terms (e.g., temperature, pressure) can be associated with measurable (observable) quantities of a dataset. The Substance ontology provides representations of the substance in which a property is being measured (e.g., air, water, rock). The Earth Realm ontology provides representations for the environmental regions of the Earth (e.g., atmospheric boundary layer, ocean mixed layer). The Data and Service ontology enables representations of how data are captured, stored, and accessed. The Numerics ontology entries represent 2-D and 3-D objects or spatial/temporal entities and relations. The Human Activities ontology captures the human side or applications of Earth science. The Phenomena ontology describes major geophysical or geophysical-related events. All of the ontologies are written in the OWL-DL language to give domain specialists a starting vocabulary, over which layers, synonyms, or extensions can be applied.
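    Assuming a local copy of one SWEET module (the filename below is hypothetical), a few lines of rdflib are enough to enumerate the classes an ontology declares, which is a typical first step when mapping or extending it.
```python
# Inspect an OWL ontology module with rdflib (RDF/XML serialization assumed).
from rdflib import Graph
from rdflib.namespace import OWL, RDF, RDFS

g = Graph()
g.parse("sweet_phenomena.owl", format="xml")   # hypothetical local copy of one module

# Enumerate declared classes and their human-readable labels, if any.
for cls in g.subjects(RDF.type, OWL.Class):
    label = g.value(cls, RDFS.label)
    print(cls, "-", label or "(no label)")
```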

  11. A Pipelined IP Address Lookup Module for 100 Gbps Line Rates and beyond

    NASA Astrophysics Data System (ADS)

    Teuchert, Domenic; Hauger, Simon

    New Internet services and technologies call for higher packet switching capacities in the core network. Thus, a performance bottleneck arises at the backbone routers, as forwarding of Internet Protocol (IP) packets requires searching for the most specific entry in a forwarding table that contains up to several hundred thousand address prefixes. The Tree Bitmap algorithm provides a well-balanced trade-off between storage needs and search and update complexity. In this paper, we present a pipelined lookup module based on this algorithm, which allows for easy adaptation to diverse protocol and hardware constraints. We determined the pipelining degree required to achieve the throughput for a 100 Gbps router line card by analyzing a representative sub-unit for various configured sizes. The module supports IPv4 and IPv6 configurations providing this throughput; we determined that our design achieves a processing rate of 178 million packets per second.
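    For orientation, the sketch below implements the longest-prefix-match problem the module solves, using a plain per-bit binary trie (IPv4 only, for brevity); the Tree Bitmap algorithm compresses many such trie levels into bitmap-encoded multibit nodes and is not reproduced here. Prefixes and next hops are invented.
```python
import ipaddress

class PrefixTrie:
    """Per-bit binary trie for IPv4 longest-prefix match (illustration only)."""

    def __init__(self):
        self.root = {}                      # node: {0: child, 1: child, "hop": next_hop}

    def insert(self, prefix, next_hop):
        net = ipaddress.ip_network(prefix)
        bits = format(int(net.network_address), "032b")[: net.prefixlen]
        node = self.root
        for b in bits:
            node = node.setdefault(int(b), {})
        node["hop"] = next_hop

    def lookup(self, address):
        bits = format(int(ipaddress.ip_address(address)), "032b")
        node, best = self.root, None
        for b in bits:                      # remember the most specific match seen
            if "hop" in node:
                best = node["hop"]
            node = node.get(int(b))
            if node is None:
                break
        else:
            if "hop" in node:
                best = node["hop"]
        return best

fib = PrefixTrie()
fib.insert("10.0.0.0/8", "if0")
fib.insert("10.1.0.0/16", "if1")
print(fib.lookup("10.1.2.3"))   # if1 (most specific match)
print(fib.lookup("10.9.9.9"))   # if0
```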

  12. A "lookup table" schema for synthetic biological patterning.

    PubMed

    Reitz, Frederick B

    2012-05-01

    A schema is proposed by which the three-dimensional structure and temporal development of a biological organism might be encoded and implemented via a genetic "lookup table". In the schema, diffusive morphogen gradients and/or the global concentration of a quickly diffusing signal index sets of kinase genes having promoters with logarithmically diminished affinity for the signal. Specificity of indexing is enhanced via concomitant expression of phosphatases undoing phosphorylation by "neighboring" kinases of greater affinity. Combinations of thus-selected kinases in turn jointly activate, via multiple phosphorylation, a particular enzyme from a virtual, multi-dimensional array thereof, at locations and times specified within the "lookup table". In principle, such a scheme could be employed to specify arbitrary gross anatomy, surface pigmentation, and/or developmental sequencing, extending the burgeoning toolset of the nascent field of synthetic morphology. A model of two-dimensional surface coloration using this scheme is specified, and LabVIEW software for its exploration is described and made available. PMID:22350667

  13. Efficient Lookup Table Retrievals of Gas Abundance from CRISM Spectra

    NASA Astrophysics Data System (ADS)

    Toigo, A. D.; Smith, M. D.; Seelos, F. P.; CRISM Science; Operations Teams

    2011-12-01

    The Compact Reconnaissance Imaging Spectrometer for Mars (CRISM) instrument on the Mars Reconnaissance Orbiter (MRO) spacecraft has been collecting spectra in the visible to near-infrared on Mars for over 5 years (almost 3 Martian years). Observations consist of image cubes, with two main spectral samplings (approximately 70 and 550 spectral channels) and two main imaging resolutions (approximately 20 and 200 m/pixel). We present retrievals of gas abundances, specifically CO2, H2O, and CO, from spectra collected in all observation modes. The retrievals are efficiently performed using a lookup table, where the strengths of gas absorption features are pre-calculated for an N-dimensional discrete grid of known input parameters (season, location, environment, viewing geometry, etc.) and the one unknown parameter to be retrieved (gas abundance). A reverse interpolation in the lookup table is used to match the observed strength of the gas absorption to the gas abundance. This algorithm is extremely fast compared to traditional radiative transfer computations that seek to recursively fit calculated results to an observed spectral feature, and can therefore be applied on a pixel-by-pixel basis to the tens of thousands of CRISM images, to examine cross-scene structure as well as to produce climatological averages.
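    A toy version of the reverse-interpolation step: with all other grid dimensions held fixed, a one-dimensional slice of the lookup table relates abundance to absorption strength, and the observed strength is inverted by interpolation. The forward model filling the table here is fabricated for illustration only.
```python
import numpy as np

# Precomputed table: absorption strength versus gas abundance, all other
# parameters fixed. The exponential forward model is a made-up stand-in.
abundance_grid = np.linspace(0.0, 100.0, 201)
strength_table = 1.0 - np.exp(-0.03 * abundance_grid)  # monotone increasing

def retrieve_abundance(observed_strength):
    # np.interp expects increasing x; strength_table is monotone increasing,
    # so abundance can be interpolated directly as a function of strength.
    return np.interp(observed_strength, strength_table, abundance_grid)

print(retrieve_abundance(0.5))   # abundance whose modeled strength is 0.5 (~23.1)
```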

  14. An Ontology of Therapies

    NASA Astrophysics Data System (ADS)

    Eccher, Claudio; Ferro, Antonella; Pisanelli, Domenico M.

    Ontologies are the essential glue to build interoperable systems and the talk of the day in the medical community. In this paper we present the ontology of medical therapies developed in the course of the Oncocure project, aimed at building a guideline-based decision support system integrated with a legacy Electronic Patient Record (EPR). The therapy ontology is based upon the DOLCE top-level ontology. It is our opinion that our ontology, besides constituting a model capturing the precise meaning of therapy-related concepts, can serve several practical purposes: interfacing automatic support systems with a legacy EPR, allowing automatic data analysis, and controlling possible medical errors made during EPR data input.

  15. Simple Ontology Format (SOFT)

    Energy Science and Technology Software Center (ESTSC)

    2011-10-01

    Simple Ontology Format (SOFT) library and file format specification provides a set of simple tools for developing and maintaining ontologies. The library, implemented as a Perl module, supports parsing and verification of files in SOFT format, operations on ontologies (adding, removing, or filtering of entities), and conversion of ontologies into other formats. SOFT allows users to quickly create an ontology using only a basic text editor, verify it, and portray it in a graph layout system using customized styles.

  16. Bringing Ontology to the Gene Ontology

    PubMed Central

    Andersen, William

    2003-01-01

    We present an analysis of some considerations involved in expressing the Gene Ontology (GO) as a machine-processible ontology, reflecting principles of formal ontology. GO is a controlled vocabulary that is intended to facilitate communication between biologists by standardizing usage of terms in database annotations. Making such controlled vocabularies maximally useful in support of bioinformatics applications requires explicating in machine-processible form the implicit background information that enables human users to interpret the meaning of the vocabulary terms. In the case of GO, this process would involve rendering the meanings of GO into a formal (logical) language with the help of domain experts, and adding additional information required to support the chosen formalization. A controlled vocabulary augmented in these ways is commonly called an ontology. In this paper, we make a modest exploration to determine the ontological requirements for this extended version of GO. Using the terms within the three GO hierarchies (molecular function, biological process and cellular component), we investigate the facility with which GO concepts can be ontologized, using available tools from the philosophical and ontological engineering literature. PMID:18629099

  17. Ontology Languages and Engineering

    NASA Astrophysics Data System (ADS)

    Horrocks, Ian

    Ontologies and ontology based systems are rapidly becoming mainstream technologies, with RDF and OWL now being deployed in diverse application domains, and with major technology vendors starting to augment their existing systems with ontological reasoning. For example, Oracle Inc. recently enhanced its well-known database management system with modules that use RDF/OWL ontologies to support "semantic data management", and its product brochure lists numerous application areas that can benefit from this technology, including Enterprise Information Integration, Knowledge Mining, Finance, Compliance Management and Life Science Research. The design of the high quality ontologies needed to support such applications is, however, still extremely challenging. In this talk I will describe the design of OWL, show how it facilitates the development of ontology engineering tools, describe the increasingly wide range of available tools, and explain how such tools can be used to support the entire ontology life-cycle of design, deployment and maintenance.

  18. Table look-up approach to pattern recognition.

    NASA Technical Reports Server (NTRS)

    Eppler, W. G.; Helmke, C. A.; Evans, R. H.

    1971-01-01

    The table look-up approach is based on prestoring in fast, random-access, core memory the desired answer (e.g., crop type) for all combinations of multispectral scanner outputs from selected channels. Specifically, each set of measurements from a given point on the ground is interpreted as that address in core memory where the answer can be retrieved. Substituting the simple retrieval operation for the lengthy computations required by the conventional approach offers two advantages: (1) the processing time is reduced by more than an order of magnitude; (2) the multispectral scanner data can be processed by computers having minimal sophistication, complexity, and cost. These two advantages may make it possible to use an onboard computer to perform the classification function in flight.
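    In software, the address-forming idea can be imitated by quantizing a few channels and packing the bits into a table index, as in the sketch below; the two-channel layout, 4-bit quantization, and the rule used to fill the table are invented for illustration.
```python
import numpy as np

BITS = 4
SIZE = 1 << (2 * BITS)                       # 16 x 16 = 256 possible addresses

def address(ch1, ch2):
    """Pack two quantized 8-bit channel values into one table index."""
    return (ch1 >> (8 - BITS)) << BITS | (ch2 >> (8 - BITS))

# Fill the table once, offline: here, class 1 ("vegetation") if channel 2
# dominates channel 1, else class 0 ("other"). A real system would fill it
# from a trained classifier evaluated at every address.
table = np.zeros(SIZE, dtype=np.uint8)
for c1 in range(0, 256, 1 << (8 - BITS)):
    for c2 in range(0, 256, 1 << (8 - BITS)):
        table[address(c1, c2)] = 1 if c2 > c1 else 0

# At classification time, each pixel costs one address computation + one read.
pixels = np.array([[30, 200], [180, 60]], dtype=np.uint8)
classes = table[[address(c1, c2) for c1, c2 in pixels]]
print(classes)    # [1 0]
```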

  19. Cache directory look-up re-use as conflict check mechanism for speculative memory requests

    DOEpatents

    Ohmacht, Martin

    2013-09-10

    In a cache memory, energy and other efficiencies can be realized by saving a result of a cache directory lookup for sequential accesses to a same memory address. Where the cache is a point of coherence for speculative execution in a multiprocessor system, with directory lookups serving as the point of conflict detection, such saving becomes particularly advantageous.

  20. Table-lookup algorithms for elementary functions and their error analysis

    SciTech Connect

    Tang, Ping Tak Peter.

    1991-01-01

    Table-lookup algorithms for calculating elementary functions offer superior speed and accuracy when compared with more traditional algorithms. With careful design, we show that it is feasible to implement table-lookup algorithms in hardware. Furthermore, we present a uniform approach to carry out tight error analysis for such implementations. 7 refs.
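    The structure (though not the error analysis) of such algorithms can be illustrated in a few lines: reduce the argument to a small remainder, look up the function at a nearby grid point, and correct with a short polynomial. The grid spacing, table range, and polynomial degree below are arbitrary illustrative choices, not the paper's design.
```python
import math

H = 1.0 / 32.0                                        # grid spacing
TABLE = [math.exp(n * H) for n in range(-512, 513)]   # covers |x| <= 16

def exp_lookup(x):
    """Table-plus-polynomial approximation of exp(x) for |x| <= 16."""
    n = round(x / H)                                  # nearest grid point
    r = x - n * H                                     # |r| <= H/2
    poly = 1.0 + r * (1.0 + r * (0.5 + r / 6.0))      # degree-3 Taylor of exp(r)
    return TABLE[n + 512] * poly

for x in (-3.2, 0.1, 7.5):
    print(x, exp_lookup(x), math.exp(x))
```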

  1. A hierarchical P2P overlay network for interest-based media contents lookup

    NASA Astrophysics Data System (ADS)

    Lee, HyunRyong; Kim, JongWon

    2006-10-01

    We propose a P2P (peer-to-peer) overlay architecture, called IGN (interest grouping network), for contents lookup in the DHC (digital home community), which aims to provide a formalized, home-network-extended construction of the current P2P file sharing community. The IGN utilizes the Chord and de Bruijn graph for its hierarchical overlay network construction. By combining the two schemes and inheriting their features, the IGN efficiently supports contents lookup. More specifically, by introducing metadata-based lookup keywords, the IGN offers detailed contents lookup that can reflect user interests. Moreover, the IGN tries to reflect the home network environments of the DHC by utilizing the HG (home gateway) of each home network as a participating node of the IGN. Through experimental and analysis results, we show that the IGN is more efficient than Chord, a well-known DHT (distributed hash table)-based lookup protocol.
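    A much-simplified consistent-hashing ring in the spirit of Chord is sketched below: each content key is stored at its successor node. Finger tables, the de Bruijn layer, and churn handling described in the paper are deliberately omitted, and the node names are invented.
```python
import bisect
import hashlib

M = 16                                    # identifier space of 2**16 positions

def ident(name):
    """Hash a node or content name onto the ring."""
    return int(hashlib.sha1(name.encode()).hexdigest(), 16) % (1 << M)

class Ring:
    def __init__(self, nodes):
        self.points = sorted((ident(n), n) for n in nodes)

    def successor(self, key):
        """First node clockwise from the key's position on the ring."""
        k = ident(key)
        i = bisect.bisect_left(self.points, (k, ""))
        return self.points[i % len(self.points)][1]   # wrap around the ring

ring = Ring(["hg-alice", "hg-bob", "hg-carol"])
for item in ("movie.mkv", "album.flac"):
    print(item, "->", ring.successor(item))
```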

  2. A fast IPv6 route lookup scheme for high-speed optical link

    NASA Astrophysics Data System (ADS)

    Yao, Xingmiao; Li, Lemin

    2004-05-01

    A fast IPv6 route lookup scheme implemented in hardware is proposed in this paper. It supports fast IP address lookup and can insert and delete prefixes effectively. A novel compressed multibit trie algorithm that decreases the memory space occupied and the average searching time is applied. The scheme proposed in this paper is superior to other IPv6 route lookup schemes; for example, by using an SRAM pipeline, a lookup speed of 125 × 10^6 lookups per second can be achieved, satisfying a 40 Gbps optical link rate with only 1.9 Mbytes of memory. As there are no actual IPv6 route prefix tables available, we generate various simulation databases in which the prefix length distribution differs. Simulation results show that our scheme has reasonable lookup time and memory space for all of the prefix length distributions.

  3. Kuhn's Ontological Relativism.

    ERIC Educational Resources Information Center

    Sankey, Howard

    2000-01-01

    Discusses Kuhn's model of scientific theory change. Documents Kuhn's move away from conceptual relativism and rational relativism. Provides an analysis of his present ontological form of relativism. (CCM)

  4. Ontology Sparse Vector Learning Algorithm for Ontology Similarity Measuring and Ontology Mapping via ADAL Technology

    NASA Astrophysics Data System (ADS)

    Gao, Wei; Zhu, Linli; Wang, Kaiyun

    2015-12-01

    Ontology, a model of knowledge representation and storage, has had extensive applications in pharmaceutics, social science, chemistry and biology. In the age of "big data", the constructed concepts are often represented as high-dimensional data, and thus sparse learning techniques are introduced into ontology algorithms. In this paper, based on the alternating direction augmented Lagrangian method, we present an ontology optimization algorithm for ontological sparse vector learning, together with a fast version of the algorithm. The optimal sparse vector is obtained by an iterative procedure, and the ontology function is then obtained from the sparse vector. Four simulation experiments show that our ontological sparse vector learning model has a higher precision ratio on plant ontology, humanoid robotics ontology, biology ontology and physics education ontology data for similarity measuring and ontology mapping applications.
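    The paper's ADAL-based routine is not reproduced here; the sketch below instead uses a simpler proximal-gradient (ISTA) loop with an L1 penalty on synthetic data, just to show how a sparse scoring vector over high-dimensional concept representations might be fit.
```python
import numpy as np

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(X, y, lam=0.1, step=None, iters=500):
    """Minimize 0.5*||Xw - y||^2 + lam*||w||_1 by proximal gradient descent."""
    n, d = X.shape
    if step is None:
        step = 1.0 / np.linalg.norm(X, 2) ** 2      # 1 / Lipschitz constant
    w = np.zeros(d)
    for _ in range(iters):
        grad = X.T @ (X @ w - y)
        w = soft_threshold(w - step * grad, step * lam)
    return w

# Synthetic concept vectors with a sparse ground-truth scoring vector.
rng = np.random.default_rng(1)
X = rng.normal(size=(50, 20))
true_w = np.zeros(20)
true_w[[2, 7, 11]] = [1.5, -2.0, 0.8]
y = X @ true_w + 0.01 * rng.normal(size=50)

w_hat = ista(X, y, lam=0.5)
print(np.nonzero(np.abs(w_hat) > 1e-3)[0])   # indices of recovered concepts
```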

  5. The geographical ontology, LDAP, and the space information semantic grid

    NASA Astrophysics Data System (ADS)

    Cui, Wei; Li, Deren

    2005-10-01

    The research purpose is to discuss the development trend and theory of semantic integration and interoperability of Geographic Information Systems in the network age, and to point out that geographic ontology is the inevitable outcome of the development of semantics-based integration and interoperability of Geographic Information Systems. After analyzing the effects of various new technologies, the paper proposes a new family of ontology classes based on the GIS knowledge built here: the basic ontology, the domain ontology and the application ontology, which are very useful for sharing and transferring semantic information between complicated distributed systems and for object abstraction. The main contributions of the paper are as follows: 1) For the first time, the ontology and LDAP (Lightweight Directory Access Protocol) are used to create and optimize the architecture of the Spatial Information Grid and to accelerate the fusion of Geographic Information Systems with other domains' information systems. 2) For the first time, a hybrid method is introduced to build geographic ontology. This hybrid method mixes the strengths of independent domain experts and data mining; it improves the efficiency of the expert-driven method and builds the ontology semi-automatically. 3) For the first time, the many-to-many relationships of the integrated ontology system are implemented via LDAP referrals, creating an ontology-based virtual organization that can provide transparent services to guests.

  6. The Ontology of Disaster.

    ERIC Educational Resources Information Center

    Thompson, Neil

    1995-01-01

    Explores some key existential or ontological concepts to show their applicability to the complex area of disaster impact as it relates to health and social welfare practice. Draws on existentialist philosophy, particularly that of Jean-Paul Sartre, and introduces some key ontological concepts to show how they specifically apply to the experience…

  7. Constructive Ontology Engineering

    ERIC Educational Resources Information Center

    Sousan, William L.

    2010-01-01

    The proliferation of the Semantic Web depends on ontologies for knowledge sharing, semantic annotation, data fusion, and descriptions of data for machine interpretation. However, ontologies are difficult to create and maintain. In addition, their structure and content may vary depending on the application and domain. Several methods described in…

  8. Development of an Adolescent Depression Ontology for Analyzing Social Data.

    PubMed

    Jung, Hyesil; Park, Hyeoun-Ae; Song, Tae-Min; Jeon, Eunjoo; Kim, Ae Ran; Lee, Joo Yun

    2015-01-01

    Depression in adolescence is associated with significant suicidality. Therefore, it is important to detect the risk for depression and provide timely care to adolescents. This study aims to develop an ontology for collecting and analyzing social media data about adolescent depression. The ontology was developed using the 'Ontology Development 101' methodology. Important terms were extracted from several clinical practice guidelines and from postings on social networking services. We extracted 777 terms, which were categorized into 'risk factors', 'signs and symptoms', 'screening', 'diagnosis', 'treatment', and 'prevention'. The ontology developed in this study can be used as a framework to understand adolescent depression using unstructured data from social media. PMID:26262398

  9. High speed lookup table approach to radiometric calibration of multispectral image data

    NASA Technical Reports Server (NTRS)

    Kelly, W. L., IV; Meredith, B. D.; Howle, W. M.

    1980-01-01

    A concept for performing radiometric correction of multispectral image data onboard a spacecraft at very high data rates is presented and demonstrated. This concept utilized a lookup table approach, implemented in hardware, to convert the raw sensor data into the desired corrected output data. The digital lookup table memory was interfaced to a microprocessor to allow the data correction function to be completely programmable. Sensor data was processed with this approach at rates equal to the access time of the lookup table memory. This concept offers flexible high speed data processing for a wide range of applications and will benefit from the continuing improvements in performance of digital memories.
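    A software analogue of the onboard lookup-table correction: the corrected value is precomputed for every possible 8-bit sensor count, and whole images are then corrected by indexing. The gain and offset below are invented placeholders; a real calibration would come from the sensor characterization.
```python
import numpy as np

def build_calibration_lut(gain=1.07, offset=-4.0):
    """Precompute the radiometrically corrected output for each raw 8-bit count."""
    raw = np.arange(256, dtype=np.float64)
    corrected = gain * raw + offset
    return np.clip(np.rint(corrected), 0, 255).astype(np.uint8)

LUT = build_calibration_lut()

def calibrate(image):
    return LUT[image]        # one memory access per pixel, no arithmetic at run time

raw_scene = np.array([[0, 10, 128], [200, 254, 255]], dtype=np.uint8)
print(calibrate(raw_scene))
```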

  10. Progressive halftone watermarking using multilayer table lookup strategy.

    PubMed

    Guo, Jing-Ming; Lai, Guo-Hung; Wong, Koksheik; Chang, Li-Chung

    2015-07-01

    In this paper, a halftoning-based multilayer watermarking scheme of low computational complexity is proposed. An additional data-hiding technique is also employed to embed multiple watermarks into the watermark to be embedded, to improve security and embedding capacity. At the encoder, the efficient direct binary search method is employed to generate 256 reference tables to ensure the output is in halftone format. Subsequently, watermarks are embedded by a set of optimized compressed tables with various textural angles for table lookup. At the decoder, the least mean square metric is considered to increase the differences among the generated phenotypes of the embedding angles and reduce the required number of dimensions for each angle. Finally, the naïve Bayes classifier is employed to collect the possibilities of multilayer information for classifying the associated angles to extract the embedded watermarks. These decoded watermarks can be further overlapped to retrieve the additional hidden-layer watermarks. Experimental results show that the proposed method requires only 8.4 ms for embedding a watermark into an image of size 512×512, on a 32-bit Windows 7 platform with an Intel Core i7 (Sandy Bridge) processor, 4 GB of RAM, and the Visual Studio 2010 IDE. Finally, only 2 MB is required to store the proposed compressed reference table. PMID:25576570

  11. 3-D lookup: Fast protein structure database searches

    SciTech Connect

    Holm, L.; Sander, C.

    1995-12-31

    There are far fewer classes of three-dimensional protein folds than sequence families, but the problem of detecting three-dimensional similarities is NP-complete. We present a novel heuristic for identifying 3-D similarities between a query structure and the database of known protein structures. Many methods for structure alignment use a bottom-up approach, identifying first local matches and then solving a combinatorial problem in building up larger clusters of matching substructures. Here the top-down approach is to start with the global comparison and select a rough superimposition using a fast 3-D lookup of secondary structure motifs. The superimposition is then extended to an alignment of Cα atoms by an iterative dynamic programming step. An all-against-all comparison of 385 representative proteins (150,000 pair comparisons) took 1 day of computer time on a single R8000 processor. In other words, one query structure is scanned against the database in a matter of minutes. The method is rated at 90% reliability at capturing statistically significant similarities. It is useful as a rapid preprocessor to a comprehensive protein structure database search system.

  12. Effect of Lookup Aids on Mature Readers' Recall of Technical Text.

    ERIC Educational Resources Information Center

    Blohm, Paul J.

    1987-01-01

    Concludes that despite the potential disruption to the flow of understanding, readers' use of lookups is a necessary and appropriate fix-up activity when reading alone is inadequate for remedying text confusions. (FL)

  13. Application of Ontologies for Big Earth Data

    NASA Astrophysics Data System (ADS)

    Huang, T.; Chang, G.; Armstrong, E. M.; Boening, C.

    2014-12-01

    Connected data is smarter data! Earth Science research infrastructure must do more than just support temporal and geospatial discovery of satellite data. As the Earth Science data archives continue to expand across NASA data centers, the research communities are demanding smarter data services. A successful research infrastructure must be able to present researchers the complete picture, that is, datasets with linked citations, related interdisciplinary data, imagery, current events, social media discussions, and scientific data tools that are relevant to the particular dataset. The popular Semantic Web for Earth and Environmental Terminology (SWEET) is a collection of ontologies and concepts designed to improve discovery and application of Earth Science data. The SWEET ontologies collection was initially developed to capture the relationships between keywords in the NASA Global Change Master Directory (GCMD). Over the years this popular ontologies collection has expanded to cover over 200 ontologies and 6000 concepts to enable scalable classification of Earth system science and Space science concepts. This presentation discusses semantic web technologies as enabling technologies for data-intensive science. We will discuss the application of the SWEET ontologies as a critical component in knowledge-driven research infrastructure for some recent projects, which include the DARPA Ontological System for Context Artifact and Resources (OSCAR), the 2013 NASA ACCESS Virtual Quality Screening Service (VQSS), and the 2013 NASA Sea Level Change Portal (SLCP) projects. The presentation will also discuss the benefits of using semantic web technologies in developing research infrastructure for Big Earth Science Data in an attempt to "accommodate all domains and provide the necessary glue for information to be cross-linked, correlated, and discovered in a semantically rich manner." [1] [1] Savas Parastatidis: A platform for all that we know

  14. Dynamic Generation of Reduced Ontologies to Support Resource Constraints of Mobile Devices

    ERIC Educational Resources Information Center

    Schrimpsher, Dan

    2011-01-01

    As Web Services and the Semantic Web become more important, enabling technologies such as web service ontologies will grow larger. At the same time, use of mobile devices to access web services has doubled in the last year. The ability of these resource constrained devices to download and reason across these ontologies to support service discovery…

  15. Data mining for ontology development.

    SciTech Connect

    Davidson, George S.; Strasburg, Jana; Stampf, David; Neymotin,Lev; Czajkowski, Carl; Shine, Eugene; Bollinger, James; Ghosh, Vinita; Sorokine, Alexandre; Ferrell, Regina; Ward, Richard; Schoenwald, David Alan

    2010-06-01

    A multi-laboratory ontology construction effort during the summer and fall of 2009 prototyped an ontology for counterfeit semiconductor manufacturing. This effort included an ontology development team and an ontology validation methods team. Here the third team of the Ontology Project, the Data Analysis (DA) team reports on their approaches, the tools they used, and results for mining literature for terminology pertinent to counterfeit semiconductor manufacturing. A discussion of the value of ontology-based analysis is presented, with insights drawn from other ontology-based methods regularly used in the analysis of genomic experiments. Finally, suggestions for future work are offered.

  16. A Method for Evaluating and Standardizing Ontologies

    ERIC Educational Resources Information Center

    Seyed, Ali Patrice

    2012-01-01

    The Open Biomedical Ontology (OBO) Foundry initiative is a collaborative effort for developing interoperable, science-based ontologies. The Basic Formal Ontology (BFO) serves as the upper ontology for the domain-level ontologies of OBO. BFO is an upper ontology of types as conceived by defenders of realism. Among the ontologies developed for OBO…

  17. Lookup Tables Versus Stacked Rasch Analysis in Comparing Pre- and Postintervention Adult Strabismus-20 Data

    PubMed Central

    Leske, David A.; Hatt, Sarah R.; Liebermann, Laura; Holmes, Jonathan M.

    2016-01-01

    Purpose We compare two methods of analysis for Rasch scoring pre- to postintervention data: Rasch lookup table versus de novo stacked Rasch analysis using the Adult Strabismus-20 (AS-20). Methods One hundred forty-seven subjects completed the AS-20 questionnaire prior to surgery and 6 weeks postoperatively. Subjects were classified 6 weeks postoperatively as “success,” “partial success,” or “failure” based on angle and diplopia status. Postoperative change in AS-20 scores was compared for all four AS-20 domains (self-perception, interactions, reading function, and general function) overall and by success status using two methods: (1) applying historical Rasch threshold measures from lookup tables and (2) performing a stacked de novo Rasch analysis. Change was assessed by analyzing effect size, improvement exceeding 95% limits of agreement (LOA), and score distributions. Results Effect sizes were similar for all AS-20 domains whether obtained from lookup tables or stacked analysis. Similar proportions exceeded 95% LOAs using lookup tables versus stacked analysis. Improvement in median score was observed for all AS-20 domains using lookup tables and stacked analysis (P < 0.0001 for all comparisons). Conclusions The Rasch-scored AS-20 is a responsive and valid instrument designed to measure strabismus-specific health-related quality of life. When analyzing pre- to postoperative change in AS-20 scores, Rasch lookup tables and de novo stacked Rasch analysis yield essentially the same results. Translational Relevance We describe a practical application of lookup tables, allowing the clinician or researcher to score the Rasch-calibrated AS-20 questionnaire without specialized software. PMID:26933524
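    The idea of scoring from a published lookup table can be shown in a few lines: a table maps each possible raw domain score to a calibrated measure, so change scores need no Rasch software at analysis time. The raw-score-to-measure values below are invented placeholders, not the actual AS-20 calibrations.
```python
# Hypothetical raw-score-to-measure lookup table (logit values are made up).
HYPOTHETICAL_TABLE = {0: -4.2, 1: -3.1, 2: -2.3, 3: -1.6, 4: -1.0,
                      5: -0.4, 6: 0.2, 7: 0.9, 8: 1.7, 9: 2.8, 10: 4.1}

def raw_to_measure(raw_score, table=HYPOTHETICAL_TABLE):
    """Convert a raw domain score to its calibrated (logit) measure."""
    return table[raw_score]

def change_score(pre_raw, post_raw):
    """Pre- to post-intervention change on the calibrated scale."""
    return raw_to_measure(post_raw) - raw_to_measure(pre_raw)

print(change_score(pre_raw=3, post_raw=8))   # improvement of 3.3 logits
```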

  18. The neurological disease ontology

    PubMed Central

    2013-01-01

    Background We are developing the Neurological Disease Ontology (ND) to provide a framework to enable representation of aspects of neurological diseases that are relevant to their treatment and study. ND is a representational tool that addresses the need for unambiguous annotation, storage, and retrieval of data associated with the treatment and study of neurological diseases. ND is being developed in compliance with the Open Biomedical Ontology Foundry principles and builds upon the paradigm established by the Ontology for General Medical Science (OGMS) for the representation of entities in the domain of disease and medical practice. Initial applications of ND will include the annotation and analysis of large data sets and patient records for Alzheimer’s disease, multiple sclerosis, and stroke. Description ND is implemented in OWL 2 and currently has more than 450 terms that refer to and describe various aspects of neurological diseases. ND directly imports the development version of OGMS, which uses BFO 2. Term development in ND has primarily extended the OGMS terms ‘disease’, ‘diagnosis’, ‘disease course’, and ‘disorder’. We have imported and utilize over 700 classes from related ontology efforts including the Foundational Model of Anatomy, Ontology for Biomedical Investigations, and Protein Ontology. ND terms are annotated with ontology metadata such as a label (term name), term editors, textual definition, definition source, curation status, and alternative terms (synonyms). Many terms have logical definitions in addition to these annotations. Current development has focused on the establishment of the upper-level structure of the ND hierarchy, as well as on the representation of Alzheimer’s disease, multiple sclerosis, and stroke. The ontology is available as a version-controlled file at http://code.google.com/p/neurological-disease-ontology along with a discussion list and an issue tracker. Conclusion ND seeks to provide a formal

  19. Open Biomedical Ontology-based Medline exploration

    PubMed Central

    Xuan, Weijian; Dai, Manhong; Mirel, Barbara; Song, Jean; Athey, Brian; Watson, Stanley J; Meng, Fan

    2009-01-01

    Background Effective Medline database exploration is critical for the understanding of high throughput experimental results and the development of novel hypotheses about the mechanisms underlying the targeted biological processes. While existing solutions enhance Medline exploration through different approaches such as document clustering, network presentations of underlying conceptual relationships and the mapping of search results to MeSH and Gene Ontology trees, we believe the use of multiple ontologies from the Open Biomedical Ontology can greatly help researchers to explore literature from different perspectives as well as to quickly locate the most relevant Medline records for further investigation. Results We developed an ontology-based interactive Medline exploration solution called PubOnto to enable the interactive exploration and filtering of search results through the use of multiple ontologies from the OBO foundry. The PubOnto program is a rich internet application based on the FLEX platform. It contains a number of interactive tools, visualization capabilities, an open service architecture, and a customizable user interface. It is freely accessible at: . PMID:19426463

  20. A RESTful way to Manage Ontologies

    NASA Astrophysics Data System (ADS)

    Lowry, R. K.; Lawrence, B. N.

    2009-04-01

    In 2005 BODC implemented the first version of a vocabulary server developed as a contribution to the NERC DataGrid project. Vocabularies were managed within an RDBMS environment and accessed through a SOAP Web Service API. This was designed as a database query interface with operations targeted at designated database fields and results returned as strings. At the end of 2007 a new version of the server was released capable of serving thesauri and ontologies as well as vocabularies. The SOAP API functionality was enhanced and the output format changed to XML. In addition, a pseudo-RESTful query interface was developed directly addressing terms and lists by URLs. This is in full operational use by projects such as SeaDataNet and will run for the foreseeable future. However, operational experience has exposed shortcomings in both the API and its document payload. Other ontology servers, notably at MMI and CSIRO, are coming on-line making now the time to unify ontology management. This paper presents a RESTful API and payload document schema. It is based on the lessons learned in four years of operational vocabulary serving, provides full ontology management functionality and has the potential to form the basis for an interoperable network of distributed ontologies.
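    A minimal sketch of the RESTful style described above, written with Flask; the /vocab/&lt;list&gt;/&lt;term&gt; resource layout and the sample content are hypothetical and are not the NERC vocabulary server's actual URL scheme or payload schema.
```python
from flask import Flask, abort, jsonify

app = Flask(__name__)

# Hypothetical in-memory vocabulary store standing in for the RDBMS backend.
VOCABULARIES = {
    "C01": {
        "SEATEMP": {"label": "Sea water temperature", "broader": [], "deprecated": False},
        "SEASAL": {"label": "Sea water salinity", "broader": [], "deprecated": False},
    }
}

@app.route("/vocab/<list_id>")
def get_list(list_id):
    """Return the term identifiers in one vocabulary list."""
    terms = VOCABULARIES.get(list_id)
    if terms is None:
        abort(404)
    return jsonify(sorted(terms))

@app.route("/vocab/<list_id>/<term_id>")
def get_term(list_id, term_id):
    """Return one term as a JSON document."""
    term = VOCABULARIES.get(list_id, {}).get(term_id)
    if term is None:
        abort(404)
    return jsonify(term)

if __name__ == "__main__":
    # e.g. GET http://localhost:5000/vocab/C01/SEATEMP
    app.run(debug=True)
```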

  1. Implementation of an advanced table look-up classifier for large area land-use classification

    NASA Technical Reports Server (NTRS)

    Jones, C.

    1974-01-01

    Software employing Eppler's improved table look-up approach to pattern recognition has been developed, and results from this software are presented. The look-up table for each class is a computer representation of a hyperellipsoid in four dimensional space. During implementation of the software Eppler's look-up procedure was modified to include multiple ranges in order to accommodate hollow regions in the ellipsoids. In a typical ERTS classification run less than 6000 36-bit computer words were required to store tables for 24 classes. Classification results from the improved table look-up are identical with those produced by the conventional method, i.e., by calculation of the maximum likelihood decision rule at the moment of classification. With the new look-up approach an entire ERTS MSS frame can be classified into 24 classes in 1.3 hours, compared to 22.5 hours required by the conventional method. The new software is coded completely in FORTRAN to facilitate transfer to other digital computers.

  2. Mechanisms in biomedical ontology

    PubMed Central

    2012-01-01

    The concept of a mechanism has become a standard proposal for explanations in biology. It has been claimed that mechanistic explanations are appropriate for systems biology, because they occupy a middle ground between strict reductionism and holism. Because of their importance in the field a formal ontological description of mechanisms is desirable. The standard philosophical accounts of mechanisms are often ambiguous and lack the clarity that can be provided by a formal-ontological framework. The goal of this paper is to clarify some of these ambiguities and suggest such a framework for mechanisms. Taking some hints from an "ontology of devices" I suggest as a general approach for this task the introduction of functional kinds and functional parts by which the particular relations between a mechanism and its components can be captured. PMID:23046727

  3. Ontologies for molecular biology.

    PubMed

    Schulze-Kremer, S

    1998-01-01

    Molecular biology has a communication problem. There are many databases using their own labels and categories for storing data objects, and some using identical labels and categories but with a different meaning. A prominent example is the concept "gene", which is used with different semantics by major international genomic databases. Ontologies are one means to provide a semantic repository to systematically order relevant concepts in molecular biology and to bridge the different notions in various databases by explicitly specifying the meaning of and relation between the fundamental concepts in an application domain. Here, the upper level and a database branch of a prospective ontology for molecular biology (OMB) are presented and compared to other ontologies with respect to suitability for molecular biology (http://igd.rz-berlin.mpg.de/~www/oe/mbo.html). PMID:9697223

  4. Ontological engineering versus metaphysics

    NASA Astrophysics Data System (ADS)

    Tataj, Emanuel; Tomanek, Roman; Mulawka, Jan

    2011-10-01

    It has been recognized that ontologies are a semantic version of the World Wide Web and can be found in knowledge-based systems. A recent survey of this field also suggests that practical artificial intelligence systems may be motivated by this research. Strong artificial intelligence in particular, as well as the concept of the homo computer, can also benefit from their use. The main objective of this contribution is to present and review already created ontologies and identify the main advantages such an approach brings to knowledge management systems. We would like to present what ontological engineering borrows from metaphysics and what feedback it can provide to natural language processing, simulation and modelling. Potential topics of further development from a philosophical point of view are also outlined.

  5. IMGT-ONTOLOGY 2012

    PubMed Central

    Giudicelli, Véronique; Lefranc, Marie-Paule

    2012-01-01

    Immunogenetics is the science that studies the genetics of the immune system and immune responses. Owing to the complexity and diversity of the immune repertoire, immunogenetics represents one of the greatest challenges for data interpretation: a large biological expertise, a considerable effort of standardization and the elaboration of an efficient system for the management of the related knowledge were required. IMGT®, the international ImMunoGeneTics information system® (http://www.imgt.org) has reached that goal through the building of a unique ontology, IMGT-ONTOLOGY, which represents the first ontology for the formal representation of knowledge in immunogenetics and immunoinformatics. IMGT-ONTOLOGY manages the immunogenetics knowledge through diverse facets that rely on the seven axioms of the Formal IMGT-ONTOLOGY or IMGT-Kaleidoscope: “IDENTIFICATION,” “DESCRIPTION,” “CLASSIFICATION,” “NUMEROTATION,” “LOCALIZATION,” “ORIENTATION,” and “OBTENTION.” The concepts of identification, description, classification, and numerotation generated from the axioms led to the elaboration of the IMGT® standards that constitute the IMGT Scientific chart: IMGT® standardized keywords (concepts of identification), IMGT® standardized labels (concepts of description), IMGT® standardized gene and allele nomenclature (concepts of classification) and IMGT unique numbering and IMGT Collier de Perles (concepts of numerotation). IMGT-ONTOLOGY has become the global reference in immunogenetics and immunoinformatics for the knowledge representation of immunoglobulins (IG) or antibodies, T cell receptors (TR), and major histocompatibility (MH) proteins of humans and other vertebrates, proteins of the immunoglobulin superfamily (IgSF) and MH superfamily (MhSF), related proteins of the immune system (RPI) of vertebrates and invertebrates, therapeutic monoclonal antibodies (mAbs), fusion proteins for immune applications (FPIA), and composite proteins for

  6. Ontology development for Sufism domain

    NASA Astrophysics Data System (ADS)

    Iqbal, Rizwan

    2012-01-01

    Domain ontology is a descriptive representation of a particular domain which describes in detail the concepts in the domain and the relationships among those concepts, and organizes them in a hierarchical manner. It is also defined as a structure of knowledge, used as a means of sharing knowledge with the community. An important aspect of using ontologies is to make information retrieval more accurate and efficient. Thousands of domain ontologies from all around the world are available online in ontology repositories. Ontology repositories like SWOOGLE currently have over 1000 ontologies covering a wide range of domains. It was found that, to date, there was no ontology available covering the domain of "Sufism". This unavailability of a "Sufism" domain ontology became a motivating factor for this research. This research produced a working "Sufism" domain ontology as well as a framework; the design of the proposed framework focuses on resolving problems that were experienced while creating the "Sufism" ontology. The development and working of the "Sufism" domain ontology are covered in detail in this research. The word "Sufism" refers to Islamic mysticism. One of the reasons to choose "Sufism" for ontology creation is the global curiosity it attracts. This research has also managed to create some individuals which inherit the concepts from the "Sufism" ontology. The creation of individuals helps to demonstrate efficient and precise retrieval of data from the "Sufism" domain ontology. The experiment of creating the "Sufism" domain ontology was carried out in a tool called Protégé. Protégé is an open-source tool used for ontology creation and editing.

  7. Ontology development for Sufism domain

    NASA Astrophysics Data System (ADS)

    Iqbal, Rizwan

    2011-12-01

    Domain ontology is a descriptive representation of a particular domain which describes in detail the concepts in the domain and the relationships among those concepts, and organizes them in a hierarchical manner. It is also defined as a structure of knowledge, used as a means of sharing knowledge with the community. An important aspect of using ontologies is to make information retrieval more accurate and efficient. Thousands of domain ontologies from all around the world are available online in ontology repositories. Ontology repositories like SWOOGLE currently have over 1000 ontologies covering a wide range of domains. It was found that, to date, there was no ontology available covering the domain of "Sufism". This unavailability of a "Sufism" domain ontology became a motivating factor for this research. This research produced a working "Sufism" domain ontology as well as a framework; the design of the proposed framework focuses on resolving problems that were experienced while creating the "Sufism" ontology. The development and working of the "Sufism" domain ontology are covered in detail in this research. The word "Sufism" refers to Islamic mysticism. One of the reasons to choose "Sufism" for ontology creation is the global curiosity it attracts. This research has also managed to create some individuals which inherit the concepts from the "Sufism" ontology. The creation of individuals helps to demonstrate efficient and precise retrieval of data from the "Sufism" domain ontology. The experiment of creating the "Sufism" domain ontology was carried out in a tool called Protégé. Protégé is an open-source tool used for ontology creation and editing.

  8. Using a Foundational Ontology for Reengineering a Software Enterprise Ontology

    NASA Astrophysics Data System (ADS)

    Perini Barcellos, Monalessa; de Almeida Falbo, Ricardo

    The knowledge about software organizations is considerably relevant to software engineers. The use of a common vocabulary for representing the useful knowledge about software organizations involved in software projects is important for several reasons, such as to support knowledge reuse and to allow communication and interoperability between tools. Domain ontologies can be used to define a common vocabulary for sharing and reuse of knowledge about some domain. Foundational ontologies can be used for evaluating and re-designing domain ontologies, giving them real-world semantics. This paper presents an evaluation of a Software Enterprise Ontology that was reengineered using the Unified Foundational Ontology (UFO) as a basis.

  9. Ontology Performance Profiling and Model Examination: First Steps

    NASA Astrophysics Data System (ADS)

    Wang, Taowei David; Parsia, Bijan

    "[Reasoner] performance can be scary, so much so, that we cannot deploy the technology in our products." - Michael Shepard. What are typical OWL users to do when their favorite reasoner never seems to return? In this paper, we present our first steps considering this problem. We describe the challenges and our approach, and present a prototype tool to help users identify reasoner performance bottlenecks with respect to their ontologies. We then describe 4 case studies on synthetic and real-world ontologies. While the anecdotal evidence suggests that the service can be useful for both ontology developers and reasoner implementors, much more is desired.

  10. POSet Ontology Categorizer

    Energy Science and Technology Software Center (ESTSC)

    2005-03-01

    POSet Ontology Categorizer (POSOC) V1.0 The POSet Ontology Categorizer (POSOC) software package provides tools for creating and mining poset-structured ontologies, such as the Gene Ontology (GO). Given a list of weighted query items (e.g., genes, proteins, and/or phrases) and one or more focus nodes, POSOC determines the ordered set of GO nodes that summarize the query, based on selections of a scoring function, pseudo-distance measure, specificity level, and cluster determination. Pseudo-distance measures provided are minimum chain length, maximum chain length, average of extreme chain lengths, and average of all chain lengths. A low specificity level, such as -1 or 0, results in a general set of clusters. Increasing the specificity results in more specific and lighter clusters. POSOC cluster results can be compared against known results by calculations of precision, recall, and f-score for graph neighborhood relationships. This tool has been used in understanding the function of a set of genes, finding similar genes, and annotating new proteins. The POSOC software consists of a set of Java interfaces, classes, and programs that run on Linux or Windows platforms. It incorporates graph classes from OpenJGraph (openjgraph.sourceforge.net).

  11. POSet Ontology Categorizer

    SciTech Connect

    Miniszewski, Sue M.

    2005-03-01

    POSet Ontology Categorizer (POSOC) V1.0 The POSet Ontology Categorizer (POSOC) software package provides tools for creating and mining poset-structured ontologies, such as the Gene Ontology (GO). Given a list of weighted query items (e.g., genes, proteins, and/or phrases) and one or more focus nodes, POSOC determines the ordered set of GO nodes that summarize the query, based on selections of a scoring function, pseudo-distance measure, specificity level, and cluster determination. Pseudo-distance measures provided are minimum chain length, maximum chain length, average of extreme chain lengths, and average of all chain lengths. A low specificity level, such as -1 or 0, results in a general set of clusters. Increasing the specificity results in more specific and lighter clusters. POSOC cluster results can be compared against known results by calculations of precision, recall, and f-score for graph neighborhood relationships. This tool has been used in understanding the function of a set of genes, finding similar genes, and annotating new proteins. The POSOC software consists of a set of Java interfaces, classes, and programs that run on Linux or Windows platforms. It incorporates graph classes from OpenJGraph (openjgraph.sourceforge.net).
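    The pseudo-distance measures named above can be illustrated on a toy GO-like DAG with networkx: enumerate the chains from a term up to an ancestor and take the minimum, maximum, and averages of their lengths. The terms and edges below are invented for illustration.
```python
import networkx as nx

# Toy GO-like DAG: edges run from a child term to a parent term.
edges = [("kinase activity", "catalytic activity"),
         ("kinase activity", "phosphotransferase activity"),
         ("phosphotransferase activity", "transferase activity"),
         ("transferase activity", "catalytic activity"),
         ("catalytic activity", "molecular function")]
dag = nx.DiGraph(edges)

def chain_lengths(graph, term, ancestor):
    """Edge counts of all directed chains from term up to ancestor."""
    return [len(p) - 1 for p in nx.all_simple_paths(graph, term, ancestor)]

lengths = chain_lengths(dag, "kinase activity", "molecular function")
print("min chain:", min(lengths))
print("max chain:", max(lengths))
print("avg of extremes:", (min(lengths) + max(lengths)) / 2)
print("avg of all chains:", sum(lengths) / len(lengths))
```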

  12. Dahlbeck and Pure Ontology

    ERIC Educational Resources Information Center

    Mackenzie, Jim

    2016-01-01

    This article responds to Johan Dahlbeck's "Towards a pure ontology: Children's bodies and morality" ["Educational Philosophy and Theory," vol. 46 (1), 2014, pp. 8-23 (EJ1026561)]. His arguments from Nietzsche and Spinoza do not carry the weight he supposes, and the conclusions he draws from them about pedagogy would be…

  13. The Drosophila anatomy ontology

    PubMed Central

    2013-01-01

    Background Anatomy ontologies are query-able classifications of anatomical structures. They provide a widely-used means for standardising the annotation of phenotypes and expression in both human-readable and programmatically accessible forms. They are also frequently used to group annotations in biologically meaningful ways. Accurate annotation requires clear textual definitions for terms, ideally accompanied by images. Accurate grouping and fruitful programmatic usage requires high-quality formal definitions that can be used to automate classification and check for errors. The Drosophila anatomy ontology (DAO) consists of over 8000 classes with broad coverage of Drosophila anatomy. It has been used extensively for annotation by a range of resources, but until recently it was poorly formalised and had few textual definitions. Results We have transformed the DAO into an ontology rich in formal and textual definitions in which the majority of classifications are automated and extensive error checking ensures quality. Here we present an overview of the content of the DAO, the patterns used in its formalisation, and the various uses it has been put to. Conclusions As a result of the work described here, the DAO provides a high-quality, queryable reference for the wild-type anatomy of Drosophila melanogaster and a set of terms to annotate data related to that anatomy. Extensive, well referenced textual definitions make it both a reliable and useful reference and ensure accurate use in annotation. Wide use of formal axioms allows a large proportion of classification to be automated and the use of consistency checking to eliminate errors. This increased formalisation has resulted in significant improvements to the completeness and accuracy of classification. The broad use of both formal and informal definitions make further development of the ontology sustainable and scalable. The patterns of formalisation used in the DAO are likely to be useful to developers of other

  14. Benchmarking Ontologies: Bigger or Better?

    PubMed Central

    Yao, Lixia; Divoli, Anna; Mayzus, Ilya; Evans, James A.; Rzhetsky, Andrey

    2011-01-01

    A scientific ontology is a formal representation of knowledge within a domain, typically including central concepts, their properties, and relations. With the rise of computers and high-throughput data collection, ontologies have become essential to data mining and sharing across communities in the biomedical sciences. Powerful approaches exist for testing the internal consistency of an ontology, but not for assessing the fidelity of its domain representation. We introduce a family of metrics that describe the breadth and depth with which an ontology represents its knowledge domain. We then test these metrics using (1) four of the most common medical ontologies with respect to a corpus of medical documents and (2) seven of the most popular English thesauri with respect to three corpora that sample language from medicine, news, and novels. Here we show that our approach captures the quality of ontological representation and guides efforts to narrow the breach between ontology and collective discourse within a domain. Our results also demonstrate key features of medical ontologies, English thesauri, and discourse from different domains. Medical ontologies have a small intersection, as do English thesauri. Moreover, dialects characteristic of distinct domains vary strikingly as many of the same words are used quite differently in medicine, news, and novels. As ontologies are intended to mirror the state of knowledge, our methods to tighten the fit between ontology and domain will increase their relevance for new areas of biomedical science and improve the accuracy and power of inferences computed across them. PMID:21249231

  15. 40 CFR Table Nn-2 to Subpart Hh of... - Lookup Default Values for Calculation Methodology 2 of This Subpart

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 40 Protection of Environment 22 2013-07-01 2013-07-01 false Lookup Default Values for Calculation Methodology 2 of This Subpart NN Table NN-2 to Subpart HH of Part 98 Protection of Environment ENVIRONMENTAL... Waste Landfills Pt. 98, Subpt. NN, Table NN-2 Table NN-2 to Subpart HH of Part 98—Lookup Default...

  16. 40 CFR Table Nn-2 to Subpart Hh of... - Lookup Default Values for Calculation Methodology 2 of This Subpart

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 40 Protection of Environment 22 2012-07-01 2012-07-01 false Lookup Default Values for Calculation Methodology 2 of This Subpart NN Table NN-2 to Subpart HH of Part 98 Protection of Environment ENVIRONMENTAL... Waste Landfills Pt. 98, Subpt. NN, Table NN-2 Table NN-2 to Subpart HH of Part 98—Lookup Default...

  17. Does Look-up Frequency Help Reading Comprehension of EFL Learners? Two Empirical Studies of Electronic Dictionaries

    ERIC Educational Resources Information Center

    Koyama, Toshiko; Takeuchi, Osamu

    2007-01-01

    Two empirical studies were conducted in which the differences in Japanese EFL learners' look-up behavior between hand-held electronic dictionaries (EDs) and printed dictionaries (PDs) were investigated. We focus here on the relation between learners' look-up frequency and degree of reading comprehension of the text. In the first study, a total of…

  18. Cache directory lookup reader set encoding for partial cache line speculation support

    DOEpatents

    Gara, Alan; Ohmacht, Martin

    2014-10-21

    In a multiprocessor system, with conflict checking implemented in a directory lookup of a shared cache memory, a reader set encoding permits dynamic recordation of read accesses. The reader set encoding includes an indication of a portion of a line read, for instance by indicating boundaries of read accesses. Different encodings may apply to different types of speculative execution.

  19. Simulation of the Hermes Lead Glass Calorimeter using a Look-Up Table

    SciTech Connect

    Vandenbroucke, A.; Miller, C. A.

    2006-10-27

    This contribution describes the Monte Carlo simulation of the Hermes Electromagnetic Lead-Glass Calorimeter. The simulation is based on the GEANT3 simulation package in combination with a Look-Up Table. Details of the simulation as well as a comparison with experimental data are reported.

  20. The Comparative "Look-Up" Ability of Script Readers on Television

    ERIC Educational Resources Information Center

    Austin, Henry R.; Donaghy, William C.

    1970-01-01

    Reports on the results of a number of tests designed to compare the abilities of readers to look up from their scripts as they read to a TV camera and to attempt to correlate variation in look-up ability with other silent and oral reading parameters." (Author/AA)

  1. Time and space efficient method-lookup for object-oriented programs

    SciTech Connect

    Muthukrishnan, S.; Mueller, M.

    1996-12-31

    Object-oriented languages (OOLs) are becoming increasingly popular in software development. The modular units in such languages are abstract data types called classes, comprising data and functions (or selectors in the OOL parlance); each selector has possibly multiple implementations (or methods in OOL parlance) each in a different class. These languages support reusability of code/functions by allowing a class to inherit methods from its superclass in a hierarchical arrangement of the various classes. Therefore, when a selector s is invoked in a class c, the relevant method for s inherited by c has to be determined. That is the fundamental problem of method-lookup in object-oriented programs. Since nearly every statement of such programs calls for a method-lookup, efficient support of OOLs crucially relies on the method-lookup mechanism. The challenge in implementing the method-lookup, as it turns out, is to use only a reasonable amount of table-space while keeping the query time down. Substantial research has gone into achieving improved space vs time trade-off in practice.
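
    A minimal sketch of the dispatch problem described above (walking the superclass chain to find the inherited method), which table-based schemes of this kind aim to answer in constant time with modest table space; the class hierarchy and selectors are invented for illustration:

      # Toy class hierarchy: class -> (superclass, {selector: method implementation}).
      HIERARCHY = {
          "Object": (None, {"print": lambda self: f"<{self}>"}),
          "Shape":  ("Object", {"area": lambda self: 0.0}),
          "Circle": ("Shape", {"area": lambda self: 3.14159 * 2.0 ** 2}),
          "Label":  ("Shape", {}),              # inherits both area and print
      }

      def method_lookup(cls, selector):
          """Naive method-lookup: walk the superclass chain until a match is found."""
          while cls is not None:
              superclass, methods = HIERARCHY[cls]
              if selector in methods:
                  return methods[selector]
              cls = superclass
          raise AttributeError(f"selector {selector!r} not understood")

      # A dispatch table trades space for O(1) lookups by precomputing every pair.
      DISPATCH = {(c, s): method_lookup(c, s)
                  for c in HIERARCHY
                  for s in ("print", "area")}

      print(method_lookup("Label", "area")("label#1"))   # inherited from Shape -> 0.0
      print(DISPATCH[("Circle", "area")]("circle#7"))    # -> 12.56636

    The space-versus-time trade-off discussed in the abstract is visible even in this toy: the chain walk needs no extra storage but costs a traversal per call, while the full (class, selector) table answers in one lookup at the price of a quadratic-size table.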

  2. The Dictionary Look-up Behavior of Hong Kong Students: A Large-Scale Survey.

    ERIC Educational Resources Information Center

    Fan, May Y.

    2000-01-01

    Investigates the look-up behavior of bilingualized dictionaries of Hong Kong (China) students focusing on dictionary information frequency usage and the perceptions of the usefulness of such information. Indicates that students in general make limited use of the bilingualized dictionary, while more proficient students use the dictionary more…

  3. A hybrid table look-up method for H.264/AVC coeff_token decoding

    NASA Astrophysics Data System (ADS)

    Liu, Suhua; Zhang, Yixiong; Lu, Min; Tang, Biyu

    2011-10-01

    In this paper, a hybrid table look-up method for H.264 coeff_token decoding is presented. In the proposed method, the probabilities of codewords of various lengths are analyzed, and based on these statistics a hybrid look-up table is constructed. In the coeff_token decoding process, a few bits are first read from the bit-stream; if a matching codeword is found in the first look-up table, the remaining look-up steps are skipped. Otherwise, more bits are read and looked up in the second table, which is indexed by the number of leading 0's before the first 1. Experimental results on the RTSM Emulation Baseboard ARM926 of RealView show that the proposed method speeds up CAVLD of H.264 by about 8% with more efficient memory utilization, compared to the prefix-based decoding method. Compared with the pattern-search method based on hashing algorithms adopted in the newest version of FFMPEG, the proposed method reduces memory space by about 77%.
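
    A hedged sketch of the general two-stage idea (short, frequent codewords resolved by a small first table; longer codewords resolved by a second table keyed on the leading-zero count). The tiny prefix code below is invented for illustration and is not the H.264 coeff_token table:

      # Made-up prefix code: "1"->A, "01"->B, "001"->C, "0001"->D, "00001"->E.
      FIRST = {"1": ("A", 1), "01": ("B", 2)}           # keyed by the first 1-2 bits
      SECOND = {2: ("C", 3), 3: ("D", 4), 4: ("E", 5)}  # keyed by leading-zero count

      def decode(bits):
          """Decode a well-formed bit string with a two-stage (hybrid) table look-up."""
          out, i = [], 0
          while i < len(bits):
              # Stage 1: try the small table on the next one or two bits.
              if bits[i] == "1":
                  sym, n = FIRST["1"]
              elif bits[i:i + 2] == "01":
                  sym, n = FIRST["01"]
              else:
                  # Stage 2: count leading zeros and use the second table.
                  zeros = len(bits[i:]) - len(bits[i:].lstrip("0"))
                  sym, n = SECOND[zeros]
              out.append(sym)
              i += n
          return out

      print(decode("1" "01" "0001" "001"))  # -> ['A', 'B', 'D', 'C']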

  4. Improving the dictionary lookup approach for disease normalization using enhanced dictionary and query expansion

    PubMed Central

    Jonnagaddala, Jitendra; Jue, Toni Rose; Chang, Nai-Wen; Dai, Hong-Jie

    2016-01-01

    The rapidly increasing biomedical literature calls for an automatic approach to the recognition and normalization of disease mentions, in order to increase the precision and effectiveness of disease-based information retrieval. A variety of methods have been proposed to deal with the problem of disease named entity recognition and normalization. Among the proposed methods, conditional random fields (CRFs) and dictionary lookup are widely used for named entity recognition and normalization, respectively. We herein developed a CRF-based model to allow automated recognition of disease mentions, and studied the effect of various techniques on improving the normalization results based on the dictionary lookup approach. The dataset from the BioCreative V CDR track was used to report the performance of the developed normalization methods and to compare with other existing dictionary lookup based normalization methods. The best configuration achieved an F-measure of 0.77 for disease normalization, which outperformed the best dictionary lookup based baseline method studied in this work by an F-measure of 0.13. Database URL: https://github.com/TCRNBioinformatics/DiseaseExtract PMID:27504009
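
    A minimal sketch of the dictionary lookup step with a simple form of query expansion (lower-casing plus synonym substitution); the dictionary entries, identifiers and synonym list are invented for illustration and do not reflect the enhanced dictionary used in the paper:

      # Hypothetical disease dictionary: surface form -> concept identifier.
      DICTIONARY = {
          "hypertension": "D006973",
          "high blood pressure": "D006973",
          "type 2 diabetes mellitus": "D003924",
      }
      # Hypothetical expansion rules applied to unmatched mentions.
      SYNONYMS = {"t2dm": "type 2 diabetes mellitus"}

      def normalize(mention):
          """Map a recognized disease mention to a concept ID via dictionary lookup."""
          query = mention.lower().strip()
          if query in DICTIONARY:                 # exact lookup
              return DICTIONARY[query]
          expanded = SYNONYMS.get(query, query)   # query expansion, then retry
          return DICTIONARY.get(expanded)         # None if still unmatched

      print(normalize("High Blood Pressure"))  # -> 'D006973'
      print(normalize("T2DM"))                 # -> 'D003924'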

  5. Improving the dictionary lookup approach for disease normalization using enhanced dictionary and query expansion.

    PubMed

    Jonnagaddala, Jitendra; Jue, Toni Rose; Chang, Nai-Wen; Dai, Hong-Jie

    2016-01-01

    The rapidly increasing biomedical literature calls for an automatic approach to the recognition and normalization of disease mentions, in order to increase the precision and effectiveness of disease-based information retrieval. A variety of methods have been proposed to deal with the problem of disease named entity recognition and normalization. Among the proposed methods, conditional random fields (CRFs) and dictionary lookup are widely used for named entity recognition and normalization, respectively. We herein developed a CRF-based model to allow automated recognition of disease mentions, and studied the effect of various techniques on improving the normalization results based on the dictionary lookup approach. The dataset from the BioCreative V CDR track was used to report the performance of the developed normalization methods and to compare with other existing dictionary lookup based normalization methods. The best configuration achieved an F-measure of 0.77 for disease normalization, which outperformed the best dictionary lookup based baseline method studied in this work by an F-measure of 0.13. Database URL: https://github.com/TCRNBioinformatics/DiseaseExtract. PMID:27504009

  6. Rehabilitation robotics ontology on the cloud.

    PubMed

    Dogmus, Zeynep; Papantoniou, Agis; Kilinc, Muhammed; Yildirim, Sibel A; Erdem, Esra; Patoglu, Volkan

    2013-06-01

    We introduce the first formal rehabilitation robotics ontology, called RehabRobo-Onto, to represent information about rehabilitation robots and their properties, and a software system, RehabRobo-Query, to facilitate access to this ontology. RehabRobo-Query is made available on the cloud, utilizing Amazon Web Services, so that 1) rehabilitation robot designers around the world can add/modify information about their robots in RehabRobo-Onto, and 2) rehabilitation robot designers and physical medicine experts around the world can access the knowledge in RehabRobo-Onto by means of questions about robots, in natural language, with the guide of the intelligent user interface of RehabRobo-Query. The ontology system consisting of RehabRobo-Onto and RehabRobo-Query is of great value to robot designers as well as physical therapists and medical doctors. On the one hand, robot designers can access various properties of the existing robots and the related publications to further improve the state of the art. On the other hand, physical therapists and medical doctors can utilize the ontology to compare rehabilitation robots and to identify the ones that best cover their needs, or to evaluate the effects of various devices for targeted joint exercises on patients with specific disorders. PMID:24187234

  7. Ontology Mappings to Improve Learning Resource Search

    ERIC Educational Resources Information Center

    Gasevic, Dragan; Hatala, Marek

    2006-01-01

    This paper proposes an ontology mapping-based framework that allows searching for learning resources using multiple ontologies. The present applications of ontologies in e-learning use various ontologies (eg, domain, curriculum, context), but they do not give a solution on how to interoperate e-learning systems based on different ontologies. The…

  8. An Ontology for Software Engineering Education

    ERIC Educational Resources Information Center

    Ling, Thong Chee; Jusoh, Yusmadi Yah; Adbullah, Rusli; Alwi, Nor Hayati

    2013-01-01

    Software agents communicate using ontology. It is important to build an ontology for specific domain such as Software Engineering Education. Building an ontology from scratch is not only hard, but also incur much time and cost. This study aims to propose an ontology through adaptation of the existing ontology which is originally built based on a…

  9. A full-spectrum k-distribution look-up table for radiative transfer in nonhomogeneous gaseous media

    NASA Astrophysics Data System (ADS)

    Wang, Chaojun; Ge, Wenjun; Modest, Michael F.; He, Boshu

    2016-01-01

    A full-spectrum k-distribution (FSK) look-up table has been constructed for gas mixtures within a certain range of thermodynamic states for three species, i.e., CO2, H2O and CO. The k-distribution of a mixture is assembled directly from the summation of the linear absorption coefficients of the three species. The systematic approach to generate the table, including the generation of the pressure-based absorption coefficient and the generation of the k-distribution, is discussed. To efficiently obtain accurate k-values for arbitrary thermodynamic states from tabulated values, a 6-D linear interpolation method is employed. A large number of radiative heat transfer calculations have been carried out to test the accuracy of the FSK look-up table. Results show that using the FSK look-up table provides excellent accuracy compared to the exact results. Without the time-consuming process of assembling k-distributions from individual species plus mixing, using the FSK look-up table saves considerable computational cost. To evaluate the accuracy as well as the efficiency of the FSK look-up table, radiative heat transfer in a scaled Sandia D flame is calculated to compare the CPU execution time of the FSK method based on the narrow-band database, on correlations, and on the look-up table. Results show that the FSK look-up table provides a computationally cheap alternative without much sacrifice in accuracy.
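
    The multi-dimensional linear interpolation of tabulated values can be illustrated with a small sketch; the 3-D grid and the synthetic "k-values" below are stand-ins for the real 6-D table dimensions (temperature, pressure, mole fractions, and so on), not data from the paper:

      import numpy as np
      from scipy.interpolate import RegularGridInterpolator

      # Synthetic axes standing in for table dimensions (e.g. T, P, mole fraction).
      T = np.linspace(300.0, 2500.0, 12)       # gas temperature [K]
      P = np.linspace(0.1, 30.0, 8)            # pressure [bar]
      x_co2 = np.linspace(0.0, 0.3, 6)         # CO2 mole fraction

      # Synthetic "k-values" on the grid; a real table would be read from disk.
      grid_k = np.exp(-T[:, None, None] / 1000.0) * (1.0 + P[None, :, None]) \
               * (0.1 + x_co2[None, None, :])

      lookup = RegularGridInterpolator((T, P, x_co2), grid_k, method="linear")

      # Interpolate a value at an arbitrary thermodynamic state.
      state = np.array([[1234.0, 4.2, 0.12]])
      print(lookup(state))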

  10. Spectral Retrieval of Latent Heating Profiles from TRMM PR Data: Comparison of Look-Up Tables

    NASA Technical Reports Server (NTRS)

    Shige, Shoichi; Takayabu, Yukari N.; Tao, Wei-Kuo; Johnson, Daniel E.; Shie, Chung-Lin

    2003-01-01

    The primary goal of the Tropical Rainfall Measuring Mission (TRMM) is to use the information about distributions of precipitation to determine the four-dimensional (i.e., temporal and spatial) patterns of latent heating over the whole tropical region. The Spectral Latent Heating (SLH) algorithm has been developed to estimate latent heating profiles for the TRMM Precipitation Radar (PR) with a cloud-resolving model (CRM). The method uses CRM-generated heating profile look-up tables for three rain types: convective, shallow stratiform, and anvil rain (deep stratiform with a melting level). For convective and shallow stratiform regions, the look-up table refers to the precipitation top height (PTH). For the anvil region, on the other hand, the look-up table refers to the precipitation rate at the melting level instead of PTH. For global applications, it is necessary to examine the universality of the look-up table. In this paper, we compare the look-up tables produced from numerical simulations of cloud ensembles forced with the Tropical Ocean Global Atmosphere (TOGA) Coupled Ocean-Atmosphere Response Experiment (COARE) data and the GARP Atlantic Tropical Experiment (GATE) data. There are some notable differences between the TOGA-COARE table and the GATE table, especially for the convective heating. First, there is a larger number of the deepest convective profiles in the TOGA-COARE table than in the GATE table, mainly due to the differences in SST. Second, shallow convective heating is stronger in the TOGA-COARE table than in the GATE table. This might be attributable to the difference in the strength of the low-level inversions. Third, altitudes of convective heating maxima are higher in the TOGA-COARE table than in the GATE table. Levels of convective heating maxima are located just below the melting level, because warm-rain processes are prevalent in tropical oceanic convective systems. Differences in levels of convective heating maxima probably reflect

  11. The ontology of biological taxa

    PubMed Central

    Schulz, Stefan; Stenzhorn, Holger; Boeker, Martin

    2008-01-01

    Motivation: The classification of biological entities in terms of species and taxa is an important endeavor in biology. Although a large number of the statements encoded in current biomedical ontologies are taxon-dependent, there is no obvious or standard way of introducing taxon information into an integrative ontology architecture, supposedly because of ongoing controversies about the ontological nature of species and taxa. Results: In this article, we discuss different approaches to representing biological taxa using existing standards for biomedical ontologies such as the description logic OWL DL and the Open Biomedical Ontologies Relation Ontology. We demonstrate how hidden ambiguities of the species concept can be dealt with and existing controversies can be overcome. A novel approach is to envisage taxon information as qualities that inhere in biological organisms, organism parts and populations. Availability: The presented methodology has been implemented in the domain top-level ontology BioTop, openly accessible at http://purl.org/biotop. BioTop may help to improve the logical and ontological rigor of biomedical ontologies and further provides a clear architectural principle for dealing with biological taxon information. Contact: stschulz@uni-freiburg.de PMID:18586729

  12. A Distributed Look-up Architecture for Text Mining Applications using MapReduce

    PubMed Central

    Balkir, Atilla Soner; Foster, Ian; Rzhetsky, Andrey

    2011-01-01

    Text mining applications typically involve statistical models that require accessing and updating model parameters in an iterative fashion. With the growing size of the data, such models become extremely parameter rich, and naive parallel implementations fail to address the scalability problem of maintaining a distributed look-up table that maps model parameters to their values. We evaluate several existing alternatives to provide coordination among worker nodes in Hadoop [11] clusters, and suggest a new multi-layered look-up architecture that is specifically optimized for certain problem domains. Our solution exploits the power-law distribution characteristics of the phrase or n-gram counts in large corpora while utilizing a Bloom Filter [2], in-memory cache, and an HBase [12] cluster at varying levels of abstraction. PMID:25356441
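
    A hedged sketch of the layered idea (a Bloom filter to cheaply reject absent keys, an in-memory cache for hot keys, and a slower backing store standing in for HBase); the hash scheme, sizes and sample data are illustrative only and do not reproduce the paper's architecture:

      import hashlib

      class TieredLookup:
          """Toy multi-layered look-up: Bloom filter -> in-memory cache -> slow store."""

          def __init__(self, backing_store, m_bits=1 << 16, hashes=3):
              self.store = backing_store          # stands in for an HBase table
              self.cache = {}                     # in-memory cache of hot keys
              self.m = m_bits
              self.k = hashes
              self.bits = bytearray(m_bits // 8)  # Bloom filter bit array
              for key in backing_store:
                  for h in self._hashes(key):
                      self.bits[h // 8] |= 1 << (h % 8)

          def _hashes(self, key):
              for i in range(self.k):
                  d = hashlib.sha1(f"{i}:{key}".encode()).digest()
                  yield int.from_bytes(d[:4], "big") % self.m

          def get(self, key):
              # Layer 1: the Bloom filter answers "definitely absent" for most misses.
              if not all(self.bits[h // 8] & (1 << (h % 8)) for h in self._hashes(key)):
                  return None
              # Layer 2: the in-memory cache.
              if key in self.cache:
                  return self.cache[key]
              # Layer 3: the (slow) backing store; populate the cache on a hit.
              value = self.store.get(key)
              if value is not None:
                  self.cache[key] = value
              return value

      counts = TieredLookup({"gene expression": 42, "look-up table": 7})
      print(counts.get("look-up table"), counts.get("missing phrase"))  # 7 None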

  13. A Distributed Look-up Architecture for Text Mining Applications using MapReduce.

    PubMed

    Balkir, Atilla Soner; Foster, Ian; Rzhetsky, Andrey

    2011-11-01

    Text mining applications typically involve statistical models that require accessing and updating model parameters in an iterative fashion. With the growing size of the data, such models become extremely parameter rich, and naive parallel implementations fail to address the scalability problem of maintaining a distributed look-up table that maps model parameters to their values. We evaluate several existing alternatives to provide coordination among worker nodes in Hadoop [11] clusters, and suggest a new multi-layered look-up architecture that is specifically optimized for certain problem domains. Our solution exploits the power-law distribution characteristics of the phrase or n-gram counts in large corpora while utilizing a Bloom Filter [2], in-memory cache, and an HBase [12] cluster at varying levels of abstraction. PMID:25356441

  14. Decomposition, lookup, and recombination: MEG evidence for the full decomposition model of complex visual word recognition.

    PubMed

    Fruchter, Joseph; Marantz, Alec

    2015-04-01

    There is much evidence that visual recognition of morphologically complex words (e.g., teacher) proceeds via a decompositional route, first involving recognition of their component morphemes (teach + -er). According to the Full Decomposition model, after the visual decomposition stage, followed by morpheme lookup, there is a final "recombination" stage, in which the decomposed morphemes are combined and the well-formedness of the complex form is evaluated. Here, we use MEG to provide evidence for the temporally-differentiated stages of this model. First, we demonstrate an early effect of derivational family entropy, corresponding to the stem lookup stage; this is followed by a surface frequency effect, corresponding to the later recombination stage. We also demonstrate a late effect of a novel statistical measure, semantic coherence, which quantifies the gradient semantic well-formedness of complex words. Our findings illustrate the usefulness of corpus measures in investigating the component processes within visual word recognition. PMID:25797098

  15. A microprocessor-based table lookup approach for magnetic bearing linearization

    NASA Technical Reports Server (NTRS)

    Groom, N. J.; Miller, J. B.

    1981-01-01

    An approach for producing a linear transfer characteristic between force command and force output of a magnetic bearing actuator without flux biasing is presented. The approach is microprocessor based and uses a table lookup to generate drive signals for the magnetic bearing power driver. An experimental test setup used to demonstrate the feasibility of the approach is described, and test results are presented. The test setup contains bearing elements similar to those used in a laboratory model annular momentum control device.
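
    A minimal sketch of the table-lookup idea, assuming (purely for illustration) that actuator force scales with the square of the drive signal, so that the precomputed table inverts the nonlinearity; the constants and the square-law model are assumptions, not the implementation described in the report:

      import numpy as np

      # Assumed actuator model for illustration: force = C * drive**2.
      C = 2.5
      drive_levels = np.linspace(0.0, 10.0, 256)     # possible drive signal values
      force_at_level = C * drive_levels**2           # forward (nonlinear) model

      def drive_for_force(force_command):
          """Table look-up (with linear interpolation) inverting the nonlinearity."""
          return np.interp(force_command, force_at_level, drive_levels)

      # A linearly increasing force command now yields the required drive signal.
      for f in (0.0, 62.5, 250.0):
          d = drive_for_force(f)
          print(f"force {f:6.1f} -> drive {d:5.2f} (check: {C * d**2:6.1f})")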

  16. A fast chaotic cryptographic scheme with dynamic look-up table

    NASA Astrophysics Data System (ADS)

    Wong, K. W.

    2002-06-01

    We propose a fast chaotic cryptographic scheme based on iterating a logistic map. In particular, no random numbers need to be generated and the look-up table used in the cryptographic process is updated dynamically. Simulation results show that the proposed method leads to a substantial reduction in the encryption and decryption time. As a result, chaotic cryptography becomes more practical in the secure transmission of large multi-media files over public data communication network.
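
    The flavor of a dynamically updated look-up table driven by a logistic map can be conveyed with a toy keystream cipher; this is an illustrative sketch only, not the authors' scheme, and it is not cryptographically secure:

      # Toy illustration of a dynamically updated look-up table driven by a
      # logistic map. NOT the paper's algorithm and NOT secure; demo only.
      def logistic_stream(x, r=3.99):
          while True:
              x = r * x * (1.0 - x)
              yield x

      def crypt(data: bytes, key_x0: float) -> bytes:
          """XOR-based toy cipher; encryption and decryption are the same call."""
          chaos = logistic_stream(key_x0)
          table = [int(next(chaos) * 256) & 0xFF for _ in range(256)]  # initial LUT
          out = bytearray()
          for i, b in enumerate(data):
              j = i % 256
              out.append(b ^ table[j])                   # use one table entry
              table[j] = int(next(chaos) * 256) & 0xFF   # then refresh it dynamically
          return bytes(out)

      cipher = crypt(b"chaotic look-up table demo", 0.3141592)
      print(crypt(cipher, 0.3141592))  # -> b'chaotic look-up table demo'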

  17. Generation of Look-Up Tables for Dynamic Job Shop Scheduling Decision Support Tool

    NASA Astrophysics Data System (ADS)

    Oktaviandri, Muchamad; Hassan, Adnan; Mohd Shaharoun, Awaluddin

    2016-02-01

    The majority of existing scheduling techniques are based on static demand and deterministic processing times, while most job shop scheduling problems involve dynamic demand and stochastic processing times. As a consequence, the solutions obtained from traditional scheduling techniques become ineffective whenever changes occur in the system. Therefore, this research intends to develop a decision support tool (DST) based on promising artificial intelligence techniques that is able to accommodate the dynamics that regularly occur in job shop scheduling problems. The DST was designed through three phases, i.e. (i) look-up table generation, (ii) inverse model development and (iii) integration of the DST components. This paper reports the generation of look-up tables for various scenarios as part of the development of the DST. A discrete event simulation model was used to compare the performance of the SPT, EDD, FCFS, S/OPN and Slack rules; the best performance measures (mean flow time, mean tardiness and mean lateness) and the job order requirements (inter-arrival time, due date tightness and setup time ratio) were compiled into look-up tables. The well-known 6/6/J/Cmax problem from Muth and Thompson (1963) was used as a case study. In the future, the performance measures of various scheduling scenarios and the job order requirements will be mapped using an ANN inverse model.

  18. GeoSciGraph: An Ontological Framework for EarthCube Semantic Infrastructure

    NASA Astrophysics Data System (ADS)

    Gupta, A.; Schachne, A.; Condit, C.; Valentine, D.; Richard, S.; Zaslavsky, I.

    2015-12-01

    The CINERGI (Community Inventory of EarthCube Resources for Geosciences Interoperability) project compiles an inventory of a wide variety of earth science resources including documents, catalogs, vocabularies, data models, data services, process models, information repositories, domain-specific ontologies etc. developed by research groups and data practitioners. We have developed a multidisciplinary semantic framework called GeoSciGraph for semantic integration of earth science resources. An integrated ontology is constructed with Basic Formal Ontology (BFO) as its upper ontology; it currently ingests multiple component ontologies including the SWEET ontology, GeoSciML's lithology ontology, the Tematres controlled vocabulary server, GeoNames, GCMD vocabularies on equipment, platforms and institutions, a software ontology, the CUAHSI hydrology vocabulary, the environmental ontology (ENVO) and several more. These ontologies are connected through bridging axioms; GeoSciGraph identifies lexically close terms and creates equivalence class or subclass relationships between them after human verification. GeoSciGraph allows a community to create community-specific customizations of the integrated ontology. GeoSciGraph uses Neo4J, a graph database that can hold several billion concepts and relationships. GeoSciGraph provides a number of REST services that can be called by other software modules like the CINERGI information augmentation pipeline. 1) Vocabulary services are used to find exact and approximate terms, term categories (community-provided clusters of terms, e.g., measurement-related terms or environmental-material related terms), synonyms, term definitions and annotations. 2) Lexical services are used for text parsing to find entities, which can then be included into the ontology by a domain expert. 3) Graph services provide the ability to perform traversal-centric operations, e.g., finding paths and neighborhoods, which can be used to perform ontological operations like

  19. Ontological turns, turnoffs and roundabouts.

    PubMed

    Sismondo, Sergio

    2015-06-01

    There has been much talk of an 'ontological turn' in Science and Technology Studies. This commentary explores some recent work on multiple and historical ontologies, especially articles published in this journal, against a background of constructivism. It can be tempting to read an ontological turn as based on and promoting a version of perspectivism, but that is inadequate to the scholarly work and opens multiple ontologies to serious criticisms. Instead, we should read our ontological turn or turns as being about multiplicities of practices and the ways in which these practices shape the material world. Ontologies arise out of practices through which people engage with things; the practices are fundamental and the ontologies derivative. The purchase of this move comes from the elucidating power of the verbs that scholars use to analyze relations of practices and objects, which turn out to be specific cases of constructivist verbs. The difference between this ontological turn and constructivist work in Science and Technology Studies appears to be a matter of emphases found useful for different purposes. PMID:26477200

  20. Ontology through a Mindfulness Process

    ERIC Educational Resources Information Center

    Bearance, Deborah; Holmes, Kimberley

    2015-01-01

    Traditionally, when ontology is taught in a graduate studies course on social research, there is a tendency for this concept to be examined through the process of lectures and readings. Such an approach often leaves graduate students to grapple with a personal embodiment of this concept and to comprehend how ontology can ground their research.…

  1. 40 CFR Table Nn-2 to Subpart Hh of... - Lookup Default Values for Calculation Methodology 2 of This Subpart

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) MANDATORY GREENHOUSE GAS REPORTING Municipal Solid Waste Landfills Pt. 98, Subpt. NN, Table NN-2 Table NN-2 to Subpart HH of Part 98—Lookup Default...

  2. Ontology-based geospatial data query and integration

    USGS Publications Warehouse

    Zhao, T.; Zhang, C.; Wei, M.; Peng, Z.-R.

    2008-01-01

    Geospatial data sharing is an increasingly important subject as large amounts of data are produced by a variety of sources, stored in incompatible formats, and accessible through different GIS applications. Past efforts to enable sharing have produced standardized data formats such as GML and data access protocols such as the Web Feature Service (WFS). While these standards help client applications gain access to heterogeneous data stored in different formats from diverse sources, the usability of the access is limited due to the lack of data semantics encoded in the WFS feature types. Past research has used ontology languages to describe the semantics of geospatial data, but ontology-based queries cannot be applied directly to legacy data stored in databases or shapefiles, or to feature data in WFS services. This paper presents a method to enable ontology queries on spatial data available from WFS services and on data stored in databases. We do not create ontology instances explicitly and thus avoid the problems of data replication. Instead, user queries are rewritten into WFS getFeature requests and SQL queries to the database. The method also has the benefit of being able to utilize existing tools for databases, WFS, and GML while enabling queries based on ontology semantics. © 2008 Springer-Verlag Berlin Heidelberg.
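
    A minimal sketch of the rewriting idea, assuming a hand-made mapping from ontology concepts and properties to WFS feature types and attributes; the endpoint URL, concept names and mapping are hypothetical, and the cql_filter parameter is a GeoServer-style vendor shortcut standing in for a full OGC filter:

      from urllib.parse import urlencode

      # Hypothetical mapping from ontology concepts/properties to a WFS schema.
      CONCEPT_TO_FEATURETYPE = {"hydro:River": "topp:rivers"}
      PROPERTY_TO_ATTRIBUTE = {"hydro:hasName": "NAME"}
      WFS_ENDPOINT = "http://example.org/geoserver/wfs"   # made-up endpoint

      def rewrite_to_getfeature(concept, prop=None, value=None):
          """Rewrite a simple ontology query into a WFS GetFeature request URL."""
          params = {
              "service": "WFS",
              "version": "1.1.0",
              "request": "GetFeature",
              "typeName": CONCEPT_TO_FEATURETYPE[concept],
          }
          if prop is not None:
              # Filter on the relational attribute mapped from the ontology property.
              params["cql_filter"] = f"{PROPERTY_TO_ATTRIBUTE[prop]}='{value}'"
          return WFS_ENDPOINT + "?" + urlencode(params)

      print(rewrite_to_getfeature("hydro:River", "hydro:hasName", "Danube"))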

  3. Solar-Terrestrial Ontology Development

    NASA Astrophysics Data System (ADS)

    McGuinness, D.; Fox, P.; Middleton, D.; Garcia, J.; Cinquni, L.; West, P.; Darnell, J. A.; Benedict, J.

    2005-12-01

    The development of an interdisciplinary virtual observatory (the Virtual Solar-Terrestrial Observatory; VSTO) as a scalable environment for searching, integrating, and analyzing databases distributed over the Internet requires a higher level of semantic interoperability than here-to-fore required by most (if not all) distributed data systems or discipline specific virtual observatories. The formalization of semantics using ontologies and their encodings for the internet (e.g. OWL - the Web Ontology Language), as well as the use of accompanying tools, such as reasoning, inference and explanation, open up both a substantial leap in options for interoperability and in the need for formal development principles to guide ontology development and use within modern, multi-tiered network data environments. In this presentation, we outline the formal methodologies we utilize in the VSTO project, the currently developed use-cases, ontologies and their relation to existing ontologies (such as SWEET).

  4. Keyword Ontology Development for Discovering Hydrologic Data

    NASA Astrophysics Data System (ADS)

    Piasecki, Michael; Hooper, Rick; Choi, Yoori

    2010-05-01

    Service (USGS) National Water Information System (NWIS) and the Environmental Protection Agency's STORET data system. In order to avoid overwhelming returns when searching for more general concepts, the ontology's upper layers (called navigation layers) cannot be used to search for data, which in turn prompts the need to identify general groupings of data such as Biological, Chemical, or Physical data groups, which then must be further subdivided in a cascading fashion all the way down to the leaf levels. This classification is not straightforward, however, and poses much potential for discussion. Finally, it is important to decide on the dimensionality of the ontology, i.e. whether a keyword contains only the property measured (e.g., "temperature") or the medium and the property ("air temperature").

  5. Estimating attenuation and propagation of noise bands from a distant source using the lookup program and data base

    NASA Astrophysics Data System (ADS)

    White, Michael J.

    1994-10-01

    Unavoidable noise generated by military activities can disturb the surrounding community and become a source of complaint. Military planners must quickly and accurately predict noise levels at distant points from various sound sources to manage noisy operations on a daily basis. This study developed the Lookup computer program and data base to provide rapid estimates of outdoor noise levels from a variety of sound sources. Lookup accesses a data base of archived results (requiring about 5 MB disk space) from typical situations rather than performing fresh calculations for each consultation. Initial timing tests show that Lookup can predict the sound levels from a noise source at distances up to 20 km in 1 second on a DOS-compatible personal computer (PC). This report includes the Lookup program source code, and describes the required input for the program, the contents of the archival data base, and the program output. Lookup was written to compile with MS-Fortran, and will run under DOS on any IBM compatible with 640k random access memory. Lookup also conforms to ANSI 1978 standard Fortran and will run under the Unix operating system.

  6. Gene Ontology Consortium: going forward.

    PubMed

    2015-01-01

    The Gene Ontology (GO; http://www.geneontology.org) is a community-based bioinformatics resource that supplies information about gene product function using ontologies to represent biological knowledge. Here we describe improvements and expansions to several branches of the ontology, as well as updates that have allowed us to more efficiently disseminate the GO and capture feedback from the research community. The Gene Ontology Consortium (GOC) has expanded areas of the ontology such as cilia-related terms, cell-cycle terms and multicellular organism processes. We have also implemented new tools for generating ontology terms based on a set of logical rules making use of templates, and we have made efforts to increase our use of logical definitions. The GOC has a new and improved web site summarizing new developments and documentation, serving as a portal to GO data. Users can perform GO enrichment analysis, and search the GO for terms, annotations to gene products, and associated metadata across multiple species using the all-new AmiGO 2 browser. We encourage and welcome the input of the research community in all biological areas in our continued effort to improve the Gene Ontology. PMID:25428369

  7. Gene Ontology Consortium: going forward

    PubMed Central

    2015-01-01

    The Gene Ontology (GO; http://www.geneontology.org) is a community-based bioinformatics resource that supplies information about gene product function using ontologies to represent biological knowledge. Here we describe improvements and expansions to several branches of the ontology, as well as updates that have allowed us to more efficiently disseminate the GO and capture feedback from the research community. The Gene Ontology Consortium (GOC) has expanded areas of the ontology such as cilia-related terms, cell-cycle terms and multicellular organism processes. We have also implemented new tools for generating ontology terms based on a set of logical rules making use of templates, and we have made efforts to increase our use of logical definitions. The GOC has a new and improved web site summarizing new developments and documentation, serving as a portal to GO data. Users can perform GO enrichment analysis, and search the GO for terms, annotations to gene products, and associated metadata across multiple species using the all-new AmiGO 2 browser. We encourage and welcome the input of the research community in all biological areas in our continued effort to improve the Gene Ontology. PMID:25428369

  8. An ontology of scientific experiments.

    PubMed

    Soldatova, Larisa N; King, Ross D

    2006-12-22

    The formal description of experiments for efficient analysis, annotation and sharing of results is a fundamental part of the practice of science. Ontologies are required to achieve this objective. A few subject-specific ontologies of experiments currently exist. However, despite the unity of scientific experimentation, no general ontology of experiments exists. We propose the ontology EXPO to meet this need. EXPO links the SUMO (the Suggested Upper Merged Ontology) with subject-specific ontologies of experiments by formalizing the generic concepts of experimental design, methodology and results representation. EXPO is expressed in the W3C standard ontology language OWL-DL. We demonstrate the utility of EXPO and its ability to describe different experimental domains, by applying it to two experiments: one in high-energy physics and the other in phylogenetics. The use of EXPO made the goals and structure of these experiments more explicit, revealed ambiguities, and highlighted an unexpected similarity. We conclude that, EXPO is of general value in describing experiments and a step towards the formalization of science. PMID:17015305

  9. Ontology Research and Development. Part 2 - A Review of Ontology Mapping and Evolving.

    ERIC Educational Resources Information Center

    Ding, Ying; Foo, Schubert

    2002-01-01

    Reviews ontology research and development, specifically ontology mapping and evolving. Highlights include an overview of ontology mapping projects; maintaining existing ontologies and extending them as appropriate when new information or knowledge is acquired; and ontology's role and the future of the World Wide Web, or Semantic Web. (Contains 55…

  10. Ontology-Oriented Programming for Biomedical Informatics.

    PubMed

    Lamy, Jean-Baptiste

    2016-01-01

    Ontologies are now widely used in the biomedical domain. However, it is difficult to manipulate ontologies in a computer program and, consequently, it is not easy to integrate ontologies with databases or websites. Two main approaches have been proposed for accessing ontologies in a computer program: traditional APIs (Application Programming Interfaces) and ontology-oriented programming, either static or dynamic. In this paper, we will review these approaches and discuss their appropriateness for biomedical ontologies. We will also present experience feedback on the integration of an ontology into computer software during the VIIIP research project. Finally, we will present OwlReady, the solution we developed. PMID:27071878
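
    As an illustration of the dynamic ontology-oriented style, a minimal sketch using the Owlready2 Python package (a later release in the OwlReady line of work); the ontology IRI, classes and property are invented for the example:

      # A minimal dynamic ontology-oriented programming sketch with Owlready2.
      # The IRI and classes below are invented for illustration.
      from owlready2 import get_ontology, Thing, DataProperty

      onto = get_ontology("http://example.org/hospital.owl")

      with onto:
          class Drug(Thing):                 # ontology classes become Python classes
              pass

          class has_dose_mg(DataProperty):   # data property usable as an attribute
              domain = [Drug]
              range = [float]

      aspirin = Drug("aspirin")              # creating an individual
      aspirin.has_dose_mg = [500.0]

      print(onto.search(type=Drug))          # query individuals through the ontology
      onto.save(file="hospital.owl")         # serialize to an OWL/RDF file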

  11. Approach for ontological modeling of database schema for the generation of semantic knowledge on the web

    NASA Astrophysics Data System (ADS)

    Rozeva, Anna

    2015-11-01

    Currently there is a large quantity of content on web pages that is generated from relational databases. Conceptual domain models provide for the integration of heterogeneous content on the semantic level. The use of an ontology as the conceptual model of a relational data source makes the data available to web agents and services and provides for the employment of ontological techniques for data access, navigation and reasoning. The achievement of interoperability between relational databases and ontologies enriches the web with semantic knowledge. The establishment of a semantic database conceptual model based on ontology facilitates the development of data integration systems that use the ontology as a unified global view. An approach for the generation of an ontologically based conceptual model is presented. The ontology representing the database schema is obtained by matching schema elements to ontology concepts. An algorithm for the matching process is designed. An infrastructure for the inclusion of mediation between database and ontology, for bridging legacy data with formal semantic meaning, is presented. An implementation of the knowledge modeling approach on a sample database is performed.
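
    A hedged sketch of the basic schema-to-ontology step (tables become OWL classes, columns become datatype properties), using rdflib; the sample schema, namespace and type mapping are invented for the example and do not reproduce the matching algorithm of the paper:

      from rdflib import Graph, Namespace, RDF, RDFS, OWL, XSD

      # Hypothetical relational schema: table name -> {column name: SQL type}.
      SCHEMA = {"Parcel": {"parcel_id": "INTEGER", "owner_name": "VARCHAR"}}
      SQL_TO_XSD = {"INTEGER": XSD.integer, "VARCHAR": XSD.string}

      EX = Namespace("http://example.org/landonto#")
      g = Graph()
      g.bind("ex", EX)

      for table, columns in SCHEMA.items():
          cls = EX[table]
          g.add((cls, RDF.type, OWL.Class))                  # table -> OWL class
          for column, sql_type in columns.items():
              prop = EX[f"{table}_{column}"]
              g.add((prop, RDF.type, OWL.DatatypeProperty))  # column -> datatype property
              g.add((prop, RDFS.domain, cls))
              g.add((prop, RDFS.range, SQL_TO_XSD[sql_type]))

      print(g.serialize(format="turtle"))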

  12. An Ontology Infrastructure for an E-Learning Scenario

    ERIC Educational Resources Information Center

    Guo, Wen-Ying; Chen, De-Ren

    2007-01-01

    Selecting appropriate learning services for a learner from a large number of heterogeneous knowledge sources is a complex and challenging task. This article illustrates and discusses how Semantic Web technologies such as RDF [resource description framework] and ontology can be applied to e-learning systems to help the learner in selecting an…

  13. The SWAN Scientific Discourse Ontology

    PubMed Central

    Ciccarese, Paolo; Wu, Elizabeth; Kinoshita, June; Wong, Gwendolyn T.; Ocana, Marco; Ruttenberg, Alan

    2015-01-01

    SWAN (Semantic Web Application in Neuromedicine) is a project to construct a semantically-organized, community-curated, distributed knowledge base of Theory, Evidence, and Discussion in biomedicine. Unlike Wikipedia and similar approaches, SWAN’s ontology is designed to represent and foreground both harmonizing and contradictory assertions within the total community discourse. Releases of the software, content and ontology will be initially by and for the Alzheimer Disease (AD) research community, with the obvious potential for extension into other disease research areas. The Alzheimer Research Forum, a 4,000-member web community for AD researchers, will host SWAN’s initial public release, currently scheduled for late 2007. This paper presents the current version of SWAN’s ontology of scientific discourse and presents our current thinking about its evolution including extensions and alignment with related communities, projects and ontologies. PMID:18583197

  14. An improved lookup protocol model for peer-to-peer networks

    NASA Astrophysics Data System (ADS)

    Fan, Wei; Ye, Dongfen

    2011-12-01

    With the development of peer-to-peer (P2P) technology, file sharing is becoming the hottest, fastest growing application on the Internet. Although we can benefit from different protocols separately, our research shows that if there exists a proper model, most of the seemingly different protocols can be classified into the same framework. In this paper, we propose an improved Chord algorithm based on a binary tree for P2P networks. We perform extensive simulations to study the proposed protocol. The results show that the improved Chord reduces the average lookup path length without increasing the joining and departing complexity.
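
    For reference, a minimal sketch of the baseline Chord-style lookup that such work builds on (greedy finger-table routing on an identifier ring); the ring size and node identifiers are toy values, and the sketch does not include the paper's binary-tree modification:

      M = 6                      # identifier bits; ring size 2**M = 64 (toy value)
      NODES = sorted([1, 8, 14, 21, 32, 38, 42, 48, 51, 56])

      def successor(key):
          """First node clockwise from key on the identifier ring."""
          for n in NODES:
              if n >= key:
                  return n
          return NODES[0]

      def finger_table(n):
          """Chord finger i of node n points to successor(n + 2**i)."""
          return [successor((n + 2**i) % 2**M) for i in range(M)]

      def lookup(start, key):
          """Greedy Chord-style routing: follow the finger closest to the key."""
          hops, node = [start], start
          while successor(key) != node and node != key:
              fingers = finger_table(node)
              # keep only fingers that do not overshoot the key on the ring
              candidates = [f for f in fingers
                            if (f - node) % 2**M <= (key - node) % 2**M]
              nxt = max(candidates, key=lambda f: (f - node) % 2**M,
                        default=successor(key))
              if nxt == node:
                  nxt = successor(key)
              hops.append(nxt)
              node = nxt
          return hops

      print(lookup(8, 54))   # -> [8, 42, 51, 56], i.e. O(log N) hops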

  15. Spatial frequency sampling look-up table method for computer-generated hologram

    NASA Astrophysics Data System (ADS)

    Zhao, Kai; Huang, Yingqing; Jiang, Xiaoyu; Yan, Xingpeng

    2016-04-01

    A spatial frequency sampling look-up table method is proposed to generate a hologram. The three-dimensional (3-D) scene is sampled as several intensity images by computer rendering. Each object point on the rendered images has a defined spatial frequency. The basis terms for calculating fringe patterns are precomputed and stored in a table to improve the calculation speed. Both numerical simulations and optical experiments are performed. The results show that the proposed approach can easily realize color reconstructions of a 3-D scene with a low computation cost. The occlusion effects and depth information are all provided accurately.

  16. SWEET 2.1 Ontologies

    NASA Astrophysics Data System (ADS)

    Raskin, R. G.

    2010-12-01

    The Semantic Web for Earth and Environmental Terminology (SWEET) ontologies represent a mid- to upper-level concept space for all of Earth and Planetary Science and associated data and applications. The latest version (2.1) has been reorganized to improve long-term maintainability. Accompanying the ontologies is a mapping to the CF Standard Name Table and the GCMD Science Keywords. As a higher-level concept space, terms can be readily mapped across these vocabularies through the intermediate use of SWEET.

  17. Semantic enrichment for medical ontologies.

    PubMed

    Lee, Yugyung; Geller, James

    2006-04-01

    The Unified Medical Language System (UMLS) contains two separate but interconnected knowledge structures, the Semantic Network (upper level) and the Metathesaurus (lower level). In this paper, we have attempted to work out in more detail how the use of such a two-level structure in the medical field has led to notable advances in terminologies and ontologies. However, most ontologies and terminologies do not have such a two-level structure. Therefore, we present a method, called semantic enrichment, which generates a two-level ontology from a given one-level terminology and an auxiliary two-level ontology. During semantic enrichment, concepts of the one-level terminology are assigned to semantic types, which are the building blocks of the upper level of the auxiliary two-level ontology. The result of this process is the desired new two-level ontology. We discuss semantic enrichment of two example terminologies and how we approach the implementation of semantic enrichment in the medical domain. This implementation performs a major part of the semantic enrichment process with the medical terminologies, with difficult cases left to a human expert. PMID:16185937

  18. Ontology Based Quality Evaluation for Spatial Data

    NASA Astrophysics Data System (ADS)

    Yılmaz, C.; Cömert, Ç.

    2015-08-01

    Many institutions will be providing data to the National Spatial Data Infrastructure (NSDI). The current technical background of the NSDI is based on syntactic web services. It is expected that this will be replaced by semantic web services. The quality of the data provided is important in terms of the decision-making process and the accuracy of transactions. Therefore, the data quality needs to be tested. This topic has been neglected in Turkey. Data quality control for the NSDI may be done by private or public "data accreditation" institutions. A methodology is required for data quality evaluation. There are studies on data quality, including ISO standards, academic studies and software to evaluate spatial data quality. The ISO 19157 standard defines the data quality elements. Proprietary software such as 1Spatial's 1Validate and ESRI's Data Reviewer offers quality evaluation based on its own classification of rules. Commonly, rule-based approaches are used for geospatial data quality checks. In this study, we look for the technical components to devise and implement a rule-based approach with ontologies using free and open source software in the semantic web context. The semantic web uses ontologies to deliver well-defined web resources and make them accessible to end-users and processes. We have created an ontology conforming to the geospatial data and defined some sample rules to show how to test data with respect to data quality elements including attribute, topo-semantic and geometrical consistency, using free and open source software. To test data against the rules, sample GeoSPARQL queries are created and associated with the specifications.

  19. On the look-up tables for the critical heat flux in tubes (history and problems)

    SciTech Connect

    Kirillov, P.L.; Smogalev, I.P.

    1995-09-01

    The complexity of the critical heat flux (CHF) problem for boiling in channels is caused by the large number of variable factors and the variety of two-phase flows. The existence of several hundred correlations for the prediction of CHF demonstrates the unsatisfactory state of this problem. The phenomenological CHF models can provide only qualitative predictions of CHF, primarily in annular-dispersed flow. The CHF look-up tables, which cover the results of numerous experiments, have received more recognition in the last 15 years. These tables are based on the statistical averaging of CHF values for each range of pressure, mass flux and quality. The CHF values for regions where no experimental data are available are obtained by extrapolation. The correction of these tables to account for the diameter effect is a complicated problem. There are ranges of conditions where simple correlations cannot produce reliable results. Therefore, the diameter effect on CHF needs additional study. The modification of look-up table data for CHF in tubes to predict CHF in rod bundles must include a method to take into account the nonuniformity of quality over a rod bundle cross section.

  20. Fast thumbnail generation for MPEG video by using a multiple-symbol lookup table

    NASA Astrophysics Data System (ADS)

    Kim, Myounghoon; Lee, Hoonjae; Yoon, Ja-Cheon; Kim, Hyeokman; Sull, Sanghoon

    2009-03-01

    A novel method using a multiple-symbol lookup table (mLUT) is proposed to fast-skip the ac coefficients (codewords) not needed to construct a dc image from MPEG-1/2 video streams, resulting in fast thumbnail generation. For MPEG-1/2 video streams, thumbnail generation schemes usually extract dc images directly in the compressed domain, where a dc image is constructed using a dc coefficient and a few ac coefficients from among the discrete cosine transform (DCT) coefficients. However, all codewords for DCT coefficients must be fully decoded whether or not they are needed to generate a dc image, since the bit length of a codeword coded with variable-length coding (VLC) cannot be determined until the previous VLC codeword has been decoded. Thus, a method using an mLUT designed for fast-skipping unnecessary DCT coefficients to construct a dc image is proposed, resulting in a significantly reduced number of table lookups (LUT count) for variable-length decoding of codewords. Experimental results show that the proposed method significantly improves performance by reducing the LUT count by 50%.

  1. Full-spectrum k-distribution look-up table for nonhomogeneous gas-soot mixtures

    NASA Astrophysics Data System (ADS)

    Wang, Chaojun; Modest, Michael F.; He, Boshu

    2016-06-01

    Full-spectrum k-distribution (FSK) look-up tables provide great accuracy combined with outstanding numerical efficiency for the evaluation of radiative transfer in nonhomogeneous gaseous media. However, previously published tables cannot be used for the gas-soot mixtures that are found in most combustion scenarios, since it is impossible to assemble k-distributions for a gas mixed with nongray absorbing particles from gas-only full-spectrum k-distributions. Consequently, a new FSK look-up table has been constructed by optimizing the previous table recently published by the authors and then adding one soot volume fraction to this optimized table. The optimization scheme comprises two steps: (1) direct calculation of the nongray stretching factors (a-values) from the k-distributions (k-values) rather than tabulating them; (2) deletion of unnecessary mole fractions at many thermodynamic states. Results show that after optimization, the size of the new table is reduced from 5 GB (including the k-values and the a-values for gases only) to 3.2 GB (including the k-values for both gases and soot) while both accuracy and efficiency remain the same. Two scaled flames are used to validate the new table. It is shown that the new table gives excellent accuracy against benchmark solutions, at a low computational cost, for both gas mixtures and gas-soot mixtures.

  2. A region segmentation based algorithm for building a crystal position lookup table in a scintillation detector

    NASA Astrophysics Data System (ADS)

    Wang, Hai-Peng; Yun, Ming-Kai; Liu, Shuang-Quan; Fan, Xin; Cao, Xue-Xiang; Chai, Pei; Shan, Bao-Ci

    2015-03-01

    In a scintillation detector, scintillation crystals are typically made into a 2-dimensional modular array. The location of an incident gamma-ray needs to be calibrated due to spatial response nonlinearity. Generally, position histograms, the characteristic flood response of scintillation detectors, are used for position calibration. In this paper, a position calibration method based on a crystal position lookup table, which maps the inaccurate location calculated by Anger logic to the exact hit crystal position, has been proposed. Firstly, the position histogram is preprocessed, with steps such as noise reduction and image enhancement. Then the processed position histogram is segmented into disconnected regions, and crystal marking points are labeled by finding the centroids of the regions. Finally, crystal boundaries are determined and the crystal position lookup table is generated. The scheme is evaluated on the whole-body positron emission tomography (PET) scanner and the breast-dedicated single photon emission computed tomography scanner developed by the Institute of High Energy Physics, Chinese Academy of Sciences. The results demonstrate that the algorithm is accurate, efficient, robust and applicable to any configuration of scintillation detector. Supported by National Natural Science Foundation of China (81101175) and XIE Jia-Lin Foundation of Institute of High Energy Physics (Y3546360U2)
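
    A minimal sketch of the final step, assuming the crystal centroids have already been extracted from the segmented position histogram: every histogram pixel is assigned to its nearest centroid, which yields the crystal position lookup table. The grid size and centroid coordinates are toy values, not the detector geometry from the paper:

      import numpy as np

      H, W = 64, 64                                   # toy position-histogram size
      # Hypothetical crystal centroids (row, col) found from the segmented histogram.
      centroids = np.array([[16, 16], [16, 48], [48, 16], [48, 48]], dtype=float)

      # Build the lookup table: each histogram pixel -> index of the nearest crystal.
      rows, cols = np.mgrid[0:H, 0:W]
      pixels = np.stack([rows.ravel(), cols.ravel()], axis=1).astype(float)
      d2 = ((pixels[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
      crystal_lut = d2.argmin(axis=1).reshape(H, W).astype(np.uint16)

      # At run time, an Anger-logic position is corrected by a single table read.
      y, x = 20, 41                                    # estimated (biased) position
      print("hit crystal index:", crystal_lut[y, x])   # -> 1 (nearest to [16, 48])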

  3. Spatial Data Integration Using Ontology-Based Approach

    NASA Astrophysics Data System (ADS)

    Hasani, S.; Sadeghi-Niaraki, A.; Jelokhani-Niaraki, M.

    2015-12-01

    In today's world, the need for spatial data in various organizations is becoming so crucial that many of these organizations have begun to produce spatial data themselves. In some circumstances, the need to obtain real-time integrated data requires a sustainable mechanism for real-time integration; a case in point is disaster management, which requires obtaining real-time data from various sources of information. One of the problematic challenges in such situations is the high degree of heterogeneity between the data of different organizations. To address this issue, we introduce an ontology-based method to provide sharing and integration capabilities for the existing databases. In addition to resolving semantic heterogeneity, better access to information is also provided by our proposed method. Our approach consists of three steps. The first step is the identification of the objects in a relational database; the semantic relationships between them are then modelled and, subsequently, the ontology of each database is created. In the second step, the resulting ontology is inserted into the database and the relationship of each ontology class is inserted into a newly created column in the database tables. The last step consists of a platform based on service-oriented architecture, which allows integration of the data. This is done by using the concept of ontology mapping. The proposed approach, in addition to being fast and low cost, makes the process of data integration easy while the data remain unchanged, thus taking advantage of the legacy applications provided.

  4. An ontological knowledge framework for adaptive medical workflow.

    PubMed

    Dang, Jiangbo; Hedayati, Amir; Hampel, Ken; Toklu, Candemir

    2008-10-01

    As emerging technologies, the semantic Web and SOA (Service-Oriented Architecture) allow a BPMS (Business Process Management System) to automate business processes that can be described as services, which in turn can be used to wrap existing enterprise applications. A BPMS provides tools and methodologies to compose Web services that can be executed as business processes and monitored by BPM (Business Process Management) consoles. Ontologies are a formal declarative knowledge representation model. They provide a foundation upon which machine-understandable knowledge can be obtained, and as a result, they make machine intelligence possible. Healthcare systems can adopt these technologies to become ubiquitous, adaptive, and intelligent, and thereby serve patients better. This paper presents an ontological knowledge framework that covers the healthcare domains a hospital encompasses, from medical and administrative tasks to hospital assets, medical insurance, patient records, drugs, and regulations. Our ontology therefore makes our vision of personalized healthcare possible by capturing all the knowledge necessary for a complex personalized healthcare scenario involving patient care, insurance policies, drug prescriptions, and compliance. For example, our ontology enables a workflow management system to allow users, from physicians to administrative assistants, to manage and even create new context-aware medical workflows and execute them on-the-fly. PMID:18602872

  5. Research on land registration procedure ontology of China

    NASA Astrophysics Data System (ADS)

    Zhao, Zhongjun; Du, Qingyun; Zhang, Weiwei; Liu, Tao

    2009-10-01

    Land registration is a public act that records, in land registration books and according to laws and regulations, the state-owned land use right, collective land ownership, the collective land use right, land mortgages, servitudes, and other land rights that require registration. Land registration is an important government affair, so it is essential to standardize, optimize and humanize its process. The management work of an organization is realized through a variety of workflows. Process knowledge is in essence a kind of methodological knowledge, a system comprising both core and relational knowledge. In this paper, ontology is introduced into the field of land registration and management, with the aims of optimizing the land registration workflow, promoting the automation and intelligent servicing of land registration affairs, and providing humanized, intelligent services for multiple types of users. The paper builds a land registration procedure ontology by defining its key concepts, which represent the kinds of land registration processes, and by mapping these processes to OWL-S. The land registration procedure ontology is intended to be the starting point and basis of the corresponding Web service.

  6. Differentiated Services: A New Reference Model.

    ERIC Educational Resources Information Center

    Whitson, William L.

    1995-01-01

    Examines advantages and disadvantages of the traditional model of undifferentiated service versus an alternative model of differentiated services, which includes directions and general information, technical assistance, "information lookup" for the client, research consultation, and library instruction. Suggests each service should fit staff…

  7. A Foundational Approach to Designing Geoscience Ontologies

    NASA Astrophysics Data System (ADS)

    Brodaric, B.

    2009-05-01

    E-science systems are increasingly deploying ontologies to aid online geoscience research. Geoscience ontologies are typically constructed independently by isolated individuals or groups who tend to follow few design principles. This limits the usability of the ontologies due to conceptualizations that are vague, conflicting, or narrow. Advances in foundational ontologies and formal engineering approaches offer promising solutions, but these advanced techniques have had limited application in the geosciences. This paper develops a design approach for geoscience ontologies by extending aspects of the DOLCE foundational ontology and the OntoClean method. Geoscience examples will be presented to demonstrate the feasibility of the approach.

  8. How granularity issues concern biomedical ontology integration.

    PubMed

    Schulz, Stefan; Boeker, Martin; Stenzhorn, Holger

    2008-01-01

    The application of upper ontologies has been repeatedly advocated for supporting interoperability between domain ontologies in order to facilitate shared data use both within and across disciplines. We have developed BioTop as a top-domain ontology to integrate more specialized ontologies in the biomolecular and biomedical domain. In this paper, we report on concrete integration problems of this ontology with the domain-independent Basic Formal Ontology (BFO) concerning the issue of fiat and aggregated objects in the context of different granularity levels. We conclude that the third BFO level must be ignored in order not to obviate cross-granularity integration. PMID:18487840

  9. A look-up table based approach to characterize crystal twinning for synchrotron X-ray Laue microdiffraction scans

    PubMed Central

    Li, Yao; Wan, Liang; Chen, Kai

    2015-01-01

    An automated method has been developed to characterize the type and spatial distribution of twinning in crystal orientation maps from synchrotron X-ray Laue microdiffraction results. The method relies on a look-up table approach. Taking into account the twin axis and twin plane for plausible rotation and reflection twins, respectively, and the point group symmetry operations for a specific crystal, a look-up table listing crystal-specific rotation angle–axis pairs, which reveal the orientation relationship between the twin and the parent lattice, is generated. By comparing these theoretical twin–parent orientation relationships in the look-up table with the measured misorientations, twin boundaries are mapped automatically from Laue microdiffraction raster scans with thousands of data points. Taking advantage of the high orientation resolution of the Laue microdiffraction method, this automated approach is also applicable to differentiating twinning elements among multiple twinning modes in any crystal system. PMID:26089764
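
    The comparison at the heart of the method is between measured misorientations and a table of theoretical twin angle/axis pairs. The Python sketch below shows one way such a comparison could be organized; the orientation-matrix convention, the symmetry operators, the tolerances and the twin-table entries are all assumptions supplied by the caller, and the published Laue-microdiffraction implementation is not reproduced here.

      import numpy as np

      def misorientation_angle_axis(g1, g2, symmetry_ops):
          """Smallest-angle misorientation between two orientation matrices,
          taking the crystal's point-group symmetry operators into account.
          (The axis is ill-defined for angles near 0 or 180 degrees.)"""
          best_angle, best_axis = np.pi, None
          for s in symmetry_ops:
              dg = g2 @ s @ np.linalg.inv(g1)
              angle = np.arccos(np.clip((np.trace(dg) - 1.0) / 2.0, -1.0, 1.0))
              if angle < best_angle:
                  axis = np.array([dg[2, 1] - dg[1, 2],
                                   dg[0, 2] - dg[2, 0],
                                   dg[1, 0] - dg[0, 1]])
                  norm = np.linalg.norm(axis)
                  best_angle, best_axis = angle, (axis / norm if norm > 0 else None)
          return best_angle, best_axis

      def is_twin_boundary(g1, g2, symmetry_ops, twin_table, tol_deg=2.0):
          """Check a measured misorientation against a look-up table of
          theoretical twin (angle in degrees, axis vector) pairs."""
          angle, axis = misorientation_angle_axis(g1, g2, symmetry_ops)
          if axis is None:
              return False
          for ref_angle_deg, ref_axis in twin_table:
              ref_axis = np.asarray(ref_axis, dtype=float)
              same_angle = abs(np.degrees(angle) - ref_angle_deg) < tol_deg
              cosine = abs(np.dot(axis, ref_axis / np.linalg.norm(ref_axis)))
              if same_angle and abs(cosine - 1.0) < 0.02:
                  return True
          return False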

  10. Ontology-based approach for managing personal health and wellness information.

    PubMed

    Sachinopoulou, Anna; Leppänen, Juha; Kaijanranta, Hannu; Lähteenmäki, Jaakko

    2007-01-01

    This paper describes a new approach for collecting and sharing personal health and wellness information. The approach is based on a Personal Health Record (PHR) including both clinical and non-clinical data. The PHR is located on a network server referred to as the Common Server. The overall service architecture for providing anonymous and private access to the PHR is described. Semantic interoperability is based on an ontology collection and the usage of OID (Object Identifier) codes. The formal (upper) ontology combines a set of domain ontologies representing different aspects of personal health and wellness. The ontology collection emphasizes wellness aspects, while clinical data is modelled by using OID references to existing vocabularies. The modular ontology approach enables distributed management and expansion of the data model. PMID:18002328

  11. Hydrologic Ontology for the Web

    NASA Astrophysics Data System (ADS)

    Bermudez, L. E.; Piasecki, M.

    2003-12-01

    This poster presents the conceptual development of a Hydrologic Ontology for the Web (HOW) that will facilitate data sharing among the hydrologic community. Hydrologic data are difficult to share because of the predicted vast increase in data volume, the availability of new measurement technologies and the heterogeneity of the information systems used to produce, store, retrieve and use the data. The augmented capacity of the Internet and the technologies recommended by the W3C, as well as metadata standards, provide sophisticated means to make data more usable and systems more integrated. Standard metadata is commonly used to solve interoperability issues. For the hydrologic field an explicit metadata standard does not exist, but one could be created by extending metadata standards such as FGDC-STD-001-1998 or ISO 19115. Standard metadata defines a set of elements required to describe data in a consistent manner, and their domains are sometimes restricted by a finite set of values or a controlled vocabulary (e.g. code lists in ISO/DIS 19115). This controlled vocabulary is domain specific, varying from one information community to another and allowing dissimilar descriptions of similar data sets. This issue is sometimes called semantic non-interoperability or semantic heterogeneity, and it is usually the main problem when sharing data. Explicit domain ontologies could be created to provide semantic interoperability among heterogeneous information communities. Domain ontologies supply the values for the restricted domains of some elements in the metadata set and the semantic mappings to other domain ontologies. To achieve interoperability between applications that exchange machine-understandable information on the Web, metadata is expressed using the Resource Description Framework (RDF) and domain ontologies are expressed using the Web Ontology Language (OWL), which is also based on RDF. A specific OWL ontology for hydrology is HOW. HOW presents, using a formal syntax, the
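
    To make the role of an OWL-encoded controlled vocabulary concrete, here is a minimal Python/rdflib sketch that declares a tiny, hypothetical fragment of a hydrologic vocabulary as an OWL class hierarchy; the namespace URI and class names are placeholders and do not reproduce the actual HOW ontology.

      from rdflib import Graph, Namespace, Literal
      from rdflib.namespace import OWL, RDF, RDFS

      # Hypothetical namespace standing in for the HOW ontology's URI.
      HOW = Namespace("http://example.org/how#")

      g = Graph()
      g.bind("how", HOW)

      # A tiny controlled-vocabulary fragment expressed as OWL classes; such
      # classes could supply the restricted values of a metadata element.
      g.add((HOW.HydrologicFeature, RDF.type, OWL.Class))
      for name in ("Watershed", "Aquifer", "StreamGauge"):
          cls = HOW[name]
          g.add((cls, RDF.type, OWL.Class))
          g.add((cls, RDFS.subClassOf, HOW.HydrologicFeature))
          g.add((cls, RDFS.label, Literal(name)))

      print(g.serialize(format="turtle"))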

  12. An ontology for sensor networks

    NASA Astrophysics Data System (ADS)

    Compton, Michael; Neuhaus, Holger; Bermudez, Luis; Cox, Simon

    2010-05-01

    Sensors and networks of sensors are important ways of monitoring and digitizing reality. As the number and size of sensor networks grows, so too does the amount of data collected. Users of such networks typically need to discover the sensors and data that fit their needs without necessarily understanding the complexities of the network itself. The burden on users is eased if the network and its data are expressed in terms of concepts familiar to the users and their job functions, rather than in terms of the network or how it was designed. Furthermore, the task of collecting and combining data from multiple sensor networks is made easier if metadata about the data and the networks is stored in formats and conceptual models that are amenable to machine reasoning and inference. While the OGC's (Open Geospatial Consortium) SWE (Sensor Web Enablement) standards provide for the description of, and access to, data and metadata for sensors, they do not provide facilities for abstraction, categorization, and reasoning consistent with standard technologies. Once sensors and networks are described using rich semantics (that is, by using logic to describe the sensors, the domain of interest, and the measurements), reasoning and classification can be used to analyse and categorise data, relate measurements with similar information content, and manage, query and task sensors. This will enable types of automated processing and logical assurance built on OGC standards. The W3C SSN-XG (Semantic Sensor Networks Incubator Group) is producing a generic ontology to describe sensors, their environment and the measurements they make. The ontology provides definitions for the structure of sensors and observations, leaving the details of the observed domain unspecified. This allows abstract representations of real world entities, which are not observed directly but through their observable qualities. Domain semantics, units of measurement, time and time series, and location and mobility

  13. Complex Topographic Feature Ontology Patterns

    USGS Publications Warehouse

    Varanka, Dalia E.; Jerris, Thomas J.

    2015-01-01

    Semantic ontologies are examined as effective data models for the representation of complex topographic feature types. Complex feature types are viewed as integrated relations between basic features for a basic purpose. In the context of topographic science, such component assemblages are supported by resource systems and found on the local landscape. Ontologies are organized within six thematic modules of a domain ontology called Topography that includes within its sphere basic feature types, resource systems, and landscape types. Context is constructed not only as a spatial and temporal setting, but a setting also based on environmental processes. Types of spatial relations that exist between components include location, generative processes, and description. An example is offered in a complex feature type ‘mine.’ The identification and extraction of complex feature types are an area for future research.

  14. IDOMAL: the malaria ontology revisited

    PubMed Central

    2013-01-01

    Background With about half a billion cases, of which nearly one million are fatal, malaria constitutes one of the major infectious diseases worldwide. A recently revived effort to eliminate the disease also focuses on IT resources for its efficient control, which prominently includes the control of the mosquito vectors that transmit the Plasmodium pathogens. As part of this effort, IDOMAL has been developed and is continually being updated. Findings In addition to the improvement of IDOMAL’s structure and the correction of some inaccuracies, there were some major subdomain additions, such as a section on natural products and remedies, and the import from other, higher-order ontologies of several terms, which were merged with IDOMAL terms. Effort was put into rendering IDOMAL fully compatible as an extension of IDO, the Infectious Disease Ontology. The reason for the difficulties in fully reaching that target was the inherent difference between vector-borne diseases and “classical” infectious diseases, which makes it necessary to specifically adjust the ontology’s architecture in order to encompass vectors and their populations. Conclusions In addition to a higher coverage of domain-specific terms and optimized usage by databases and decision-support systems, the new version of IDOMAL described here allows for more cross-talk between it and other ontologies, in particular IDO. The malaria ontology is available for download at the OBO Foundry (http://www.obofoundry.org/cgi-bin/detail.cgi?id=malaria_ontology) and the NCBO BioPortal (http://bioportal.bioontology.org/ontologies/1311). PMID:24034841

  15. CLO: The cell line ontology

    PubMed Central

    2014-01-01

    Background Cell lines have been widely used in biomedical research. The community-based Cell Line Ontology (CLO) is a member of the OBO Foundry library that covers the domain of cell lines. Since its publication two years ago, significant updates have been made, including new groups joining the CLO consortium, new cell line cells, upper level alignment with the Cell Ontology (CL) and the Ontology for Biomedical Investigation, and logical extensions. Construction and content Collaboration among the CLO, CL, and OBI has established consensus definitions of cell line-specific terms such as ‘cell line’, ‘cell line cell’, ‘cell line culturing’, and ‘mortal’ vs. ‘immortal cell line cell’. A cell line is a genetically stable cultured cell population that contains individual cell line cells. The hierarchical structure of the CLO is built based on the hierarchy of the in vivo cell types defined in CL and tissue types (from which cell line cells are derived) defined in the UBERON cross-species anatomy ontology. The new hierarchical structure makes it easier to browse, query, and perform automated classification. We have recently added classes representing more than 2,000 cell line cells from the RIKEN BRC Cell Bank to CLO. Overall, the CLO now contains ~38,000 classes of specific cell line cells derived from over 200 in vivo cell types from various organisms. Utility and discussion The CLO has been applied to different biomedical research studies. Example case studies include annotation and analysis of EBI ArrayExpress data, bioassays, and host-vaccine/pathogen interaction. CLO’s utility goes beyond a catalogue of cell line types. The alignment of the CLO with related ontologies combined with the use of ontological reasoners will support sophisticated inferencing to advance translational informatics development. PMID:25852852

  16. Design of schistosomiasis ontology (IDOSCHISTO) extending the infectious disease ontology.

    PubMed

    Camara, Gaoussou; Despres, Sylvie; Djedidi, Rim; Lo, Moussa

    2013-01-01

    Epidemiological monitoring of the spread of schistosomiasis brings together many practitioners working at different levels of granularity (biology, host individual, host population) who have different perspectives (biological, clinical and epidemiological) on the same phenomenon. The biological perspective deals with pathogens (e.g. the life cycle) or physiopathology, while the clinical perspective deals with hosts (e.g. healthy or infected host, diagnosis, treatment, etc.). In the epidemiological perspective, corresponding to the host-population level of granularity, the schistosomiasis disease is characterized according to the way (causes, risk factors, etc.) it spreads in this population over space and time. In this paper we provide an ontological analysis and design for schistosomiasis domain knowledge and spreading dynamics. IDOSCHISTO, the schistosomiasis ontology, is designed as an extension of the Infectious Disease Ontology (IDO). This ontology aims at supporting the schistosomiasis monitoring process during a spreading crisis by enabling data integration and semantic interoperability, for collaborative work on the one hand and for risk analysis and decision making on the other. PMID:23920598

  17. COMPASS: A Geospatial Knowledge Infrastructure Managed with Ontologies

    NASA Astrophysics Data System (ADS)

    Stock, K.

    2009-04-01

    The research and decision-making process in any discipline is supported by a vast quantity and diversity of scientific resources, including journal articles; scientific models; scientific theories; data sets and web services that implement scientific models or provide other functionality. Improved discovery and access to these scientific resources has the potential to make the process of using and developing scientific knowledge more effective and efficient. Current scientific research or decision making that relies on scientific resources requires an extensive search for relevant resources. Published journal papers may be discovered using web searches on the basis of words that appear in the title or metadata, but this approach is limited by the need to select the appropriate words, and does not identify articles that may be of interest because they use a similar approach, methodology or technique but are in a different discipline, or that are likely to be helpful despite not sharing the same keywords. The COMPASS project is developing a knowledge infrastructure that is intended to enhance the user experience in discovering scientific resources. This is being achieved with an approach that uses ontologies to manage the knowledge infrastructure in two ways: 1. A set of ontologies describe the resources in the knowledge infrastructure (for example, publications and web services) in terms of the domain concepts to which they relate, the scientific theories and models that they depend on, and the characteristics of the resources themselves. These ontologies are provided to users either directly or with assisted search tools to aid them in the discovery process. OWL-S ontologies are being used to describe web

  18. Controlled Vocabularies, Mini Ontologies and Interoperability (Invited)

    NASA Astrophysics Data System (ADS)

    King, T. A.; Walker, R. J.; Roberts, D.; Thieman, J.; Ritschel, B.; Cecconi, B.; Genot, V. N.

    2013-12-01

    Interoperability has been an elusive goal, but in recent years advances have been made using controlled vocabularies, mini-ontologies and a lot of collaboration. This has led to increased interoperability between disciplines in the U.S. and between international projects. We discuss the successful pattern followed by SPASE, IVOA and IPDA to achieve this new level of international interoperability. A key aspect of the pattern is open standards and open participation with interoperability achieved with shared services, public APIs, standard formats and open access to data. Many of these standards are expressed as controlled vocabularies and mini ontologies. To illustrate the pattern we look at SPASE related efforts and participation of North America's Heliophysics Data Environment and CDPP; Europe's Cluster Active Archive, IMPEx, EuroPlanet, ESPAS and HELIO; and Japan's magnetospheric missions. Each participating project has its own life cycle and successful standards development must always take this into account. A major challenge for sustained collaboration and interoperability is the limited lifespan of many of the participating projects. Innovative approaches and new tools and frameworks are often developed as competitively selected, limited term projects, but for sustainable interoperability successful approaches need to become part of a long term infrastructure. This is being encouraged and achieved in many domains and we are entering a golden age of interoperability.

  19. Gene Ontology annotations and resources.

    PubMed

    Blake, J A; Dolan, M; Drabkin, H; Hill, D P; Li, Ni; Sitnikov, D; Bridges, S; Burgess, S; Buza, T; McCarthy, F; Peddinti, D; Pillai, L; Carbon, S; Dietze, H; Ireland, A; Lewis, S E; Mungall, C J; Gaudet, P; Chrisholm, R L; Fey, P; Kibbe, W A; Basu, S; Siegele, D A; McIntosh, B K; Renfro, D P; Zweifel, A E; Hu, J C; Brown, N H; Tweedie, S; Alam-Faruque, Y; Apweiler, R; Auchinchloss, A; Axelsen, K; Bely, B; Blatter, M -C; Bonilla, C; Bouguerleret, L; Boutet, E; Breuza, L; Bridge, A; Chan, W M; Chavali, G; Coudert, E; Dimmer, E; Estreicher, A; Famiglietti, L; Feuermann, M; Gos, A; Gruaz-Gumowski, N; Hieta, R; Hinz, C; Hulo, C; Huntley, R; James, J; Jungo, F; Keller, G; Laiho, K; Legge, D; Lemercier, P; Lieberherr, D; Magrane, M; Martin, M J; Masson, P; Mutowo-Muellenet, P; O'Donovan, C; Pedruzzi, I; Pichler, K; Poggioli, D; Porras Millán, P; Poux, S; Rivoire, C; Roechert, B; Sawford, T; Schneider, M; Stutz, A; Sundaram, S; Tognolli, M; Xenarios, I; Foulgar, R; Lomax, J; Roncaglia, P; Khodiyar, V K; Lovering, R C; Talmud, P J; Chibucos, M; Giglio, M Gwinn; Chang, H -Y; Hunter, S; McAnulla, C; Mitchell, A; Sangrador, A; Stephan, R; Harris, M A; Oliver, S G; Rutherford, K; Wood, V; Bahler, J; Lock, A; Kersey, P J; McDowall, D M; Staines, D M; Dwinell, M; Shimoyama, M; Laulederkind, S; Hayman, T; Wang, S -J; Petri, V; Lowry, T; D'Eustachio, P; Matthews, L; Balakrishnan, R; Binkley, G; Cherry, J M; Costanzo, M C; Dwight, S S; Engel, S R; Fisk, D G; Hitz, B C; Hong, E L; Karra, K; Miyasato, S R; Nash, R S; Park, J; Skrzypek, M S; Weng, S; Wong, E D; Berardini, T Z; Huala, E; Mi, H; Thomas, P D; Chan, J; Kishore, R; Sternberg, P; Van Auken, K; Howe, D; Westerfield, M

    2013-01-01

    The Gene Ontology (GO) Consortium (GOC, http://www.geneontology.org) is a community-based bioinformatics resource that classifies gene product function through the use of structured, controlled vocabularies. Over the past year, the GOC has implemented several processes to increase the quantity, quality and specificity of GO annotations. First, the number of manual, literature-based annotations has grown at an increasing rate. Second, as a result of a new 'phylogenetic annotation' process, manually reviewed, homology-based annotations are becoming available for a broad range of species. Third, the quality of GO annotations has been improved through a streamlined process for, and automated quality checks of, GO annotations deposited by different annotation groups. Fourth, the consistency and correctness of the ontology itself has increased by using automated reasoning tools. Finally, the GO has been expanded not only to cover new areas of biology through focused interaction with experts, but also to capture greater specificity in all areas of the ontology using tools for adding new combinatorial terms. The GOC works closely with other ontology developers to support integrated use of terminologies. The GOC supports its user community through the use of e-mail lists, social media and web-based resources. PMID:23161678

  20. The SWAN biomedical discourse ontology.

    PubMed

    Ciccarese, Paolo; Wu, Elizabeth; Wong, Gwen; Ocana, Marco; Kinoshita, June; Ruttenberg, Alan; Clark, Tim

    2008-10-01

    Developing cures for highly complex diseases, such as neurodegenerative disorders, requires extensive interdisciplinary collaboration and exchange of biomedical information in context. Our ability to exchange such information across sub-specialties today is limited by the current scientific knowledge ecosystem's inability to properly contextualize and integrate data and discourse in machine-interpretable form. This inherently limits the productivity of research and the progress toward cures for devastating diseases such as Alzheimer's and Parkinson's. SWAN (Semantic Web Applications in Neuromedicine) is an interdisciplinary project to develop a practical, common, semantically structured, framework for biomedical discourse initially applied, but not limited, to significant problems in Alzheimer Disease (AD) research. The SWAN ontology has been developed in the context of building a series of applications for biomedical researchers, as well as in extensive discussions and collaborations with the larger bio-ontologies community. In this paper, we present and discuss the SWAN ontology of biomedical discourse. We ground its development theoretically, present its design approach, explain its main classes and their application, and show its relationship to other ongoing activities in biomedicine and bio-ontologies. PMID:18583197

  1. Ontology driven image search engine

    NASA Astrophysics Data System (ADS)

    Bei, Yun; Dmitrieva, Julia; Belmamoune, Mounia; Verbeek, Fons J.

    2007-01-01

    Image collections are most often domain specific. We have developed a system for image retrieval of multimodal microscopy images, that is, the same object of study visualized with a range of microscope techniques and at a range of different resolutions. In microscopy, image content depends on the preparation method of the object under study as well as on the microscope technique. Both are taken into account in the submission phase as metadata, whilst at the same time (domain-specific) ontologies are employed as controlled vocabularies to annotate the image. From that point onward, image data are interrelated through the relationships derived from the annotated concepts in the ontology. By using the concepts and relationships of an ontology, complex queries can be built with true semantic content. Image metadata can be used as powerful criteria to query image data which are directly or indirectly related to the original data. The results of image retrieval can be represented as a structural graph that exploits relationships from the ontology, rather than as a simple list or table. Applying this to retrieve images of the same subject at different levels of resolution opens a new field for the analysis of image content.
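
    Since the abstract describes querying images through ontology concepts and their relationships, a minimal Python/rdflib sketch of such a semantic query is given below; the namespace, the ex:depicts property and the toy class hierarchy are invented for illustration and do not reflect the system's actual schema.

      from rdflib import Graph, Namespace, RDFS

      # Hypothetical namespace for a toy annotation vocabulary.
      EX = Namespace("http://example.org/microscopy#")

      g = Graph()
      g.bind("ex", EX)

      # Tiny anatomy-like fragment plus two annotated images.
      g.add((EX.Heart, RDFS.subClassOf, EX.Organ))
      g.add((EX.Ventricle, RDFS.subClassOf, EX.Heart))
      g.add((EX.image1, EX.depicts, EX.Ventricle))
      g.add((EX.image2, EX.depicts, EX.Liver))

      # Semantic query: images depicting the heart or any of its subtypes.
      query = """
      PREFIX ex: <http://example.org/microscopy#>
      PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
      SELECT ?img WHERE {
          ?img ex:depicts ?concept .
          ?concept rdfs:subClassOf* ex:Heart .
      }
      """
      for row in g.query(query):
          print(row.img)  # only ex:image1 matches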

  2. Gene Ontology Annotations and Resources

    PubMed Central

    2013-01-01

    The Gene Ontology (GO) Consortium (GOC, http://www.geneontology.org) is a community-based bioinformatics resource that classifies gene product function through the use of structured, controlled vocabularies. Over the past year, the GOC has implemented several processes to increase the quantity, quality and specificity of GO annotations. First, the number of manual, literature-based annotations has grown at an increasing rate. Second, as a result of a new ‘phylogenetic annotation’ process, manually reviewed, homology-based annotations are becoming available for a broad range of species. Third, the quality of GO annotations has been improved through a streamlined process for, and automated quality checks of, GO annotations deposited by different annotation groups. Fourth, the consistency and correctness of the ontology itself has increased by using automated reasoning tools. Finally, the GO has been expanded not only to cover new areas of biology through focused interaction with experts, but also to capture greater specificity in all areas of the ontology using tools for adding new combinatorial terms. The GOC works closely with other ontology developers to support integrated use of terminologies. The GOC supports its user community through the use of e-mail lists, social media and web-based resources. PMID:23161678

  3. Emotion Education without Ontological Commitment?

    ERIC Educational Resources Information Center

    Kristjansson, Kristjan

    2010-01-01

    Emotion education is enjoying new-found popularity. This paper explores the "cosy consensus" that seems to have developed in education circles, according to which approaches to emotion education are immune from metaethical considerations such as contrasting rationalist and sentimentalist views about the moral ontology of emotions. I spell out five…

  4. Ontology for vector surveillance and management.

    PubMed

    Lozano-Fuentes, Saul; Bandyopadhyay, Aritra; Cowell, Lindsay G; Goldfain, Albert; Eisen, Lars

    2013-01-01

    Ontologies, which are made up by standardized and defined controlled vocabulary terms and their interrelationships, are comprehensive and readily searchable repositories for knowledge in a given domain. The Open Biomedical Ontologies (OBO) Foundry was initiated in 2001 with the aims of becoming an "umbrella" for life-science ontologies and promoting the use of ontology development best practices. A software application (OBO-Edit; *.obo file format) was developed to facilitate ontology development and editing. The OBO Foundry now comprises over 100 ontologies and candidate ontologies, including the NCBI organismal classification ontology (NCBITaxon), the Mosquito Insecticide Resistance Ontology (MIRO), the Infectious Disease Ontology (IDO), the IDOMAL malaria ontology, and ontologies for mosquito gross anatomy and tick gross anatomy. We previously developed a disease data management system for dengue and malaria control programs, which incorporated a set of information trees built upon ontological principles, including a "term tree" to promote the use of standardized terms. In the course of doing so, we realized that there were substantial gaps in existing ontologies with regards to concepts, processes, and, especially, physical entities (e.g., vector species, pathogen species, and vector surveillance and management equipment) in the domain of surveillance and management of vectors and vector-borne pathogens. We therefore produced an ontology for vector surveillance and management, focusing on arthropod vectors and vector-borne pathogens with relevance to humans or domestic animals, and with special emphasis on content to support operational activities through inclusion in databases, data management systems, or decision support systems. The Vector Surveillance and Management Ontology (VSMO) includes >2,200 unique terms, of which the vast majority (>80%) were newly generated during the development of this ontology. One core feature of the VSMO is the linkage, through

  5. Gradient Learning Algorithms for Ontology Computing

    PubMed Central

    Gao, Wei; Zhu, Linli

    2014-01-01

    The gradient learning model has been attracting great attention in view of its promising prospects for applications in statistics, data dimensionality reduction, and other specific fields. In this paper, we propose a new gradient learning model for ontology similarity measuring and ontology mapping in the multidividing setting. The sample error in this setting is given by virtue of the hypothesis space and the trick of the ontology dividing operator. Finally, two experiments on data from the plant and humanoid robotics fields verify the efficiency of the new computational model for ontology similarity measurement and ontology mapping applications in the multidividing setting. PMID:25530752

  6. Ontology for Vector Surveillance and Management

    PubMed Central

    LOZANO-FUENTES, SAUL; BANDYOPADHYAY, ARITRA; COWELL, LINDSAY G.; GOLDFAIN, ALBERT; EISEN, LARS

    2013-01-01

    Ontologies, which are made up by standardized and defined controlled vocabulary terms and their interrelationships, are comprehensive and readily searchable repositories for knowledge in a given domain. The Open Biomedical Ontologies (OBO) Foundry was initiated in 2001 with the aims of becoming an “umbrella” for life-science ontologies and promoting the use of ontology development best practices. A software application (OBO-Edit; *.obo file format) was developed to facilitate ontology development and editing. The OBO Foundry now comprises over 100 ontologies and candidate ontologies, including the NCBI organismal classification ontology (NCBITaxon), the Mosquito Insecticide Resistance Ontology (MIRO), the Infectious Disease Ontology (IDO), the IDOMAL malaria ontology, and ontologies for mosquito gross anatomy and tick gross anatomy. We previously developed a disease data management system for dengue and malaria control programs, which incorporated a set of information trees built upon ontological principles, including a “term tree” to promote the use of standardized terms. In the course of doing so, we realized that there were substantial gaps in existing ontologies with regards to concepts, processes, and, especially, physical entities (e.g., vector species, pathogen species, and vector surveillance and management equipment) in the domain of surveillance and management of vectors and vector-borne pathogens. We therefore produced an ontology for vector surveillance and management, focusing on arthropod vectors and vector-borne pathogens with relevance to humans or domestic animals, and with special emphasis on content to support operational activities through inclusion in databases, data management systems, or decision support systems. The Vector Surveillance and Management Ontology (VSMO) includes >2,200 unique terms, of which the vast majority (>80%) were newly generated during the development of this ontology. One core feature of the VSMO is the linkage

  7. Semantic Similarity in Biomedical Ontologies

    PubMed Central

    Pesquita, Catia; Faria, Daniel; Falcão, André O.; Lord, Phillip; Couto, Francisco M.

    2009-01-01

    In recent years, ontologies have become a mainstream topic in biomedical research. When biological entities are described using a common schema, such as an ontology, they can be compared by means of their annotations. This type of comparison is called semantic similarity, since it assesses the degree of relatedness between two entities by the similarity in meaning of their annotations. The application of semantic similarity to biomedical ontologies is recent; nevertheless, several studies have been published in the last few years describing and evaluating diverse approaches. Semantic similarity has become a valuable tool for validating the results drawn from biomedical studies such as gene clustering, gene expression data analysis, prediction and validation of molecular interactions, and disease gene prioritization. We review semantic similarity measures applied to biomedical ontologies and propose their classification according to the strategies they employ: node-based versus edge-based and pairwise versus groupwise. We also present comparative assessment studies and discuss the implications of their results. We survey the existing implementations of semantic similarity measures, and we describe examples of applications to biomedical research. This will clarify how biomedical researchers can benefit from semantic similarity measures and help them choose the approach most suitable for their studies. Biomedical ontologies are evolving toward increased coverage, formality, and integration, and their use for annotation is increasingly becoming a focus of both effort by biomedical experts and application of automated annotation procedures to create corpora of higher quality and completeness than are currently available. Given that semantic similarity measures are directly dependent on these evolutions, we can expect to see them gaining more relevance and even becoming as essential as sequence similarity is today in biomedical research. PMID:19649320
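
    For readers unfamiliar with node-based measures, the following Python sketch computes a Resnik-style similarity (the information content of the most informative common ancestor) over a toy ontology; the term names, DAG structure and annotation counts are invented placeholders rather than a real biomedical ontology, and published variants differ in many details.

      import math
      from collections import defaultdict

      # Toy ontology: child -> set of parents (a DAG), plus annotation counts.
      PARENTS = {
          "root": set(),
          "process": {"root"},
          "metabolism": {"process"},
          "lipid_metabolism": {"metabolism"},
          "sugar_metabolism": {"metabolism"},
          "transport": {"process"},
      }
      ANNOTATION_COUNTS = {"lipid_metabolism": 5, "sugar_metabolism": 2,
                           "transport": 10, "metabolism": 3}

      def ancestors(term):
          seen, stack = set(), [term]
          while stack:
              for parent in PARENTS.get(stack.pop(), ()):
                  if parent not in seen:
                      seen.add(parent)
                      stack.append(parent)
          return seen | {term}

      def information_content():
          # A term's effective count includes the counts of its descendants.
          totals = defaultdict(int)
          for term, n in ANNOTATION_COUNTS.items():
              for a in ancestors(term):
                  totals[a] += n
          root_total = totals["root"]
          return {t: -math.log(n / root_total) for t, n in totals.items()}

      def resnik_similarity(t1, t2):
          """Node-based, pairwise similarity: IC of the most informative
          common ancestor of the two terms."""
          ic = information_content()
          return max(ic[a] for a in ancestors(t1) & ancestors(t2))

      print(resnik_similarity("lipid_metabolism", "sugar_metabolism"))  # ~0.69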

  8. Lightweight Community-Driven Ontology Evolution

    NASA Astrophysics Data System (ADS)

    Siorpaes, Katharina

    Only a few well-maintained domain ontologies can be found on the Web. The likely reasons for the lack of useful domain ontologies include that (1) informal means to convey intended meaning more efficiently are used for ontology specification only to a very limited extent, (2) many relevant domains of discourse show a substantial degree of conceptual dynamics, (3) ontology representation languages are hard to understand for the majority of (potential) ontology users and domain experts, and (4) the community does not have control over the ontology evolution. In this thesis, we propose to (1) ground a methodology for community-grounded ontology building on the culture and philosophy of wikis, by giving users who have little or no expertise in ontology engineering the opportunity to contribute at all stages of the ontology lifecycle, and (2) exploit the combination of human and computational intelligence to discover and resolve inconsistencies and to align lightweight domain ontologies. The contribution of this thesis is a methodology and prototype for the community-grounded building and evolution of lightweight domain ontologies.

  9. Ontologies as integrative tools for plant science

    PubMed Central

    Walls, Ramona L.; Athreya, Balaji; Cooper, Laurel; Elser, Justin; Gandolfo, Maria A.; Jaiswal, Pankaj; Mungall, Christopher J.; Preece, Justin; Rensing, Stefan; Smith, Barry; Stevenson, Dennis W.

    2012-01-01

    Premise of the study Bio-ontologies are essential tools for accessing and analyzing the rapidly growing pool of plant genomic and phenomic data. Ontologies provide structured vocabularies to support consistent aggregation of data and a semantic framework for automated analyses and reasoning. They are a key component of the semantic web. Methods This paper provides background on what bio-ontologies are, why they are relevant to botany, and the principles of ontology development. It includes an overview of ontologies and related resources that are relevant to plant science, with a detailed description of the Plant Ontology (PO). We discuss the challenges of building an ontology that covers all green plants (Viridiplantae). Key results Ontologies can advance plant science in four key areas: (1) comparative genetics, genomics, phenomics, and development; (2) taxonomy and systematics; (3) semantic applications; and (4) education. Conclusions Bio-ontologies offer a flexible framework for comparative plant biology, based on common botanical understanding. As genomic and phenomic data become available for more species, we anticipate that the annotation of data with ontology terms will become less centralized, while at the same time, the need for cross-species queries will become more common, causing more researchers in plant science to turn to ontologies. PMID:22847540

  10. Scalable representations of diseases in biomedical ontologies

    PubMed Central

    2011-01-01

    Background The realm of pathological entities can be subdivided into pathological dispositions, pathological processes, and pathological structures. The latter are the bearers of dispositions, which can then be realized by their manifestations — pathologic processes. Despite its ontological soundness, implementing this model via purpose-oriented domain ontologies will likely require considerable effort, both in ontology construction and maintenance, which constitutes a considerable problem for SNOMED CT, presently the largest biomedical ontology. Results We describe an ontology design pattern which allows ontologists to make assertions that blur the distinctions between dispositions, processes, and structures until necessary. Based on the domain upper-level ontology BioTop, it permits ascriptions of location and participation in the definition of pathological phenomena even without an ontological commitment to a distinction between these three categories. An analysis of SNOMED CT revealed that numerous classes in the findings/disease hierarchy are ambiguous with respect to process vs. disposition. Here our proposed approach can easily be applied to create unambiguous classes. No ambiguities could be defined regarding the distinction of structure and non-structure classes, but here we have found problematic duplications. Conclusions We defend a judicious use of disjunctive, and therefore ambiguous, classes in biomedical ontologies during the process of ontology construction and in the practice of ontology application. The use of these classes is permitted to span across several top-level categories, provided it contributes to ontology simplification and supports the intended reasoning scenarios. PMID:21624161

  11. CLASSIFYING PROCESSES: AN ESSAY IN APPLIED ONTOLOGY

    PubMed Central

    Smith, Barry

    2013-01-01

    We begin by describing recent developments in the burgeoning discipline of applied ontology, focusing especially on the ways ontologies are providing a means for the consistent representation of scientific data. We then introduce Basic Formal Ontology (BFO), a top-level ontology that is serving as a domain-neutral framework for the development of lower-level ontologies in many specialist disciplines, above all in biology and medicine. BFO is a bicategorial ontology, embracing both three-dimensionalist (continuant) and four-dimensionalist (occurrent) perspectives within a single framework. We examine how BFO-conformant domain ontologies can deal with the consistent representation of scientific data deriving from the measurement of processes of different types, and we outline on this basis the first steps of an approach to the classification of such processes within the BFO framework. PMID:23888086

  12. How orthogonal are the OBO Foundry ontologies?

    PubMed Central

    2011-01-01

    Background Ontologies in biomedicine facilitate information integration, data exchange, search and query of biomedical data, and other critical knowledge-intensive tasks. The OBO Foundry is a collaborative effort to establish a set of principles for ontology development with the eventual goal of creating a set of interoperable reference ontologies in the domain of biomedicine. One of the key requirements to achieve this goal is to ensure that ontology developers reuse term definitions that others have already created rather than create their own definitions, thereby making the ontologies orthogonal. Methods We used a simple lexical algorithm to analyze the extent to which the set of OBO Foundry candidate ontologies identified from September 2009 to September 2010 conforms to this vision. Specifically, we analyzed (1) the level of explicit term reuse in this set of ontologies, (2) the level of overlap, where two ontologies define similar terms independently, and (3) how the levels of reuse and overlap changed during the course of this year. Results We found that 30% of the ontologies reuse terms from other Foundry candidates and 96% of the candidate ontologies contain terms that overlap with terms from the other ontologies. We found that while term reuse increased among the ontologies between September 2009 and September 2010, the level of overlap among the ontologies remained relatively constant. Additionally, we analyzed the six ontologies announced as OBO Foundry members on March 5, 2010, and identified that the level of overlap was extremely low, but, notably, so was the level of term reuse. Conclusions We have created a prototype web application that allows OBO Foundry ontology developers to see which classes from their ontologies overlap with classes from other ontologies in the OBO Foundry (http://obomap.bioontology.org). From our analysis, we conclude that while the OBO Foundry has made significant progress toward orthogonality during the period of this
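
    The abstract refers to a simple lexical algorithm without spelling it out. The sketch below shows one straightforward way to estimate explicit term reuse and label overlap between two ontologies given as identifier-to-label dictionaries; the normalization and the dictionary representation are assumptions, and the study's actual procedure was more involved.

      def normalized(label):
          return " ".join(label.lower().replace("_", " ").split())

      def overlap_and_reuse(ontology_a, ontology_b):
          """Return (reuse, overlap) for two {term_id: label} dictionaries.
          - reuse: identifiers that appear in both ontologies.
          - overlap: pairs of distinct identifiers whose labels coincide."""
          reuse = set(ontology_a) & set(ontology_b)
          labels_b = {normalized(label): term for term, label in ontology_b.items()}
          overlap = {(term, labels_b[normalized(label)])
                     for term, label in ontology_a.items()
                     if normalized(label) in labels_b and term not in reuse}
          return reuse, overlap

      # Tiny illustration with invented identifiers and labels.
      a = {"A:1": "cell cycle", "A:2": "membrane"}
      b = {"A:1": "cell cycle", "B:7": "Membrane"}
      print(overlap_and_reuse(a, b))  # ({'A:1'}, {('A:2', 'B:7')})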

  13. Ontological realism: A methodology for coordinated evolution of scientific ontologies

    PubMed Central

    Smith, Barry; Ceusters, Werner

    2011-01-01

    Since 2002 we have been testing and refining a methodology for ontology development that is now being used by multiple groups of researchers in different life science domains. Gary Merrill, in a recent paper in this journal, describes some of the reasons why this methodology has been found attractive by researchers in the biological and biomedical sciences. At the same time he assails the methodology on philosophical grounds, focusing specifically on our recommendation that ontologies developed for scientific purposes should be constructed in such a way that their terms are seen as referring to what we call universals or types in reality. As we show, Merrill’s critique is of little relevance to the success of our realist project, since it not only reveals no actual errors in our work but also criticizes views on universals that we do not in fact hold. However, it nonetheless provides us with a valuable opportunity to clarify the realist methodology, and to show how some of its principles are being applied, especially within the framework of the OBO (Open Biomedical Ontologies) Foundry initiative. PMID:21637730

  14. Track-Level-Compensation Look-Up Table Improves Antenna Pointing Precision

    NASA Technical Reports Server (NTRS)

    Gawronski, W.; Baher, F.; Gama, E.

    2006-01-01

    This article presents the improvement of the beam-waveguide antenna pointing accuracy due to the implementation of the track-level-compensation look-up table. It presents the development of the table, from the measurements of the inclinometer tilts to the processing of the measurement data and the determination of the three-axis alidade rotations. The table consists of three axis rotations of the alidade as a function of the azimuth position. The article also presents the equations to determine the elevation and cross-elevation errors of the antenna as a function of the alidade rotations and the antenna azimuth and elevation positions. The table performance was verified using radio beam pointing data. The pointing error decreased from 4.5 mdeg to 1.4 mdeg in elevation and from 14.5 mdeg to 3.1 mdeg in cross-elevation. I. Introduction The Deep Space Station 25 (DSS 25) antenna shown in Fig. 1 is one of NASA's Deep Space Network beam-waveguide (BWG) antennas. At 34 GHz (Ka-band) operation, it is necessary to be able to track with a pointing accuracy of 2-mdeg root-mean-square (rms). Repeatable pointing errors of several millidegrees of magnitude have been observed during the BWG antenna calibration measurements. The systematic errors of order 4 and lower are eliminated using the antenna pointing model. However, repeatable pointing errors of higher order are out of reach of the model. The most prominent high-order systematic errors are the ones caused by the uneven azimuth track. The track is shown in Fig. 2. Manufacturing and installation tolerances, as well as gaps between the segments of the track, are the sources of the pointing errors that reach over 14-mdeg peak-to-peak magnitude, as reported in [1,2]. This article presents a continuation of the investigations and measurements of the pointing errors caused by the azimuth-track-level unevenness that were presented in [1] and [2], and it presents the implementation results. Track-level-compensation (TLC) look-up
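
    Such a look-up table is, in effect, a set of azimuth-indexed correction curves. The short Python sketch below interpolates the three alidade rotation angles for an arbitrary azimuth from a tabulated set of measurements; the table layout, units and wrap-around handling are assumptions, and the article's equations for converting the rotations into elevation and cross-elevation errors are not reproduced.

      import numpy as np

      def tlc_rotations(azimuth_deg, table_azimuth_deg, table_rotations):
          """Interpolate the three alidade rotations for an arbitrary azimuth.
          table_azimuth_deg: sorted azimuths in [0, 360) starting near 0;
          table_rotations: array of shape (N, 3) with the rotation angles."""
          az = azimuth_deg % 360.0
          # Wrap the table around so interpolation also works across 360 -> 0.
          az_wrapped = np.concatenate([table_azimuth_deg,
                                       [table_azimuth_deg[0] + 360.0]])
          rot_wrapped = np.vstack([table_rotations, table_rotations[:1]])
          return np.array([np.interp(az, az_wrapped, rot_wrapped[:, k])
                           for k in range(3)])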

  15. Memory-efficient table look-up optimized algorithm for context-based adaptive variable length decoding in H.264/advanced video coding

    NASA Astrophysics Data System (ADS)

    Wang, Jianhua; Cheng, Lianglun; Wang, Tao; Peng, Xiaodong

    2016-03-01

    Table look-up operations play a very important role in the decoding process of context-based adaptive variable length decoding (CAVLD) in H.264/advanced video coding (AVC). However, frequent table look-up operations result in many table memory accesses, and thus in high table power consumption. To solve the problem of the large table memory access of current methods, and thereby reduce their high power consumption, a memory-efficient, table-look-up-optimized algorithm is presented for CAVLD. The contribution of this paper is that index search technology is introduced to reduce the memory accesses of table look-up, and hence the table power consumption. Specifically, in our scheme we use index search to reduce memory accesses by cutting down the searching and matching operations for code_word, taking advantage of the internal relationship among the length of the zero run in code_prefix, the value of code_suffix and code_length, thus saving the power consumed by table look-up. The experimental results show that our proposed table look-up algorithm based on index search can lower memory access consumption by about 60% compared with a table look-up scheme based on sequential search, and thus save considerable power for CAVLD in H.264/AVC.
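
    The idea of replacing a sequential scan of code-word tables with an index keyed on quantities already extracted from the bitstream can be illustrated with a small Python sketch; the table contents below are placeholders, not the real H.264/AVC CAVLC tables, and a real decoder operates on bit patterns rather than pre-parsed tuples.

      # Illustrative code-word table: (zero_prefix_len, suffix_bits, suffix_value,
      # symbol). The entries are placeholders, not the real H.264/AVC tables.
      SEQUENTIAL_TABLE = [
          (1, 0, 0, "A"),
          (2, 1, 0, "B"),
          (2, 1, 1, "C"),
          (4, 2, 3, "D"),
      ]

      def decode_sequential(zero_prefix_len, suffix_value):
          # Baseline: scan and compare every entry (many table accesses).
          for prefix_len, _bits, suffix, symbol in SEQUENTIAL_TABLE:
              if prefix_len == zero_prefix_len and suffix == suffix_value:
                  return symbol
          return None

      # Index-search variant: key the table by quantities the bitstream already
      # yields, so each decode becomes a single dictionary access.
      INDEXED_TABLE = {(prefix_len, suffix): symbol
                       for prefix_len, _bits, suffix, symbol in SEQUENTIAL_TABLE}

      def decode_indexed(zero_prefix_len, suffix_value):
          return INDEXED_TABLE.get((zero_prefix_len, suffix_value))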

  16. Definition of an Ontology Matching Algorithm for Context Integration in Smart Cities.

    PubMed

    Otero-Cerdeira, Lorena; Rodríguez-Martínez, Francisco J; Gómez-Rodríguez, Alma

    2014-01-01

    In this paper we describe a novel proposal in the field of smart cities: using an ontology matching algorithm to guarantee the automatic information exchange between the agents and the smart city. A smart city is composed of different types of agents that behave as producers and/or consumers of the information in the smart city. In our proposal, the data from the context is obtained by sensor and device agents, while users interact with the smart city by means of user or system agents. The knowledge of each agent, as well as the smart city's knowledge, is semantically represented using different ontologies. To have an open city that is fully accessible to any agent, and therefore to provide enhanced services to the users, there is a need to ensure seamless communication between agents and the city, regardless of their inner knowledge representations, i.e., ontologies. To meet this goal we use ontology matching techniques; specifically, we have defined a new ontology matching algorithm called OntoPhil to be deployed within a smart city, which has never been done before. OntoPhil was tested on the benchmarks provided by the well-known evaluation initiative, the Ontology Alignment Evaluation Initiative, and also compared to other matching algorithms, although these algorithms were not specifically designed for smart cities. Additionally, specific tests involving a smart city's ontology and different types of agents were conducted to validate the usefulness of OntoPhil in the smart city environment. PMID:25494353

  17. The ChEBI reference database and ontology for biologically relevant chemistry: enhancements for 2013.

    PubMed

    Hastings, Janna; de Matos, Paula; Dekker, Adriano; Ennis, Marcus; Harsha, Bhavana; Kale, Namrata; Muthukrishnan, Venkatesh; Owen, Gareth; Turner, Steve; Williams, Mark; Steinbeck, Christoph

    2013-01-01

    ChEBI (http://www.ebi.ac.uk/chebi) is a database and ontology of chemical entities of biological interest. Over the past few years, ChEBI has continued to grow steadily in content, and has added several new features. In addition to incorporating all user-requested compounds, our annotation efforts have emphasized immunology, natural products and metabolites in many species. All database entries are now 'is_a' classified within the ontology, meaning that all of the chemicals are available to semantic reasoning tools that harness the classification hierarchy. We have completely aligned the ontology with the Open Biomedical Ontologies (OBO) Foundry-recommended upper level Basic Formal Ontology. Furthermore, we have aligned our chemical classification with the classification of chemical-involving processes in the Gene Ontology (GO), and as a result of this effort, the majority of chemical-involving processes in GO are now defined in terms of the ChEBI entities that participate in them. This effort necessitated incorporating many additional biologically relevant compounds. We have incorporated additional data types including reference citations, and the species and component for metabolites. Finally, our website and web services have had several enhancements, most notably the provision of a dynamic new interactive graph-based ontology visualization. PMID:23180789

  18. Definition of an Ontology Matching Algorithm for Context Integration in Smart Cities

    PubMed Central

    Otero-Cerdeira, Lorena; Rodríguez-Martínez, Francisco J.; Gómez-Rodríguez, Alma

    2014-01-01

    In this paper we describe a novel proposal in the field of smart cities: using an ontology matching algorithm to guarantee the automatic information exchange between the agents and the smart city. A smart city is composed of different types of agents that behave as producers and/or consumers of the information in the smart city. In our proposal, the data from the context is obtained by sensor and device agents, while users interact with the smart city by means of user or system agents. The knowledge of each agent, as well as the smart city's knowledge, is semantically represented using different ontologies. To have an open city that is fully accessible to any agent, and therefore to provide enhanced services to the users, there is a need to ensure seamless communication between agents and the city, regardless of their inner knowledge representations, i.e., ontologies. To meet this goal we use ontology matching techniques; specifically, we have defined a new ontology matching algorithm called OntoPhil to be deployed within a smart city, which has never been done before. OntoPhil was tested on the benchmarks provided by the well-known evaluation initiative, the Ontology Alignment Evaluation Initiative, and also compared to other matching algorithms, although these algorithms were not specifically designed for smart cities. Additionally, specific tests involving a smart city's ontology and different types of agents were conducted to validate the usefulness of OntoPhil in the smart city environment. PMID:25494353

  19. COBE: A Conjunctive Ontology Browser and Explorer for Visualizing SNOMED CT Fragments.

    PubMed

    Sun, Mengmeng; Zhu, Wei; Tao, Shiqiang; Cui, Licong; Zhang, Guo-Qiang

    2015-01-01

    Ontology search interfaces can benefit from the latest information retrieval advances. This paper introduces a Conjunctive Ontology Browser and Explorer (COBE) for searching and exploring SNOMED CT concepts and visualizing SNOMED CT fragments. COBE combines navigational exploration (NE) with direct lookup (DL) as two complementary modes for finding specific SNOMED CT concepts. The NE mode allows a user to interactively and incrementally narrow down (hence conjunctive) the search space by adding word stems, one at a time. Such word stems serve as attribute constraints, or "attributes" in Formal Concept Analysis, which allows the user to navigate to specific SNOMED CT concept clusters. The DL mode represents the common search mechanism by using a collection of key words, as well as concept identifiers. With respect to the DL mode, evaluation against manually created reference standard showed that COBE attains an example-based precision of 0.958, recall of 0.917, and F1 measure of 0.875. With respect to the NE mode, COBE leverages 28,371 concepts in non-lattice fragments to construct the stem cloud. With merely 9.37% of the total SNOMED CT stem cloud, our navigational exploration mode covers 98.97% of the entire concept collection. PMID:26958309
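
    The navigational-exploration mode described above amounts to intersecting concept sets as stems are added. A minimal Python sketch of that conjunctive narrowing follows; the stem index and concept identifiers are invented placeholders rather than SNOMED CT content, and COBE's actual Formal Concept Analysis machinery is considerably richer.

      # Toy inverted index from word stems to SNOMED CT-like concept ids;
      # the ids and stems are placeholders, not actual SNOMED CT content.
      STEM_INDEX = {
          "fractur": {"C1", "C2", "C3"},
          "femur":   {"C2", "C3", "C7"},
          "closed":  {"C3", "C9"},
      }

      def conjunctive_narrowing(selected_stems, index=STEM_INDEX):
          """Each added stem acts as an attribute constraint that intersects
          the remaining set of candidate concepts."""
          remaining = None
          for stem in selected_stems:
              concepts = index.get(stem, set())
              remaining = concepts if remaining is None else remaining & concepts
          return remaining or set()

      print(conjunctive_narrowing(["fractur", "femur"]))            # {'C2', 'C3'}
      print(conjunctive_narrowing(["fractur", "femur", "closed"]))  # {'C3'}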

  20. COBE: A Conjunctive Ontology Browser and Explorer for Visualizing SNOMED CT Fragments

    PubMed Central

    Sun, Mengmeng; Zhu, Wei; Tao, Shiqiang; Cui, Licong; Zhang, Guo-Qiang

    2015-01-01

    Ontology search interfaces can benefit from the latest information retrieval advances. This paper introduces a Conjunctive Ontology Browser and Explorer (COBE) for searching and exploring SNOMED CT concepts and visualizing SNOMED CT fragments. COBE combines navigational exploration (NE) with direct lookup (DL) as two complementary modes for finding specific SNOMED CT concepts. The NE mode allows a user to interactively and incrementally narrow down (hence conjunctive) the search space by adding word stems, one at a time. Such word stems serve as attribute constraints, or “attributes” in Formal Concept Analysis, which allows the user to navigate to specific SNOMED CT concept clusters. The DL mode represents the common search mechanism by using a collection of key words, as well as concept identifiers. With respect to the DL mode, evaluation against manually created reference standard showed that COBE attains an example-based precision of 0.958, recall of 0.917, and F1 measure of 0.875. With respect to the NE mode, COBE leverages 28,371 concepts in non-lattice fragments to construct the stem cloud. With merely 9.37% of the total SNOMED CT stem cloud, our navigational exploration mode covers 98.97% of the entire concept collection. PMID:26958309

  1. Ontology Reuse in Geoscience Semantic Applications

    NASA Astrophysics Data System (ADS)

    Mayernik, M. S.; Gross, M. B.; Daniels, M. D.; Rowan, L. R.; Stott, D.; Maull, K. E.; Khan, H.; Corson-Rikert, J.

    2015-12-01

    The tension between local ontology development and wider ontology connections is fundamental to the Semantic web. It is often unclear, however, what the key decision points should be for new semantic web applications in deciding when to reuse existing ontologies and when to develop original ontologies. In addition, with the growth of semantic web ontologies and applications, new semantic web applications can struggle to efficiently and effectively identify and select ontologies to reuse. This presentation will describe the ontology comparison, selection, and consolidation effort within the EarthCollab project. UCAR, Cornell University, and UNAVCO are collaborating on the EarthCollab project to use semantic web technologies to enable the discovery of the research output from a diverse array of projects. The EarthCollab project is using the VIVO Semantic web software suite to increase discoverability of research information and data related to the following two geoscience-based communities: (1) the Bering Sea Project, an interdisciplinary field program whose data archive is hosted by NCAR's Earth Observing Laboratory (EOL), and (2) diverse research projects informed by geodesy through the UNAVCO geodetic facility and consortium. This presentation will outline EarthCollab use cases and provide an overview of key ontologies being used, including the VIVO-Integrated Semantic Framework (VIVO-ISF), Global Change Information System (GCIS), and Data Catalog (DCAT) ontologies. We will discuss issues related to bringing these ontologies together to provide a robust ontological structure to support the EarthCollab use cases. It is rare that a single pre-existing ontology meets all of a new application's needs. New projects need to stitch ontologies together in ways that fit into the broader semantic web ecosystem.

  2. An Evolutionary Ontology Approach for Community-Based Competency Management

    NASA Astrophysics Data System (ADS)

    de Baer, Peter; Meersman, Robert; Zhao, Gang

    In this article we describe an evolutionary ontology approach that distinguishes between major ontology changes and minor ontology changes. We divide the community into three (possibly overlapping) groups, i.e. facilitators, contributors, and users. Facilitators are a selected group of domain experts who represent the intended community. These facilitators define the intended goals of the ontology and are responsible for major ontology and ontology platform changes. A larger group of contributors consists of all participating domain experts. The contributors carry out minor ontology changes, like instantiation of concepts and description of concept instances. Users of the ontology may explore the ontology content via the ontology platform and/or make use of the published ontology content in XML or HTML format. The approach makes use of goal- and group-specific user interfaces to guide the ontology evolution process. For the minor ontology changes, the approach relies on the wisdom of crowds.

  3. Revealing ontological commitments by magic.

    PubMed

    Griffiths, Thomas L

    2015-03-01

    Considering the appeal of different magical transformations exposes some systematic asymmetries. For example, it is more interesting to transform a vase into a rose than a rose into a vase. An experiment in which people judged how interesting they found different magic tricks showed that these asymmetries reflect the direction a transformation moves in an ontological hierarchy: transformations in the direction of animacy and intelligence are favored over the opposite. A second and third experiment demonstrated that judgments of the plausibility of machines that perform the same transformations do not show the same asymmetries, but judgments of the interestingness of such machines do. A formal argument relates this sense of interestingness to evidence for an alternative to our current physical theory, with magic tricks being a particularly pure source of such evidence. These results suggest that people's intuitions about magic tricks can reveal the ontological commitments that underlie human cognition. PMID:25490128

  4. Ontological Model for EHR Interoperability.

    PubMed

    Bouanani-Oukhaled, Zahra; Verdier, Christine; Dupuy-Chessa, Sophie; Fouladi, Karan; Breda, Laurent

    2016-01-01

    The main purpose of this paper is to design a data model for Electronic Health Records whose main goal is to enable cooperation of various heterogeneous health information systems. We investigate the interest of the meta-ontologies proposed in [1] by instantiating them with real data. We tested the feasibility of our model on real anonymous medical data provided by the Médibase Systèmes company. PMID:27350489

  5. Track Level Compensation Look-up Table Improves Antenna Pointing Precision

    NASA Technical Reports Server (NTRS)

    Gawronski, Wodek; Baher, Farrokh; Gama, Eric

    2006-01-01

    The pointing accuracy of the NASA Deep Space Network antennas is significantly impacted by the unevenness of the antenna azimuth track. The track unevenness causes repeatable antenna rotations and repeatable pointing errors. The paper presents the improvement of the pointing accuracy of the antennas by implementing the track-level-compensation look-up table. The table consists of three-axis rotations of the alidade as a function of the azimuth position. The paper presents the development of the table, based on measurements of the inclinometer tilts, processing of the measurement data, and determination of the three-axis alidade rotations from the tilt data. It also presents the determination of the elevation and cross-elevation errors of the antenna as a function of the alidade rotations. The pointing accuracy of the antenna with and without the table was measured using various radio beam pointing techniques. When the table was used, the pointing error decreased from 1.5 mdeg to 1.2 mdeg in elevation, and from 20.4 mdeg to 2.2 mdeg in cross-elevation.
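
    A minimal sketch of how such a look-up table might be applied is given below: tabulated rotations are interpolated at the commanded azimuth and returned as pointing corrections. The azimuth grid, the rotation values, and the direct mapping to elevation and cross-elevation corrections are hypothetical simplifications, not the DSN implementation.

      # Minimal look-up-table correction sketch (hypothetical values, not DSN data).
      import numpy as np

      azimuth_grid = np.array([0.0, 90.0, 180.0, 270.0, 360.0])   # deg
      elev_rotation = np.array([0.5, 1.0, -0.3, 0.8, 0.5])        # mdeg
      xel_rotation = np.array([1.2, -0.4, 0.9, 0.2, 1.2])         # mdeg

      def pointing_correction(azimuth_deg):
          """Interpolate the table at the commanded azimuth (degrees)."""
          az = azimuth_deg % 360.0
          d_el = np.interp(az, azimuth_grid, elev_rotation)
          d_xel = np.interp(az, azimuth_grid, xel_rotation)
          return d_el, d_xel

      print(pointing_correction(135.0))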

  6. Modeling high-energy cosmic ray induced terrestrial muon flux: A lookup table

    NASA Astrophysics Data System (ADS)

    Atri, Dimitra; Melott, Adrian L.

    2011-06-01

    On geological timescales, the Earth is likely to be exposed to an increased flux of high-energy cosmic rays (HECRs) from astrophysical sources such as nearby supernovae, gamma-ray bursts or galactic shocks. Typical cosmic ray energies may be much higher than the ≤1 GeV flux which normally dominates. These high-energy particles strike the Earth's atmosphere, initiating an extensive air shower. As the air shower propagates deeper, it ionizes the atmosphere by producing charged secondary particles. Secondary particles such as muons and thermal neutrons produced as a result of nuclear interactions are able to reach the ground, enhancing the radiation dose. Muons contribute 85% of the radiation dose from cosmic rays. This enhanced dose could be potentially harmful to the biosphere. This mechanism has been discussed extensively in the literature but has never been quantified. Here, we have developed a lookup table that can be used to quantify this effect by modeling the terrestrial muon flux from any arbitrary cosmic ray spectrum with 10 GeV to 1 PeV primaries. This will enable us to compute the radiation dose on terrestrial planetary surfaces from a number of astrophysical sources.
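
    A lookup table of this kind is typically applied by weighting the tabulated muon yield per primary with a chosen primary spectrum and integrating over energy. The sketch below shows that folding for a hypothetical power-law spectrum; the energy grid, yield values, and spectral index are placeholders, not the paper's tabulated results.

      # Fold a primary cosmic-ray spectrum with a tabulated muon yield per primary.
      # Energies, yields, and the spectral index are illustrative placeholders.
      import numpy as np

      primary_energy = np.logspace(1, 6, 6)                    # GeV, 10 GeV to 1 PeV
      muon_yield = np.array([1e0, 3e1, 6e2, 1e4, 2e5, 3e6])    # muons per primary (made up)

      def terrestrial_muon_flux(spectral_index=2.7, norm=1.0):
          """Trapezoidal integral of spectrum(E) * yield(E) over the tabulated grid."""
          spectrum = norm * primary_energy ** (-spectral_index)   # dN/dE
          integrand = spectrum * muon_yield
          return float(np.sum(0.5 * (integrand[1:] + integrand[:-1])
                              * np.diff(primary_energy)))

      print(terrestrial_muon_flux())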

  7. Lookup Tables for Predicting CHF and Film-Boiling Heat Transfer: Past, Present, and Future

    SciTech Connect

    Groeneveld, D.C.; Leung, L.K. H.; Guo, Y.; Vasic, A.; El Nakla, M.; Peng, S.W.; Yang, J.; Cheng, S.C.

    2005-10-15

    Lookup tables (LUTs) have been used widely for the prediction of critical heat flux (CHF) and film-boiling heat transfer for water-cooled tubes. LUTs are basically normalized data banks. They eliminate the need to choose between the many different CHF and film-boiling heat transfer prediction methods available. The LUTs have many advantages; e.g., (a) they are simple to use, (b) there is no iteration required, (c) they have a wide range of applications, (d) they may be applied to nonaqueous fluids using fluid-to-fluid modeling relationships, and (e) they are based on a very large database. Concerns associated with the use of LUTs include (a) there are fluctuations in the value of the CHF or film-boiling heat transfer coefficient (HTC) with pressure, mass flux, and quality, (b) there are large variations in the CHF or the film-boiling HTC between adjacent table entries, and (c) there is a lack or scarcity of data at certain flow conditions. Work on the LUTs is continuing. This will resolve the aforementioned concerns and improve the LUT prediction capability. This work concentrates on better smoothing of the LUT entries, increasing the database, and improving models at conditions where data are sparse or absent.
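
    In use, a CHF lookup table is queried by interpolating between the tabulated pressure, mass-flux, and quality entries. The sketch below performs trilinear interpolation on a tiny synthetic table with SciPy; the grid values and the CHF surface are illustrative only and bear no relation to the actual LUT entries.

      # Trilinear interpolation in a toy CHF lookup table (values are synthetic).
      import numpy as np
      from scipy.interpolate import RegularGridInterpolator

      pressure = np.array([1.0, 5.0, 10.0])            # MPa
      mass_flux = np.array([500.0, 1000.0, 2000.0])    # kg m^-2 s^-1
      quality = np.array([0.0, 0.5, 1.0])              # thermodynamic quality

      # Synthetic, smooth CHF surface standing in for tabulated entries (kW m^-2).
      P, G, X = np.meshgrid(pressure, mass_flux, quality, indexing="ij")
      chf_table = 4000.0 - 150.0 * P - 0.5 * G - 1500.0 * X

      lookup = RegularGridInterpolator((pressure, mass_flux, quality), chf_table)
      print(lookup([[7.0, 1500.0, 0.3]]))  # CHF at an intermediate state point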

  8. Legal Ontologies and Loopholes in the Law

    NASA Astrophysics Data System (ADS)

    Lovrenčić, Sandra; Tomac, Ivorka Jurenec; Mavrek, Blaženka

    The use of ontologies is today widely spread across many different domains. With the development of the Semantic Web, the main effort today is to make them available to the Internet community for the purpose of reuse. The legal domain has also been explored with respect to ontologies, both at the general and at the sub-domain level. This paper explores problems of formal ontology development regarding areas in specific legislation acts that are understated or unequally described across the act — popularly called loopholes in the law. An example of such a problematic act is shown. For ontology implementation, a well-known tool, Protégé, is used. The ontology is built in a formal way, using PAL (Protégé Axiom Language) to express constraints where needed. The ontology is evaluated using known evaluation methods.

  9. Anatomy Ontology Matching Using Markov Logic Networks

    PubMed Central

    Li, Chunhua; Zhao, Pengpeng; Wu, Jian; Cui, Zhiming

    2016-01-01

    The anatomy of model species is described in ontologies, which are used to standardize the annotations of experimental data, such as gene expression patterns. To compare such data between species, we need to establish relationships between ontologies describing different species. Ontology matching is a kind of solution for finding semantic correspondences between entities of different ontologies. Markov logic networks, which unify probabilistic graphical models and first-order logic, provide an excellent framework for ontology matching. We combine several different matching strategies through first-order logic formulas according to the structure of anatomy ontologies. Experiments on the adult mouse anatomy and the human anatomy have demonstrated the effectiveness of the proposed approach in terms of the quality of the resulting alignment. PMID:27382498

  10. A Monte Carlo based lookup table for spectrum analysis of turbid media in the reflectance probe regime

    SciTech Connect

    Xiang Wen; Xiewei Zhong; Tingting Yu; Dan Zhu

    2014-07-31

    Fibre-optic diffuse reflectance spectroscopy offers a method for characterising phantoms of biotissue with specified optical properties. For a commercial reflectance probe (six source fibres surrounding a central collection fibre with an inter-fibre spacing of 480 μm; R400-7, Ocean Optics, USA) we have constructed a Monte Carlo based lookup table to create a function called getR(μ_a, μ′_s), where μ_a is the absorption coefficient and μ′_s is the reduced scattering coefficient. Experimental measurements of reflectance from homogeneous calibrated phantoms with given optical properties are compared with the predicted reflectance from the lookup table. The deviation between experiment and prediction is on average 12.1%. (laser biophotonics)
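
    The validation step described above amounts to comparing a measured reflectance against the table's prediction for the phantom's known optical properties. The sketch below mimics that comparison with a made-up analytic stand-in for the Monte Carlo table; the grid ranges, the placeholder reflectance formula, and the sample values are assumptions.

      # Compare a measured reflectance with the table prediction for a phantom of
      # known optical properties. The "table" is a made-up analytic stand-in for
      # the Monte Carlo getR(mu_a, mu_s') grid.
      import numpy as np

      mua_grid = np.linspace(0.01, 1.0, 50)      # absorption coefficient, mm^-1
      musp_grid = np.linspace(0.5, 3.0, 50)      # reduced scattering coeff., mm^-1
      MUA, MUSP = np.meshgrid(mua_grid, musp_grid, indexing="ij")
      table = MUSP / (MUSP + 10.0 * MUA)         # placeholder reflectance values

      def getR(mua, musp):
          """Nearest-grid lookup standing in for interpolation in the real table."""
          i = int(np.abs(mua_grid - mua).argmin())
          j = int(np.abs(musp_grid - musp).argmin())
          return table[i, j]

      measured = 0.35                            # hypothetical probe reading
      predicted = getR(0.2, 1.8)                 # phantom with known mu_a, mu_s'
      print(f"deviation: {abs(measured - predicted) / predicted:.1%}")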

  11. Toward cognitivist ontologies: on the role of selective attention for upper ontologies.

    PubMed

    Carstensen, Kai-Uwe

    2011-11-01

    Ontologies play a key role in modern information society although there are still many fundamental questions regarding their structure to be answered. In this paper, some of these are presented, and it is argued that they require a shift from realist to cognitivist ontologies, with ontology design crucially depending on taking both cognitive and linguistic aspects into consideration. A detailed discussion of central parts of a proposed cognitivist upper ontology based on qualitative representations of selective attention is presented. PMID:21523446

  12. A Marketplace for Ontologies and Ontology-Based Tools and Applications in the Life Sciences

    SciTech Connect

    McEntire, R; Goble, C; Stevens, R; Neumann, E; Matuszek, P; Critchlow, T; Tarczy-Hornoch, P

    2005-06-30

    This paper describes a strategy for the development of ontologies in the life sciences, tools to support the creation and use of those ontologies, and a framework whereby these ontologies can support the development of commercial applications within the field. At the core of these efforts is the need for an organization that will provide a focus for ontology work that will engage researchers as well as drive forward the commercial aspects of this effort.

  13. Efficient table lookup without inverse square roots for calculation of pair wise atomic interactions in classical simulations.

    PubMed

    Nilsson, Lennart

    2009-07-15

    A major bottleneck in classical atomistic simulations of biomolecular systems is the calculation of the pairwise nonbonded (Coulomb, van der Waals) interactions. This remains an issue even when methods are used (e.g., lattice summation or spherical cutoffs) in which the number of interactions is reduced from O(N²) to O(N log N) or O(N). The interaction forces and energies can either be calculated directly each time they are needed or retrieved using precomputed values in a lookup table; the choice between direct calculation and table lookup methods depends on the characteristics of the system studied (total number of particles and the number of particle kinds) as well as the hardware used (CPU speed, size and speed of cache, and main memory). A recently developed lookup table code, implemented in portable and easily maintained FORTRAN 95 in the CHARMM program (www.charmm.org), achieves a 1.5- to 2-fold speedup compared with standard calculations using highly optimized FORTRAN code in real molecular dynamics simulations for a wide range of molecular system sizes. No approximations other than the finite resolution of the tables are introduced, and linear interpolation in a table with the relatively modest density of 100 points/Å² yields the same accuracy as the standard double precision calculations. For proteins in explicit water a less dense table (10 points/Å²) is 10-20% faster than using the larger table, and only slightly less accurate. The lookup table is even faster than hand-coded assembler routines in most cases, mainly due to a significantly smaller operation count inside the inner loop. PMID:19072764
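
    The trick that avoids inverse square roots is to index the table by the squared interparticle distance, so the inner loop never computes r itself. The sketch below builds such an r²-indexed table for a toy Lennard-Jones-style energy with linear interpolation; the table density, cutoff, and potential are illustrative, not the CHARMM implementation.

      # Energy lookup indexed by r^2 so the inner loop avoids sqrt entirely.
      # The tabulated potential, cutoff, and resolution are illustrative only.
      import numpy as np

      R2_MAX = 144.0            # cutoff^2, e.g. a 12 A cutoff
      POINTS_PER_A2 = 100       # table density (points per A^2)
      r2_grid = np.linspace(1e-2, R2_MAX, int(R2_MAX * POINTS_PER_A2))
      # Lennard-Jones (sigma = epsilon = 1) written directly in terms of r^2.
      energy_table = 4.0 * ((1.0 / r2_grid) ** 6 - (1.0 / r2_grid) ** 3)

      def pair_energy(dx, dy, dz):
          """Linear interpolation in the r^2-indexed table; no sqrt needed."""
          r2 = dx * dx + dy * dy + dz * dz
          if r2 >= R2_MAX:
              return 0.0
          return float(np.interp(r2, r2_grid, energy_table))

      print(pair_energy(1.0, 1.0, 0.5))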

  14. Vaccine and Drug Ontology Studies (VDOS 2014).

    PubMed

    Tao, Cui; He, Yongqun; Arabandi, Sivaram

    2016-01-01

    The "Vaccine and Drug Ontology Studies" (VDOS) international workshop series focuses on vaccine- and drug-related ontology modeling and applications. Drugs and vaccines have been critical to prevent and treat human and animal diseases. Work in both (drugs and vaccines) areas is closely related - from preclinical research and development to manufacturing, clinical trials, government approval and regulation, and post-licensure usage surveillance and monitoring. Over the last decade, tremendous efforts have been made in the biomedical ontology community to ontologically represent various areas associated with vaccines and drugs - extending existing clinical terminology systems such as SNOMED, RxNorm, NDF-RT, and MedDRA, developing new models such as the Vaccine Ontology (VO) and Ontology of Adverse Events (OAE), vernacular medical terminologies such as the Consumer Health Vocabulary (CHV). The VDOS workshop series provides a platform for discussing innovative solutions as well as the challenges in the development and applications of biomedical ontologies for representing and analyzing drugs and vaccines, their administration, host immune responses, adverse events, and other related topics. The five full-length papers included in this 2014 thematic issue focus on two main themes: (i) General vaccine/drug-related ontology development and exploration, and (ii) Interaction and network-related ontology studies. PMID:26918107

  15. Predicting the Extension of Biomedical Ontologies

    PubMed Central

    Pesquita, Catia; Couto, Francisco M.

    2012-01-01

    Developing and extending a biomedical ontology is a very demanding task that can never be considered complete given our ever-evolving understanding of the life sciences. Extension in particular can benefit from the automation of some of its steps, thus releasing experts to focus on harder tasks. Here we present a strategy to support the automation of change capturing within ontology extension where the need for new concepts or relations is identified. Our strategy is based on predicting areas of an ontology that will undergo extension in a future version by applying supervised learning over features of previous ontology versions. We used the Gene Ontology as our test bed and obtained encouraging results with average f-measure reaching 0.79 for a subset of biological process terms. Our strategy was also able to outperform state of the art change capturing methods. In addition we have identified several issues concerning prediction of ontology evolution, and have delineated a general framework for ontology extension prediction. Our strategy can be applied to any biomedical ontology with versioning, to help focus either manual or semi-automated extension methods on areas of the ontology that need extension. PMID:23028267
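
    The general setup, supervised learning over per-term features of an earlier ontology version with labels taken from a later version, can be sketched as below with scikit-learn. The feature names, the toy data, and the random-forest choice are assumptions for illustration and are not the paper's feature set or classifier.

      # Toy version of "predict which ontology terms will be extended":
      # features computed on an earlier version, labels taken from a later one.
      import numpy as np
      from sklearn.ensemble import RandomForestClassifier

      # Hypothetical per-term features: [n_children, n_annotations, depth]
      X_old = np.array([[2, 150, 3], [0, 4, 7], [5, 900, 2], [1, 30, 5]])
      y_extended = np.array([1, 0, 1, 0])   # 1 = term gained children later

      model = RandomForestClassifier(n_estimators=50, random_state=0)
      model.fit(X_old, y_extended)

      # Score terms of the current version to prioritise curation effort.
      X_new = np.array([[3, 400, 3], [0, 2, 8]])
      print(model.predict_proba(X_new)[:, 1])   # probability of future extension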

  16. FYPO: the fission yeast phenotype ontology

    PubMed Central

    Harris, Midori A.; Lock, Antonia; Bähler, Jürg; Oliver, Stephen G.; Wood, Valerie

    2013-01-01

    Motivation: To provide consistent computable descriptions of phenotype data, PomBase is developing a formal ontology of phenotypes observed in fission yeast. Results: The fission yeast phenotype ontology (FYPO) is a modular ontology that uses several existing ontologies from the open biological and biomedical ontologies (OBO) collection as building blocks, including the phenotypic quality ontology PATO, the Gene Ontology and Chemical Entities of Biological Interest. Modular ontology development facilitates partially automated effective organization of detailed phenotype descriptions with complex relationships to each other and to underlying biological phenomena. As a result, FYPO supports sophisticated querying, computational analysis and comparison between different experiments and even between species. Availability: FYPO releases are available from the Subversion repository at the PomBase SourceForge project page (https://sourceforge.net/p/pombase/code/HEAD/tree/phenotype_ontology/). The current version of FYPO is also available on the OBO Foundry Web site (http://obofoundry.org/). Contact: mah79@cam.ac.uk or vw253@cam.ac.uk PMID:23658422

  17. Scientific Digital Libraries, Interoperability, and Ontologies

    NASA Technical Reports Server (NTRS)

    Hughes, J. Steven; Crichton, Daniel J.; Mattmann, Chris A.

    2009-01-01

    Scientific digital libraries serve complex and evolving research communities. Justifications for the development of scientific digital libraries include the desire to preserve science data and the promises of information interconnectedness, correlative science, and system interoperability. Shared ontologies are fundamental to fulfilling these promises. We present a tool framework, some informal principles, and several case studies where shared ontologies are used to guide the implementation of scientific digital libraries. The tool framework, based on an ontology modeling tool, was configured to develop, manage, and keep shared ontologies relevant within changing domains and to promote the interoperability, interconnectedness, and correlation desired by scientists.

  18. How Ontologies are Made: Studying the Hidden Social Dynamics Behind Collaborative Ontology Engineering Projects

    PubMed Central

    Strohmaier, Markus; Walk, Simon; Pöschko, Jan; Lamprecht, Daniel; Tudorache, Tania; Nyulas, Csongor; Musen, Mark A.; Noy, Natalya F.

    2013-01-01

    Traditionally, evaluation methods in the field of semantic technologies have focused on the end result of ontology engineering efforts, mainly, on evaluating ontologies and their corresponding qualities and characteristics. This focus has led to the development of a whole arsenal of ontology-evaluation techniques that investigate the quality of ontologies as a product. In this paper, we aim to shed light on the process of ontology engineering construction by introducing and applying a set of measures to analyze hidden social dynamics. We argue that especially for ontologies which are constructed collaboratively, understanding the social processes that have led to its construction is critical not only in understanding but consequently also in evaluating the ontology. With the work presented in this paper, we aim to expose the texture of collaborative ontology engineering processes that is otherwise left invisible. Using historical change-log data, we unveil qualitative differences and commonalities between different collaborative ontology engineering projects. Explaining and understanding these differences will help us to better comprehend the role and importance of social factors in collaborative ontology engineering projects. We hope that our analysis will spur a new line of evaluation techniques that view ontologies not as the static result of deliberations among domain experts, but as a dynamic, collaborative and iterative process that needs to be understood, evaluated and managed in itself. We believe that advances in this direction would help our community to expand the existing arsenal of ontology evaluation techniques towards more holistic approaches. PMID:24311994

  19. Where to Publish and Find Ontologies? A Survey of Ontology Libraries

    PubMed Central

    d'Aquin, Mathieu; Noy, Natalya F.

    2011-01-01

    One of the key promises of the Semantic Web is its potential to enable and facilitate data interoperability. The ability of data providers and application developers to share and reuse ontologies is a critical component of this data interoperability: if different applications and data sources use the same set of well defined terms for describing their domain and data, it will be much easier for them to “talk” to one another. Ontology libraries are the systems that collect ontologies from different sources and facilitate the tasks of finding, exploring, and using these ontologies. Thus ontology libraries can serve as a link in enabling diverse users and applications to discover, evaluate, use, and publish ontologies. In this paper, we provide a survey of the growing—and surprisingly diverse—landscape of ontology libraries. We highlight how the varying scope and intended use of the libraries affects their features, content, and potential exploitation in applications. From reviewing eleven ontology libraries, we identify a core set of questions that ontology practitioners and users should consider in choosing an ontology library for finding ontologies or publishing their own. We also discuss the research challenges that emerge from this survey, for the developers of ontology libraries to address. PMID:22408576

  20. How Ontologies are Made: Studying the Hidden Social Dynamics Behind Collaborative Ontology Engineering Projects.

    PubMed

    Strohmaier, Markus; Walk, Simon; Pöschko, Jan; Lamprecht, Daniel; Tudorache, Tania; Nyulas, Csongor; Musen, Mark A; Noy, Natalya F

    2013-05-01

    Traditionally, evaluation methods in the field of semantic technologies have focused on the end result of ontology engineering efforts, mainly, on evaluating ontologies and their corresponding qualities and characteristics. This focus has led to the development of a whole arsenal of ontology-evaluation techniques that investigate the quality of ontologies as a product. In this paper, we aim to shed light on the process of ontology engineering construction by introducing and applying a set of measures to analyze hidden social dynamics. We argue that especially for ontologies which are constructed collaboratively, understanding the social processes that have led to its construction is critical not only in understanding but consequently also in evaluating the ontology. With the work presented in this paper, we aim to expose the texture of collaborative ontology engineering processes that is otherwise left invisible. Using historical change-log data, we unveil qualitative differences and commonalities between different collaborative ontology engineering projects. Explaining and understanding these differences will help us to better comprehend the role and importance of social factors in collaborative ontology engineering projects. We hope that our analysis will spur a new line of evaluation techniques that view ontologies not as the static result of deliberations among domain experts, but as a dynamic, collaborative and iterative process that needs to be understood, evaluated and managed in itself. We believe that advances in this direction would help our community to expand the existing arsenal of ontology evaluation techniques towards more holistic approaches. PMID:24311994

  1. Towards Ontology-Driven Information Systems: Guidelines to the Creation of New Methodologies to Build Ontologies

    ERIC Educational Resources Information Center

    Soares, Andrey

    2009-01-01

    This research targeted the area of Ontology-Driven Information Systems, where ontology plays a central role both at development time and at run time of Information Systems (IS). In particular, the research focused on the process of building domain ontologies for IS modeling. The motivation behind the research was the fact that researchers have…

  2. Surreptitious, Evolving and Participative Ontology Development: An End-User Oriented Ontology Development Methodology

    ERIC Educational Resources Information Center

    Bachore, Zelalem

    2012-01-01

    Ontology not only is considered to be the backbone of the semantic web but also plays a significant role in distributed and heterogeneous information systems. However, ontology still faces limited application and adoption to date. One of the major problems is that prevailing engineering-oriented methodologies for building ontologies do not…

  3. Research on geo-ontology construction based on spatial affairs

    NASA Astrophysics Data System (ADS)

    Li, Bin; Liu, Jiping; Shi, Lihong

    2008-12-01

    Geo-ontology, a kind of domain ontology, abstracts the knowledge, information and data of geographical science into a series of single objects or entities with a shared, common understanding. These objects or entities can compose a specific system in a certain way, be handled at the conceptual level and given specific definitions, and ultimately be expressed in a formalized manner. The main aim of constructing a geo-ontology is to capture the knowledge of the geographical domain, to provide the commonly agreed vocabularies of the domain, and to give definite definitions of these geographical vocabularies and the mutual relations between them, formalized at different hierarchical levels. Consequently, a modeling tool for the conceptual model describing geographic information systems at the level of semantic meaning and knowledge can be provided to solve the semantic problems of information exchange in geographical space and to give that exchange comparatively good accuracy, maturity and universality. Some experiments have been made to validate the geo-ontology. In the course of the study, a geo-ontology oriented to floods was described and constructed using a method based on geo-spatial affairs, to serve governmental departments at all levels in dealing with floods. Intelligent retrieval and services based on the disaster geo-ontology are its main functions, distinguishing it from the traditional keyword-based manner. For instance, the function of handling disaster information based on the geo-ontology can be provided when a supposed flood happens in a certain city. The relevant officers can input some words, such as "city name, flood", which have been semantically labelled, to get the information they need when they browse different websites. The information, including basic geographical information and flood distributing

  4. Ontodog: a web-based ontology community view generation tool.

    PubMed

    Zheng, Jie; Xiang, Zuoshuang; Stoeckert, Christian J; He, Yongqun

    2014-05-01

    Biomedical ontologies are often very large and complex. Only a subset of the ontology may be needed for a specified application or community. For ontology end users, it is desirable to have community-based labels rather than the labels generated by ontology developers. Ontodog is a web-based system that can generate an ontology subset based on Excel input, and support generation of an ontology community view, which is defined as the whole or a subset of the source ontology with user-specified annotations including user-preferred labels. Ontodog allows users to easily generate community views with minimal ontology knowledge and no programming skills or installation required. Currently >100 ontologies including all OBO Foundry ontologies are available to generate the views based on user needs. We demonstrate the application of Ontodog for the generation of community views using the Ontology for Biomedical Investigations as the source ontology. PMID:24413522

  5. Federated ontology-based queries over cancer data

    PubMed Central

    2012-01-01

    Background Personalised medicine provides patients with treatments that are specific to their genetic profiles. It requires efficient data sharing of disparate data types across a variety of scientific disciplines, such as molecular biology, pathology, radiology and clinical practice. Personalised medicine aims to offer the safest and most effective therapeutic strategy based on the gene variations of each subject. In particular, this is valid in oncology, where knowledge about genetic mutations has already led to new therapies. Current molecular biology techniques (microarrays, proteomics, epigenetic technology and improved DNA sequencing technology) enable better characterisation of cancer tumours. The vast amounts of data, however, coupled with the use of different terms - or semantic heterogeneity - in each discipline makes the retrieval and integration of information difficult. Results Existing software infrastructures for data-sharing in the cancer domain, such as caGrid, support access to distributed information. caGrid follows a service-oriented model-driven architecture. Each data source in caGrid is associated with metadata at increasing levels of abstraction, including syntactic, structural, reference and domain metadata. The domain metadata consists of ontology-based annotations associated with the structural information of each data source. However, caGrid's current querying functionality is given at the structural metadata level, without capitalising on the ontology-based annotations. This paper presents the design of and theoretical foundations for distributed ontology-based queries over cancer research data. Concept-based queries are reformulated to the target query language, where join conditions between multiple data sources are found by exploiting the semantic annotations. The system has been implemented, as a proof of concept, over the caGrid infrastructure. The approach is applicable to other model-driven architectures. A graphical user

  6. A method of extracting ontology module using concept relations for sharing knowledge in mobile cloud computing environment.

    PubMed

    Lee, Keonsoo; Rho, Seungmin; Lee, Seok-Won

    2014-01-01

    In a mobile cloud computing environment, the cooperation of distributed computing objects is one of the most important requirements for providing successful cloud services. To satisfy this requirement, all the members who are employed in the cooperation group need to share knowledge for mutual understanding. Even if ontology can be the right tool for this goal, there are several issues in building the right ontology. As the cost and complexity of managing knowledge increase with the scale of the knowledge, reducing the size of the ontology is one of the critical issues. In this paper, we propose a method of extracting an ontology module to increase the utility of knowledge. For a given signature, this method extracts an ontology module that is semantically self-contained to fulfill the needs of the service, by considering the syntactic structure and semantic relations of concepts. By employing this module, instead of the original ontology, the cooperation of computing objects can be performed with less computing load and complexity. In particular, when multiple external ontologies need to be combined for more complex services, this method can be used to optimize the size of the shared knowledge. PMID:25250374
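
    A much-simplified sketch of signature-based module extraction is shown below: starting from the concepts in a service's signature, the module is grown by following subsumption and relation edges until it is closed. The toy ontology, the edge names, and the naive closure rule are assumptions, far simpler than locality-based extraction.

      # Naive signature-based module extraction: collect every concept reachable
      # from the signature via "is_a" and "related_to" edges (a toy closure rule).
      ontology = {
          "SmartPhone": {"is_a": ["Device"], "related_to": ["Battery"]},
          "Device": {"is_a": ["Artifact"], "related_to": []},
          "Battery": {"is_a": ["Component"], "related_to": []},
          "Artifact": {"is_a": [], "related_to": []},
          "Component": {"is_a": [], "related_to": []},
          "Kitchen": {"is_a": ["Room"], "related_to": []},   # unrelated branch
          "Room": {"is_a": [], "related_to": []},
      }

      def extract_module(signature):
          """Return the self-contained subset of concepts needed for the signature."""
          module, frontier = set(), list(signature)
          while frontier:
              concept = frontier.pop()
              if concept in module or concept not in ontology:
                  continue
              module.add(concept)
              edges = ontology[concept]
              frontier.extend(edges["is_a"] + edges["related_to"])
          return module

      print(extract_module({"SmartPhone"}))  # excludes the unrelated Kitchen branch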

  7. XOA: Web-Enabled Cross-Ontological Analytics

    SciTech Connect

    Riensche, Roderick M.; Baddeley, Bob; Sanfilippo, Antonio P.; Posse, Christian; Gopalan, Banu

    2007-07-09

    The paper being submitted (as an "extended abstract" prior to conference acceptance) provides a technical description of our proof-of-concept prototype for the XOA method. Abstract: To address meaningful questions, scientists need to relate information across diverse classification schemes such as ontologies, terminologies and thesauri. These resources typically address a single knowledge domain at a time and are not cross-indexed. Information that is germane to the same object may therefore remain unlinked with consequent loss of knowledge discovery across disciplines and even sub-domains of the same discipline. We propose to address these problems by fostering semantic interoperability through the development of ontology alignment web services capable of enabling cross-scale knowledge discovery, and demonstrate a specific application of such an approach to the biomedical domain.

  8. Ontology driven health information systems architectures enable pHealth for empowered patients.

    PubMed

    Blobel, Bernd

    2011-02-01

    The paradigm shift from organization-centered to managed care and on to personal health settings increases specialization and distribution of actors and services related to the health of patients or even citizens before becoming patients. As a consequence, extended communication and cooperation is required between all principals involved in health services such as persons, organizations, devices, systems, applications, and components. Personal health (pHealth) environments range over many disciplines, where domain experts present their knowledge by using domain-specific terminologies and ontologies. Therefore, the mapping of domain ontologies is inevitable for ensuring interoperability. The paper introduces the care paradigms and the related requirements as well as an architectural approach for meeting the business objectives. Furthermore, it discusses some theoretical challenges and practical examples of ontologies, concept and knowledge representations, starting general and then focusing on security and privacy related services. The requirements and solutions for empowering the patient or the citizen before becoming a patient are especially emphasized. PMID:21036660

  9. BioPortal: An Open-Source Community-Based Ontology Repository

    NASA Astrophysics Data System (ADS)

    Noy, N.; NCBO Team

    2011-12-01

    Advances in computing power and new computational techniques have changed the way researchers approach science. In many fields, one of the most fruitful approaches has been to use semantically aware software to break down the barriers among disparate domains, systems, data sources, and technologies. Such software facilitates data aggregation, improves search, and ultimately allows the detection of new associations that were previously not detectable. Achieving these analyses requires software systems that take advantage of the semantics and that can intelligently negotiate domains and knowledge sources, identifying commonality across systems that use different and conflicting vocabularies, while understanding apparent differences that may be concealed by the use of superficially similar terms. An ontology, a semantically rich vocabulary for a domain of interest, is the cornerstone of software for bridging systems, domains, and resources. However, as ontologies become the foundation of all semantic technologies in e-science, we must develop an infrastructure for sharing ontologies, finding and evaluating them, integrating and mapping among them, and using ontologies in applications that help scientists process their data. BioPortal [1] is an open-source on-line community-based ontology repository that has been used as a critical component of semantic infrastructure in several domains, including biomedicine and bio-geochemical data. BioPortal, uses the social approaches in the Web 2.0 style to bring structure and order to the collection of biomedical ontologies. It enables users to provide and discuss a wide array of knowledge components, from submitting the ontologies themselves, to commenting on and discussing classes in the ontologies, to reviewing ontologies in the context of their own ontology-based projects, to creating mappings between overlapping ontologies and discussing and critiquing the mappings. Critically, it provides web-service access to all its

  10. A Gross Anatomy Ontology for Hymenoptera

    PubMed Central

    Yoder, Matthew J.; Mikó, István; Seltmann, Katja C.; Bertone, Matthew A.; Deans, Andrew R.

    2010-01-01

    Hymenoptera is an extraordinarily diverse lineage, both in terms of species numbers and morphotypes, that includes sawflies, bees, wasps, and ants. These organisms serve critical roles as herbivores, predators, parasitoids, and pollinators, with several species functioning as models for agricultural, behavioral, and genomic research. The collective anatomical knowledge of these insects, however, has been described or referred to by labels derived from numerous, partially overlapping lexicons. The resulting corpus of information—millions of statements about hymenopteran phenotypes—remains inaccessible due to language discrepancies. The Hymenoptera Anatomy Ontology (HAO) was developed to surmount this challenge and to aid future communication related to hymenopteran anatomy. The HAO was built using newly developed interfaces within mx, a Web-based, open source software package, that enables collaborators to simultaneously contribute to an ontology. Over twenty people contributed to the development of this ontology by adding terms, genus differentia, references, images, relationships, and annotations. The database interface returns an Open Biomedical Ontology (OBO) formatted version of the ontology and includes mechanisms for extracting candidate data and for publishing a searchable ontology to the Web. The application tools are subject-agnostic and may be used by others initiating and developing ontologies. The present core HAO data constitute 2,111 concepts, 6,977 terms (labels for concepts), 3,152 relations, 4,361 sensus (links between terms, concepts, and references) and over 6,000 text and graphical annotations. The HAO is rooted with the Common Anatomy Reference Ontology (CARO), in order to facilitate interoperability with and future alignment to other anatomy ontologies, and is available through the OBO Foundry ontology repository and BioPortal. The HAO provides a foundation through which connections between genomic, evolutionary developmental biology

  11. Issues in learning an ontology from text

    PubMed Central

    Brewster, Christopher; Jupp, Simon; Luciano, Joanne; Shotton, David; Stevens, Robert D; Zhang, Ziqi

    2009-01-01

    Ontology construction for any domain is a labour-intensive and complex process. Any methodology that can reduce the cost and increase efficiency has the potential to make a major impact in the life sciences. This paper describes an experiment in ontology construction from text for the animal behaviour domain. Our objective was to see how much could be done in a simple and relatively rapid manner using a corpus of journal papers. We used a sequence of pre-existing text processing steps, and here describe the different choices made to clean the input, to derive a set of terms and to structure those terms in a number of hierarchies. We describe some of the challenges, especially that of focusing the ontology appropriately given a starting point of a heterogeneous corpus. Using mainly automated techniques, we were able to construct an 18,055-term ontology-like structure with 73% recall of animal behaviour terms, but a precision of only 26%. We were able to clean unwanted terms from the nascent ontology using lexico-syntactic patterns that tested the validity of term inclusion within the ontology. We used the same technique to test for subsumption relationships between the remaining terms to add structure to the initially broad and shallow structure we generated. All outputs are available at . We present a systematic method for the initial steps of ontology or structured vocabulary construction for scientific domains that requires limited human effort and can make a contribution both to ontology learning and maintenance. The method is useful both for the exploration of a scientific domain and as a stepping stone towards formally rigorous ontologies. The filtering of recognised terms from a heterogeneous corpus to focus upon those that are the topic of the ontology is identified to be one of the main challenges for research in ontology learning. PMID:19426458
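
    Lexico-syntactic filtering of this kind is often implemented with Hearst-style patterns such as "X such as Y". The sketch below applies one such pattern with a regular expression; the single pattern and the example sentences are illustrative, and real extraction would use noun-phrase chunking and a larger pattern set.

      # Hearst-style pattern matching to propose candidate subsumption relations.
      # A single illustrative pattern; real systems use a larger, curated set and
      # noun-phrase chunking rather than raw regular expressions.
      import re

      PATTERN = re.compile(r"([A-Za-z][A-Za-z ]*?)\s+such as\s+([A-Za-z]+)")

      sentences = [
          "Aggressive behaviours such as biting were recorded in juveniles.",
          "Vocalisations such as chirping increased during courtship.",
      ]

      for sentence in sentences:
          match = PATTERN.search(sentence)
          if match:
              hypernym, hyponym = match.group(1).strip(), match.group(2)
              print(f"candidate: '{hyponym}' is_a '{hypernym}'")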

  12. Global Aerosol Optical Models and Lookup Tables for the New MODIS Aerosol Retrieval over Land

    NASA Technical Reports Server (NTRS)

    Levy, Robert C.; Remer, Loraine A.; Dubovik, Oleg

    2007-01-01

    Since 2000, MODIS has been deriving aerosol properties over land from MODIS observed spectral reflectance, by matching the observed reflectance with that simulated for selected aerosol optical models, aerosol loadings, wavelengths and geometrical conditions (that are contained in a lookup table or 'LUT'). Validation exercises have shown that MODIS tends to under-predict aerosol optical depth (tau) in cases of large tau (tau greater than 1.0), signaling errors in the assumed aerosol optical properties. Using the climatology of almucantar retrievals from the hundreds of global AERONET sunphotometer sites, we found that three spherical-derived models (describing fine-sized dominated aerosol), and one spheroid-derived model (describing coarse-sized dominated aerosol, presumably dust) generally described the range of observed global aerosol properties. The fine dominated models were separated mainly by their single scattering albedo (omega_0), ranging from non-absorbing aerosol (omega_0 approx. 0.95) in developed urban/industrial regions, to neutrally absorbing aerosol (omega_0 approx. 0.90) in forest fire burning and developing industrial regions, to absorbing aerosol (omega_0 approx. 0.85) in regions of savanna/grassland burning. We determined the dominant model type in each region and season, to create a 1 deg. x 1 deg. grid of assumed aerosol type. We used a vector radiative transfer code to create a new LUT, simulating the four aerosol models in four MODIS channels. Independent AERONET observations of spectral tau agree with the new models, indicating that the new models are suitable for use by the MODIS aerosol retrieval.

  13. Nosology, ontology and promiscuous realism.

    PubMed

    Binney, Nicholas

    2015-06-01

    Medics may consider worrying about their metaphysics and ontology to be a waste of time. I will argue here that this is not the case. Promiscuous realism is a metaphysical position which holds that multiple, equally valid, classification schemes should be applied to objects (such as patients) to capture different aspects of their complex and heterogeneous nature. As medics at the bedside may need to capture different aspects of their patients' problems, they may need to use multiple classification schemes (multiple nosologies), and thus consider adopting a different metaphysics to the one commonly in use. PMID:25389077

  14. Ontology-Driven Information Integration

    NASA Technical Reports Server (NTRS)

    Tissot, Florence; Menzel, Chris

    2005-01-01

    Ontology-driven information integration (ODII) is a method of computerized, automated sharing of information among specialists who have expertise in different domains and who are members of subdivisions of a large, complex enterprise (e.g., an engineering project, a government agency, or a business). In ODII, one uses rigorous mathematical techniques to develop computational models of engineering and/or business information and processes. These models are then used to develop software tools that support the reliable processing and exchange of information among the subdivisions of this enterprise or between this enterprise and other enterprises.

  15. Inflammation ontology design pattern: an exercise in building a core biomedical ontology with descriptions and situations.

    PubMed

    Gangemi, Aldo; Catenacci, Carola; Battaglia, Massimo

    2004-01-01

    Formal ontology has proved to be an extremely useful tool for negotiating intended meaning, for building explicit, formal data sheets, and for the discovery of novel views on existing data structures. This paper describes an example of application of formal ontological methods to the creation of biomedical ontologies. Addressed here is the ambiguous notion of inflammation, which spans across multiple linguistic meanings, multiple layers of reality, and multiple details of granularity. We use UML class diagrams, description logics, and the DOLCE foundational ontology, augmented with the Description and Situation theory, in order to provide the representational and ontological primitives that are necessary for the development of detailed, flexible, and functional biomedical ontologies. An ontology design pattern is proposed as a modelling template for inflammations. PMID:15853264

  16. Ontology Design Patterns as Interfaces (invited)

    NASA Astrophysics Data System (ADS)

    Janowicz, K.

    2015-12-01

    In recent years ontology design patterns (ODP) have gained popularity among knowledge engineers. ODPs are modular but self-contained building blocks that are reusable and extendible. They minimize the amount of ontological commitments and thereby are easier to integrate than large monolithic ontologies. Typically, patterns are not directly used to annotate data or to model certain domain problems but are combined and extended to form data and purpose-driven local ontologies that serve the needs of specific applications or communities. By relying on a common set of patterns these local ontologies can be aligned to improve interoperability and enable federated queries without enforcing a top-down model of the domain. In previous work, we introduced ontological views as layer on top of ontology design patterns to ease the reuse, combination, and integration of patterns. While the literature distinguishes multiple types of patterns, e.g., content patterns or logical patterns, we propose to use them as interfaces here to guide the development of ontology-driven systems.

  17. Developing Domain Ontologies for Course Content

    ERIC Educational Resources Information Center

    Boyce, Sinead; Pahl, Claus

    2007-01-01

    Ontologies have the potential to play an important role in instructional design and the development of course content. They can be used to represent knowledge about content, supporting instructors in creating content or learners in accessing content in a knowledge-guided way. While ontologies exist for many subject domains, their quality and…

  18. Statistical mechanics of ontology based annotations

    NASA Astrophysics Data System (ADS)

    Hoyle, David C.; Brass, Andrew

    2016-01-01

    We present a statistical mechanical theory of the process of annotating an object with terms selected from an ontology. The term selection process is formulated as an ideal lattice gas model, but in a highly structured inhomogeneous field. The model enables us to explain patterns recently observed in real-world annotation data sets, in terms of the underlying graph structure of the ontology. By relating the external field strengths to the information content of each node in the ontology graph, the statistical mechanical model also allows us to propose a number of practical metrics for assessing the quality of both the ontology, and the annotations that arise from its use. Using the statistical mechanical formalism we also study an ensemble of ontologies of differing size and complexity; an analysis not readily performed using real data alone. Focusing on regular tree ontology graphs we uncover a rich set of scaling laws describing the growth in the optimal ontology size as the number of objects being annotated increases. In doing so we provide a further possible measure for assessment of ontologies.
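
    The information content mentioned above is commonly computed from annotation counts as IC(t) = -log p(t), where p(t) is the fraction of annotations that use term t or any of its descendants. The toy ontology and counts in the sketch below are invented for illustration.

      # Corpus-based information content of ontology terms: IC(t) = -log p(t),
      # where p(t) counts annotations to t or any descendant. Toy data only.
      import math

      children = {"entity": ["process", "structure"], "process": [], "structure": []}
      direct_counts = {"entity": 5, "process": 60, "structure": 35}

      def subtree_count(term):
          """Annotations to the term plus all of its descendants."""
          return direct_counts[term] + sum(subtree_count(c) for c in children[term])

      total = subtree_count("entity")      # root subsumes every annotation
      for term in direct_counts:
          ic = -math.log(subtree_count(term) / total)
          print(f"{term}: IC = {ic:.2f}")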

  19. Automating Ontological Annotation with WordNet

    SciTech Connect

    Sanfilippo, Antonio P.; Tratz, Stephen C.; Gregory, Michelle L.; Chappell, Alan R.; Whitney, Paul D.; Posse, Christian; Paulson, Patrick R.; Baddeley, Bob L.; Hohimer, Ryan E.; White, Amanda M.

    2006-01-22

    Semantic Web applications require robust and accurate annotation tools that are capable of automating the assignment of ontological classes to words in naturally occurring text (ontological annotation). Most current ontologies do not include rich lexical databases and are therefore not easily integrated with word sense disambiguation algorithms that are needed to automate ontological annotation. WordNet provides a potentially ideal solution to this problem as it offers a highly structured lexical conceptual representation that has been extensively used to develop word sense disambiguation algorithms. However, WordNet has not been designed as an ontology, and while it can be easily turned into one, the result of doing this would present users with serious practical limitations due to the great number of concepts (synonym sets) it contains. Moreover, mapping WordNet to an existing ontology may be difficult and requires substantial labor. We propose to overcome these limitations by developing an analytical platform that (1) provides a WordNet-based ontology offering a manageable and yet comprehensive set of concept classes, (2) leverages the lexical richness of WordNet to give an extensive characterization of concept class in terms of lexical instances, and (3) integrates a class recognition algorithm that automates the assignment of concept classes to words in naturally occurring text. The ensuing framework makes available an ontological annotation platform that can be effectively integrated with intelligence analysis systems to facilitate evidence marshaling and sustain the creation and validation of inference models.
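
    A much-reduced illustration of WordNet-based class assignment is given below: each word is mapped to the lexicographer category of its first noun sense using NLTK. It assumes the NLTK WordNet corpus is installed, and the naive first-sense choice merely stands in for the word sense disambiguation the platform would integrate.

      # Map words to coarse WordNet-derived classes (lexicographer categories).
      # Assumes: pip install nltk, plus nltk.download("wordnet") has been run.
      from nltk.corpus import wordnet as wn

      def coarse_class(word):
          """Return the lexicographer category of the first noun sense, if any."""
          senses = wn.synsets(word, pos=wn.NOUN)
          return senses[0].lexname() if senses else None   # e.g. "noun.animal"

      for w in ["tiger", "hammer", "anger"]:
          print(w, "->", coarse_class(w))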

  20. Ontological Annotation with WordNet

    SciTech Connect

    Sanfilippo, Antonio P.; Tratz, Stephen C.; Gregory, Michelle L.; Chappell, Alan R.; Whitney, Paul D.; Posse, Christian; Paulson, Patrick R.; Baddeley, Bob; Hohimer, Ryan E.; White, Amanda M.

    2006-06-06

    Semantic Web applications require robust and accurate annotation tools that are capable of automating the assignment of ontological classes to words in naturally occurring text (ontological annotation). Most current ontologies do not include rich lexical databases and are therefore not easily integrated with word sense disambiguation algorithms that are needed to automate ontological annotation. WordNet provides a potentially ideal solution to this problem as it offers a highly structured lexical conceptual representation that has been extensively used to develop word sense disambiguation algorithms. However, WordNet has not been designed as an ontology, and while it can be easily turned into one, the result of doing this would present users with serious practical limitations due to the great number of concepts (synonym sets) it contains. Moreover, mapping WordNet to an existing ontology may be difficult and requires substantial labor. We propose to overcome these limitations by developing an analytical platform that (1) provides a WordNet-based ontology offering a manageable and yet comprehensive set of concept classes, (2) leverages the lexical richness of WordNet to give an extensive characterization of concept class in terms of lexical instances, and (3) integrates a class recognition algorithm that automates the assignment of concept classes to words in naturally occurring text. The ensuing framework makes available an ontological annotation platform that can be effectively integrated with intelligence analysis systems to facilitate evidence marshaling and sustain the creation and validation of inference models.

  1. Representing default knowledge in biomedical ontologies: application to the integration of anatomy and phenotype ontologies

    PubMed Central

    Hoehndorf, Robert; Loebe, Frank; Kelso, Janet; Herre, Heinrich

    2007-01-01

    Background Current efforts within the biomedical ontology community focus on achieving interoperability between various biomedical ontologies that cover a range of diverse domains. Achieving this interoperability will contribute to the creation of a rich knowledge base that can be used for querying, as well as generating and testing novel hypotheses. The OBO Foundry principles, as applied to a number of biomedical ontologies, are designed to facilitate this interoperability. However, semantic extensions are required to meet the OBO Foundry interoperability goals. Inconsistencies may arise when ontologies of properties – mostly phenotype ontologies – are combined with ontologies taking a canonical view of a domain – such as many anatomical ontologies. Currently, there is no support for a correct and consistent integration of such ontologies. Results We have developed a methodology for accurately representing canonical domain ontologies within the OBO Foundry. This is achieved by adding an extension to the semantics for relationships in the biomedical ontologies that allows for treating canonical information as default. Conclusions drawn from default knowledge may be revoked when additional information becomes available. We show how this extension can be used to achieve interoperability between ontologies, and further allows for the inclusion of more knowledge within them. We apply the formalism to ontologies of mouse anatomy and mammalian phenotypes in order to demonstrate the approach. Conclusion Biomedical ontologies require a new class of relations that can be used in conjunction with default knowledge, thereby extending those currently in use. The inclusion of default knowledge is necessary in order to ensure interoperability between ontologies. PMID:17925014

  2. An Ontology-Based Collaborative Design System

    NASA Astrophysics Data System (ADS)

    Su, Tieming; Qiu, Xinpeng; Yu, Yunlong

    A collaborative design system architecture based on ontology is proposed. In the architecture, OWL is used to construct the global shared ontology and the local ontologies; both are machine-interpretable. The former provides a semantic basis for communication among designers, so that the designers share a common understanding of knowledge. The latter, which describes each designer's own knowledge, is the basis of design by reasoning. An SWRL rule base comprising rules defined on the local ontology is constructed to enhance the reasoning capability of the local knowledge base. The designers can complete collaborative design at a higher level based on the local knowledge base and the global shared ontology, which enhances the intelligence of design. Finally, a collaborative design case is presented and analyzed.

  3. Towards an Ontology for Reef Islands

    NASA Astrophysics Data System (ADS)

    Duce, Stephanie

    Reef islands are complex, dynamic and vulnerable environments with a diverse range of stakeholders. Communication and data sharing between these different groups of stakeholders is often difficult. An ontology for the reef island domain would improve the understanding of reef island geomorphology and improve communication between stakeholders, as well as forming a platform from which to move towards interoperability and the application of Information Technology to forecast and monitor these environments. This paper develops a small, prototypical reef island domain ontology, based on informal, natural language relations and aligned to the DOLCE upper-level ontology, for 20 fundamental terms within the domain. A subset of these terms and their relations are discussed in detail. This approach reveals and discusses challenges which must be overcome in the creation of a reef island domain ontology and which could be relevant to other ontologies in dynamic geospatial domains.

  4. An Ontology Based Approach to Information Security

    NASA Astrophysics Data System (ADS)

    Pereira, Teresa; Santos, Henrique

    The semantic structuring of knowledge based on ontology approaches has been increasingly adopted by experts from diverse domains. Recently, ontologies have moved from the philosophical and metaphysical disciplines to being used in the construction of models that describe a specific theory of a domain. The development and use of ontologies promote the creation of a unique standard to represent concepts within a specific knowledge domain. In the scope of information security systems, the use of an ontology to formalize and represent the concepts of security information challenges the mechanisms and techniques currently used. This paper presents a conceptual implementation model of an ontology defined in the security domain. The model presented contains the semantic concepts based on the information security standard ISO/IEC_JTC1, and their relationships to other concepts, defined in a subset of the information security domain.

  5. FROG - Fingerprinting Genomic Variation Ontology.

    PubMed

    Abinaya, E; Narang, Pankaj; Bhardwaj, Anshu

    2015-01-01

    Genetic variations play a crucial role in differential phenotypic outcomes. Given the complexity in establishing this correlation and the enormous data available today, it is imperative to design machine-readable, efficient methods to store, label, search and analyze these data. A semantic approach, FROG ("FingeRprinting Ontology of Genomic variations"), is implemented to label variation data based on its location, function and interactions. FROG has six levels to describe the variation annotation, namely chromosome, DNA, RNA, protein, variations and interactions. Each level is a conceptual aggregation of logically connected attributes, each of which comprises various properties of the variant. For example, at the chromosome level, one attribute is the location of the variation, which has two properties: allosomes or autosomes. Another attribute is the variation kind, which has four properties: indel, deletion, insertion and substitution. In all, there are 48 attributes and 278 properties to capture the variation annotation across the six levels. Each property is then assigned a bit score, which in turn leads to the generation of a binary fingerprint based on the combination of these properties (mostly taken from existing variation ontologies). FROG is a novel and unique method designed for labeling the entire variation data generated to date for efficient storage, search and analysis. A web-based platform is designed as a test case for users to navigate sample datasets and generate fingerprints. The platform is available at http://ab-openlab.csir.res.in/frog. PMID:26244889
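
    The bit-score idea described above lends itself to a short illustration. The Python sketch below generates a binary fingerprint from a small, purely illustrative subset of attributes and properties (the real ontology has 48 attributes and 278 properties); it is not the FROG implementation.

      # Illustrative subset: attribute -> ordered list of properties, each property owning one bit.
      ATTRIBUTES = {
          "location": ["allosome", "autosome"],
          "variation_kind": ["indel", "deletion", "insertion", "substitution"],
      }

      def fingerprint(variant_properties):
          """Return a binary fingerprint: one bit per property, set if the variant has that property."""
          bits = []
          for attr, props in ATTRIBUTES.items():
              bits.extend("1" if p in variant_properties else "0" for p in props)
          return "".join(bits)

      # An autosomal substitution sets the 'autosome' and 'substitution' bits.
      print(fingerprint({"autosome", "substitution"}))   # -> '010001'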

  6. [Towards a structuring fibrillar ontology].

    PubMed

    Guimberteau, J-C

    2012-10-01

    Over previous decades and centuries, the difficulty encountered in understanding the manner in which the tissue of our bodies is organised and structured is clearly explained by the impossibility of exploring it in detail. Since the creation of the microscope, the perception of the basic unit, the cell, has been essential in understanding the functioning of reproduction and of transmission, but has not been able to explain the notion of form, since cells are not everywhere and are not distributed in an apparently balanced manner. The problems that remain are those of form and volume, and also of connection. The concept of multifibrillar architecture, shaping the interfibrillar microvolumes in space, represents a solution to all these questions. The architectural structures revealed, made up of fibres, fibrils and microfibrils from the mesoscopic to the microscopic level, provide the concept of a living form with a structural rationalism that permits the association of psychochemical molecular biodynamics and quantum physics: the form can thus be described and interpreted, and a true structural ontology is elaborated from a basic functional unit, the microvacuole, the intra- and interfibrillar volume of the fractal organisation and the chaotic distribution. Naturally, this ontology will imply new concepts that are less linear, less conclusive and less specific, leading one to believe that the emergence of life takes place under submission to forces that the original form will have imposed, orienting the adaptive finality. PMID:22921289

  7. Geo-Ontologies Are Scale Dependent

    NASA Astrophysics Data System (ADS)

    Frank, A. U.

    2009-04-01

    Philosophers aim at a single ontology that describes "how the world is"; for information systems we aim only at ontologies that describe a conceptualization of reality (Guarino 1995; Gruber 2005). A conceptualization of the world implies a spatial and temporal scale: what are the phenomena, the objects, and the speed of their change? Few articles (Reitsma et al. 2003) seem to address the fact that an ontology is scale-specific (although many articles indicate that ontologies are scale-free in another sense, namely in the link densities between concepts). The scale in the conceptualization can be linked to the observation process. The extent of the support of the physical observation instrument and the sampling theorem indicate what level of detail we find in a dataset. These rules apply to remote sensing and sensor networks alike. An ontology of observations must include scale or level of detail, and concepts derived from observations should carry this relation forward. A simple example: in a high-resolution remote sensing image, agricultural plots and the roads between them are shown; at lower resolution, only the plots and not the roads are visible. This gives two ontologies, one with plots and roads, the other with plots only. Note that a neighborhood relation in the two different ontologies also yields different results. References: Gruber, T. (2005). "TagOntology - a way to agree on the semantics of tagging data." Retrieved October 29, 2005, from http://tomgruber.org/writing/tagontology-tagcapm-talk.pdf. Guarino, N. (1995). "Formal Ontology, Conceptual Analysis and Knowledge Representation." International Journal of Human and Computer Studies, Special Issue on Formal Ontology, Conceptual Analysis and Knowledge Representation, edited by N. Guarino and R. Poli, 43(5/6). Reitsma, F. and T. Bittner (2003). "Process, Hierarchy, and Scale." Spatial Information Theory: Cognitive and Computational Foundations of Geographic Information Science, International Conference

  8. Temporal Ontologies for Geoscience: Alignment Challenges

    NASA Astrophysics Data System (ADS)

    Cox, S. J. D.

    2014-12-01

    Time is a central concept in geoscience. Geologic histories are composed of sequences of geologic processes and events. Calibration of their timing ties a local history into a broader context, and enables correlation of events between locations. The geologic timescale is standardized in the International Chronostratigraphic Chart, which specifies interval names, and calibrations for the ages of the interval boundaries. Time is also a key concept in the world at large. A number of general purpose temporal ontologies have been developed, both stand-alone and as parts of general purpose or upper ontologies. A temporal ontology for geoscience should apply or extend a suitable general purpose temporal ontology. However, geologic time presents two challenges: Geology involves greater spans of time than in other temporal ontologies, inconsistent with the year-month-day/hour-minute-second formalization that is a basic assumption of most general purpose temporal schemes; The geologic timescale is a temporal topology. Its calibration in terms of an absolute (numeric) scale is a scientific issue in its own right supporting a significant community. In contrast, the general purpose temporal ontologies are premised on exact numeric values for temporal position, and do not allow for temporal topology as a primary structure. We have developed an ontology for the geologic timescale to account for these concerns. It uses the ISO 19108 distinctions between different types of temporal reference system, also linking to an explicit temporal topology model. Stratotypes used in the calibration process are modelled as sampling-features following the ISO 19156 Observations and Measurements model. A joint OGC-W3C harmonization project is underway, with standardization of the W3C OWL-Time ontology as one of its tasks. The insights gained from the geologic timescale ontology will assist in development of a general ontology capable of modelling a richer set of use-cases from geoscience.

  9. Multiangle Implementation of Atmospheric Correction (MAIAC): 1. Radiative Transfer Basis and Look-up Tables

    NASA Technical Reports Server (NTRS)

    Lyapustin, Alexei; Martonchik, John; Wang, Yujie; Laszlo, Istvan; Korkin, Sergey

    2011-01-01

    This paper describes the radiative transfer basis of the algorithm MAIAC, which performs simultaneous retrievals of atmospheric aerosol and bidirectional surface reflectance from the Moderate Resolution Imaging Spectroradiometer (MODIS). The retrievals are based on an accurate semianalytical solution for the top-of-atmosphere reflectance expressed as an explicit function of three parameters of the Ross-Thick Li-Sparse model of surface bidirectional reflectance. This solution depends on certain functions of atmospheric properties and geometry which are precomputed in the look-up table (LUT). The paper further considers correction of the LUT functions for variations of surface pressure/height and of atmospheric water vapor, which is a common task in operational remote sensing. It introduces a new analytical method for the water vapor correction of the multiple-scattering path radiance. It also summarizes the few basic principles that provide a high efficiency and accuracy of the LUT-based radiative transfer for the aerosol/surface retrievals and optimize the size of the LUT. For example, the single-scattering path radiance is calculated analytically for a given surface pressure and atmospheric water vapor. The same is true for the direct surface-reflected radiance, which along with the single-scattering path radiance largely defines the angular dependence of measurements. For these calculations, the aerosol phase functions and kernels of the surface bidirectional reflectance model are precalculated at a high angular resolution. The other radiative transfer functions depend rather smoothly on angles because of multiple scattering and can be calculated at coarser angular resolution to reduce the LUT size. At the same time, this resolution should be high enough to use the nearest-neighbor geometry angles and avoid costly three-dimensional interpolation. The pressure correction is implemented via linear interpolation between two LUTs computed for the standard and reduced
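
    The pressure correction mentioned at the end of the abstract reduces, in essence, to interpolating a precomputed LUT function between two pressure levels. The Python sketch below illustrates that step with stand-in LUT values and a single AOT axis; it is not the MAIAC code and the numbers are invented for illustration.

      import numpy as np

      # Hypothetical LUT: path radiance vs. aerosol optical thickness (AOT),
      # precomputed at a standard and a reduced surface pressure.
      aot_grid = np.linspace(0.0, 1.0, 11)
      lut_p_standard = 0.10 + 0.25 * aot_grid        # stand-in LUT at 1013 hPa
      lut_p_reduced = 0.08 + 0.22 * aot_grid         # stand-in LUT at 700 hPa

      def path_radiance(aot, pressure, p_std=1013.0, p_red=700.0):
          """Interpolate the LUT in AOT, then linearly in surface pressure."""
          r_std = np.interp(aot, aot_grid, lut_p_standard)
          r_red = np.interp(aot, aot_grid, lut_p_reduced)
          w = (pressure - p_red) / (p_std - p_red)   # weight of the standard-pressure LUT
          return w * r_std + (1.0 - w) * r_red

      print(path_radiance(aot=0.3, pressure=850.0))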

  10. The MMI Device Ontology: Enabling Sensor Integration

    NASA Astrophysics Data System (ADS)

    Rueda, C.; Galbraith, N.; Morris, R. A.; Bermudez, L. E.; Graybeal, J.; Arko, R. A.; Mmi Device Ontology Working Group

    2010-12-01

    The Marine Metadata Interoperability (MMI) project has developed an ontology for devices to describe sensors and sensor networks. This ontology is implemented in the W3C Web Ontology Language (OWL) and provides an extensible conceptual model and controlled vocabularies for describing heterogeneous instrument types, with different data characteristics, and their attributes. It can help users populate metadata records for sensors; associate devices with their platforms, deployments, measurement capabilities and restrictions; aid in discovery of sensor data, both historic and real-time; and improve the interoperability of observational oceanographic data sets. We developed the MMI Device Ontology following a community-based approach. By building on and integrating other models and ontologies from related disciplines, we sought to facilitate semantic interoperability while avoiding duplication. Key concepts and insights from various communities, including the Open Geospatial Consortium (e.g., SensorML and Observations and Measurements specifications), Semantic Web for Earth and Environmental Terminology (SWEET), and W3C Semantic Sensor Network Incubator Group, have significantly enriched the development of the ontology. Individuals ranging from instrument designers, science data producers and consumers to ontology specialists and other technologists contributed to the work. Applications of the MMI Device Ontology are underway for several community use cases. These include vessel-mounted multibeam mapping sonars for the Rolling Deck to Repository (R2R) program and description of diverse instruments on deepwater Ocean Reference Stations for the OceanSITES program. These trials involve creation of records completely describing instruments, either by individual instances or by manufacturer and model. Individual terms in the MMI Device Ontology can be referenced with their corresponding Uniform Resource Identifiers (URIs) in sensor-related metadata specifications (e

  11. SPONGY (SPam ONtoloGY): email classification using two-level dynamic ontology.

    PubMed

    Youn, Seongwook

    2014-01-01

    Email is a common communication method between people on the Internet. However, the increase of email misuse/abuse has resulted in an increasing volume of spam emails over recent years. An experimental system has been designed and implemented with the hypothesis that this method would outperform existing techniques, and the experimental results showed that the proposed ontology-based approach indeed improves spam filtering accuracy significantly. In this paper, two levels of ontology spam filters were implemented: a first-level global ontology filter and a second-level user-customized ontology filter. The global ontology filter alone filtered about 91% of spam, which is comparable with other methods. The user-customized ontology filter was created based on the specific user's background as well as the filtering mechanism used in the global ontology filter creation. The main contributions of the paper are (1) to introduce an ontology-based multilevel filtering technique that uses both a global ontology and an individual filter for each user to increase spam filtering accuracy and (2) to create a spam filter in the form of an ontology, which is user-customized, scalable, and modularized, so that it can be embedded into many other systems for better performance. PMID:25254240
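
    The two-level idea (a global filter applied first, then a user-customized filter that can override it) can be sketched in a few lines of Python. The term lists below are illustrative stand-ins, not the ontologies from the paper.

      GLOBAL_SPAM_TERMS = {"lottery", "wire transfer", "miracle cure"}

      def make_user_filter(user_ham_terms):
          """Second level: terms the specific user considers legitimate override the global verdict."""
          def user_filter(text, global_verdict):
              if any(t in text.lower() for t in user_ham_terms):
                  return "ham"
              return global_verdict
          return user_filter

      def classify(text, user_filter):
          """First level: a global keyword check; second level: the user-customized filter."""
          global_verdict = "spam" if any(t in text.lower() for t in GLOBAL_SPAM_TERMS) else "ham"
          return user_filter(text, global_verdict)

      clinician = make_user_filter({"clinical trial", "miracle cure"})
      print(classify("Review of the 'miracle cure' claims in this clinical trial", clinician))   # ham
      print(classify("You won the lottery, send a wire transfer fee", clinician))                # spam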

  12. SPONGY (SPam ONtoloGY): Email Classification Using Two-Level Dynamic Ontology

    PubMed Central

    2014-01-01

    Email is a common communication method between people on the Internet. However, the increase of email misuse/abuse has resulted in an increasing volume of spam emails over recent years. An experimental system has been designed and implemented with the hypothesis that this method would outperform existing techniques, and the experimental results showed that the proposed ontology-based approach indeed improves spam filtering accuracy significantly. In this paper, two levels of ontology spam filters were implemented: a first-level global ontology filter and a second-level user-customized ontology filter. The global ontology filter alone filtered about 91% of spam, which is comparable with other methods. The user-customized ontology filter was created based on the specific user's background as well as the filtering mechanism used in the global ontology filter creation. The main contributions of the paper are (1) to introduce an ontology-based multilevel filtering technique that uses both a global ontology and an individual filter for each user to increase spam filtering accuracy and (2) to create a spam filter in the form of an ontology, which is user-customized, scalable, and modularized, so that it can be embedded into many other systems for better performance. PMID:25254240

  13. Reasoning Based Quality Assurance of Medical Ontologies: A Case Study

    PubMed Central

    Horridge, Matthew; Parsia, Bijan; Noy, Natalya F.; Musen, Mark A.

    2014-01-01

    The World Health Organisation is using OWL as a key technology to develop ICD-11 – the next version of the well-known International Classification of Diseases. Besides providing better opportunities for data integration and linkages to other well-known ontologies such as SNOMED-CT, one of the main promises of using OWL is that it will enable various forms of automated error checking. In this paper we investigate how automated OWL reasoning, along with a Justification Finding Service can be used as a Quality Assurance technique for the development of large and complex ontologies such as ICD-11. Using the International Classification of Traditional Medicine (ICTM) – Chapter 24 of ICD-11 – as a case study, and an expert panel of knowledge engineers, we reveal the kinds of problems that can occur, how they can be detected, and how they can be fixed. Specifically, we found that a logically inconsistent version of the ICTM ontology could be repaired using justifications (minimal entailing subsets of an ontology). Although over 600 justifications for the inconsistency were initially computed, we found that there were three main manageable patterns or categories of justifications involving TBox and ABox axioms. These categories represented meaningful domain errors to an expert panel of ICTM project knowledge engineers, who were able to use them to successfully determine the axioms that needed to be revised in order to fix the problem. All members of the expert panel agreed that the approach was useful for debugging and ensuring the quality of ICTM. PMID:25954373

  14. Ontologies and tag-statistics

    NASA Astrophysics Data System (ADS)

    Tibély, Gergely; Pollner, Péter; Vicsek, Tamás; Palla, Gergely

    2012-05-01

    Due to the increasing popularity of collaborative tagging systems, the research on tagged networks, hypergraphs, ontologies, folksonomies and other related concepts is becoming an important interdisciplinary area with great potential and relevance for practical applications. In most collaborative tagging systems the tagging by the users is completely ‘flat’, while in some cases they are allowed to define a shallow hierarchy for their own tags. However, usually no overall hierarchical organization of the tags is given, and one of the interesting challenges of this area is to provide an algorithm generating the ontology of the tags from the available data. In contrast, there are also other types of tagged networks available for research, where the tags are already organized into a directed acyclic graph (DAG), encapsulating the ‘is a sub-category of’ type of hierarchy between each other. In this paper, we study how this DAG affects the statistical distribution of tags on the nodes marked by the tags in various real networks. The motivation for this research was the fact that understanding the tagging based on a known hierarchy can help in revealing the hidden hierarchy of tags in collaborative tagging systems. We analyse the relation between the tag-frequency and the position of the tag in the DAG in two large sub-networks of the English Wikipedia and a protein-protein interaction network. We also study the tag co-occurrence statistics by introducing a two-dimensional (2D) tag-distance distribution preserving both the difference in the levels and the absolute distance in the DAG for the co-occurring pairs of tags. Our most interesting finding is that the local relevance of tags in the DAG (i.e. their rank or significance as characterized by, e.g., the length of the branches starting from them) is much more important than their global distance from the root. Furthermore, we also introduce a simple tagging model based on random walks on the DAG, capable of
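
    The two quantities combined in the 2D tag-distance distribution (level difference and absolute distance in the DAG) are easy to compute on a toy example. The Python sketch below uses networkx and an invented four-term hierarchy; it only illustrates the measures and is not the analysis pipeline used in the paper.

      import networkx as nx

      # Toy 'is a sub-category of' DAG: edges point from a sub-category to its parent.
      dag = nx.DiGraph([("rock", "material"), ("granite", "rock"),
                        ("basalt", "rock"), ("mineral", "material")])
      root = "material"

      def level(tag):
          """Depth of a tag below the root (edges on the shortest directed path to the root)."""
          return nx.shortest_path_length(dag, source=tag, target=root)

      def tag_distance(a, b):
          """(level difference, absolute distance in the underlying undirected DAG)."""
          return level(a) - level(b), nx.shortest_path_length(dag.to_undirected(), a, b)

      print(tag_distance("granite", "mineral"))   # (1, 3)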

  15. Types of Concepts in Geoscience Ontologies

    NASA Astrophysics Data System (ADS)

    Brodaric, B.

    2006-05-01

    Ontologies are increasingly viewed as a key enabler of scientific research in cyber-infrastructures. They provide a way of digitally representing the meaning of concepts embedded in the theories and models of geoscience, enabling such representations to be compared and contrasted computationally. This facilitates the discovery, integration and communication of digitally accessible geoscience resources, and potentially helps geoscientists attain new knowledge. As ontologies are typically built to closely reflect some aspect or viewpoint of a domain, recognizing significant ontological patterns within the domain should thus lead to more useful and robust ontologies. A key idea then motivating this work is the notion that geoscience concepts possess an ontological pattern that helps not only structure them, but also aids ontology development in disciplines where concepts are similarly abstracted from geospatial regions, such as in ecology, soil science, etc. Proposed is an ontology structure in which six basic concept types are identified, defined, and organized in increasing levels of abstraction, including a level for general concepts (e.g. 'granite') and a level for concepts specific to a geospace-time region (e.g. 'granites of Ireland'). Discussed will be the six concept types, the proposed structure that organizes them, and several examples from geoscience. Also mentioned will be the significant implementation challenges faced but not addressed by the proposed structure. In general, the proposal prioritizes conceptual granularity over its engineering deficits, but this prioritization remains to be tested in serious applications.

  16. Ontology-Based Multiple Choice Question Generation

    PubMed Central

    Al-Yahya, Maha

    2014-01-01

    With recent advancements in Semantic Web technologies, a new trend in MCQ item generation has emerged through the use of ontologies. Ontologies are knowledge representation structures that formally describe entities in a domain and their relationships, thus enabling automated inference and reasoning. Ontology-based MCQ item generation is still in its infancy, but substantial research efforts are being made in the field. However, the applicability of these models for use in an educational setting has not been thoroughly evaluated. In this paper, we present an experimental evaluation of an ontology-based MCQ item generation system known as OntoQue. The evaluation was conducted using two different domain ontologies. The findings of this study show that ontology-based MCQ generation systems produce satisfactory MCQ items to a certain extent. However, the evaluation also revealed a number of shortcomings with current ontology-based MCQ item generation systems with regard to the educational significance of an automatically constructed MCQ item, the knowledge level it addresses, and its language structure. Furthermore, for the task to be successful in producing high-quality MCQ items for learning assessments, this study suggests a novel, holistic view that incorporates learning content, learning objectives, lexical knowledge, and scenarios into a single cohesive framework. PMID:24982937
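
    One common ontology-based MCQ pattern, generating the stem from a class, the key from one of its subclasses and the distractors from classes outside that branch, can be sketched with rdflib as below. The toy ontology and the question template are illustrative; this is not the OntoQue system.

      import random
      from rdflib import Graph, Namespace, RDFS

      EX = Namespace("http://example.org/geo#")
      g = Graph()
      for child, parent in [("Granite", "IgneousRock"), ("Basalt", "IgneousRock"),
                            ("Limestone", "SedimentaryRock"), ("Shale", "SedimentaryRock")]:
          g.add((EX[child], RDFS.subClassOf, EX[parent]))

      def mcq(parent, n_distractors=3):
          """Build one multiple choice item: stem, shuffled options, and the key."""
          subs = [s for s, _, _ in g.triples((None, RDFS.subClassOf, EX[parent]))]
          others = [s for s, _, o in g.triples((None, RDFS.subClassOf, None)) if o != EX[parent]]
          key = random.choice(subs)
          options = random.sample(others, min(n_distractors, len(others))) + [key]
          random.shuffle(options)
          stem = f"Which of the following is a kind of {parent}?"
          return stem, [o.split("#")[-1] for o in options], key.split("#")[-1]

      print(mcq("IgneousRock"))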

  17. CiTO, the Citation Typing Ontology.

    PubMed

    Shotton, David

    2010-01-01

    CiTO, the Citation Typing Ontology, is an ontology for describing the nature of reference citations in scientific research articles and other scholarly works, both to other such publications and also to Web information resources, and for publishing these descriptions on the Semantic Web. Citations are described in terms of the factual and rhetorical relationships between citing publication and cited publication, the in-text and global citation frequencies of each cited work, and the nature of the cited work itself, including its publication and peer review status. This paper describes CiTO and illustrates its usefulness both for the annotation of bibliographic reference lists and for the visualization of citation networks. The latest version of CiTO, which this paper describes, is CiTO Version 1.6, published on 19 March 2010. CiTO is written in the Web Ontology Language OWL, uses the namespace http://purl.org/net/cito/, and is available from http://purl.org/net/cito/. This site uses content negotiation to deliver to the user an OWLDoc Web version of the ontology if accessed via a Web browser, or the OWL ontology itself if accessed from an ontology management tool such as Protégé 4 (http://protege.stanford.edu/). Collaborative work is currently under way to harmonize CiTO with other ontologies describing bibliographies and the rhetorical structure of scientific discourse. PMID:20626926

  18. Ontology-based multiple choice question generation.

    PubMed

    Al-Yahya, Maha

    2014-01-01

    With recent advancements in Semantic Web technologies, a new trend in MCQ item generation has emerged through the use of ontologies. Ontologies are knowledge representation structures that formally describe entities in a domain and their relationships, thus enabling automated inference and reasoning. Ontology-based MCQ item generation is still in its infancy, but substantial research efforts are being made in the field. However, the applicability of these models for use in an educational setting has not been thoroughly evaluated. In this paper, we present an experimental evaluation of an ontology-based MCQ item generation system known as OntoQue. The evaluation was conducted using two different domain ontologies. The findings of this study show that ontology-based MCQ generation systems produce satisfactory MCQ items to a certain extent. However, the evaluation also revealed a number of shortcomings with current ontology-based MCQ item generation systems with regard to the educational significance of an automatically constructed MCQ item, the knowledge level it addresses, and its language structure. Furthermore, for the task to be successful in producing high-quality MCQ items for learning assessments, this study suggests a novel, holistic view that incorporates learning content, learning objectives, lexical knowledge, and scenarios into a single cohesive framework. PMID:24982937

  19. An improved version of the table look-up algorithm for pattern recognition [for MSS data processing]

    NASA Technical Reports Server (NTRS)

    Eppler, W. G.

    1974-01-01

    The table look-up approach to pattern recognition has been used for 3 years at several research centers in a variety of applications. A new version has been developed which is faster, requires significantly less core memory, and retains full precision of the input data. The new version can be used on low-cost minicomputers having 32K words (16 bits each) of core memory and fixed-point arithmetic; no special-purpose hardware is required. An initial FORTRAN version of this system can classify an ERTS computer-compatible tape into 24 classes in less than 15 minutes.
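
    The core of a table look-up classifier is simple: quantize each spectral band into a small number of bins and index a precomputed class table, so classification costs one memory access per pixel. The Python/numpy sketch below shows that structure with an illustrative table; it is not the FORTRAN system described above.

      import numpy as np

      N_BANDS, N_BINS = 4, 16
      table = np.zeros((N_BINS,) * N_BANDS, dtype=np.uint8)   # class label per bin combination
      table[3, 5, 7, 2] = 1                                    # illustrative entry: this bin combination -> class 1

      def classify(pixel, lo=0, hi=255):
          """Map a pixel (one value per band) to bin indices and look up its class."""
          bins = np.clip(((np.asarray(pixel) - lo) * N_BINS) // (hi - lo + 1), 0, N_BINS - 1)
          return table[tuple(bins.astype(int))]

      print(classify([60, 90, 120, 40]))   # falls in bins (3, 5, 7, 2) -> class 1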

  20. A Knowledge Engineering Approach to Develop Domain Ontology

    ERIC Educational Resources Information Center

    Yun, Hongyan; Xu, Jianliang; Xiong, Jing; Wei, Moji

    2011-01-01

    Ontologies are one of the most popular and widespread means of knowledge representation and reuse. A few research groups have proposed a series of methodologies for developing their own standard ontologies. However, because this ontological construction concerns special fields, there is no standard method to build domain ontology. In this paper,…

  1. Nuclear Nonproliferation Ontology Assessment Team Final Report

    SciTech Connect

    Strasburg, Jana D.; Hohimer, Ryan E.

    2012-01-01

    Final Report for the NA22 Simulations, Algorithm and Modeling (SAM) Ontology Assessment Team's efforts from FY09-FY11. The Ontology Assessment Team began in May 2009 and concluded in September 2011. During this two-year time frame, the Ontology Assessment team had two objectives: (1) Assessing the utility of knowledge representation and semantic technologies for addressing nuclear nonproliferation challenges; and (2) Developing ontological support tools that would provide a framework for integrating across the Simulation, Algorithm and Modeling (SAM) program. The SAM Program was going through a large assessment and strategic planning effort during this time and as a result, the relative importance of these two objectives changed, altering the focus of the Ontology Assessment Team. In the end, the team conducted an assessment of the state of art, created an annotated bibliography, and developed a series of ontological support tools, demonstrations and presentations. A total of more than 35 individuals from 12 different research institutions participated in the Ontology Assessment Team. These included subject matter experts in several nuclear nonproliferation-related domains as well as experts in semantic technologies. Despite the diverse backgrounds and perspectives, the Ontology Assessment team functioned very well together and aspects could serve as a model for future inter-laboratory collaborations and working groups. While the team encountered several challenges and learned many lessons along the way, the Ontology Assessment effort was ultimately a success that led to several multi-lab research projects and opened up a new area of scientific exploration within the Office of Nuclear Nonproliferation and Verification.

  2. Hierarchical Analysis of the Omega Ontology

    SciTech Connect

    Joslyn, Cliff A.; Paulson, Patrick R.

    2009-12-01

    Initial delivery for mathematical analysis of the Omega Ontology. We provide an analysis of the hierarchical structure of a version of the Omega Ontology currently in use within the US Government. After providing an initial statistical analysis of the distribution of all link types in the ontology, we then provide a detailed order theoretical analysis of each of the four main hierarchical links present. This order theoretical analysis includes the distribution of components and their properties, their parent/child and multiple inheritance structure, and the distribution of their vertical ranks.
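
    The kind of hierarchical analysis described, the distribution of link types and the amount of multiple inheritance along a given link, can be sketched in a few lines of Python. The toy link list below is purely illustrative and has no connection to the Omega Ontology content.

      from collections import Counter
      import networkx as nx

      links = [("Granite", "isA", "Rock"), ("Basalt", "isA", "Rock"),
               ("Granite", "partOf", "Pluton"), ("Rock", "isA", "Material"),
               ("Granite", "isA", "BuildingStone")]

      print(Counter(link_type for _, link_type, _ in links))        # distribution of link types

      isa = nx.DiGraph((child, parent) for child, t, parent in links if t == "isA")
      multi_parents = {n: d for n, d in isa.out_degree() if d > 1}  # nodes with multiple inheritance
      print(multi_parents)                                          # e.g. {'Granite': 2}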

  3. Toward a patient safety upper level ontology.

    PubMed

    Souvignet, Julien; Rodrigues, Jean-Marie

    2015-01-01

    Patient Safety (PS) standardization is the key to improving interoperability and expanding the international sharing of incident reporting system knowledge. By aligning the Patient Safety Categorial Structure (PS-CAST) to the Basic Formal Ontology version 2 (BFO2) upper-level ontology, we aim to provide more rigor in the underlying organization on the one hand, and to share instances of the concepts of the categorial structure on the other. This alignment is a major step in the top-down approach to building a complete and standardized domain ontology, laying the basis for a new, WHO-accepted information model for Patient Safety. PMID:25991122

  4. Software Engineering Approaches to Ontology Development

    NASA Astrophysics Data System (ADS)

    Gaševic, Dragan; Djuric, Dragan; Devedžic, Vladan

    Ontologies, as formal representations of domain knowledge, enable knowledge sharing between different knowledge-based applications. Diverse techniques originating from the field of artificial intelligence are aimed at facilitating ontology development. However, these techniques, although well known to AI experts, are typically unknown to a large population of software engineers. In order to overcome the gap between the knowledge of software engineering practitioners and AI techniques, a few proposals have been made suggesting the use of well-known software engineering techniques, such as UML, for ontology development (Cranefield 2001a).

  5. Utilizing a structural meta-ontology for family-based quality assurance of the BioPortal ontologies.

    PubMed

    Ochs, Christopher; He, Zhe; Zheng, Ling; Geller, James; Perl, Yehoshua; Hripcsak, George; Musen, Mark A

    2016-06-01

    An Abstraction Network is a compact summary of an ontology's structure and content. In previous research, we showed that Abstraction Networks support quality assurance (QA) of biomedical ontologies. The development of an Abstraction Network and its associated QA methodologies, however, is a labor-intensive process that previously was applicable only to one ontology at a time. To improve the efficiency of the Abstraction-Network-based QA methodology, we introduced a QA framework that uses uniform Abstraction Network derivation techniques and QA methodologies that are applicable to whole families of structurally similar ontologies. For the family-based framework to be successful, it is necessary to develop a method for classifying ontologies into structurally similar families. We now describe a structural meta-ontology that classifies ontologies according to certain structural features that are commonly used in the modeling of ontologies (e.g., object properties) and that are important for Abstraction Network derivation. Each class of the structural meta-ontology represents a family of ontologies with identical structural features, indicating which types of Abstraction Networks and QA methodologies are potentially applicable to all of the ontologies in the family. We derive a collection of 81 families, corresponding to classes of the structural meta-ontology, that enable a flexible, streamlined family-based QA methodology, offering multiple choices for classifying an ontology. The structure of 373 ontologies from the NCBO BioPortal is analyzed and each ontology is classified into multiple families modeled by the structural meta-ontology. PMID:26988001

  6. Ontology-Based Annotation of Brain MRI Images

    PubMed Central

    Mechouche, Ammar; Golbreich, Christine; Morandi, Xavier; Gibaud, Bernard

    2008-01-01

    This paper describes a hybrid system for annotating anatomical structures in brain Magnetic Resonance Images. The system involves both numerical knowledge from an atlas and symbolic knowledge represented in a rule-extended ontology, written in standard web languages, and symbolic constraints. The system combines this knowledge with graphical data automatically extracted from the images. The annotations of the parts of sulci and of gyri located in a region of interest selected by the user are obtained with a reasoning based on a Constraint Satisfaction Problem solving combined with Description Logics inference services. The first results obtained with both normal and pathological data are promising. PMID:18998967

  7. Rapid spatial frequency domain inverse problem solutions using look-up tables for real-time processing (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Angelo, Joseph P.; Bigio, Irving J.; Gioux, Sylvain

    2016-03-01

    Imaging technologies working in the spatial frequency domain are becoming increasingly popular for generating wide-field optical property maps, enabling further analysis of tissue parameters such as absorption or scattering. While acquisition methods have witnessed very rapid growth and now perform in real time, processing methods remain slow, preventing information from being acquired and displayed in real time. In this work, we present solutions for rapid inverse problem solving for optical properties by use of advanced look-up tables. In particular, we present methods and results from a dense, linearized look-up table and an analytical representation that currently run 100 times faster than the standard method and agree with it to within 10% in both absorption and scattering. With the resulting computation time in the tens of milliseconds range, the proposed techniques enable video-rate feedback for real-time techniques such as snapshot of optical properties (SSOP) imaging, making full video-rate guidance in the clinic possible.
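
    A dense look-up-table inversion of the kind mentioned above can be sketched as follows: precompute reflectance at two spatial frequencies over a grid of absorption and reduced scattering values, then invert a measurement by nearest-neighbour search. The forward model below is an arbitrary stand-in, not the diffusion-based model used in spatial frequency domain imaging, and the grids are illustrative.

      import numpy as np

      mua = np.linspace(0.001, 0.05, 200)     # absorption grid (1/mm)
      musp = np.linspace(0.3, 3.0, 200)       # reduced scattering grid (1/mm)
      MUA, MUSP = np.meshgrid(mua, musp, indexing="ij")

      def forward(mu_a, mu_sp, fx):           # stand-in forward model, for illustration only
          return mu_sp / (mu_sp + 15.0 * mu_a * (1.0 + 10.0 * fx))

      R0, R1 = forward(MUA, MUSP, 0.0), forward(MUA, MUSP, 0.2)   # dense LUT at two frequencies

      def invert(r0, r1):
          """Return the (mu_a, mu_s') grid point whose modelled reflectances are closest."""
          i, j = np.unravel_index(np.argmin((R0 - r0) ** 2 + (R1 - r1) ** 2), R0.shape)
          return MUA[i, j], MUSP[i, j]

      print(invert(forward(0.01, 1.2, 0.0), forward(0.01, 1.2, 0.2)))   # recovers ~(0.01, 1.2)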

  8. Ontology-Driven Disability-Aware E-Learning Personalisation with ONTODAPS

    ERIC Educational Resources Information Center

    Nganji, Julius T.; Brayshaw, Mike; Tompsett, Brian

    2013-01-01

    Purpose: The purpose of this paper is to show how personalisation of learning resources and services can be achieved for students with and without disabilities, particularly responding to the needs of those with multiple disabilities in e-learning systems. The paper aims to introduce ONTODAPS, the Ontology-Driven Disability-Aware Personalised…

  9. Metadata and Ontologies in Learning Resources Design

    NASA Astrophysics Data System (ADS)

    Vidal C., Christian; Segura Navarrete, Alejandra; Menéndez D., Víctor; Zapata Gonzalez, Alfredo; Prieto M., Manuel

    Resource design and development requires knowledge about educational goals, instructional context and learners' characteristics, among other things. An important source of this knowledge is metadata. However, metadata by themselves do not provide all the information needed for resource design. Here we argue the need to use different data and knowledge models to improve understanding of the complex processes related to e-learning resources and their management. This paper presents the use of semantic web technologies, such as ontologies, to support the search and selection of resources used in design. Classification is done based on instructional criteria derived from a knowledge acquisition process, using information provided by the IEEE-LOM metadata standard. The knowledge obtained is represented in an ontology using OWL and SWRL. In this work we give evidence of the implementation of a Learning Object Classifier based on an ontology. We demonstrate that the use of ontologies can support design activities in e-learning.

  10. The Gene Ontology: enhancements for 2011.

    PubMed

    2012-01-01

    The Gene Ontology (GO) (http://www.geneontology.org) is a community bioinformatics resource that represents gene product function through the use of structured, controlled vocabularies. The number of GO annotations of gene products has increased due to curation efforts among GO Consortium (GOC) groups, including focused literature-based annotation and ortholog-based functional inference. The GO ontologies continue to expand and improve as a result of targeted ontology development, including the introduction of computable logical definitions and development of new tools for the streamlined addition of terms to the ontology. The GOC continues to support its user community through the use of e-mail lists, social media and web-based resources. PMID:22102568

  11. The Gene Ontology: enhancements for 2011

    PubMed Central

    2012-01-01

    The Gene Ontology (GO) (http://www.geneontology.org) is a community bioinformatics resource that represents gene product function through the use of structured, controlled vocabularies. The number of GO annotations of gene products has increased due to curation efforts among GO Consortium (GOC) groups, including focused literature-based annotation and ortholog-based functional inference. The GO ontologies continue to expand and improve as a result of targeted ontology development, including the introduction of computable logical definitions and development of new tools for the streamlined addition of terms to the ontology. The GOC continues to support its user community through the use of e-mail lists, social media and web-based resources. PMID:22102568

  12. The pathway ontology – updates and applications

    PubMed Central

    2014-01-01

    Background The Pathway Ontology (PW), developed at the Rat Genome Database (RGD), covers all types of biological pathways, including altered and disease pathways, and captures the relationships between them within the hierarchical structure of a directed acyclic graph. The ontology allows for the standardized annotation of rat, human and mouse genes to pathway terms. It also constitutes a vehicle for easy navigation between gene and ontology report pages, between reports and interactive pathway diagrams, between pathways directly connected within a diagram and between those that are globally related in pathway suites and suite networks. Surveys of the literature and the development of the Pathway and Disease Portals are important sources for the ongoing development of the ontology. User requests and the mapping of pathways in other databases to terms in the ontology further contribute to increasing its content. Recently built automated pipelines use the mapped terms to make available the annotations generated by other groups. Results The two released pipelines – the Pathway Interaction Database (PID) Annotation Import Pipeline and the Kyoto Encyclopedia of Genes and Genomes (KEGG) Annotation Import Pipeline – make available over 7,400 and 31,000 pathway gene annotations, respectively. Building the PID pipeline led to the addition of new terms within the signaling node, also augmented by the release of the RGD “Immune and Inflammatory Disease Portal” at that time. Building the KEGG pipeline led to a substantial increase in the number of disease pathway terms, such as those within the ‘infectious disease pathway’ parent term category. The ‘drug pathway’ node has also seen increases in the number of terms as well as a restructuring of the node. Literature surveys, disease portal deployments and user requests have contributed and continue to contribute additional new terms across the ontology. Since first presented, the content of PW has increased by

  13. GFVO: the Genomic Feature and Variation Ontology.

    PubMed

    Baran, Joachim; Durgahee, Bibi Sehnaaz Begum; Eilbeck, Karen; Antezana, Erick; Hoehndorf, Robert; Dumontier, Michel

    2015-01-01

    Falling costs in genomic laboratory experiments have led to a steady increase of genomic feature and variation data. Multiple genomic data formats exist for sharing these data, and whilst they are similar, they address slightly different data viewpoints and are consequently not fully compatible with each other. The fragmentation of data format specifications makes it hard to integrate and interpret data for further analysis with information from multiple data providers. As a solution, a new ontology is presented here for annotating and representing genomic feature and variation dataset contents. The Genomic Feature and Variation Ontology (GFVO) specifically addresses genomic data as it is regularly shared using the GFF3 (incl. FASTA), GTF, GVF and VCF file formats. GFVO simplifies data integration and enables linking of genomic annotations across datasets through common semantics of genomic types and relations. Availability and implementation: The latest stable release of the ontology is available via its base URI; previous and development versions are available at the ontology's GitHub repository: https://github.com/BioInterchange/Ontologies; versions of the ontology are indexed through BioPortal (without external class-/property-equivalences due to BioPortal release 4.10 limitations); examples and reference documentation are provided on a separate web page: http://www.biointerchange.org/ontologies.html. GFVO version 1.0.2 is licensed under the CC0 1.0 Universal license (https://creativecommons.org/publicdomain/zero/1.0) and is therefore de facto within the public domain; the ontology can be appropriated without attribution for commercial and non-commercial use. PMID:26019997
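
    As a rough illustration of how a single GFF3 record can be lifted into RDF with ontology terms, the Python sketch below uses rdflib. The GFVO namespace IRI and the class and property names used here are placeholders chosen for the example, not the published vocabulary; consult the released ontology for the actual terms.

      from rdflib import Graph, Namespace, Literal, RDF

      GFVO = Namespace("http://example.org/gfvo#")   # placeholder namespace, not the published IRI
      EX = Namespace("http://example.org/features/")

      gff3_line = "chr1\thavana\tgene\t11869\t14409\t.\t+\t.\tID=gene1"
      seqid, source, ftype, start, end, score, strand, phase, attrs = gff3_line.split("\t")

      g = Graph()
      feature = EX["gene1"]
      g.add((feature, RDF.type, GFVO["Feature"]))              # placeholder class
      g.add((feature, GFVO["hasStart"], Literal(int(start))))  # placeholder properties
      g.add((feature, GFVO["hasEnd"], Literal(int(end))))
      g.add((feature, GFVO["isLocatedOn"], EX[seqid]))
      print(g.serialize(format="turtle"))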

  14. Extensible Ontological Modeling Framework for Subject Mediation

    NASA Astrophysics Data System (ADS)

    Kalinichenko, L. A.; Skvortsov, N. A.

    An approach to the construction of an extensible ontological model in a mediation environment intended for the integration of heterogeneous information sources in various subject domains is presented. A mediator ontological language (MOL) may depend on a subject domain and is to be defined at the mediator consolidation phase. On the other hand, different ontological models (languages) can be used by different information sources to define their own ontologies. Reversible mapping of the source ontological models into MOL is needed for the registration of information sources at the mediator. An approach for such reversible mapping is demonstrated for a class of Web information sources, which are assumed to apply the DAML+OIL ontological model. A subset of the hybrid object-oriented and semi-structured canonical mediator data model is used for the core of MOL. Construction of a reversible mapping of DAML+OIL into an extension of the core of MOL is presented in the paper. Such a mapping is a necessary prerequisite for contextualizing and registering information sources at the mediator, and it shows how an extensible MOL can be constructed. The proposed approach is oriented towards digital libraries, where retrieval is focused on information content rather than on information entities.

  15. Ontological Modeling for Integrated Spacecraft Analysis

    NASA Technical Reports Server (NTRS)

    Wicks, Erica

    2011-01-01

    Current spacecraft work as a cooperative group of a number of subsystems. Each of these requires modeling software for development, testing, and prediction. It is the goal of my team to create an overarching software architecture called the Integrated Spacecraft Analysis (ISCA) to aid in deploying the discrete subsystems' models. Such a plan has been attempted in the past, and has failed due to the excessive scope of the project. Our goal in this version of ISCA is to use new resources to reduce the scope of the project, including using ontological models to help link the internal interfaces of subsystems' models with the ISCA architecture. I have created an ontology of functions specific to the modeling system of the navigation system of a spacecraft. The resulting ontology not only links, at an architectural level, language-specific instantiations of the modeling system's code, but also is web-viewable and can act as a documentation standard. This ontology is proof of the concept that ontological modeling can aid in the integration necessary for ISCA to work, and can act as the prototype for future ISCA ontologies.

  16. An Approach to Support Collaborative Ontology Construction.

    PubMed

    Tahar, Kais; Schaaf, Michael; Jahn, Franziska; Kücherer, Christian; Paech, Barbara; Herre, Heinrich; Winter, Alfred

    2016-01-01

    The increasing number of terms used in textbooks for information management (IM) in hospitals makes it difficult for medical informatics students to grasp IM concepts and their interrelations. Formal ontologies which comprehend and represent the essential content of textbooks can facilitate the learning process in IM education. The manual construction of such ontologies is time-consuming and thus very expensive [3]. Moreover, most domain experts lack skills in using a formal language like OWL [2] and usually have no experience with standard editing tools like Protégé http://protege.stanford.edu [4,5]. This paper presents an ontology modeling approach based on Excel2OWL, a self-developed tool which efficiently supports domain experts in collaboratively constructing ontologies from textbooks. This approach was applied to classic IM textbooks, resulting in an ontology called SNIK. Our method facilitates the collaboration between domain experts and ontologists in the development process. Furthermore, the proposed approach enables ontologists to detect modeling errors and also to evaluate and improve the quality of the resulting ontology rapidly. This approach allows us to visualize the modeled textbooks and to analyze their semantics automatically. Hence, it can be used for e-learning purposes, particularly in the field of IM in hospitals. PMID:27577406

  17. COHeRE: Cross-Ontology Hierarchical Relation Examination for Ontology Quality Assurance

    PubMed Central

    Cui, Licong

    2015-01-01

    Biomedical ontologies play a vital role in healthcare information management, data integration, and decision support. Ontology quality assurance (OQA) is an indispensable part of the ontology engineering cycle. Most existing OQA methods are based on the knowledge provided within the targeted ontology. This paper proposes a novel cross-ontology analysis method, Cross-Ontology Hierarchical Relation Examination (COHeRE), to detect inconsistencies and possible errors in hierarchical relations across multiple ontologies. COHeRE leverages the Unified Medical Language System (UMLS) knowledge source and the MapReduce cloud computing technique for systematic, large-scale ontology quality assurance work. COHeRE consists of three main steps with the UMLS concepts and relations as the input. First, the relations claimed in source vocabularies are filtered and aggregated for each pair of concepts. Second, inconsistent relations are detected if a concept pair is related by different types of relations in different source vocabularies. Finally, the uncovered inconsistent relations are voted on according to their number of occurrences across the different source vocabularies. The voting result, together with the inconsistent relations, serves as the output of COHeRE for possible ontological change, and the highest votes provide an initial suggestion on how such inconsistencies might be fixed. In UMLS, 138,987 concept pairs were found to have inconsistent relationships across multiple source vocabularies. 40 inconsistent concept pairs involving hierarchical relationships were randomly selected and manually reviewed by a human expert. 95.8% of the inconsistent relations involved in these concept pairs indeed exist in their source vocabularies rather than being introduced by mistake in the UMLS integration process. The human expert agreed with the suggested relationship for 73.7% of the concept pairs. The effectiveness of COHeRE indicates that UMLS provides a promising environment to enhance
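
    The three steps listed above (aggregate relations per concept pair, detect pairs asserted with different relation types in different source vocabularies, and vote by number of occurrences) can be sketched on toy data in plain Python; the tuples below are invented and are not UMLS content.

      from collections import Counter, defaultdict

      # (concept_a, concept_b, relation, source_vocabulary)
      claims = [("C1", "C2", "isa", "VOC_A"), ("C1", "C2", "isa", "VOC_B"),
                ("C1", "C2", "part_of", "VOC_C"), ("C3", "C4", "isa", "VOC_A")]

      by_pair = defaultdict(Counter)
      for a, b, rel, src in claims:
          by_pair[(a, b)][rel] += 1                        # step 1: aggregate per concept pair

      for pair, votes in by_pair.items():
          if len(votes) > 1:                               # step 2: inconsistent relation types
              suggested, count = votes.most_common(1)[0]   # step 3: highest vote as suggestion
              print(pair, dict(votes), "-> suggest", suggested)
      # ('C1', 'C2') {'isa': 2, 'part_of': 1} -> suggest isa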

  18. COHeRE: Cross-Ontology Hierarchical Relation Examination for Ontology Quality Assurance.

    PubMed

    Cui, Licong

    2015-01-01

    Biomedical ontologies play a vital role in healthcare information management, data integration, and decision support. Ontology quality assurance (OQA) is an indispensable part of the ontology engineering cycle. Most existing OQA methods are based on the knowledge provided within the targeted ontology. This paper proposes a novel cross-ontology analysis method, Cross-Ontology Hierarchical Relation Examination (COHeRE), to detect inconsistencies and possible errors in hierarchical relations across multiple ontologies. COHeRE leverages the Unified Medical Language System (UMLS) knowledge source and the MapReduce cloud computing technique for systematic, large-scale ontology quality assurance work. COHeRE consists of three main steps with the UMLS concepts and relations as the input. First, the relations claimed in source vocabularies are filtered and aggregated for each pair of concepts. Second, inconsistent relations are detected if a concept pair is related by different types of relations in different source vocabularies. Finally, the uncovered inconsistent relations are voted on according to their number of occurrences across the different source vocabularies. The voting result, together with the inconsistent relations, serves as the output of COHeRE for possible ontological change, and the highest votes provide an initial suggestion on how such inconsistencies might be fixed. In UMLS, 138,987 concept pairs were found to have inconsistent relationships across multiple source vocabularies. 40 inconsistent concept pairs involving hierarchical relationships were randomly selected and manually reviewed by a human expert. 95.8% of the inconsistent relations involved in these concept pairs indeed exist in their source vocabularies rather than being introduced by mistake in the UMLS integration process. The human expert agreed with the suggested relationship for 73.7% of the concept pairs. The effectiveness of COHeRE indicates that UMLS provides a promising environment to enhance

  19. A 2013 workshop: vaccine and drug ontology studies (VDOS 2013).

    PubMed

    Tao, Cui; He, Yongqun; Arabandi, Sivaram

    2014-01-01

    The 2013 "Vaccine and Drug Ontology Studies" (VDOS 2013) international workshop series focuses on vaccine- and drug-related ontology modeling and applications. Drugs and vaccines have contributed to dramatic improvements in public health worldwide. Over the last decade, tremendous efforts have been made in the biomedical ontology community to ontologically represent various areas associated with vaccines and drugs - extending existing clinical terminology systems such as SNOMED, RxNorm, NDF-RT, and MedDRA, as well as developing new models such as Vaccine Ontology. The VDOS workshop series provides a platform for discussing innovative solutions as well as the challenges in the development and applications of biomedical ontologies for representing and analyzing drugs and vaccines, their administration, host immune responses, adverse events, and other related topics. The six full-length papers included in this thematic issue focuses on three main areas: (i) ontology development and representation, (ii) ontology mapping, maintaining and auditing, and (iii) ontology applications. PMID:24650607

  20. Speeding up ontology creation of scientific terms

    NASA Astrophysics Data System (ADS)

    Bermudez, L. E.; Graybeal, J.

    2005-12-01

    An ontology is a formal specification of a controlled vocabulary. Ontologies are composed of classes (similar to categories), individuals (members of classes) and properties (attributes of the individuals). Having vocabularies expressed in a formal specification like the Web Ontology Language (OWL) enables interoperability, since OWL can be interpreted by software programs. Two main, non-inclusive strategies exist when constructing an ontology: a top-down approach and a bottom-up approach. The former is directed towards the creation of the top classes (main concepts) first, followed by the required subclasses and individuals. The latter approach starts from the individuals and then finds shared properties, promoting the creation of classes. At the Marine Metadata Interoperability (MMI) Initiative we used a bottom-up approach to create ontologies from simple vocabularies (those that are not expressed in a conceptual way). We found that the vocabularies were available in different formats (relational databases, plain files, HTML, XML, PDF) and were sometimes composed of thousands of terms, making the ontology creation process a very time-consuming activity. To expedite the conversion process we created a tool, VOC2OWL, that takes a vocabulary in a table-like structure (CSV or TAB format) and a conversion-property file and automatically creates an ontology. We identified two basic structures of simple vocabularies: flat vocabularies (e.g., a phone directory) and hierarchical vocabularies (e.g., taxonomies). The property file defines a list of attributes for the conversion process for each structure type. The attributes include metadata information (title, description, subject, contributor, urlForMoreInformation), conversion flags (treatAsHierarchy, generateAutoIds) and other conversion information needed to create the ontology (columnForPrimaryClass, columnsToCreateClassesFrom, fileIn, fileOut, namespace, format). We created more than 50 ontologies and
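
    The conversion VOC2OWL performs can be approximated in a short rdflib sketch: read a tabular vocabulary and emit one OWL class per row, with one column chosen as the parent class (the columnForPrimaryClass idea). The column names, terms and output namespace below are assumptions made for the example, not MMI content.

      import csv, io
      from rdflib import Graph, Namespace, Literal, RDF, RDFS, OWL

      NS = Namespace("http://example.org/vocab#")
      table = ("term,category,definition\n"
               "thermistor,sensor,Temperature sensor\n"
               "fluorometer,sensor,Measures fluorescence\n")

      g = Graph()
      for row in csv.DictReader(io.StringIO(table)):
          cls = NS[row["term"].replace(" ", "_")]
          parent = NS[row["category"].replace(" ", "_")]
          g.add((cls, RDF.type, OWL.Class))
          g.add((cls, RDFS.subClassOf, parent))            # column chosen as the primary class
          g.add((cls, RDFS.comment, Literal(row["definition"])))
      print(g.serialize(format="turtle"))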

  1. An ontology-based hierarchical semantic modeling approach to clinical pathway workflows.

    PubMed

    Ye, Yan; Jiang, Zhibin; Diao, Xiaodi; Yang, Dong; Du, Gang

    2009-08-01

    This paper proposes an ontology-based approach to modeling clinical pathway workflows at the semantic level for facilitating computerized clinical pathway implementation and the efficient delivery of high-quality healthcare services. A clinical pathway ontology (CPO) is formally defined in the Web Ontology Language (OWL) to provide a common semantic foundation for the meaningful representation and exchange of pathway-related knowledge. A CPO-based semantic modeling method is then presented to describe clinical pathways as interconnected hierarchical models comprising a top-level outcome flow and an intervention workflow level along a care timeline. Furthermore, relevant temporal knowledge can be fully represented by combining temporal entities in CPO with temporal rules based on the Semantic Web Rule Language (SWRL). An illustrative example of a clinical pathway for cesarean section shows the applicability of the proposed methodology in enabling structured semantic descriptions of any real clinical pathway. PMID:19539278

  2. The Semanticscience Integrated Ontology (SIO) for biomedical research and knowledge discovery

    PubMed Central

    2014-01-01

    The Semanticscience Integrated Ontology (SIO) is an ontology to facilitate biomedical knowledge discovery. SIO features a simple upper level comprised of essential types and relations for the rich description of arbitrary (real, hypothesized, virtual, fictional) objects, processes and their attributes. SIO specifies simple design patterns to describe and associate qualities, capabilities, functions, quantities, and informational entities including textual, geometrical, and mathematical entities, and provides specific extensions in the domains of chemistry, biology, biochemistry, and bioinformatics. SIO provides an ontological foundation for the Bio2RDF linked data for the life sciences project and is used for semantic integration and discovery for SADI-based semantic web services. SIO is freely available to all users under a creative commons by attribution license. See website for further information: http://sio.semanticscience.org. PMID:24602174

  3. The Semanticscience Integrated Ontology (SIO) for biomedical research and knowledge discovery.

    PubMed

    Dumontier, Michel; Baker, Christopher Jo; Baran, Joachim; Callahan, Alison; Chepelev, Leonid; Cruz-Toledo, José; Del Rio, Nicholas R; Duck, Geraint; Furlong, Laura I; Keath, Nichealla; Klassen, Dana; McCusker, James P; Queralt-Rosinach, Núria; Samwald, Matthias; Villanueva-Rosales, Natalia; Wilkinson, Mark D; Hoehndorf, Robert

    2014-01-01

    The Semanticscience Integrated Ontology (SIO) is an ontology to facilitate biomedical knowledge discovery. SIO features a simple upper level comprised of essential types and relations for the rich description of arbitrary (real, hypothesized, virtual, fictional) objects, processes and their attributes. SIO specifies simple design patterns to describe and associate qualities, capabilities, functions, quantities, and informational entities including textual, geometrical, and mathematical entities, and provides specific extensions in the domains of chemistry, biology, biochemistry, and bioinformatics. SIO provides an ontological foundation for the Bio2RDF linked data for the life sciences project and is used for semantic integration and discovery for SADI-based semantic web services. SIO is freely available to all users under a creative commons by attribution license. See website for further information: http://sio.semanticscience.org. PMID:24602174

  4. OpenTox predictive toxicology framework: toxicological ontology and semantic media wiki-based OpenToxipedia

    PubMed Central

    2012-01-01

    Background The OpenTox Framework, developed by the partners in the OpenTox project (http://www.opentox.org), aims at providing a unified access to toxicity data, predictive models and validation procedures. Interoperability of resources is achieved using a common information model, based on the OpenTox ontologies, describing predictive algorithms, models and toxicity data. As toxicological data may come from different, heterogeneous sources, a deployed ontology, unifying the terminology and the resources, is critical for the rational and reliable organization of the data, and its automatic processing. Results The following related ontologies have been developed for OpenTox: a) Toxicological ontology – listing the toxicological endpoints; b) Organs system and Effects ontology – addressing organs, targets/examinations and effects observed in in vivo studies; c) ToxML ontology – representing semi-automatic conversion of the ToxML schema; d) OpenTox ontology– representation of OpenTox framework components: chemical compounds, datasets, types of algorithms, models and validation web services; e) ToxLink–ToxCast assays ontology and f) OpenToxipedia community knowledge resource on toxicology terminology. OpenTox components are made available through standardized REST web services, where every compound, data set, and predictive method has a unique resolvable address (URI), used to retrieve its Resource Description Framework (RDF) representation, or to initiate the associated calculations and generate new RDF-based resources. The services support the integration of toxicity and chemical data from various sources, the generation and validation of computer models for toxic effects, seamless integration of new algorithms and scientifically sound validation routines and provide a flexible framework, which allows building arbitrary number of applications, tailored to solving different problems by end users (e.g. toxicologists). Availability The OpenTox toxicological
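
    The pattern of retrieving an RDF representation from a REST resource, as the OpenTox services do for compounds, datasets and models, can be sketched as follows. The resource URI below is a placeholder assumption, not a guaranteed live OpenTox endpoint.

```python
# Hedged sketch: fetch the RDF description of a REST resource and list its
# typed triples. The URI is a placeholder, not a live OpenTox service address.
import requests
from rdflib import Graph, RDF

resource_uri = "https://example.org/opentox/compound/1"   # assumed resource URI

resp = requests.get(resource_uri, headers={"Accept": "application/rdf+xml"}, timeout=30)
resp.raise_for_status()

g = Graph()
g.parse(data=resp.text, format="xml")

for subject, rdf_type in g.subject_objects(RDF.type):
    print(subject, "is a", rdf_type)
```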

  5. Ontology Mapping Neural Network: An Approach to Learning and Inferring Correspondences among Ontologies

    ERIC Educational Resources Information Center

    Peng, Yefei

    2010-01-01

    An ontology mapping neural network (OMNN) is proposed in order to learn and infer correspondences among ontologies. It extends the Identical Elements Neural Network (IENN)'s ability to represent and map complex relationships. The learning dynamics of simultaneous (interlaced) training of similar tasks interact at the shared connections of the…

  6. Interoperability between biomedical ontologies through relation expansion, upper-level ontologies and automatic reasoning.

    PubMed

    Hoehndorf, Robert; Dumontier, Michel; Oellrich, Anika; Rebholz-Schuhmann, Dietrich; Schofield, Paul N; Gkoutos, Georgios V

    2011-01-01

    Researchers design ontologies as a means to accurately annotate and integrate experimental data across heterogeneous and disparate data- and knowledge bases. Formal ontologies make the semantics of terms and relations explicit such that automated reasoning can be used to verify the consistency of knowledge. However, many biomedical ontologies do not sufficiently formalize the semantics of their relations and are therefore limited with respect to automated reasoning for large scale data integration and knowledge discovery. We describe a method to improve automated reasoning over biomedical ontologies and identify several thousand contradictory class definitions. Our approach aligns terms in biomedical ontologies with foundational classes in a top-level ontology and formalizes composite relations as class expressions. We describe the semi-automated repair of contradictions and demonstrate expressive queries over interoperable ontologies. Our work forms an important cornerstone for data integration, automatic inference and knowledge discovery based on formal representations of knowledge. Our results and analysis software are available at http://bioonto.de/pmwiki.php/Main/ReasonableOntologies. PMID:21789201
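
    A minimal sketch of this kind of automated consistency check, assuming the aligned ontologies are available as a single local OWL file, can be written with the owlready2 package and its bundled HermiT reasoner (a Java runtime is required). The file path is illustrative.

```python
# Hedged sketch of an automated consistency check over a merged biomedical
# ontology. After classification, unsatisfiable (contradictory) classes become
# equivalent to owl:Nothing and are reported by owlready2.
from owlready2 import get_ontology, sync_reasoner, default_world

onto = get_ontology("file:///data/merged_biomedical.owl").load()   # assumed local file

with onto:
    sync_reasoner()   # run the bundled HermiT reasoner (requires Java)

unsatisfiable = list(default_world.inconsistent_classes())
print(f"{len(unsatisfiable)} contradictory class definitions found")
for cls in unsatisfiable:
    print(" -", cls)
```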

  7. Building a semi-automatic ontology learning and construction system for geosciences

    NASA Astrophysics Data System (ADS)

    Babaie, H. A.; Sunderraman, R.; Zhu, Y.

    2013-12-01

    We are developing an ontology learning and construction framework that allows continuous, semi-automatic knowledge extraction, verification, validation, and maintenance by potentially a very large group of collaborating domain experts in any geosciences field. The system brings geoscientists from the side-lines to the center stage of ontology building, allowing them to collaboratively construct and enrich new ontologies, and merge, align, and integrate existing ontologies and tools. These constantly evolving ontologies can more effectively address community's interests, purposes, tools, and change. The goal is to minimize the cost and time of building ontologies, and maximize the quality, usability, and adoption of ontologies by the community. Our system will be a domain-independent ontology learning framework that applies natural language processing, allowing users to enter their ontology in a semi-structured form, and a combined Semantic Web and Social Web approach that lets direct participation of geoscientists who have no skill in the design and development of their domain ontologies. A controlled natural language (CNL) interface and an integrated authoring and editing tool automatically convert syntactically correct CNL text into formal OWL constructs. The WebProtege-based system will allow a potentially large group of geoscientists, from multiple domains, to crowd source and participate in the structuring of their knowledge model by sharing their knowledge through critiquing, testing, verifying, adopting, and updating of the concept models (ontologies). We will use cloud storage for all data and knowledge base components of the system, such as users, domain ontologies, discussion forums, and semantic wikis that can be accessed and queried by geoscientists in each domain. We will use NoSQL databases such as MongoDB as a service in the cloud environment. MongoDB uses the lightweight JSON format, which makes it convenient and easy to build Web applications using
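
    The abstract's choice of MongoDB and JSON for storing knowledge-base components can be illustrated with a short pymongo sketch; the database, collection and field names below are invented for the example and do not describe the actual system.

```python
# Hedged sketch: ontology fragments proposed through the CNL interface stored as
# JSON documents in MongoDB. All names and fields are illustrative assumptions.
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")   # assumed connection string
db = client["geo_ontology_system"]

concept = {
    "label": "Fault",
    "domain": "structural geology",
    "parents": ["GeologicStructure"],
    "status": "proposed",          # e.g. proposed / reviewed / adopted by the community
    "contributor": "jdoe",
}
db.concepts.insert_one(concept)

# Retrieve all concepts still awaiting community review
for doc in db.concepts.find({"status": "proposed"}):
    print(doc["label"], "<-", doc["parents"])
```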

  8. A Prototype Ontology Tool and Interface for Coastal Atlas Interoperability

    NASA Astrophysics Data System (ADS)

    Wright, D. J.; Bermudez, L.; O'Dea, L.; Haddad, T.; Cummins, V.

    2007-12-01

    While significant capacity has been built in the field of web-based coastal mapping and informatics in the last decade, little has been done to take stock of the implications of these efforts or to identify best practice in terms of taking lessons learned into consideration. This study reports on the second of two transatlantic workshops that bring together key experts from Europe, the United States and Canada to examine state-of-the-art developments in coastal web atlases (CWA), based on web-enabled geographic information systems (GIS), along with future needs in mapping and informatics for the coastal practitioner community. While multiple benefits are derived from these tailor-made atlases (e.g. speedy access to multiple sources of coastal data and information; economic use of time by avoiding individual contact with different data holders), the potential exists to derive added value from the integration of disparate CWAs, to optimize decision-making at a variety of levels and across themes. The second workshop focused on the development of a strategy to make coastal web atlases interoperable by way of controlled vocabularies and ontologies. The strategy is based on a web service-oriented architecture and an implementation of Open Geospatial Consortium (OGC) web services, such as the Web Feature Service (WFS) and Web Map Service (WMS). Atlases publish Catalogue Service for the Web (CSW) endpoints using ISO 19115 metadata and controlled vocabularies encoded as Uniform Resource Identifiers (URIs). URIs allow the terminology of each atlas to be uniquely identified and facilitate mapping of terminologies using semantic web technologies. A domain ontology was also created to formally represent coastal erosion terminology as a use case, with a test linkage of those terms between the Marine Irish Digital Atlas and the Oregon Coastal Atlas. A web interface is being developed to discover coastal hazard themes in distributed coastal atlases as part of a broader International Coastal
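
    A catalogue built this way can be queried with a standard CSW 2.0.2 GetRecords request. The sketch below uses the OGC KVP encoding against a placeholder endpoint; the endpoint URL and the search term are assumptions.

```python
# Hedged sketch: query an atlas catalogue for records mentioning a coastal
# erosion term via a CSW 2.0.2 GetRecords KVP request. Endpoint is a placeholder.
import requests

CSW_ENDPOINT = "https://example.org/atlas/csw"   # assumed catalogue endpoint

params = {
    "service": "CSW",
    "version": "2.0.2",
    "request": "GetRecords",
    "typeNames": "csw:Record",
    "elementSetName": "summary",
    "resultType": "results",
    "constraintLanguage": "CQL_TEXT",
    "constraint_language_version": "1.1.0",
    "constraint": "AnyText like '%coastal erosion%'",
}

resp = requests.get(CSW_ENDPOINT, params=params, timeout=30)
resp.raise_for_status()
print(resp.text[:2000])   # raw Dublin Core / ISO 19115 XML records
```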

  9. Process attributes in bio-ontologies

    PubMed Central

    2012-01-01

    Background Biomedical processes can provide essential information about the (mal-) functioning of an organism and are thus frequently represented in biomedical terminologies and ontologies, including the GO Biological Process branch. These processes often need to be described and categorised in terms of their attributes, such as rates or regularities. The adequate representation of such process attributes has been a contentious issue in bio-ontologies recently; and domain ontologies have correspondingly developed ad hoc workarounds that compromise interoperability and logical consistency. Results We present a design pattern for the representation of process attributes that is compatible with upper ontology frameworks such as BFO and BioTop. Our solution rests on two key tenets: firstly, that many of the sorts of process attributes which are biomedically interesting can be characterised by the ways that repeated parts of such processes constitute, in combination, an overall process; secondly, that entities for which a full logical definition can be assigned do not need to be treated as primitive within a formal ontology framework. We apply this approach to the challenge of modelling and automatically classifying examples of normal and abnormal rates and patterns of heart beating processes, and discuss the expressivity required in the underlying ontology representation language. We provide full definitions for process attributes at increasing levels of domain complexity. Conclusions We show that a logical definition of process attributes is feasible, though limited by the expressivity of DL languages so that the creation of primitives is still necessary. This finding may endorse current formal upper-ontology frameworks as a way of ensuring consistency, interoperability and clarity. PMID:22928880

  10. GFVO: the Genomic Feature and Variation Ontology

    PubMed Central

    Durgahee, Bibi Sehnaaz Begum; Eilbeck, Karen; Antezana, Erick; Hoehndorf, Robert; Dumontier, Michel

    2015-01-01

    Falling costs in genomic laboratory experiments have led to a steady increase of genomic feature and variation data. Multiple genomic data formats exist for sharing these data, and whilst they are similar, they are addressing slightly different data viewpoints and are consequently not fully compatible with each other. The fragmentation of data format specifications makes it hard to integrate and interpret data for further analysis with information from multiple data providers. As a solution, a new ontology is presented here for annotating and representing genomic feature and variation dataset contents. The Genomic Feature and Variation Ontology (GFVO) specifically addresses genomic data as it is regularly shared using the GFF3 (incl. FASTA), GTF, GVF and VCF file formats. GFVO simplifies data integration and enables linking of genomic annotations across datasets through common semantics of genomic types and relations. Availability and implementation. The latest stable release of the ontology is available via its base URI; previous and development versions are available at the ontology’s GitHub repository: https://github.com/BioInterchange/Ontologies; versions of the ontology are indexed through BioPortal (without external class-/property-equivalences due to BioPortal release 4.10 limitations); examples and reference documentation is provided on a separate web-page: http://www.biointerchange.org/ontologies.html. GFVO version 1.0.2 is licensed under the CC0 1.0 Universal license (https://creativecommons.org/publicdomain/zero/1.0) and therefore de facto within the public domain; the ontology can be appropriated without attribution for commercial and non-commercial use. PMID:26019997
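
    To illustrate the kind of mapping GFVO enables, the sketch below turns a single GFF3 feature line into RDF triples with rdflib. The base URI and the class and property names are simplified stand-ins, not the exact terms defined by GFVO.

```python
# Hedged sketch: one GFF3 feature line converted to RDF. The "gfvo" namespace
# and its class/property names are placeholders, not the ontology's real terms.
from rdflib import Graph, Literal, Namespace, RDF, XSD

GFVO = Namespace("http://example.org/gfvo#")        # placeholder namespace
EX = Namespace("http://example.org/features/")

gff3_line = "chr1\thavana\tgene\t11869\t14409\t.\t+\t.\tID=gene:ENSG00000223972"
seqid, source, ftype, start, end, score, strand, phase, attrs = gff3_line.split("\t")
feature_id = dict(a.split("=") for a in attrs.split(";"))["ID"]

g = Graph()
g.bind("gfvo", GFVO)
feature = EX[feature_id.replace(":", "_")]
g.add((feature, RDF.type, GFVO.Feature))
g.add((feature, GFVO.hasSource, Literal(source)))
g.add((feature, GFVO.featureType, Literal(ftype)))
g.add((feature, GFVO.start, Literal(int(start), datatype=XSD.integer)))
g.add((feature, GFVO.end, Literal(int(end), datatype=XSD.integer)))
print(g.serialize(format="turtle"))
```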

  11. Brucellosis Ontology (IDOBRU) as an extension of the Infectious Disease Ontology

    PubMed Central

    2011-01-01

    Background Caused by intracellular Gram-negative bacteria Brucella spp., brucellosis is the most common bacterial zoonotic disease. Extensive studies in brucellosis have yielded a large number of publications and data covering various topics ranging from basic Brucella genetic study to vaccine clinical trials. To support data interoperability and reasoning, a community-based brucellosis-specific biomedical ontology is needed. Results The Brucellosis Ontology (IDOBRU: http://sourceforge.net/projects/idobru), a biomedical ontology in the brucellosis domain, is an extension ontology of the core Infectious Disease Ontology (IDO-core) and follows OBO Foundry principles. Currently IDOBRU contains 1503 ontology terms, which includes 739 Brucella-specific terms, 414 IDO-core terms, and 350 terms imported from 10 existing ontologies. IDOBRU has been used to model different aspects of brucellosis, including host infection, zoonotic disease transmission, symptoms, virulence factors and pathogenesis, diagnosis, intentional release, vaccine prevention, and treatment. Case studies are typically used in our IDOBRU modeling. For example, diurnal temperature variation in Brucella patients, a Brucella-specific PCR method, and a WHO-recommended brucellosis treatment were selected as use cases to model brucellosis symptom, diagnosis, and treatment, respectively. Developed using OWL, IDOBRU supports OWL-based ontological reasoning. For example, by performing a Description Logic (DL) query in the OWL editor Protégé 4 or a SPARQL query in an IDOBRU SPARQL server, a check of Brucella virulence factors showed that eight of them are known protective antigens based on the biological knowledge captured within the ontology. Conclusions IDOBRU is the first reported bacterial infectious disease ontology developed to represent different disease aspects in a formal logical format. It serves as a brucellosis knowledgebase and supports brucellosis data integration and automated reasoning. PMID
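
    The virulence-factor check described above can be approximated with a SPARQL query over a local copy of the ontology, as sketched below with rdflib. The file name and the two class IRIs are illustrative assumptions rather than actual IDOBRU identifiers.

```python
# Hedged sketch: list classes asserted as both virulence factors and protective
# antigens. The file name and class IRIs are placeholders, not IDOBRU IRIs.
from rdflib import Graph

g = Graph()
g.parse("idobru.owl", format="xml")           # assumed local copy of the ontology

query = """
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
SELECT ?term ?label WHERE {
  ?term rdfs:subClassOf <http://example.org/idobru#VirulenceFactor> ;
        rdfs:subClassOf <http://example.org/idobru#ProtectiveAntigen> ;
        rdfs:label ?label .
}"""
for row in g.query(query):
    print(row.term, row.label)
```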

  12. Real-Time Image Reconstruction for Pulse EPR Oxygen Imaging Using a GPU and Lookup Table Parameter Fitting

    PubMed Central

    Redler, Gage; Qiao, Zhiwei; Epel, Boris; Halpern, Howard J.

    2015-01-01

    The importance of tissue oxygenation has led to a great interest in methods for imaging pO2 in vivo. Electron paramagnetic resonance imaging (EPRI) provides noninvasive, near absolute 1 mm-resolved 3D images of pO2 in the tissues and tumors of living animals. Current EPRI image reconstruction methods tend to be time consuming and preclude real-time visualization of information. Methods are presented to significantly accelerate the reconstruction process in order to enable real-time reconstruction of EPRI pO2 images. These methods are image reconstruction using graphics processing unit (GPU)-based 3D filtered back-projection and lookup table parameter fitting. The combination of these methods leads to acceleration factors of over 650 compared to current methods and allows for real-time reconstruction of EPRI images of pO2 in vivo. PMID:26167137
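
    The lookup-table idea (precompute model curves over a parameter grid, then match each measured signal to its best-fitting entry) can be sketched in NumPy as below. The exponential decay model, parameter range and time axis are illustrative assumptions, not the EPRI fitting model used in the paper.

```python
# Hedged sketch of lookup-table parameter fitting: precompute model curves for a
# grid of relaxation rates, then assign each measured decay the grid value whose
# normalized curve correlates best with it.
import numpy as np

t = np.linspace(0.0, 5.0, 20)                    # sampling times (arbitrary units, assumed)
r2_grid = np.linspace(0.1, 10.0, 2000)           # candidate relaxation rates
lut = np.exp(-np.outer(r2_grid, t))              # one model curve per candidate rate
lut /= np.linalg.norm(lut, axis=1, keepdims=True)

def fit_r2(signals):
    """Match each measured decay (one per row) to the closest normalized LUT curve."""
    s = signals / np.linalg.norm(signals, axis=1, keepdims=True)
    best = np.argmax(s @ lut.T, axis=1)          # maximum correlation = best fit
    return r2_grid[best]

# Synthetic test: noisy decays with known rates
true_r2 = np.array([0.5, 2.0, 7.5])
signals = np.exp(-np.outer(true_r2, t)) + 0.01 * np.random.randn(3, t.size)
print(fit_r2(signals))                           # should be close to true_r2
```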

  13. NOA: a novel Network Ontology Analysis method

    PubMed Central

    Wang, Jiguang; Huang, Qiang; Liu, Zhi-Ping; Wang, Yong; Wu, Ling-Yun; Chen, Luonan; Zhang, Xiang-Sun

    2011-01-01

    Gene ontology analysis has become a popular and important tool in bioinformatics studies, and current ontology analyses are mainly conducted on an individual gene or a gene list. However, recent molecular network analysis reveals that the same list of genes with different interactions may perform different functions. Therefore, it is necessary to consider molecular interactions to correctly and specifically annotate biological networks. Here, we propose a novel Network Ontology Analysis (NOA) method to perform gene ontology enrichment analysis on biological networks. Specifically, NOA first defines a link ontology that assigns functions to interactions based on the known annotations of the joined genes via optimizing two novel indexes, 'Coverage' and 'Diversity'. Then, NOA generates two alternative reference sets to statistically rank the enriched functional terms for a given biological network. We compare NOA with traditional enrichment analysis methods in several biological networks, and find that: (i) NOA can capture the change of functions not only in dynamic transcription regulatory networks but also in rewiring protein interaction networks, while the traditional methods cannot, and (ii) NOA can find more relevant and specific functions than traditional methods in different types of static networks. Furthermore, a freely accessible web server for NOA has been developed at http://www.aporc.org/noa/. PMID:21543451
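
    The core enrichment statistic behind such analyses can be sketched with a hypergeometric test, as below. The counts are invented for the example, and NOA's own reference sets and Coverage/Diversity indexes are not reproduced.

```python
# Hedged sketch of the basic over-representation test: is a function annotated
# to more links in the network than expected by chance, relative to a reference
# set of annotated links?
from scipy.stats import hypergeom

M = 5000   # links in the reference set (assumed)
n = 120    # reference links annotated with the function of interest (assumed)
N = 80     # links in the network under study (assumed)
k = 12     # network links annotated with that function (assumed)

# P(X >= k): probability of seeing at least k annotated links by chance
p_value = hypergeom.sf(k - 1, M, n, N)
print(f"enrichment p-value = {p_value:.3e}")
```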

  14. The ontology model of FrontCRM framework

    NASA Astrophysics Data System (ADS)

    Budiardjo, Eko K.; Perdana, Wira; Franshisca, Felicia

    2013-03-01

    Adoption and implementation of Customer Relationship Management (CRM) is not merely a technological installation; the emphasis is rather on the application of a customer-centric philosophy and culture as a whole. CRM must begin at the level of business strategy, the only level at which thorough organizational changes are possible. The change agenda can then be directed to departmental plans and supported by information technology. Work processes related to the CRM concept include marketing, sales, and services. FrontCRM is developed as a framework to guide the identification of business processes related to CRM, based on the strategic planning approach. This leads to the identification of processes and practices in every process area related to marketing, sales, and services. The ontology model presented in this paper serves as a tool to avoid misunderstanding of the framework, to define practices systematically within each process area, and to find CRM software features related to those practices.

  15. The Locus Lookup Tool at MaizeGDB: Identification of Genomic Regions in Maize by Integrating Sequence Information with Physical and Genetic Maps

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Methods to automatically integrate sequence information with physical and genetic maps are scarce. The Locus Lookup Tool enables researchers to define windows of genomic sequence likely to contain loci of interest where only genetic or physical mapping associations are reported. Using the Locus Look...

  16. Ontological metaphors for negative energy in an interdisciplinary context

    NASA Astrophysics Data System (ADS)

    Dreyfus, Benjamin W.; Geller, Benjamin D.; Gouvea, Julia; Sawtelle, Vashti; Turpen, Chandra; Redish, Edward F.

    2014-12-01

    Teaching about energy in interdisciplinary settings that emphasize coherence among physics, chemistry, and biology leads to a more central role for chemical bond energy. We argue that an interdisciplinary approach to chemical energy leads to modeling chemical bonds in terms of negative energy. While recent work on ontological metaphors for energy has emphasized the affordances of the substance ontology, this ontology is problematic in the context of negative energy. Instead, we apply a dynamic ontologies perspective to argue that blending the substance and location ontologies for energy can be effective in reasoning about negative energy in the context of reasoning about chemical bonds. We present data from an introductory physics for the life sciences course in which both experts and students successfully use this blended ontology. Blending these ontologies is most successful when the substance and location ontologies are combined such that each is strategically utilized in reasoning about particular aspects of energetic processes.

  17. OntoMama: An Ontology Applied to Breast Cancer.

    PubMed

    Melo, M T D; Gonçalves, V H L; Costa, H D R; Braga, D S; Gomide, L B; Alves, C S; Brasil, L M

    2015-01-01

    This article describes the process of building an ontology to assist medical students and professionals specialized in Oncology. The ontology allows the user to obtain knowledge more quickly and thus assist professionals in their decision-making. PMID:26262403

  18. A Posteriori Ontology Engineering for Data-Driven Science

    SciTech Connect

    Gessler, Damian Dg; Joslyn, Cliff A.; Verspoor, Karin M.

    2013-05-28

    Science—and biology in particular—has a rich tradition in categorical knowledge management. This continues today in the generation and use of formal ontologies. Unfortunately, the link between hard data and ontological content is predominately qualitative, not quantitative. The usual approach is to construct ontologies of qualitative concepts, and then annotate the data to the ontologies. This process has seen great value, yet it is laborious, and the success to which ontologies are managing and organizing the full information content of the data is uncertain. An alternative approach is the converse: use the data itself to quantitatively drive ontology creation. Under this model, one generates ontologies at the time they are needed, allowing them to change as more data influences both their topology and their concept space. We outline a combined approach to achieve this, taking advantage of two technologies, the mathematical approach of Formal Concept Analysis (FCA) and the semantic web technologies of the Web Ontology Language (OWL).

  19. Statistical mechanics and the ontological interpretation

    NASA Astrophysics Data System (ADS)

    Bohm, D.; Hiley, B. J.

    1996-06-01

    To complete our ontological interpretation of quantum theory we have to conclude a treatment of quantum statistical mechanics. The basic concepts in the ontological approach are the particle and the wave function. The density matrix cannot play a fundamental role here. Therefore quantum statistical mechanics will require a further statistical distribution over wave functions in addition to the distribution of particles that have a specified wave function. Ultimately the wave function of the universe will be required, but we show that if the universe is not in thermodynamic equilibrium then it can be treated in terms of weakly interacting large-scale constituents that are very nearly independent of each other. In this way we obtain the same results as those of the usual approach within the framework of the ontological interpretation.

  20. Intuitive ontologies for energy in physics

    NASA Astrophysics Data System (ADS)

    Scherr, Rachel E.; Close, Hunter G.; McKagan, Sarah B.

    2012-02-01

    The nature of energy is not typically an explicit topic of physics instruction. Nonetheless, participants in physics courses that involve energy are frequently saying what kind of thing they think energy is, both verbally and nonverbally. Physics textbooks also provide discourse suggesting the nature of energy as conceptualized by disciplinary experts. The premise of an embodied cognition theoretical perspective is that we understand the kinds of things that may exist in the world (ontology) in terms of sensorimotor experiences such as object permanence and movement. We offer examples of intuitive ontologies for energy that we have observed in classroom contexts and physics texts, including energy as a quasi-material substance; as a stimulus to action; and as a vertical location. Each of the intuitive ontologies we observe has features that contribute to a valid understanding of energy. The quasi-material substance metaphor best supports understanding energy as a conserved quantity.

  1. Modularizing Spatial Ontologies for Assisted Living Systems

    NASA Astrophysics Data System (ADS)

    Hois, Joana

    Assisted living systems are intended to support daily-life activities in user homes by automating and monitoring the behavior of the environment while interacting with the user in a non-intrusive way. The knowledge base of such systems therefore has to define thematically different aspects of the environment, most of them related to space, such as basic spatial floor-plan information, pieces of technical equipment in the environment together with their functions and spatial ranges, activities users can perform, entities that occur in the environment, etc. In this paper, we present thematically different ontologies, each of which describes environmental aspects from a particular perspective. The resulting modular structure allows the selection of application-specific ontologies as necessary. This hides information and reduces complexity in terms of the represented spatial knowledge and reasoning practicability. We motivate and present the different spatial ontologies applied to an ambient assisted living application.

  2. A Water Conservation Digital Library Using Ontologies

    NASA Astrophysics Data System (ADS)

    Ziemba, Lukasz; Cornejo, Camilo; Beck, Howard

    New technologies are emerging that assist in organizing and retrieving knowledge stored in a variety of forms (books, papers, models, decision support systems, databases), but they can only be evaluated through real-world applications. An ontology has been used to manage the Water Conservation Digital Library, which holds a growing collection of various types of digital resources in the domain of urban water conservation in Florida, USA. The ontology-based back-end powers a fully operational web interface, available at http://library.conservefloridawater.org. The system has already demonstrated numerous benefits of the ontology application, including easier and more precise finding of resources and improved information sharing and reuse, and has proved to effectively facilitate information management.

  3. Ontology-enriched Visualization of Human Anatomy

    SciTech Connect

    Pouchard, LC

    2005-12-20

    The project focuses on the problem of presenting a human anatomical 3D model associated with other types of human systemic information ranging from physiological to anatomical information while navigating the 3D model. We propose a solution that integrates a visual 3D interface and navigation features with the display of structured information contained in an ontology of anatomy where the structures of the human body are formally and semantically linked. The displayed and annotated anatomy serves as a visual entry point into a patient's anatomy, medical indicators and other information. The ontology of medical information provides labeling to the highlighted anatomical parts in the 3D display. Because of the logical organization and links between anatomical objects found in the ontology and associated 3D model, the analysis of a structure by a physician is greatly enhanced. Navigation within the 3D visualization and between this visualization and objects representing anatomical concepts within the model is also featured.

  4. OAE: The Ontology of Adverse Events

    PubMed Central

    2014-01-01

    Background A medical intervention is a medical procedure or application intended to relieve or prevent illness or injury. Examples of medical interventions include vaccination and drug administration. After a medical intervention, adverse events (AEs) may occur which lie outside the intended consequences of the intervention. The representation and analysis of AEs are critical to the improvement of public health. Description The Ontology of Adverse Events (OAE), previously named Adverse Event Ontology (AEO), is a community-driven ontology developed to standardize and integrate data relating to AEs arising subsequent to medical interventions, as well as to support computer-assisted reasoning. OAE has over 3,000 terms with unique identifiers, including terms imported from existing ontologies and more than 1,800 OAE-specific terms. In OAE, the term ‘adverse event’ denotes a pathological bodily process in a patient that occurs after a medical intervention. Causal adverse events are defined by OAE as those events that are causal consequences of a medical intervention. OAE represents various adverse events based on patient anatomic regions and clinical outcomes, including symptoms, signs, and abnormal processes. OAE has been used in the analysis of several different sorts of vaccine and drug adverse event data. For example, using the data extracted from the Vaccine Adverse Event Reporting System (VAERS), OAE was used to analyse vaccine adverse events associated with the administrations of different types of influenza vaccines. OAE has also been used to represent and classify the vaccine adverse events cited in package inserts of FDA-licensed human vaccines in the USA. Conclusion OAE is a biomedical ontology that logically defines and classifies various adverse events occurring after medical interventions. OAE has successfully been applied in several adverse event studies. The OAE ontological framework provides a platform for systematic representation and analysis of

  5. Terminology representation guidelines for biomedical ontologies in the semantic web notations.

    PubMed

    Tao, Cui; Pathak, Jyotishman; Solbrig, Harold R; Wei, Wei-Qi; Chute, Christopher G

    2013-02-01

    Terminologies and ontologies are increasingly prevalent in healthcare and biomedicine. However they suffer from inconsistent renderings, distribution formats, and syntax that make applications through common terminologies services challenging. To address the problem, one could posit a shared representation syntax, associated schema, and tags. We identified a set of commonly-used elements in biomedical ontologies and terminologies based on our experience with the Common Terminology Services 2 (CTS2) Specification as well as the Lexical Grid (LexGrid) project. We propose guidelines for precisely such a shared terminology model, and recommend tags assembled from SKOS, OWL, Dublin Core, RDF Schema, and DCMI meta-terms. We divide these guidelines into lexical information (e.g. synonyms, and definitions) and semantic information (e.g. hierarchies). The latter we distinguish for use by informal terminologies vs. formal ontologies. We then evaluate the guidelines with a spectrum of widely used terminologies and ontologies to examine how the lexical guidelines are implemented, and whether our proposed guidelines would enhance interoperability. PMID:23026232

  6. Terminology Representation Guidelines for Biomedical Ontologies in the Semantic Web Notations

    PubMed Central

    Tao, Cui; Pathak, Jyotishman; Solbrig, Harold R.; Wei, Wei-Qi; Chute, Christopher G.

    2012-01-01

    Terminologies and ontologies are increasingly prevalent in health-care and biomedicine. However they suffer from inconsistent renderings, distribution formats, and syntax that make applications through common terminologies services challenging. To address the problem, one could posit a shared representation syntax, associated schema, and tags. We identified a set of commonly-used elements in biomedical ontologies and terminologies based on our experience with the Common Terminology Services 2 (CTS2) Specification as well as the Lexical Grid (LexGrid) project. We propose guidelines for precisely such a shared terminology model, and recommend tags assembled from SKOS, OWL, Dublin Core, RDF Schema, and DCMI meta-terms. We divide these guidelines into lexical information (e.g. synonyms, and definitions) and semantic information (e.g. hierarchies.) The latter we distinguish for use by informal terminologies vs. formal ontologies. We then evaluate the guidelines with a spectrum of widely used terminologies and ontologies to examine how the lexical guidelines are implemented, and whether our proposed guidelines would enhance interoperability. PMID:23026232
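
    Following the spirit of these guidelines, a terminology entry can be annotated with SKOS lexical tags and Dublin Core metadata as in the rdflib sketch below; the concept URI and the literal values are illustrative assumptions.

```python
# Hedged sketch: a terminology entry with SKOS lexical information (labels,
# definition), a SKOS hierarchy link, and a Dublin Core provenance tag.
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import SKOS, DCTERMS, RDF

EX = Namespace("http://example.org/terminology#")   # assumed namespace
g = Graph()
g.bind("skos", SKOS)
g.bind("dcterms", DCTERMS)

concept = EX["MyocardialInfarction"]
g.add((concept, RDF.type, SKOS.Concept))
g.add((concept, SKOS.prefLabel, Literal("myocardial infarction", lang="en")))
g.add((concept, SKOS.altLabel, Literal("heart attack", lang="en")))
g.add((concept, SKOS.definition, Literal("Necrosis of heart muscle caused by ischemia.")))
g.add((concept, SKOS.broader, EX["IschemicHeartDisease"]))      # semantic (hierarchy) information
g.add((concept, DCTERMS.source, URIRef("http://example.org/source-terminology")))

print(g.serialize(format="turtle"))
```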

  7. Towards an upper level ontology for molecular biology.

    PubMed

    Schulz, Stefan; Beisswanger, Elena; Wermter, Joachim; Hahn, Udo

    2006-01-01

    There is a growing need for the general-purpose description of the basic conceptual entities in the life sciences. Up until now, upper-level models have mainly been purpose-driven, such as the GENIA ontology, originally devised as a vocabulary for corpus annotation. As an alternative, we here present BioTop, a description-logic-based top-level ontology for molecular biology, which we consider to be an ontologically conscious redesign of the GENIA ontology. PMID:17238430

  8. Semi-automated ontology generation and evolution

    NASA Astrophysics Data System (ADS)

    Stirtzinger, Anthony P.; Anken, Craig S.

    2009-05-01

    Extending the notion of data models or object models, ontology can provide rich semantic definition not only to the meta-data but also to the instance data of domain knowledge, making these semantic definitions available in machine readable form. However, the generation of an effective ontology is a difficult task involving considerable labor and skill. This paper discusses an Ontology Generation and Evolution Processor (OGEP) aimed at automating this process, only requesting user input when un-resolvable ambiguous situations occur. OGEP directly attacks the main barrier which prevents automated (or self learning) ontology generation: the ability to understand the meaning of artifacts and the relationships the artifacts have to the domain space. OGEP leverages existing lexical to ontological mappings in the form of WordNet, and Suggested Upper Merged Ontology (SUMO) integrated with a semantic pattern-based structure referred to as the Semantic Grounding Mechanism (SGM) and implemented as a Corpus Reasoner. The OGEP processing is initiated by a Corpus Parser performing a lexical analysis of the corpus, reading in a document (or corpus) and preparing it for processing by annotating words and phrases. After the Corpus Parser is done, the Corpus Reasoner uses the parts of speech output to determine the semantic meaning of a word or phrase. The Corpus Reasoner is the crux of the OGEP system, analyzing, extrapolating, and evolving data from free text into cohesive semantic relationships. The Semantic Grounding Mechanism provides a basis for identifying and mapping semantic relationships. By blending together the WordNet lexicon and SUMO ontological layout, the SGM is given breadth and depth in its ability to extrapolate semantic relationships between domain entities. The combination of all these components results in an innovative approach to user assisted semantic-based ontology generation. This paper will describe the OGEP technology in the context of the architectural
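
    The lexical grounding step (looking up a corpus term in WordNet and inspecting its hypernyms as candidate ontological parents) can be sketched with NLTK as below. It requires the WordNet corpus (nltk.download("wordnet") on first use) and does not reproduce OGEP's SUMO mapping or Semantic Grounding Mechanism.

```python
# Hedged sketch: WordNet lookup of a corpus term; hypernyms serve as candidate
# parent concepts. The example term is arbitrary.
from nltk.corpus import wordnet as wn

term = "radar"                                   # example corpus term
for synset in wn.synsets(term):
    parents = [h.name() for h in synset.hypernyms()]
    print(synset.name(), "->", parents)
```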

  9. An ontological view of advanced practice nursing.

    PubMed

    Arslanian-Engoren, Cynthia; Hicks, Frank D; Whall, Ann L; Algase, Donna L

    2005-01-01

    Identifying, developing, and incorporating nursing's unique ontological and epistemological perspective into advanced practice nursing practice places priority on delivering care based on research-derived knowledge. Without a clear distinction of our metatheoretical space, we risk blindly adopting the practice values of other disciplines, which may not necessarily reflect those of nursing. A lack of focus may lead current advanced practice nursing curricula and emerging doctorate of nursing practice programs to mirror the logical positivist paradigm and perspective of medicine. This article presents an ontological perspective for advanced practice nursing education, practice, and research. PMID:16350595

  10. Ontology building by dictionary database mining

    NASA Astrophysics Data System (ADS)

    Deliyska, B.; Rozeva, A.; Malamov, D.

    2012-11-01

    The paper examines the problem of building ontologies in an automatic or semi-automatic way by mining a dictionary database. An overview of data mining tools and methods is presented. On this basis, an extended and improved approach is proposed that involves operations for pre-processing the dictionary database and for clustering and associating database entries in order to extract hierarchical and non-hierarchical relations. The approach is applied to a sample dictionary database in the environment of the Rapid Miner mining tool. As a result, the dictionary database is extended into a thesaurus database, which can then easily be converted into a reusable formal ontology.

  11. Ontology-Based Model Of Firm Competitiveness

    NASA Astrophysics Data System (ADS)

    Deliyska, Boryana; Stoenchev, Nikolay

    2010-10-01

    Competitiveness is an important characteristic of every business organization (firm, company, corporation, etc.). It is of great significance for the organization's existence and defines evaluation criteria of business success at the microeconomic level. Each criterion comprises a set of indicators with specific weight coefficients. In this work, an ontology-based model of firm competitiveness is presented as a set of several mutually connected ontologies. It would be useful for knowledge structuring, standardization and sharing among experts and software engineers who develop applications in the domain. The assessment of the competitiveness of various business organizations could then be generated more effectively.

  12. Modeling sample variables with an Experimental Factor Ontology

    PubMed Central

    Malone, James; Holloway, Ele; Adamusiak, Tomasz; Kapushesky, Misha; Zheng, Jie; Kolesnikov, Nikolay; Zhukova, Anna; Brazma, Alvis; Parkinson, Helen

    2010-01-01

    Motivation: Describing biological sample variables with ontologies is complex due to the cross-domain nature of experiments. Ontologies provide annotation solutions; however, for cross-domain investigations, multiple ontologies are needed to represent the data. These are subject to rapid change, are often not interoperable and present complexities that are a barrier to biological resource users. Results: We present the Experimental Factor Ontology, designed to meet cross-domain, application focused use cases for gene expression data. We describe our methodology and open source tools used to create the ontology. These include tools for creating ontology mappings, ontology views, detecting ontology changes and using ontologies in interfaces to enhance querying. The application of reference ontologies to data is a key problem, and this work presents guidelines on how community ontologies can be presented in an application ontology in a data-driven way. Availability: http://www.ebi.ac.uk/efo Contact: malone@ebi.ac.uk Supplementary information: Supplementary data are available at Bioinformatics online. PMID:20200009

  13. Developing Learning Materials Using an Ontology of Mathematical Logic

    ERIC Educational Resources Information Center

    Boyatt, Russell; Joy, Mike

    2012-01-01

    Ontologies describe a body of knowledge and give formal structure to a domain by describing concepts and their relationships. The construction of an ontology provides an opportunity to develop a shared understanding and a consistent vocabulary to be used for a given activity. This paper describes the construction of an ontology for an area of…

  14. The Relationship between User Expertise and Structural Ontology Characteristics

    ERIC Educational Resources Information Center

    Waldstein, Ilya Michael

    2014-01-01

    Ontologies are commonly used to support application tasks such as natural language processing, knowledge management, learning, browsing, and search. Literature recommends considering specific context during ontology design, and highlights that a different context is responsible for problems in ontology reuse. However, there is still no clear…

  15. Unsupervised Ontology Generation from Unstructured Text. CRESST Report 827

    ERIC Educational Resources Information Center

    Mousavi, Hamid; Kerr, Deirdre; Iseli, Markus R.

    2013-01-01

    Ontologies are a vital component of most knowledge acquisition systems, and recently there has been a huge demand for generating ontologies automatically since manual or supervised techniques are not scalable. In this paper, we introduce "OntoMiner", a rule-based, iterative method to extract and populate ontologies from unstructured or…

  16. Ontology-Based Annotation and Ranking Service for Geoscience

    NASA Astrophysics Data System (ADS)

    Sainju, R.; Ramachandran, R.; Li, X.; McEniry, M.; Kulkarni, A.; Conover, H.

    2012-12-01

    There is a need to automatically annotate information using either a controlled vocabulary or an ontology, to make the information not only easily discoverable but also linkable to other information based on these semantic annotations. We present an ontology annotation and ranking service designed to address this need. The service can be configured to use an ontology describing a specific application domain. Given text inputs, the service generates annotations whenever it finds terms that occur both in the text and in the ontology. The service is also capable of ranking the different inputs based on their "contextual" similarity to the information captured in the ontology. To rank a given input, the service uses a specialized algorithm that calculates both an ontological score, based on precomputed weights of the intersecting terms from the ontology, and a statistical score using the traditional term frequency-inverse document frequency (TF-IDF) approach. Both scores are normalized and combined to generate the final ranking. An example application of this service is finding relevant datasets for studying hurricanes within NASA's data catalog. A hurricane ontology is used to index and rank all the dataset descriptions from the metadata catalog, and only the datasets that rank highly are presented to the end users as contextually relevant for studying hurricanes.
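
    A toy version of the combined ranking can be sketched with scikit-learn for the TF-IDF part and a dictionary of precomputed term weights for the ontological part. The descriptions, weights and the 50/50 combination below are assumptions for illustration, not the service's actual algorithm.

```python
# Hedged sketch: rank dataset descriptions by a normalized blend of a TF-IDF
# cosine score and an ontology-derived term-weight score.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer

descriptions = [
    "Tropical cyclone wind speed and precipitation measurements",
    "Global land surface temperature climatology",
    "Hurricane track and storm surge observations for the Atlantic basin",
]
query = "hurricane storm surge wind"

# Statistical score: cosine similarity between the query and each description
vec = TfidfVectorizer()
doc_matrix = vec.fit_transform(descriptions)
query_vec = vec.transform([query])
tfidf_score = (doc_matrix @ query_vec.T).toarray().ravel()

# Ontological score: precomputed weights for domain terms found in the text (assumed)
term_weights = {"hurricane": 1.0, "cyclone": 0.9, "storm surge": 0.8, "wind": 0.4}
onto_score = np.array([
    sum(w for t, w in term_weights.items() if t in d.lower()) for d in descriptions
])

def norm(x):
    return x / x.max() if x.max() > 0 else x

final = 0.5 * norm(tfidf_score) + 0.5 * norm(onto_score)
for rank, i in enumerate(np.argsort(final)[::-1], 1):
    print(rank, round(float(final[i]), 3), descriptions[i])
```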

  17. Mapping between the OBO and OWL ontology languages

    PubMed Central

    2011-01-01

    Background Ontologies are commonly used in biomedicine to organize concepts to describe domains such as anatomies, environments, experiment, taxonomies etc. NCBO BioPortal currently hosts about 180 different biomedical ontologies. These ontologies have been mainly expressed in either the Open Biomedical Ontology (OBO) format or the Web Ontology Language (OWL). OBO emerged from the Gene Ontology, and supports most of the biomedical ontology content. In comparison, OWL is a Semantic Web language, and is supported by the World Wide Web consortium together with integral query languages, rule languages and distributed infrastructure for information interchange. These features are highly desirable for the OBO content as well. A convenient method for leveraging these features for OBO ontologies is by transforming OBO ontologies to OWL. Results We have developed a methodology for translating OBO ontologies to OWL using the organization of the Semantic Web itself to guide the work. The approach reveals that the constructs of OBO can be grouped together to form a similar layer cake. Thus we were able to decompose the problem into two parts. Most OBO constructs have easy and obvious equivalence to a construct in OWL. A small subset of OBO constructs requires deeper consideration. We have defined transformations for all constructs in an effort to foster a standard common mapping between OBO and OWL. Our mapping produces OWL-DL, a Description Logics based subset of OWL with desirable computational properties for efficiency and correctness. Our Java implementation of the mapping is part of the official Gene Ontology project source. Conclusions Our transformation system provides a lossless roundtrip mapping for OBO ontologies, i.e. an OBO ontology may be translated to OWL and back without loss of knowledge. In addition, it provides a roadmap for bridging the gap between the two ontology languages in order to enable the use of ontology content in a language independent manner
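
    The straightforward part of such a mapping (the [Term] stanzas with id, name and is_a tags) can be sketched as below with rdflib; real converters, including the one in the Gene Ontology project source, handle many more OBO constructs. The OBO snippet is illustrative.

```python
# Hedged sketch of the "easy" OBO-to-OWL mapping: each [Term] stanza becomes an
# OWL class with a label and subclass axioms for its is_a parents.
from rdflib import Graph, Literal, Namespace, RDF, RDFS, OWL

OBO = Namespace("http://purl.obolibrary.org/obo/")

obo_text = """
[Term]
id: GO:0008150
name: biological_process

[Term]
id: GO:0009987
name: cellular process
is_a: GO:0008150 ! biological_process
"""

def curie_to_uri(curie):
    # GO:0008150 -> http://purl.obolibrary.org/obo/GO_0008150 (standard OBO PURL pattern)
    return OBO[curie.replace(":", "_")]

g = Graph()
for stanza in obo_text.split("[Term]")[1:]:
    tags, parents = {}, []
    for line in filter(None, (l.strip() for l in stanza.splitlines())):
        key, _, value = line.partition(": ")
        if key == "is_a":
            parents.append(value.split(" ! ")[0].strip())
        else:
            tags[key] = value
    cls = curie_to_uri(tags["id"])
    g.add((cls, RDF.type, OWL.Class))
    g.add((cls, RDFS.label, Literal(tags["name"])))
    for parent in parents:
        g.add((cls, RDFS.subClassOf, curie_to_uri(parent)))

print(g.serialize(format="turtle"))
```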

  18. Bridging the gap between data acquisition and inference ontologies: toward ontology-based link discovery

    NASA Astrophysics Data System (ADS)

    Goldstein, Michel L.; Morris, Steven A.; Yen, Gary G.

    2003-09-01

    Bridging the gap between low level ontologies used for data acquisition and high level ontologies used for inference is essential to enable the discovery of high-level links between low-level entities. This is of utmost importance in many applications, where the semantic distance between the observable evidence and the target relations is large. Examples of these applications would be detection of terrorist activity, crime analysis, and technology monitoring, among others. Currently this inference gap has been filled by expert knowledge. However, with the increase of the data and system size, it has become too costly to perform such manual inference. This paper proposes a semi-automatic system to bridge the inference gap using network correlation methods, similar to Bayesian Belief Networks, combined with hierarchical clustering, to group and organize data so that experts can observe and build the inference gap ontologies quickly and efficiently, decreasing the cost of this labor-intensive process. A simple application of this method is shown here, where the co-author collaboration structure ontology is inferred from the analysis of a collection of journal publications on the subject of anthrax. This example uncovers a co-author collaboration structures (a well defined ontology) from a scientific publication dataset (also a well defined ontology). Nevertheless, the evidence of author collaboration is poorly defined, requiring the use of evidence from keywords, citations, publication dates, and paper co-authorship. The proposed system automatically suggests candidate collaboration group patterns for evaluation by experts. Using an intuitive graphic user interface, these experts identify, confirm and refine the proposed ontologies and add them to the ontology database to be used in subsequent processes.

  19. ICEPO: the ion channel electrophysiology ontology

    PubMed Central

    Hinard, V.; Britan, A.; Rougier, J.S.; Bairoch, A.; Abriel, H.; Gaudet, P.

    2016-01-01

    Ion channels are transmembrane proteins that selectively allow ions to flow across the plasma membrane and play key roles in diverse biological processes. A multitude of diseases, called channelopathies, such as epilepsies, muscle paralysis, pain syndromes, cardiac arrhythmias or hypoglycemia are due to ion channel mutations. A wide corpus of literature is available on ion channels, covering both their functions and their roles in disease. The research community needs to access this data in a user-friendly, yet systematic manner. However, extraction and integration of this increasing amount of data have been proven to be difficult because of the lack of a standardized vocabulary that describes the properties of ion channels at the molecular level. To address this, we have developed Ion Channel ElectroPhysiology Ontology (ICEPO), an ontology that allows one to annotate the electrophysiological parameters of the voltage-gated class of ion channels. This ontology is based on a three-state model of ion channel gating describing the three conformations/states that an ion channel can adopt: closed, open and inactivated. This ontology supports the capture of voltage-gated ion channel electrophysiological data from the literature in a structured manner and thus enables other applications such as querying and reasoning tools. Here, we present ICEPO (ICEPO ftp site: ftp://ftp.nextprot.org/pub/current_release/controlled_vocabularies/), as well as examples of its use. PMID:27055825
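
    The three-state gating model underlying ICEPO can be illustrated numerically by propagating state occupancies under a fixed rate matrix, as in the sketch below. The rate constants are invented for the example and are not values taken from the ontology or the literature.

```python
# Hedged sketch: closed/open/inactivated gating as a continuous-time Markov
# model, integrated with forward Euler steps. Rates are illustrative only.
import numpy as np

# Transition rates (1/ms), assumed: C->O, O->C, O->I, I->C
k_co, k_oc, k_oi, k_ic = 0.8, 0.2, 0.5, 0.05

# Generator matrix Q for dp/dt = p @ Q, states ordered as [C, O, I]; rows sum to zero
Q = np.array([
    [-k_co,           k_co,          0.0],
    [ k_oc, -(k_oc + k_oi),         k_oi],
    [ k_ic,            0.0,        -k_ic],
])

p = np.array([1.0, 0.0, 0.0])      # all channels start closed
dt, steps = 0.01, 2000             # 20 ms simulation
for _ in range(steps):
    p = p + dt * (p @ Q)           # forward Euler step

print("steady-state occupancies [closed, open, inactivated]:", np.round(p, 3))
```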

  20. The Teacher's Vocation: Ontology of Response

    ERIC Educational Resources Information Center

    Game, Ann; Metcalfe, Andrew

    2008-01-01

    We argue that pedagogic authority relies on love, which is misunderstood if seen as a matter of actions and subjects. Love is based not on finite subjects and objects existing in Euclidean space and linear time, but, rather, on the non-finite ontology, space and time of relations. Loving authority is a matter of calling and vocation, arising from…

  1. Using ontologies to study cell transitions

    PubMed Central

    2013-01-01

    Background Understanding, modelling and influencing the transition between different states of cells, be it reprogramming of somatic cells to pluripotency or trans-differentiation between cells, is a hot topic in current biomedical and cell-biological research. Nevertheless, the large body of published knowledge in this area is underused, as most results are only represented in natural language, impeding their finding, comparison, aggregation, and usage. Scientific understanding of the complex molecular mechanisms underlying cell transitions could be improved by making essential pieces of knowledge available in a formal (and thus computable) manner. Results We describe the outline of two ontologies for cell phenotypes and for cellular mechanisms which together enable the representation of data curated from the literature or obtained by bioinformatics analyses and thus for building a knowledge base on mechanisms involved in cellular reprogramming. In particular, we discuss how comprehensive ontologies of cell phenotypes and of changes in mechanisms can be designed using the entity-quality (EQ) model. Conclusions We show that the principles for building cellular ontologies published in this work allow deeper insights into the relations between the continuants (cell phenotypes) and the occurrents (cell mechanism changes) involved in cellular reprogramming, although implementation remains for future work. Further, our design principles lead to ontologies that allow the meaningful application of similarity searches in the spaces of cell phenotypes and of mechanisms, and, especially, of changes of mechanisms during cellular transitions. PMID:24103098

  2. Gene Ontology: looking backwards and forwards

    PubMed Central

    Lewis, Suzanna E

    2005-01-01

    The Gene Ontology consortium began six years ago with a group of scientists who decided to connect our data by sharing the same language for describing it. Its most significant achievement lies in uniting many independent biological database efforts into a cooperative force. PMID:15642104

  3. Ontology-Based Administration of Web Directories

    NASA Astrophysics Data System (ADS)

    Horvat, Marko; Gledec, Gordan; Bogunović, Nikola

    Administration of a Web directory and maintenance of its content and associated structure is a delicate and labor-intensive task performed exclusively by human domain experts. Consequently, there is an imminent risk of directory structures becoming unbalanced, uneven and difficult to use for all except a few users proficient with the particular Web directory and its domain. These problems emphasize the need to address two important issues: i) generic and objective measures of the quality of a Web directory's structure, and ii) a mechanism for fully automated development of a Web directory's structure. In this paper we demonstrate how to formally and fully integrate Web directories with the Semantic Web vision. We propose a set of criteria for evaluating the quality of a Web directory's structure. Some criterion functions are based on heuristics while others require the application of ontologies. We also suggest an ontology-based algorithm for the construction of Web directories. By using ontologies to describe the semantics of Web resources and of Web directories' categories, it is possible to define algorithms that can build or rearrange the structure of a Web directory. Assessment procedures can provide feedback and help steer the ontology-based construction process. The issues raised in this article can be equally applied to new and existing Web directories.

  4. Modeling biochemical pathways in the gene ontology

    PubMed Central

    Hill, David P.; D’Eustachio, Peter; Berardini, Tanya Z.; Mungall, Christopher J.; Renedo, Nikolai; Blake, Judith A.

    2016-01-01

    The concept of a biological pathway, an ordered sequence of molecular transformations, is used to collect and represent molecular knowledge for a broad span of organismal biology. Representations of biomedical pathways typically are rich but idiosyncratic presentations of organized knowledge about individual pathways. Meanwhile, biomedical ontologies and associated annotation files are powerful tools that organize molecular information in a logically rigorous form to support computational analysis. The Gene Ontology (GO), representing Molecular Functions, Biological Processes and Cellular Components, incorporates many aspects of biological pathways within its ontological representations. Here we present a methodology for extending and refining the classes in the GO for more comprehensive, consistent and integrated representation of pathways, leveraging knowledge embedded in current pathway representations such as those in the Reactome Knowledgebase and MetaCyc. With carbohydrate metabolic pathways as a use case, we discuss how our representation supports the integration of variant pathway classes into a unified ontological structure that can be used for data comparison and analysis. PMID:27589964

  5. Bacterial clinical infectious diseases ontology (BCIDO) dataset.

    PubMed

    Gordon, Claire L; Weng, Chunhua

    2016-09-01

    This article describes the Bacterial Infectious Diseases Ontology (BCIDO) dataset related to research published in http://dx.doi.org/10.1016/j.jbi.2015.07.014 [1], and contains the Protégé OWL files required to run BCIDO in the Protégé environment. BCIDO contains 1719 classes and 39 object properties. PMID:27508237

  6. ICEPO: the ion channel electrophysiology ontology.

    PubMed

    Hinard, V; Britan, A; Rougier, J S; Bairoch, A; Abriel, H; Gaudet, P

    2016-01-01

    Ion channels are transmembrane proteins that selectively allow ions to flow across the plasma membrane and play key roles in diverse biological processes. A multitude of diseases, called channelopathies, such as epilepsies, muscle paralysis, pain syndromes, cardiac arrhythmias or hypoglycemia are due to ion channel mutations. A wide corpus of literature is available on ion channels, covering both their functions and their roles in disease. The research community needs to access this data in a user-friendly, yet systematic manner. However, extraction and integration of this increasing amount of data have been proven to be difficult because of the lack of a standardized vocabulary that describes the properties of ion channels at the molecular level. To address this, we have developed Ion Channel ElectroPhysiology Ontology (ICEPO), an ontology that allows one to annotate the electrophysiological parameters of the voltage-gated class of ion channels. This ontology is based on a three-state model of ion channel gating describing the three conformations/states that an ion channel can adopt: closed, open and inactivated. This ontology supports the capture of voltage-gated ion channel electrophysiological data from the literature in a structured manner and thus enables other applications such as querying and reasoning tools. Here, we present ICEPO (ICEPO ftp site:ftp://ftp.nextprot.org/pub/current_release/controlled_vocabularies/), as well as examples of its use. PMID:27055825

  7. In Defense of Chi's Ontological Incompatibility Hypothesis

    ERIC Educational Resources Information Center

    Slotta, James D.

    2011-01-01

    This article responds to an article by A. Gupta, D. Hammer, and E. F. Redish (2010) that asserts that M. T. H. Chi's (1992, 2005) hypothesis of an "ontological commitment" in conceptual development is fundamentally flawed. In this article, I argue that Chi's theoretical perspective is still very much intact and that the critique offered by Gupta…

  8. Modeling biochemical pathways in the gene ontology.

    PubMed

    Hill, David P; D'Eustachio, Peter; Berardini, Tanya Z; Mungall, Christopher J; Renedo, Nikolai; Blake, Judith A

    2016-01-01

    The concept of a biological pathway, an ordered sequence of molecular transformations, is used to collect and represent molecular knowledge for a broad span of organismal biology. Representations of biomedical pathways typically are rich but idiosyncratic presentations of organized knowledge about individual pathways. Meanwhile, biomedical ontologies and associated annotation files are powerful tools that organize molecular information in a logically rigorous form to support computational analysis. The Gene Ontology (GO), representing Molecular Functions, Biological Processes and Cellular Components, incorporates many aspects of biological pathways within its ontological representations. Here we present a methodology for extending and refining the classes in the GO for more comprehensive, consistent and integrated representation of pathways, leveraging knowledge embedded in current pathway representations such as those in the Reactome Knowledgebase and MetaCyc. With carbohydrate metabolic pathways as a use case, we discuss how our representation supports the integration of variant pathway classes into a unified ontological structure that can be used for data comparison and analysis. PMID:27589964

  9. The Ontology of Interactive Art

    ERIC Educational Resources Information Center

    Lopes, Dominic M. McIver

    2001-01-01

    Developments in the art world seem always to keep one step ahead of philosophical attempts to characterize the nature and value of art. A pessimist may conclude that theories of art are doomed to failure. But those more optimistic about the prospects for progress in philosophy may retort that avant-garde art does philosophers a great service. It…

  10. Data Quality Screening Service

    NASA Technical Reports Server (NTRS)

    Strub, Richard; Lynnes, Christopher; Hearty, Thomas; Won, Young-In; Fox, Peter; Zednik, Stephan

    2013-01-01

    A report describes the Data Quality Screening Service (DQSS), which is designed to help automate the filtering of remote sensing data on behalf of science users. Whereas this process often involves extensive research through quality documents followed by laborious coding, the DQSS is a Web Service that provides data users with data pre-filtered to their particular criteria, while at the same time guiding the user with filtering recommendations from the cognizant data experts. The DQSS design is based on a formal semantic Web ontology that describes data fields and the quality fields used for applying quality control within a data product. The accompanying code base handles several remote sensing datasets and quality control schemes for data products stored in Hierarchical Data Format (HDF), a common format for NASA remote sensing data. Together, the ontology and code support a variety of quality control schemes through the implementation of Boolean expressions with simple, reusable conditional expressions as operands. Additional datasets are added to the DQSS simply by registering instances in the ontology if they follow a quality scheme that is already modeled in the ontology. New quality schemes are added by extending the ontology and adding code for each new scheme.
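
    The screening logic just described can be pictured with a small, hypothetical sketch: a quality rule is a Boolean combination of simple conditional expressions over quality fields, and values that fail the rule are masked out. The field names, thresholds, and helper functions below are invented for illustration and are not part of the DQSS code base.

    ```python
    import numpy as np

    def condition(field, op, value):
        """Return a boolean mask for a single conditional expression."""
        ops = {"==": np.equal, "<=": np.less_equal, ">=": np.greater_equal}
        return ops[op](field, value)

    def screen(data, mask, fill_value=np.nan):
        """Keep values where the combined quality mask is True."""
        return np.where(mask, data, fill_value)

    # Example: keep retrievals whose quality flag is "best" (0) AND whose
    # cloud fraction is at most 0.2 -- a Boolean expression with two operands.
    temperature = np.array([290.1, 285.4, 300.2, 275.0])
    quality_flag = np.array([0, 1, 0, 0])
    cloud_fraction = np.array([0.05, 0.10, 0.60, 0.15])

    good = condition(quality_flag, "==", 0) & condition(cloud_fraction, "<=", 0.2)
    print(screen(temperature, good))   # 290.1 and 275.0 survive; the rest become NaN
    ```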

  11. Crowdsourcing the Verification of Relationships in Biomedical Ontologies

    PubMed Central

    Mortensen, Jonathan M.; Musen, Mark A.; Noy, Natalya F.

    2013-01-01

    Biomedical ontologies are often large and complex, making ontology development and maintenance a challenge. To address this challenge, scientists use automated techniques to alleviate the difficulty of ontology development. However, for many ontology-engineering tasks, human judgment is still necessary. Microtask crowdsourcing, wherein human workers receive remuneration to complete simple, short tasks, is one method to obtain contributions by humans at a large scale. Previously, we developed and refined an effective method to verify ontology hierarchy using microtask crowdsourcing. In this work, we report on applying this method to find errors in the SNOMED CT CORE subset. By using crowdsourcing via Amazon Mechanical Turk with a Bayesian inference model, we correctly verified 86% of the relations from the CORE subset of SNOMED CT in which Rector and colleagues previously identified errors via manual inspection. Our results demonstrate that an ontology developer could deploy this method in order to audit large-scale ontologies quickly and relatively cheaply. PMID:24551391
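
    As a rough illustration of the vote-aggregation step, the sketch below combines yes/no judgments on a single relation under a naive Bayesian model with one shared, assumed worker accuracy; the prior, the accuracy value, and the example relation are assumptions for illustration, not the parameters used in the study.

    ```python
    # Minimal sketch of aggregating crowd votes on an ontology relation,
    # loosely inspired by the Bayesian-aggregation idea described above.

    def posterior_correct(votes, worker_accuracy=0.8, prior=0.5):
        """Posterior probability that a relation is correct given yes/no votes.

        Each vote is True ("relation is correct") or False. All workers are
        assumed to share the same accuracy, i.e. P(vote == truth) = worker_accuracy.
        """
        p_correct, p_incorrect = prior, 1.0 - prior
        for vote in votes:
            like_if_correct = worker_accuracy if vote else 1.0 - worker_accuracy
            like_if_incorrect = 1.0 - worker_accuracy if vote else worker_accuracy
            p_correct *= like_if_correct
            p_incorrect *= like_if_incorrect
        return p_correct / (p_correct + p_incorrect)

    # Five workers judge a hypothetical IS-A relation: four say yes, one says no.
    print(round(posterior_correct([True, True, True, False, True]), 3))  # ≈ 0.985
    ```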

  12. Domain ontologies for data sharing an example from environmental monitoring using field GIS

    NASA Astrophysics Data System (ADS)

    Pundt, Hardy; Bishr, Yaser

    2002-02-01

    Different geospatial information communities, public authorities as well as private institutions, recognize increasingly the World Wide Web as a medium to distribute their data. With the occurrence of national laws that push authorities to make environmental data accessible, Internet-based services have to be developed to enable the public to obtain information digitally. Dissemination of data is only one side of the coin. The other side is the use of such data. The use requires mechanisms to share data via networks. Lack of semantic interoperability has been identified as the main obstacle for data sharing. Research, however, must develop methods to overcome the problems of sharing data considering their semantics. Ontologies are considered to be one approach to support data sharing. This paper describes the use of ontologies via the Internet based on an example from field GIS supported environmental monitoring. The basic idea is that the members of different information communities get access to the meaning of data if they can approach the ontologies that have been developed by those who collected the data. This might be possible by applying the resource definition framework (RDF) and RDF/Schema. RDF can be used to define and structure terms and vocabulary used in a specific information community. The goal of the paper is to examine the role of ontologies based on the study of a particular application domain, namely stream surveying. The use of RDF/Schema is described related to the example.
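
    A minimal sketch of the approach, assuming rdflib and an invented namespace, shows how a stream-surveying community might publish the meaning of its terms with RDF Schema so that members of other information communities can inspect them; the class and property names are illustrative only.

    ```python
    from rdflib import Graph, Literal, Namespace, RDF, RDFS

    # Invented namespace for a hypothetical stream-survey vocabulary.
    STREAM = Namespace("http://example.org/stream-survey#")

    g = Graph()
    g.bind("stream", STREAM)

    # Define a small vocabulary: a general class, a more specific subclass,
    # a human-readable label, and a property scoped to the subclass.
    g.add((STREAM.WaterBody, RDF.type, RDFS.Class))
    g.add((STREAM.Stream, RDF.type, RDFS.Class))
    g.add((STREAM.Stream, RDFS.subClassOf, STREAM.WaterBody))
    g.add((STREAM.Stream, RDFS.label, Literal("natural flowing watercourse")))
    g.add((STREAM.bankVegetation, RDF.type, RDF.Property))
    g.add((STREAM.bankVegetation, RDFS.domain, STREAM.Stream))

    print(g.serialize(format="turtle"))
    ```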

  13. Informatics in radiology: radiology gamuts ontology: differential diagnosis for the Semantic Web.

    PubMed

    Budovec, Joseph J; Lam, Cesar A; Kahn, Charles E

    2014-01-01

    The Semantic Web is an effort to add semantics, or "meaning," to empower automated searching and processing of Web-based information. The overarching goal of the Semantic Web is to enable users to more easily find, share, and combine information. Critical to this vision are knowledge models called ontologies, which define a set of concepts and formalize the relations between them. Ontologies have been developed to manage and exploit the large and rapidly growing volume of information in biomedical domains. In diagnostic radiology, lists of differential diagnoses of imaging observations, called gamuts, provide an important source of knowledge. The Radiology Gamuts Ontology (RGO) is a formal knowledge model of differential diagnoses in radiology that includes 1674 differential diagnoses, 19,017 terms, and 52,976 links between terms. Its knowledge is used to provide an interactive, freely available online reference of radiology gamuts (www.gamuts.net). A Web service allows its content to be discovered and consumed by other information systems. The RGO integrates radiologic knowledge with other biomedical ontologies as part of the Semantic Web. PMID:24428295

  14. Implementation of a digital optical matrix-vector multiplier using a holographic look-up table and residue arithmetic

    NASA Technical Reports Server (NTRS)

    Habiby, Sarry F.

    1987-01-01

    The design and implementation of a digital (numerical) optical matrix-vector multiplier are presented. The objective is to demonstrate the operation of an optical processor designed to minimize computation time in performing a practical computing application. This is done by using the large array of processing elements in a Hughes liquid crystal light valve, and relying on the residue arithmetic representation, a holographic optical memory, and position coded optical look-up tables. In the design, all operations are performed in effectively one light valve response time regardless of matrix size. The features of the design allowing fast computation include the residue arithmetic representation, the mapping approach to computation, and the holographic memory. In addition, other features of the work include a practical light valve configuration for efficient polarization control, a model for recording multiple exposures in silver halides with equal reconstruction efficiency, and using light from an optical fiber for a reference beam source in constructing the hologram. The design can be extended to implement larger matrix arrays without increasing computation time.
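
    The residue-arithmetic idea can be sketched numerically: the matrix-vector product is carried out independently modulo several small coprime moduli (each modular operation is the kind of fixed mapping a position-coded look-up table can realize), and the result is reconstructed with the Chinese Remainder Theorem. The moduli and test matrix below are illustrative choices, not those of the optical processor.

    ```python
    from math import prod

    MODULI = (5, 7, 9, 11, 13)   # pairwise coprime; dynamic range = 45045

    def crt(residues):
        """Reconstruct the integer in [0, prod(MODULI)) from its residues."""
        M = prod(MODULI)
        total = 0
        for r, m in zip(residues, MODULI):
            Mi = M // m
            total += r * Mi * pow(Mi, -1, m)   # modular inverse of Mi mod m
        return total % M

    def matvec_residue(A, x):
        """Matrix-vector product computed channel-by-channel in residue form."""
        result = []
        for row in A:
            channels = [sum((a % m) * (b % m) for a, b in zip(row, x)) % m
                        for m in MODULI]
            result.append(crt(channels))
        return result

    A = [[2, 3], [4, 1]]
    x = [10, 7]
    print(matvec_residue(A, x))   # [41, 47] -- matches the ordinary product
    ```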

  15. The use of three-parameter rating table lookup programs, RDRAT and PARM3, in hydraulic flow models

    USGS Publications Warehouse

    Sanders, C.L., Jr.

    1995-01-01

    Subroutines RDRAT and PARM3 enable computer programs such as the BRANCH open-channel unsteady-flow model to route flows through or over combinations of critical-flow sections, culverts, bridges, road-overflow sections, fixed spillways, and/or dams. The subroutines can also obstruct upstream flow to simulate the operation of flapper-type tide gates. A multiplier can be applied by date and time to simulate varying numbers of tide gates being open or alternative construction scenarios for multiple culverts. The subroutines use three-parameter (headwater, tailwater, and discharge) rating table lookup methods. These tables may be prepared manually using other programs that do step-backwater computations or compute flow through bridges and culverts or over dams. The subroutines therefore preclude the necessity of incorporating considerable hydraulic computational code into the client program, and provide complete flexibility for users of the model in routing flow through almost any fixed structure or combination of structures. The subroutines are written in Fortran 77 and have minimal exchange of information with the BRANCH model or other possible client programs. The report documents the interpolation methodology, data input requirements, and software.
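
    As a hypothetical illustration of a three-parameter rating lookup, the sketch below interpolates discharge bilinearly from a small headwater/tailwater table and applies a multiplier such as the one used to represent open tide gates. The stages, discharges, and interpolation details are invented and do not reproduce the Fortran 77 subroutines.

    ```python
    import numpy as np

    headwater = np.array([1.0, 2.0, 3.0])            # ft
    tailwater = np.array([0.5, 1.5, 2.5])            # ft
    discharge = np.array([[120.,  60.,   0.],        # cfs, rows = headwater
                          [260., 180.,  90.],
                          [410., 320., 220.]])

    def rated_discharge(hw, tw, multiplier=1.0):
        # Locate the bounding cell, then interpolate bilinearly inside it.
        i = np.clip(np.searchsorted(headwater, hw) - 1, 0, len(headwater) - 2)
        j = np.clip(np.searchsorted(tailwater, tw) - 1, 0, len(tailwater) - 2)
        fh = (hw - headwater[i]) / (headwater[i + 1] - headwater[i])
        ft = (tw - tailwater[j]) / (tailwater[j + 1] - tailwater[j])
        q = (discharge[i, j]         * (1 - fh) * (1 - ft) +
             discharge[i + 1, j]     * fh       * (1 - ft) +
             discharge[i, j + 1]     * (1 - fh) * ft +
             discharge[i + 1, j + 1] * fh       * ft)
        return multiplier * q

    print(rated_discharge(2.5, 1.0))        # interpolated discharge
    print(rated_discharge(2.5, 1.0, 0.5))   # e.g. half of the tide gates open
    ```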

  16. Multiscale climatological albedo look-up maps derived from moderate resolution imaging spectroradiometer BRDF/albedo products

    NASA Astrophysics Data System (ADS)

    Gao, Feng; He, Tao; Wang, Zhuosen; Ghimire, Bardan; Shuai, Yanmin; Masek, Jeffrey; Schaaf, Crystal; Williams, Christopher

    2014-01-01

    Surface albedo determines radiative forcing and is a key parameter for driving Earth's climate. Better characterization of surface albedo for individual land cover types can reduce the uncertainty in estimating changes to Earth's radiation balance due to land cover change. This paper presents albedo look-up maps (LUMs) using a multiscale hierarchical approach based on moderate resolution imaging spectroradiometer (MODIS) bidirectional reflectance distribution function (BRDF)/albedo products and Landsat imagery. Ten years (2001 to 2011) of MODIS BRDF/albedo products were used to generate global albedo climatology. Albedo LUMs of land cover classes defined by the International Geosphere-Biosphere Programme (IGBP) at multiple spatial resolutions were generated. The albedo LUMs included monthly statistics of white-sky (diffuse) and black-sky (direct) albedo for each IGBP class for visible, near-infrared, and shortwave broadband under both snow-free and snow-covered conditions. The albedo LUMs were assessed by using the annual MODIS IGBP land cover map and the projected land use scenarios from the Intergovernmental Panel on Climate Change land-use harmonization project. The comparisons between the reconstructed albedo and the MODIS albedo data product show good agreement. The LUMs provide high temporal and spatial resolution global albedo statistics without gaps for investigating albedo variations under different land cover scenarios and could be used for land surface modeling.
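
    The look-up-map construction can be sketched as a grouped reduction: pixel-level albedo retrievals are aggregated to per-class, per-month statistics and later read back by class. The toy table below uses invented class names and values and omits most of the real product's dimensions (broadbands, spatial hierarchy, and so on).

    ```python
    import pandas as pd

    # Fabricated pixel-level retrievals for the example.
    pixels = pd.DataFrame({
        "igbp_class": ["ENF", "ENF", "Grass", "Grass", "Grass"],
        "month":      [1, 1, 1, 7, 7],
        "snow":       [False, True, False, False, False],
        "white_sky":  [0.12, 0.55, 0.18, 0.20, 0.22],
        "black_sky":  [0.10, 0.50, 0.16, 0.18, 0.20],
    })

    # Per-class, per-month, snow-stratified statistics: the "look-up map".
    lum = (pixels
           .groupby(["igbp_class", "month", "snow"])[["white_sky", "black_sky"]]
           .agg(["mean", "std", "count"]))
    print(lum)

    # Reconstructing an albedo value for a land cover class is then a lookup:
    print(lum.loc[("Grass", 7, False), ("white_sky", "mean")])   # ≈ 0.21
    ```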

  17. PAV ontology: provenance, authoring and versioning

    PubMed Central

    2013-01-01

    Background Provenance is a critical ingredient for establishing trust of published scientific content. This is true whether we are considering a data set, a computational workflow, a peer-reviewed publication or a simple scientific claim with supportive evidence. Existing vocabularies such as Dublin Core Terms (DC Terms) and the W3C Provenance Ontology (PROV-O) are domain-independent and general-purpose and they allow and encourage for extensions to cover more specific needs. In particular, to track authoring and versioning information of web resources, PROV-O provides a basic methodology but not any specific classes and properties for identifying or distinguishing between the various roles assumed by agents manipulating digital artifacts, such as author, contributor and curator. Results We present the Provenance, Authoring and Versioning ontology (PAV, namespace http://purl.org/pav/): a lightweight ontology for capturing “just enough” descriptions essential for tracking the provenance, authoring and versioning of web resources. We argue that such descriptions are essential for digital scientific content. PAV distinguishes between contributors, authors and curators of content and creators of representations in addition to the provenance of originating resources that have been accessed, transformed and consumed. We explore five projects (and communities) that have adopted PAV illustrating their usage through concrete examples. Moreover, we present mappings that show how PAV extends the W3C PROV-O ontology to support broader interoperability. Method The initial design of the PAV ontology was driven by requirements from the AlzSWAN project with further requirements incorporated later from other projects detailed in this paper. The authors strived to keep PAV lightweight and compact by including only those terms that have demonstrated to be pragmatically useful in existing applications, and by recommending terms from existing ontologies when plausible. Discussion

  18. Developing an Ontology for Ocean Biogeochemistry Data

    NASA Astrophysics Data System (ADS)

    Chandler, C. L.; Allison, M. D.; Groman, R. C.; West, P.; Zednik, S.; Maffei, A. R.

    2010-12-01

    Semantic Web technologies offer great promise for enabling new and better scientific research. However, significant challenges must be met before the promise of the Semantic Web can be realized for a discipline as diverse as oceanography. Evolving expectations for open access to research data combined with the complexity of global ecosystem science research themes present a significant challenge, and one that is best met through an informatics approach. The Biological and Chemical Oceanography Data Management Office (BCO-DMO) is funded by the National Science Foundation Division of Ocean Sciences to work with ocean biogeochemistry researchers to improve access to data resulting from their respective programs. In an effort to improve data access, BCO-DMO staff members are collaborating with researchers from the Tetherless World Constellation (Rensselaer Polytechnic Institute) to develop an ontology that formally describes the concepts and relationships in the data managed by the BCO-DMO. The project required transforming a legacy system of human-readable, flat files of metadata to well-ordered controlled vocabularies to a fully developed ontology. To improve semantic interoperability, terms from the BCO-DMO controlled vocabularies are being mapped to controlled vocabulary terms adopted by other oceanographic data management organizations. While the entire process has proven to be difficult, time-consuming and labor-intensive, the work has been rewarding and is a necessary prerequisite for the eventual incorporation of Semantic Web tools. From the beginning of the project, development of the ontology has been guided by a use case based approach. The use cases were derived from data access related requests received from members of the research community served by the BCO-DMO. The resultant ontology satisfies the requirements of the use cases and reflects the information stored in the metadata database. The BCO-DMO metadata database currently contains information that

  19. Building integrated ontological knowledge structures with efficient approximation algorithms.

    PubMed

    Xiang, Yang; Janga, Sarath Chandra

    2015-01-01

    The integration of ontologies builds knowledge structures which brings new understanding on existing terminologies and their associations. With the steady increase in the number of ontologies, automatic integration of ontologies is preferable over manual solutions in many applications. However, available works on ontology integration are largely heuristic without guarantees on the quality of the integration results. In this work, we focus on the integration of ontologies with hierarchical structures. We identified optimal structures in this problem and proposed optimal and efficient approximation algorithms for integrating a pair of ontologies. Furthermore, we extend the basic problem to address the integration of a large number of ontologies, and correspondingly we proposed an efficient approximation algorithm for integrating multiple ontologies. The empirical study on both real ontologies and synthetic data demonstrates the effectiveness of our proposed approaches. In addition, the results of integration between gene ontology and National Drug File Reference Terminology suggest that our method provides a novel way to perform association studies between biomedical terms. PMID:26550571

  20. An OGSA Middleware for managing medical images using ontologies.

    PubMed

    Espert, Ignacio Blanquer; García, Vicente Hernández; Quilis, J Damià Segrelles

    2005-10-01

    This article presents a middleware based on Grid technologies that addresses the problem of sharing, transferring and processing DICOM medical images in a distributed environment, using an ontological schema to create virtual communities and to define common targets. It defines a distributed storage layer that builds up virtual repositories integrating different individual image repositories, providing global searching, progressive transmission, automatic encryption and pseudo-anonymisation, and a link to remote processing services. Users from a Virtual Organisation can share the cases that are relevant for their communities or research areas, for epidemiological studies, or for deeper analysis of complex individual cases. A software architecture has been defined to solve the problems exposed above. Briefly, the architecture comprises five layers (from the most physical to the most logical) based on Grid technologies. The lowest-level layers (the Core Middleware Layer and the Server Services Layer) are composed of Grid Services that implement the global management of resources. The Middleware Components Layer provides a transparent view of the Grid environment and has been the main objective of this work. Finally, the highest layer (the Application Layer) comprises the applications; a simple application has been implemented for testing the components developed in the Middleware Components Layer. Other side results of this work are the services developed in the Middleware Components Layer for managing DICOM images, creating virtual DICOM storages, progressive transmission, and automatic encryption and pseudo-anonymisation depending on the ontologies. Other results, such as the Grid Services developed in the lowest layers, are also described in this article. Finally, a brief performance analysis and several snapshots of the applications developed are shown. The performance analysis proves that the components developed in this work provide image processing

  1. Development and Evaluation of an Adolescents' Depression Ontology for Analyzing Social Data.

    PubMed

    Jung, Hyesil; Park, Hyeoun-Ae; Song, Tae-Min

    2016-01-01

    This study aims to develop and evaluate an ontology for adolescents' depression to be used for collecting and analyzing social data. The ontology was developed according to the 'ontology development 101' methodology. Concepts were extracted from clinical practice guidelines and related literature. The ontology is composed of five sub-ontologies representing risk factors, signs and symptoms, measurement, diagnostic results, and management/care. The ontology was evaluated in four different ways: first, we examined how frequently ontology concepts appeared in social data; second, the content coverage of the ontology was evaluated by comparing ontology concepts with concepts extracted from youth depression counseling records; third, the structural and representational layers of the ontology were evaluated by five ontology and psychiatric nursing experts; fourth, the scope of the ontology was examined by answering 59 competency questions. The ontology was improved by adding new concepts and synonyms and revising the levels of the structure. PMID:27332239

  2. Designing and implementing a geologic information system using a spatiotemporal ontology model for a geologic map of Korea

    NASA Astrophysics Data System (ADS)

    Hwang, Jaehong; Nam, Kwang Woo; Ryu, Keun Ho

    2012-11-01

    A geologic information system was developed for geologic mapping in Korea using a spatiotemporal ontology model. Five steps were required to build the GIS representation of the geologic map information. The first step was to limit the geologic mapping to the Korean area. The second step was to extract the rock units as spatial objects from the geologic map and the geologic time units as temporal objects. The third step was to standardize the geologic terms in Korean and English for both the spatial and temporal objects. The fourth step was to conceptualize the classified objects in the geologic map units and to formulate guidelines for the specification of a spatiotemporal ontology model. Finally, we constructed a spatiotemporal retrieval system and an ontology system related to the geologic map of Korea, both of which apply the spatiotemporal ontology model. The spatiotemporal ontology model is a sophisticated model that supports the evolution from a database to a knowledge base. This ontology model can be conceptualized as a well-defined set of terms used for expressing spatial objects in rock units and temporal objects in geologic time units, together with a system of contents and structures. In addition, it includes symbology units such as color and pattern symbols mapped one-to-one with the spatiotemporal concepts. Existing information retrieval services provide information that is limited to the user's knowledge, whereas our geologic ontology system provides a broad range of information in graphical form, including locations and interrelationships. In this way, the information can be upgraded to the level of knowledge. A geologic term tree was designed, based on existing classification schemes, with the goal of creating an accessible internet source.

  3. Ontology-Based Search of Genomic Metadata.

    PubMed

    Fernandez, Javier D; Lenzerini, Maurizio; Masseroli, Marco; Venco, Francesco; Ceri, Stefano

    2016-01-01

    The Encyclopedia of DNA Elements (ENCODE) is a huge and still expanding public repository of more than 4,000 experiments and 25,000 data files, assembled by a large international consortium since 2007; unknown biological knowledge can be extracted from these huge and largely unexplored data, leading to data-driven genomic, transcriptomic, and epigenomic discoveries. Yet, the search for relevant datasets for knowledge discovery is only weakly supported: the metadata describing ENCODE datasets are quite simple and incomplete, and are not described by a coherent underlying ontology. Here, we show how to overcome this limitation by adopting an ENCODE metadata searching approach which uses high-quality ontological knowledge and state-of-the-art indexing technologies. Specifically, we developed S.O.S. GeM (http://www.bioinformatics.deib.polimi.it/SOSGeM/), a system supporting effective semantic search and retrieval of ENCODE datasets. First, we constructed a Semantic Knowledge Base by starting with concepts extracted from ENCODE metadata, matched to and expanded on biomedical ontologies integrated in the well-established Unified Medical Language System. We prove that this inference method is sound and complete. Then, we leveraged the Semantic Knowledge Base to semantically search ENCODE data from arbitrary biologists' queries. This allows correctly finding more datasets than those extracted by a purely syntactic search, as supported by the other available systems. We empirically show the relevance of the found datasets to the biologists' queries. PMID:26529777

  4. Ontological Approach to Reduce Complexity in Polypharmacy

    PubMed Central

    Farrish, Susan; Grando, Adela

    2013-01-01

    Patients that are on many medications are often non-compliant due to the complexity of the medication regimen; consequently, a patient that is non-compliant can have poor medical outcomes. Providers are not always aware of the complexity of their patient’s prescriptions. Methods have been developed to calculate the complexity for a patient’s regimen but there are no widely available automated tools that will do this for a provider. Given that ontologies are known to provide well-principled, sharable, setting-independent and machine-interpretable declarative specification frameworks for modeling and reasoning on biomedical problems, we have explored their use in the context of reducing medication complexity. Previously we proposed an Ontology for modeling drug-related knowledge and a repository for complexity scoring. Here we tested the Ontology with patient data from the University of California San Diego Epic database, and we built a decision aide that computes the complexity and recommends changes to reduce the complexity, if needed. PMID:24551346

  5. Automated Database Mediation Using Ontological Metadata Mappings

    PubMed Central

    Marenco, Luis; Wang, Rixin; Nadkarni, Prakash

    2009-01-01

    Objective To devise an automated approach for integrating federated database information using database ontologies constructed from their extended metadata. Background One challenge of database federation is that the granularity of representation of equivalent data varies across systems. Dealing effectively with this problem is analogous to dealing with precoordinated vs. postcoordinated concepts in biomedical ontologies. Model Description The authors describe an approach based on ontological metadata mapping rules defined with elements of a global vocabulary, which allows a query specified at one granularity level to fetch data, where possible, from databases within the federation that use different granularities. This is implemented in OntoMediator, a newly developed production component of our previously described Query Integrator System. OntoMediator's operation is illustrated with a query that accesses three geographically separate, interoperating databases. An example based on SNOMED also illustrates the applicability of high-level rules to support the enforcement of constraints that can prevent inappropriate curator or power-user actions. Summary A rule-based framework simplifies the design and maintenance of systems where categories of data must be mapped to each other, for the purpose of either cross-database query or for curation of the contents of compositional controlled vocabularies. PMID:19567801

  6. Using ontology network structure in text mining.

    PubMed

    Berndt, Donald J; McCart, James A; Luther, Stephen L

    2010-01-01

    Statistical text mining treats documents as bags of words, with a focus on term frequencies within documents and across document collections. Unlike natural language processing (NLP) techniques that rely on an engineered vocabulary or a full-featured ontology, statistical approaches do not make use of domain-specific knowledge. The freedom from biases can be an advantage, but at the cost of ignoring potentially valuable knowledge. The approach proposed here investigates a hybrid strategy based on computing graph measures of term importance over an entire ontology and injecting the measures into the statistical text mining process. As a starting point, we adapt existing search engine algorithms such as PageRank and HITS to determine term importance within an ontology graph. The graph-theoretic approach is evaluated using a smoking data set from the i2b2 National Center for Biomedical Computing, cast as a simple binary classification task for categorizing smoking-related documents, demonstrating consistent improvements in accuracy. PMID:21346937
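
    A small sketch of the hybrid strategy, assuming networkx and a fabricated ontology fragment, computes PageRank over the concept graph and uses the scores to weight raw term frequencies; the weighting rule is illustrative, not the exact scheme evaluated in the paper.

    ```python
    import networkx as nx

    # Tiny invented ontology fragment: edges point from a concept to a
    # broader concept it is related to.
    onto = nx.DiGraph()
    onto.add_edges_from([
        ("nicotine dependence", "smoking"),
        ("cigarette", "smoking"),
        ("smoking cessation", "smoking"),
        ("smoking", "behavior"),
    ])

    # PageRank on the ontology graph; heavily linked concepts score higher.
    importance = nx.pagerank(onto)

    def weighted_counts(term_counts):
        """Scale raw term frequencies by ontology-derived importance."""
        floor = min(importance.values())        # weight for out-of-ontology terms
        return {t: c * importance.get(t, floor) for t, c in term_counts.items()}

    doc = {"smoking": 3, "cigarette": 1, "aspirin": 2}
    print(weighted_counts(doc))
    ```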

  7. Ontological knowledge structure of intuitive biology

    NASA Astrophysics Data System (ADS)

    Martin, Suzanne Michele

    It has become increasingly important for individuals to understand infectious disease, as there has been a tremendous rise in viral and bacterial diseases. This research examines systematic misconceptions regarding the characteristics of viruses and bacteria present in individuals previously educated in biological sciences at the college level. Ninety pre-nursing students were administered the Knowledge Acquisition Device (KAD), which consists of 100 true/false items that include statements about the possible attributes of four entities: bacteria, virus, amoeba, and protein. Thirty pre-nursing students who incorrectly stated that viruses were alive were randomly assigned to three conditions: (1) exposure to information about the ontological nature of viruses, (2) general information about viruses, and (3) control. In the condition that addressed the ontological nature of a virus, all participants were able to classify viruses correctly as not alive; however, items that required inferences, such as "viruses come in male and female forms" or "viruses breed with each other to make baby viruses," were still answered incorrectly by all conditions in the posttest. It appears that functional knowledge, e.g., whether a virus is alive or dead, or how it is structured, is not enough for an individual to have a full and accurate understanding of viruses. Ontological knowledge may alter the functional knowledge, but the underlying inferences remain systematically incorrect.

  8. Ontological System for Context Artifacts and Resources

    NASA Astrophysics Data System (ADS)

    Huang, T.; Chung, N. T.; Mukherjee, R. M.

    2012-12-01

    The Adaptive Vehicle Make (AVM) program is a portfolio of programs managed by the Defense Advanced Research Projects Agency (DARPA). It was established to revolutionize how the DoD designs, verifies, and manufactures complex defense systems and vehicles. The Component, Context, and Manufacturing Model Library (C2M2L; pronounced "camel") seeks to develop the domain-specific models needed to enable design, verification, and fabrication of the Fast Adaptable Next-Generation (FANG) infantry fighting vehicle within its overall infrastructure. Terrain models are being developed to represent the surfaces and fluids that an amphibious infantry fighting vehicle would traverse, ranging from paved road surfaces to rocky, mountainous terrain, slopes, discrete obstacles, mud, sand, snow, and water fording. Context models are being developed to provide additional data for environmental factors such as humidity, wind speed, particulate presence and character, solar radiation, cloud cover, precipitation, and more. The Ontological System for Context Artifacts and Resources (OSCAR), designed and developed at the Jet Propulsion Laboratory, is a semantic web data system that enables context artifacts to be registered and searched according to their meaning, rather than indexed according to their syntactic structure alone (as is the case for traditional search engines). The system relies heavily on the Semantic Web for Earth and Environmental Terminology (SWEET) ontologies to model physical terrain environment and context model characteristics. In this talk, we focus on the application of the SWEET ontologies and the design of the OSCAR system architecture.

  9. Relationship auditing of the FMA ontology

    PubMed Central

    Gu, Huanying (Helen); Wei, Duo; Mejino, Jose L.V.; Elhanan, Gai

    2010-01-01

    The Foundational Model of Anatomy (FMA) ontology is a domain reference ontology based on a disciplined modeling approach. Due to its large size, semantic complexity and manual data entry process, errors and inconsistencies are unavoidable and might remain within the FMA structure without detection. In this paper, we present computable methods to highlight candidate concepts for various relationship assignment errors. The process starts with locating structures formed by transitive structural relationships (part_of, tributary_of, branch_of) and examine their assignments in the context of the IS-A hierarchy. The algorithms were designed to detect five major categories of possible incorrect relationship assignments: circular, mutually exclusive, redundant, inconsistent, and missed entries. A domain expert reviewed samples of these presumptive errors to confirm the findings. Seven thousand and fifty-two presumptive errors were detected, the largest proportion related to part_of relationship assignments. The results highlight the fact that errors are unavoidable in complex ontologies and that well designed algorithms can help domain experts to focus on concepts with high likelihood of errors and maximize their effort to ensure consistency and reliability. In the future similar methods might be integrated with data entry processes to offer real-time error detection. PMID:19475727
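
    One of the audit categories, circular relationship assignments, can be sketched as cycle detection over the part_of graph; the anatomy triples below are fabricated so that a loop exists, and the code is not the authors' implementation.

    ```python
    import networkx as nx

    # Fabricated part_of assertions; the third edge closes an erroneous loop.
    part_of = nx.DiGraph()
    part_of.add_edges_from([
        ("left atrium", "heart"),
        ("heart", "thorax"),
        ("thorax", "left atrium"),      # erroneous entry
        ("mitral valve", "left atrium"),
    ])

    # Every simple cycle is a candidate circular relationship assignment
    # for a domain expert to review.
    for cycle in nx.simple_cycles(part_of):
        print("circular part_of assignment:", " -> ".join(cycle))
    ```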

  10. A Novel Way to Relate Ontology Classes

    PubMed Central

    Choksi, Ami T.; Jinwala, Devesh C.

    2015-01-01

    The existing ontologies in the semantic web typically have anonymous union and intersection classes. The anonymous classes are limited in scope and may not be part of the whole inference process. Tools such as the Pellet reasoner, the Jena framework, and the Protégé editor interpret collection classes as (a) equivalent/subclasses of a union class and (b) superclasses of an intersection class. As a result, there is a possibility that the tools will produce error-prone inference results for relations, namely, sub-, union, intersection, and equivalent relations, and those dependent on these relations, namely, complement. Verifying whether a class is the complement of another involves the use of sub- and equivalent relations. Motivated by this, we (i) refine the test data set of the conference ontology by adding named, union, and intersection classes and (ii) propose a match algorithm to (a) calculate corrected subclass lists, (b) correctly relate intersection and union classes with their collection classes, and (c) match union, intersection, sub-, complement, and equivalent classes in a proper sequence, to avoid error-prone match results. We compare the results of our algorithms with those of a candidate reasoner, namely, the Pellet reasoner. To the best of our knowledge, ours is a unique attempt at establishing a novel way to relate ontology classes. PMID:25984560

  11. A Uniform Ontology for Software Interfaces

    NASA Technical Reports Server (NTRS)

    Feyock, Stefan

    2002-01-01

    It is universally the case that computer users who are not also computer specialists prefer to deal with computers in terms of a familiar ontology, namely that of their application domains. For example, the well-known Windows ontology assumes that the user is an office worker, and therefore should be presented with a "desktop environment" featuring entities such as (virtual) file folders, documents, appointment calendars, and the like, rather than a world of machine registers and machine language instructions, or even the DOS command level. The central theme of this research has been the proposition that the user interacting with a software system should have at his disposal both the ontology underlying the system and a model of the system. This information is necessary for understanding the system in use, as well as for the automatic generation of assistance for the user, both in solving the problem for which the application is designed and in providing guidance on the capabilities and use of the system.

  12. Cyber Forensics Ontology for Cyber Criminal Investigation

    NASA Astrophysics Data System (ADS)

    Park, Heum; Cho, Sunho; Kwon, Hyuk-Chul

    We developed a Cyber Forensics Ontology for criminal investigation in cyber space. Cyber crime is classified into cyber terror and general cyber crime, and those two classes are connected with each other. The investigation of cyber terror requires advanced technology, system environments and experts, while general cyber crime is connected with general crime through evidence from digital data and cyber space. Accordingly, it is difficult to determine related crime types and to collect evidence. Therefore, we considered the classification of cyber crime, the collection of evidence in cyber space, and the application of laws to cyber crime. In order to investigate cyber crime efficiently, it is necessary to integrate those concepts for each cyber crime case. Thus, we constructed a cyber forensics domain ontology for criminal investigation in cyber space, organized according to the categories of cyber crime, laws, evidence, and information about criminals. This ontology can be used in the process of investigating cyber crime cases and for data mining of cyber crime: classification, clustering, association, and detection of crime types, crime cases, evidence, and criminals.

  13. Development of National Map ontologies for organization and orchestration of hydrologic observations

    NASA Astrophysics Data System (ADS)

    Lieberman, J. E.

    2014-12-01

    usefulness of the developed ontology components includes both solicitation of feedback on prototype applications, and provision of a query / mediation service for feature-linked data to facilitate development of additional third-party applications.

  14. Ontology development for provenance tracing in National Climate Assessment of the US Global Change Research Program

    NASA Astrophysics Data System (ADS)

    Ma, X.; Zheng, J. G.; Goldstein, J.; Duggan, B.; Xu, J.; Du, C.; Akkiraju, A.; Aulenbach, S.; Tilmes, C.; Fox, P. A.

    2013-12-01

    The periodic National Climate Assessment (NCA) of the US Global Change Research Program (USGCRP) [1] produces reports about findings on global climate change and the impacts of climate change on the United States. Those findings are of great public and academic concern and are used in policy and management decisions, which makes the provenance information of the findings in those reports especially important. The USGCRP is developing a Global Change Information System (GCIS), in which the NCA reports and the associated provenance information are the primary records. We modeled and developed Semantic Web applications for the GCIS. By applying a use-case-driven iterative methodology [2], we developed an ontology [3] to represent the content structure of a report and the associated provenance information. We also mapped the classes and properties in our ontology onto the W3C PROV-O ontology [4] to realize a formal representation of provenance. We successfully implemented the ontology in several pilot systems for a recent National Climate Assessment report (i.e., the NCA3). They provide users with the functionality to browse and search provenance information by topics of interest. Provenance information of the NCA3 has been made structured and interoperable by applying the developed ontology. Besides the pilot systems we developed, other tools and services are also able to interact with the data in the context of the 'Web of data' and thus create added value. Our research shows that the use-case-driven iterative method bridges the gap between Semantic Web researchers and earth and environmental scientists and can be deployed rapidly for developing Semantic Web applications. Our work also provides first-hand experience in re-using the W3C PROV-O ontology in the field of earth and environmental sciences, as the PROV-O ontology was only recently ratified (on 04/30/2013) by the W3C as a recommendation and relevant applications are still rare. [1] http
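
    A minimal sketch of mapping report content onto W3C PROV-O terms with rdflib is shown below; the gcis: namespace URI, identifiers, and exact property choices are invented for illustration and are not the project's actual schema.

    ```python
    from rdflib import Graph, Literal, Namespace, RDF

    PROV = Namespace("http://www.w3.org/ns/prov#")      # W3C PROV-O namespace
    GCIS = Namespace("http://example.org/gcis#")        # invented namespace

    g = Graph()
    g.bind("prov", PROV)
    g.bind("gcis", GCIS)

    finding = GCIS["nca3-finding-2-1"]                  # hypothetical identifiers
    dataset = GCIS["ghcn-daily-temperature"]
    agent = GCIS["usgcrp"]

    # A report finding is an entity derived from a dataset and attributed
    # to the producing organization -- the core PROV-O provenance pattern.
    g.add((finding, RDF.type, PROV.Entity))
    g.add((dataset, RDF.type, PROV.Entity))
    g.add((agent, RDF.type, PROV.Organization))
    g.add((finding, PROV.wasDerivedFrom, dataset))
    g.add((finding, PROV.wasAttributedTo, agent))
    g.add((finding, GCIS.statement, Literal("Example finding text.")))

    print(g.serialize(format="turtle"))
    ```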

  15. Bio-ontologies: current trends and future directions

    PubMed Central

    Bodenreider, Olivier; Stevens, Robert

    2006-01-01

    In recent years, as a knowledge-based discipline, bioinformatics has been made more computationally amenable. After its beginnings as a technology advocated by computer scientists to overcome problems of heterogeneity, ontology has been taken up by biologists themselves as a means to consistently annotate features from genotype to phenotype. In medical informatics, artifacts called ontologies have been used for a longer period of time to produce controlled lexicons for coding schemes. In this article, we review the current position in ontologies and how they have become institutionalized within biomedicine. As the field has matured, the much older philosophical aspects of ontology have come into play. With this and the institutionalization of ontology has come greater formality. We review this trend and what benefits it might bring to ontologies and their use within biomedicine. PMID:16899495

  16. Enabling Ontology Based Semantic Queries in Biomedical Database Systems.

    PubMed

    Zheng, Shuai; Wang, Fusheng; Lu, James; Saltz, Joel

    2012-01-01

    While current biomedical ontology repositories offer primitive query capabilities, it is difficult or cumbersome to support ontology based semantic queries directly in semantically annotated biomedical databases. The problem may be largely attributed to the mismatch between the models of the ontologies and the databases, and the mismatch between the query interfaces of the two systems. To fully realize semantic query capabilities based on ontologies, we develop a system DBOntoLink to provide unified semantic query interfaces by extending database query languages. With DBOntoLink, semantic queries can be directly and naturally specified as extended functions of the database query languages without any programming needed. DBOntoLink is adaptable to different ontologies through customizations and supports major biomedical ontologies hosted at the NCBO BioPortal. We demonstrate the use of DBOntoLink in a real world biomedical database with semantically annotated medical image annotations. PMID:23404054
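
    The general idea of extending a database query with ontology knowledge can be sketched as follows: a query term is expanded to its subclass closure, and the closure is handed to an ordinary SQL predicate. The hierarchy, table layout, and terms below are invented, and the code is not DBOntoLink's API.

    ```python
    import sqlite3

    subclass_of = {                      # child -> parent, a tiny invented is-a tree
        "glioblastoma": "glioma",
        "astrocytoma": "glioma",
        "glioma": "brain neoplasm",
    }

    def descendants(term):
        """Return the term plus all of its transitive subclasses."""
        closure, changed = {term}, True
        while changed:
            changed = False
            for child, parent in subclass_of.items():
                if parent in closure and child not in closure:
                    closure.add(child)
                    changed = True
        return closure

    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE annotation (image_id TEXT, finding TEXT)")
    db.executemany("INSERT INTO annotation VALUES (?, ?)",
                   [("img1", "glioblastoma"), ("img2", "meningioma"),
                    ("img3", "astrocytoma")])

    # "Semantic" query: find images annotated with glioma or any subclass of it.
    terms = tuple(descendants("glioma"))
    placeholders = ",".join("?" * len(terms))
    rows = db.execute(
        f"SELECT image_id FROM annotation WHERE finding IN ({placeholders})",
        terms).fetchall()
    print(rows)   # img1 and img3 match via subsumption
    ```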

  17. A Science Ontology for Goal Driven Datamining in Astronomy

    NASA Astrophysics Data System (ADS)

    Shaya, E.; Thomas, B.; Teuben, P.; Huang, Z.

    2005-12-01

    An ontology, in the computer science sense, is a formal description of objects, their properties, and the relationships between properties. Ontology-based systems are able to reason and draw inferences. An important facility of ontological networks is the ability to find all paths that lead to a given goal. Ontologies can be used to tag or describe data (tables, columns, rows, data files, etc.) in a powerful new way that paves the way for high-level query, i.e., science-based rather than data-centric. We will present the Science ontology in the Web Ontology Language (http://archive.astro.umd.edu/ont/Science.owl) and describe how it will be employed at the UMD Astronomical Data Center (http://adc.astro.umd.edu and http://archive.astro.umd.edu/archive) for goal-driven datamining and metadata enhancement.

  18. Natural Language Processing Methods and Systems for Biomedical Ontology Learning

    PubMed Central

    Liu, Kaihong; Hogan, William R.; Crowley, Rebecca S.

    2010-01-01

    While the biomedical informatics community widely acknowledges the utility of domain ontologies, there remain many barriers to their effective use. One important requirement of domain ontologies is that they must achieve a high degree of coverage of the domain concepts and concept relationships. However, the development of these ontologies is typically a manual, time-consuming, and often error-prone process. Limited resources result in missing concepts and relationships as well as difficulty in updating the ontology as knowledge changes. Methodologies developed in the fields of natural language processing, information extraction, information retrieval and machine learning provide techniques for automating the enrichment of an ontology from free-text documents. In this article, we review existing methodologies and developed systems, and discuss how existing methods can benefit the development of biomedical ontologies. PMID:20647054

  19. Meeting report: advancing practical applications of biodiversity ontologies

    PubMed Central

    2014-01-01

    We describe the outcomes of three recent workshops aimed at advancing development of the Biological Collections Ontology (BCO), the Population and Community Ontology (PCO), and tools to annotate data using those and other ontologies. The first workshop gathered use cases to help grow the PCO, agreed upon a format for modeling challenging concepts such as ecological niche, and developed ontology design patterns for defining collections of organisms and population-level phenotypes. The second focused on mapping datasets to ontology terms and converting them to Resource Description Framework (RDF), using the BCO. To follow up, a BCO hackathon was held concurrently with the 16th Genomics Standards Consortium Meeting, during which we converted additional datasets to RDF, developed a Material Sample Core for the Global Biodiversity Information Facility, created a Web Ontology Language (OWL) file for importing Darwin Core classes and properties into BCO, and developed a workflow for converting biodiversity data among formats.

  20. An Ontology for Modeling Complex Inter-relational Organizations

    NASA Astrophysics Data System (ADS)

    Wautelet, Yves; Neysen, Nicolas; Kolp, Manuel

    This paper presents an ontology for organizational modeling through multiple complementary aspects. The primary goal of the ontology is to dispose of an adequate set of related concepts for studying complex organizations involved in a lot of relationships at the same time. In this paper, we define complex organizations as networked organizations involved in a market eco-system that are playing several roles simultaneously. In such a context, traditional approaches focus on the macro analytic level of transactions; this is supplemented here with a micro analytic study of the actors' rationale. At first, the paper overviews enterprise ontologies literature to position our proposal and exposes its contributions and limitations. The ontology is then brought to an advanced level of formalization: a meta-model in the form of a UML class diagram allows to overview the ontology concepts and their relationships which are formally defined. Finally, the paper presents the case study on which the ontology has been validated.

  1. Exploring optimal design of look-up table for PROSAIL model inversion with multi-angle MODIS data

    NASA Astrophysics Data System (ADS)

    He, Wei; Yang, Hua; Pan, Jingjing; Xu, Peipei

    2012-10-01

    Physical remote sensing model inversion based on the look-up table (LUT) technique is promising for its good precision, high efficiency and ease of implementation. However, the LUT scheme is difficult to design well, because its mechanism has not been investigated thoroughly for different designs, for instance the way the parameter space is sampled. To study this problem, experiments on several LUT design schemes are performed and their effects on inversion results are analyzed in this paper. One thousand groups of randomly generated PROSAIL model parameters are used to simulate multi-angle observations at the observation angles of the MODIS sensor, which serve as the inversion data. The correlation coefficient (R2) and root mean square error (RMSE) between the input LAIs used for simulation and the estimated LAIs were calculated. The results show that LUT size is a key factor: the RMSE falls below 0.25 when the size reaches 100,000. With a LUT of 100,000 entries, selecting no more than 0.1% of the cases as the solution is usually valid, and the RMSE generally increases as the percentage of selected cases increases. Taking the median of the selected solutions as the final solution is better than taking the mean or the single "best" case whose cost function value is the lowest. Different parameter distributions also have some impact on the inversion results, which improve when a normal distribution is used. Finally, the winter wheat LAI of a research area in Xinxiang City, Henan Province, China, is estimated with MODIS daily reflectance data; the validation results show that the approach works well.
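
    The LUT inversion recipe evaluated above can be sketched schematically: build a large table with a forward model, rank the entries by an RMSE cost against the observation, keep the best 0.1%, and report the median LAI of that subset. The toy forward model below is a stand-in for PROSAIL, and all numbers are illustrative.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def toy_forward_model(lai):
        """Placeholder for a PROSAIL run at the MODIS viewing geometry."""
        red = 0.30 * np.exp(-0.5 * lai) + 0.02
        nir = 0.15 + 0.08 * np.log1p(lai)
        return np.stack([red, nir], axis=-1)

    # Build the look-up table (size is the key design factor noted above).
    lut_size = 100_000
    lut_lai = rng.uniform(0.0, 7.0, lut_size)
    lut_refl = toy_forward_model(lut_lai)

    def invert(observed_refl, keep_fraction=0.001):
        cost = np.sqrt(((lut_refl - observed_refl) ** 2).mean(axis=1))
        n_keep = max(1, int(keep_fraction * lut_size))
        best = np.argsort(cost)[:n_keep]
        return np.median(lut_lai[best])     # median of the selected solutions

    true_lai = 3.2
    obs = toy_forward_model(np.array(true_lai)) + rng.normal(0, 0.005, 2)
    print(round(float(invert(obs)), 2))     # typically within a few tenths of 3.2
    ```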

  2. Maximum allowable low-frequency platform vibrations in high resolution satellite missions: challenges and look-up figures

    NASA Astrophysics Data System (ADS)

    Haghshenas, Javad

    2015-09-01

    Performance of high resolution remote sensing payloads is often limited by satellite platform vibrations. The effects of linear and high-frequency vibrations on the overall MTF are known exactly in closed form, but the effect of low-frequency vibration is a random process and must be treated statistically. It should be considered during system-level payload design to determine whether or not the overall MTF is limited by the vibration blur radius. Usually the vibration MTF budget is defined based on the mission requirements and the overall MTF limitations. With a good understanding of harmful vibration frequencies and amplitudes in the preliminary design phase, their effects can be removed totally or partially. This procedure is cost-effective and lets the designer eliminate only the harmful vibrations, avoiding over-design. In this paper we have analyzed the effects of low-frequency platform vibrations on the payload's modulation transfer function. We have used a statistical analysis to find the probability of imaging with an MTF greater than or equal to a pre-defined budget for different missions. After some discussion of the worst and average cases, we propose a set of "look-up figures" that help remote sensing payload designers avoid vibration effects. Using these figures, the designer can choose the electro-optical parameters in such a way that the vibration effects are less than the pre-defined budget. Furthermore, using the results, we can propose a damping profile specifying which vibration frequencies and amplitudes must be eliminated to stabilize the payload system.

  3. Theory and ontology for sharing temporal knowledge

    NASA Technical Reports Server (NTRS)

    Loganantharaj, Rasiah

    1996-01-01

    Using current technology, the sharing or re-use of knowledge bases is very difficult, if not impossible. ARPA has correctly recognized the problem and funded a knowledge sharing initiative. One of the outcomes of this project is a formal language called Knowledge Interchange Format (KIF) for representing knowledge that can be translated into other languages. Capturing and representing design knowledge, and reasoning with it, have become very important for NASA, a pioneer in the innovative design of unique products. For upgrading an existing design to meet changing technology, needs, or requirements, it is essential to understand the design rationale, design choices, options and other relevant information associated with the design. Capturing such information and presenting it in the appropriate form are part of the ongoing Design Knowledge Capture project at NASA. The behavior of an object and various other aspects related to time are captured by the appropriate temporal knowledge. The captured design knowledge will be represented in such a way that the various groups at NASA interested in different aspects of the design cycle are able to access and use the design knowledge effectively. To facilitate knowledge sharing among these groups, one has to develop a well-defined ontology. An ontology is a specification of a conceptualization. In the literature, several specific domains have been studied and well-defined ontologies have been developed for them. However, very little or no work has been done on representing temporal knowledge to facilitate sharing. During the ASEE summer program, I investigated several temporal models and proposed a theory of time that is flexible enough to accommodate time elements such as points and intervals and is capable of handling both qualitative and quantitative temporal constraints. I also proposed a primitive temporal ontology from which other relevant temporal ontologies can be built. I

  4. The teleost anatomy ontology: anatomical representation for the genomics age.

    PubMed

    Dahdul, Wasila M; Lundberg, John G; Midford, Peter E; Balhoff, James P; Lapp, Hilmar; Vision, Todd J; Haendel, Melissa A; Westerfield, Monte; Mabee, Paula M

    2010-07-01

    The rich knowledge of morphological variation among organisms reported in the systematic literature has remained in free-text format, impractical for use in large-scale synthetic phylogenetic work. This noncomputable format has also precluded linkage to the large knowledgebase of genomic, genetic, developmental, and phenotype data in model organism databases. We have undertaken an effort to prototype a curated, ontology-based evolutionary morphology database that maps to these genetic databases (http://kb.phenoscape.org) to facilitate investigation into the mechanistic basis and evolution of phenotypic diversity. Among the first requirements in establishing this database was the development of a multispecies anatomy ontology with the goal of capturing anatomical data in a systematic and computable manner. An ontology is a formal representation of a set of concepts with defined relationships between those concepts. Multispecies anatomy ontologies in particular are an efficient way to represent the diversity of morphological structures in a clade of organisms, but they present challenges in their development relative to single-species anatomy ontologies. Here, we describe the Teleost Anatomy Ontology (TAO), a multispecies anatomy ontology for teleost fishes derived from the Zebrafish Anatomical Ontology (ZFA) for the purpose of annotating varying morphological features across species. To facilitate interoperability with other anatomy ontologies, TAO uses the Common Anatomy Reference Ontology as a template for its upper level nodes, and TAO and ZFA are synchronized, with zebrafish terms specified as subtypes of teleost terms. We found that the details of ontology architecture have ramifications for querying, and we present general challenges in developing a multispecies anatomy ontology, including refinement of definitions, taxon-specific relationships among terms, and representation of taxonomically variable developmental pathways. PMID:20547776

  5. AmiGO: online access to ontology and annotation data

    SciTech Connect

    Carbon, Seth; Ireland, Amelia; Mungall, Christopher J.; Shu, ShengQiang; Marshall, Brad; Lewis, Suzanna

    2009-01-15

    AmiGO is a web application that allows users to query, browse, and visualize ontologies and related gene product annotation (association) data. AmiGO can be used online at the Gene Ontology (GO) website to access the data provided by the GO Consortium; it can also be downloaded and installed to browse local ontologies and annotations. AmiGO is free open source software developed and maintained by the GO Consortium.

  6. Evaluating Health Information Systems Using Ontologies

    PubMed Central

    Anderberg, Peter; Larsson, Tobias C; Fricker, Samuel A; Berglund, Johan

    2016-01-01

    Background There are several frameworks that attempt to address the challenges of evaluation of health information systems by offering models, methods, and guidelines about what to evaluate, how to evaluate, and how to report the evaluation results. Model-based evaluation frameworks usually suggest universally applicable evaluation aspects but do not consider case-specific aspects. On the other hand, evaluation frameworks that are case specific, by eliciting user requirements, limit their output to the evaluation aspects suggested by the users in the early phases of system development. In addition, these case-specific approaches extract different sets of evaluation aspects from each case, making it challenging to collectively compare, unify, or aggregate the evaluation of a set of heterogeneous health information systems. Objectives The aim of this paper is to find a method capable of suggesting evaluation aspects for a set of one or more health information systems—whether similar or heterogeneous—by organizing, unifying, and aggregating the quality attributes extracted from those systems and from an external evaluation framework. Methods On the basis of the available literature in semantic networks and ontologies, a method (called Unified eValuation using Ontology; UVON) was developed that can organize, unify, and aggregate the quality attributes of several health information systems into a tree-style ontology structure. The method was extended to integrate its generated ontology with the evaluation aspects suggested by model-based evaluation frameworks. An approach was developed to extract evaluation aspects from the ontology that also considers evaluation case practicalities such as the maximum number of evaluation aspects to be measured or their required degree of specificity. The method was applied and tested in Future Internet Social and Technological Alignment Research (FI-STAR), a project of 7 cloud-based eHealth applications that were developed and

  7. Extending TOPS: Ontology-driven Anomaly Detection and Analysis System

    NASA Astrophysics Data System (ADS)

    Votava, P.; Nemani, R. R.; Michaelis, A.

    2010-12-01

    Terrestrial Observation and Prediction System (TOPS) is a flexible modeling software system that integrates ecosystem models with frequent satellite and surface weather observations to produce ecosystem nowcasts (assessments of current conditions) and forecasts useful in natural resources management, public health and disaster management. We have been extending TOPS to include a capability for automated anomaly detection and analysis of both on-line (streaming) and off-line data. In order to best capture the knowledge about data hierarchies, Earth science models and implied dependencies between anomalies and occurrences of observable events such as urbanization, deforestation, or fires, we have developed an ontology to serve as a knowledge base. We can query the knowledge base and answer questions about dataset compatibilities, similarities and dependencies so that we can, for example, automatically analyze similar datasets in order to verify a given anomaly occurrence in multiple data sources. We are further extending the system to go beyond anomaly detection towards reasoning about possible causes of anomalies that are also encoded in the knowledge base as either learned or implied knowledge. This enables us to scale up the analysis by eliminating a large number of anomalies early on during the processing, either through failure to verify them from other sources or by matching them directly with other observable events, without having to perform an extensive and time-consuming exploration and analysis. The knowledge is captured using the OWL ontology language, where connections are defined in a schema that is later extended with specific instances of datasets and models. The information is stored in a Sesame server and is accessible through both a Java API and web services using the SeRQL and SPARQL query languages. Inference is provided by the OWLIM component integrated with Sesame.
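
    The abstract above notes that the knowledge base is exposed through web services speaking SeRQL and SPARQL. As an illustration only (not code from TOPS), the following minimal Python sketch queries such a SPARQL endpoint with the SPARQLWrapper library; the repository URL, namespace, and property names are hypothetical.

        from SPARQLWrapper import SPARQLWrapper, JSON

        # Hypothetical Sesame repository endpoint exposing the anomaly ontology.
        endpoint = SPARQLWrapper("http://example.org/openrdf-sesame/repositories/tops")
        endpoint.setQuery("""
            PREFIX tops: <http://example.org/tops#>
            SELECT ?dataset WHERE {
                ?dataset tops:observes ?variable .
                ?anomaly tops:detectedIn ?dataset ;
                         tops:possibleCause tops:Deforestation .
            }
        """)
        endpoint.setReturnFormat(JSON)

        # Print every dataset that could help verify the anomaly.
        for row in endpoint.query().convert()["results"]["bindings"]:
            print(row["dataset"]["value"])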

  8. Ontology Alignment Repair through Modularization and Confidence-Based Heuristics.

    PubMed

    Santos, Emanuel; Faria, Daniel; Pesquita, Catia; Couto, Francisco M

    2015-01-01

    Ontology Matching aims at identifying a set of semantic correspondences, called an alignment, between related ontologies. In recent years, there has been a growing interest in efficient and effective matching methods for large ontologies. However, alignments produced for large ontologies are often logically incoherent. It was only recently that the use of repair techniques to improve the coherence of ontology alignments began to be explored. This paper presents a novel modularization technique for ontology alignment repair which extracts fragments of the input ontologies that only contain the necessary classes and relations to resolve all detectable incoherences. The paper presents also an alignment repair algorithm that uses a global repair strategy to minimize both the degree of incoherence and the number of mappings removed from the alignment, while overcoming the scalability problem by employing the proposed modularization technique. Our evaluation shows that our modularization technique produces significantly small fragments of the ontologies and that our repair algorithm produces more complete alignments than other current alignment repair systems, while obtaining an equivalent degree of incoherence. Additionally, we also present a variant of our repair algorithm that makes use of the confidence values of the mappings to improve alignment repair. Our repair algorithm was implemented as part of AgreementMakerLight, a free and open-source ontology matching system. PMID:26710335
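
    The repair strategy described above removes mappings to restore coherence while preferring low-confidence mappings for removal. The following Python sketch is only a simplified illustration of that idea (it is not the AgreementMakerLight algorithm): given precomputed conflict sets of mappings that cannot coexist, it greedily drops the lowest-confidence mapping from each unresolved conflict.

        def repair_alignment(mappings, conflict_sets):
            # mappings: {(source_class, target_class): confidence in [0, 1]}
            # conflict_sets: iterable of sets of mapping keys that together
            # cause a detectable incoherence (assumed given by a reasoner).
            kept = dict(mappings)
            changed = True
            while changed:
                changed = False
                for conflict in conflict_sets:
                    live = [m for m in conflict if m in kept]
                    if len(live) == len(conflict):          # conflict still unresolved
                        weakest = min(live, key=lambda m: kept[m])
                        del kept[weakest]                    # drop weakest mapping
                        changed = True
            return kept

        # Toy usage: two mappings that cannot coexist; the 0.6 one is removed.
        alignment = {("A:Heart", "B:Heart"): 0.9, ("A:Muscle", "B:Organ"): 0.6}
        print(repair_alignment(alignment, [set(alignment)]))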

  9. An Approach for Learning Expressive Ontologies in Medical Domain.

    PubMed

    Rios-Alvarado, Ana B; Lopez-Arevalo, Ivan; Tello-Leal, Edgar; Sosa-Sosa, Victor J

    2015-08-01

    Access to medical information (journals, blogs, web pages, dictionaries, and texts) has increased due to the availability of many digital media. In particular, finding an appropriate structure that represents the information contained in texts is not a trivial task. Ontologies are one of the structures used to model this knowledge. An ontology refers to a conceptualization of a specific domain of knowledge. Ontologies are especially useful because they support the exchange and sharing of information as well as reasoning tasks. The usage of ontologies in medicine is mainly focused on the representation and organization of medical terminologies. Ontology learning techniques have emerged as a set of techniques to obtain ontologies from unstructured information. This paper describes a new ontology learning approach that consists of a method for the acquisition of concepts and their corresponding taxonomic relations, in which disjointWith and equivalentClass axioms are also learned from text without human intervention. The source of knowledge consists of documents from the medical domain. Our approach is divided into two stages: the first discovers hierarchical relations and the second extracts the axioms. Our automatic ontology learning approach shows better results compared with previous work, giving rise to more expressive ontologies. PMID:26077127
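
    To make the learned axiom types concrete, the sketch below shows how disjointWith and equivalentClass assertions of the kind described above could be expressed with the owlready2 Python library; the ontology IRI and class names are invented for illustration and are not taken from the paper.

        from owlready2 import get_ontology, Thing, AllDisjoint

        onto = get_ontology("http://example.org/medical-demo.owl")

        with onto:
            class Disorder(Thing): pass
            class Treatment(Thing): pass
            class Therapy(Thing): pass

            # Learned disjointWith axiom: nothing is both a Disorder and a Treatment.
            AllDisjoint([Disorder, Treatment])

            # Learned equivalentClass axiom: Therapy and Treatment denote the same class.
            Therapy.equivalent_to.append(Treatment)

        onto.save(file="medical-demo.owl", format="rdfxml")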

  10. Ontology Alignment Repair through Modularization and Confidence-Based Heuristics

    PubMed Central

    Santos, Emanuel; Faria, Daniel; Pesquita, Catia; Couto, Francisco M.

    2015-01-01

    Ontology Matching aims at identifying a set of semantic correspondences, called an alignment, between related ontologies. In recent years, there has been a growing interest in efficient and effective matching methods for large ontologies. However, alignments produced for large ontologies are often logically incoherent. It was only recently that the use of repair techniques to improve the coherence of ontology alignments began to be explored. This paper presents a novel modularization technique for ontology alignment repair which extracts fragments of the input ontologies that only contain the necessary classes and relations to resolve all detectable incoherences. The paper presents also an alignment repair algorithm that uses a global repair strategy to minimize both the degree of incoherence and the number of mappings removed from the alignment, while overcoming the scalability problem by employing the proposed modularization technique. Our evaluation shows that our modularization technique produces significantly small fragments of the ontologies and that our repair algorithm produces more complete alignments than other current alignment repair systems, while obtaining an equivalent degree of incoherence. Additionally, we also present a variant of our repair algorithm that makes use of the confidence values of the mappings to improve alignment repair. Our repair algorithm was implemented as part of AgreementMakerLight, a free and open-source ontology matching system. PMID:26710335

  11. Knowledge Representation and Management. From Ontology to Annotation

    PubMed Central

    Darmoni, S.J.

    2015-01-01

    Summary Objective To summarize the best papers in the field of Knowledge Representation and Management (KRM). Methods A comprehensive review of medical informatics literature was performed to select some of the most interesting papers on KRM published in 2014. Results Four articles were selected: two focus on annotation and information retrieval using an ontology. The other two focus mainly on ontologies, one dealing with the usage of a temporal ontology to analyze the content of narrative documents, and one describing a methodology for building multilingual ontologies. Conclusion Semantic models have begun to show their efficiency, coupled with annotation tools. PMID:26293860

  12. A 2013 workshop: vaccine and drug ontology studies (VDOS 2013)

    PubMed Central

    2014-01-01

    The 2013 “Vaccine and Drug Ontology Studies” (VDOS 2013) international workshop series focuses on vaccine- and drug-related ontology modeling and applications. Drugs and vaccines have contributed to dramatic improvements in public health worldwide. Over the last decade, tremendous efforts have been made in the biomedical ontology community to ontologically represent various areas associated with vaccines and drugs – extending existing clinical terminology systems such as SNOMED, RxNorm, NDF-RT, and MedDRA, as well as developing new models such as the Vaccine Ontology. The VDOS workshop series provides a platform for discussing innovative solutions as well as the challenges in the development and applications of biomedical ontologies for representing and analyzing drugs and vaccines, their administration, host immune responses, adverse events, and other related topics. The six full-length papers included in this thematic issue focus on three main areas: (i) ontology development and representation, (ii) ontology mapping, maintaining and auditing, and (iii) ontology applications. PMID:24650607

  13. FMA-RadLex: An application ontology of radiological anatomy derived from the foundational model of anatomy reference ontology.

    PubMed

    Mejino, Jose L V; Rubin, Daniel L; Brinkley, James F

    2008-01-01

    Domain reference ontologies are being developed to serve as generalizable and reusable sources designed to support any application specific to the domain. The challenge is how to develop ways to derive or adapt pertinent portions of reference ontologies into application ontologies. In this paper we demonstrate how a subset of anatomy relevant to the domain of radiology can be derived from an anatomy reference ontology, the Foundational Model of Anatomy (FMA) Ontology, to create an application ontology that is robust and expressive enough to incorporate and accommodate all salient anatomical knowledge necessary to support existing and emerging systems for managing anatomical information related to radiology. The principles underlying this work are applicable to domains beyond radiology, so our results could be extended to other areas of biomedicine in the future. PMID:18999035

  14. Cross-Ontological Analytics: Combining Associative and Hierarchical Relations in the Gene Ontologies to Assess Gene Product Similarity

    SciTech Connect

    Posse, Christian; Sanfilippo, Antonio P.; Gopalan, Banu; Riensche, Roderick M.; Beagley, Nathaniel; Baddeley, Bob L.

    2006-05-28

    Gene and gene product similarity is a fundamental diagnostic measure in analyzing biological data and constructing predictive models for functional genomics. With the rising influence of the gene ontologies, two complementary approaches have emerged where the similarity between two genes/gene products is obtained by comparing gene ontology (GO) annotations associated with the gene/gene products. One approach captures GO-based similarity in terms of hierarchical relations within each gene ontology. The other approach identifies GO-based similarity in terms of associative relations across the three gene ontologies. We propose a novel methodology where the two approaches can be merged with ensuing benefits in coverage and accuracy.
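
    As a rough illustration of merging the two kinds of similarity (this is not the methodology of the paper), the sketch below combines a hierarchy-based score, here a simple Jaccard overlap of GO ancestor sets, with an externally supplied associative score using a tunable weight; all term identifiers are invented.

        def hierarchical_sim(terms_a, terms_b, ancestors):
            # ancestors: {go_term: set of ancestor terms within one gene ontology}
            anc_a = set().union(*(ancestors[t] for t in terms_a))
            anc_b = set().union(*(ancestors[t] for t in terms_b))
            union = anc_a | anc_b
            return len(anc_a & anc_b) / len(union) if union else 0.0

        def combined_sim(hier_score, assoc_score, weight=0.5):
            # Blend within-ontology (hierarchical) and cross-ontology
            # (associative) similarity; the weight is a free parameter.
            return weight * hier_score + (1.0 - weight) * assoc_score

        anc = {"GO:1": {"GO:root"}, "GO:2": {"GO:root"}, "GO:3": {"GO:other"}}
        print(combined_sim(hierarchical_sim({"GO:1"}, {"GO:2"}, anc), 0.8))   # 0.9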

  15. An Ontology Driven Information Architecture for Big Data and Diverse Domains

    NASA Astrophysics Data System (ADS)

    Hughes, John S.; Crichton, Dan; Hardman, Sean; Joyner, Ron; Ramirez, Paul

    2013-04-01

    The Planetary Data System has just released the PDS4 system for first use. Its architecture comprises three principal parts: an ontology that captures knowledge from the planetary science domain, a federated registry/repository system for product identification, versioning, tracking, and storage, and a REST-based service layer for search, retrieval, and distribution. An ontology modeling tool is used to prescriptively capture product definitions that adhere to object-oriented principles and that are compliant with specific registry, archive, and data dictionary reference models. The resulting information model is product-centric, allowing all information to be packaged into products and tracked in the registry. The flexibility required in a diverse domain is provided through the use of object-oriented extensions and a hierarchical governance scheme with common, discipline, and mission levels. All PDS4 data standards are generated or derived from the information model. The federated registry provides identification, versioning, and tracking functionality across federated repositories and is configured for deployment using configuration files generated from the ontology. A REST-based service layer provides for metadata harvesting, product transformation, packaging, search, and portal hosting. A model-driven architecture allows the data and software engineering teams to develop in parallel with minimal team interaction. The resulting software remains relatively stable as the domain evolves. Finally, the development of a single shared ontology promotes interoperability and data correlation and helps meet the expectations of modern scientists for science data discovery, access, and use. This presentation will provide an overview of PDS4, focusing on the data standards, how they were developed, and how they are now being used, and will present some of the lessons learned while developing in a diverse scientific community. Copyright 2013 California

  16. From classification to epilepsy ontology and informatics.

    PubMed

    Zhang, Guo-Qiang; Sahoo, Satya S; Lhatoo, Samden D

    2012-07-01

    The 2010 International League Against Epilepsy (ILAE) classification and terminology commission report proposed a much needed departure from previous classifications to incorporate advances in molecular biology, neuroimaging, and genetics. It proposed an interim classification and defined two key requirements that need to be satisfied. The first is the ability to classify epilepsy in dimensions according to a variety of purposes including clinical research, patient care, and drug discovery. The second is the ability of the classification system to evolve with new discoveries. Multidimensionality and flexibility are crucial to the success of any future classification. In addition, a successful classification system must play a central role in the rapidly growing field of epilepsy informatics. An epilepsy ontology, based on classification, will allow information systems to facilitate data-intensive studies and provide a proven route to meeting the two foregoing key requirements. Epilepsy ontology will be a structured terminology system that accommodates proposed and evolving ILAE classifications, the National Institutes of Health/National Institute of Neurological Disorders and Stroke (NIH/NINDS) Common Data Elements, the International Classification of Diseases (ICD) systems and explicitly specifies all known relationships between epilepsy concepts in a proper framework. This will aid evidence-based epilepsy diagnosis, investigation, treatment and research for a diverse community of clinicians and researchers. Benefits range from systematization of electronic patient records to multimodal data repositories for research and training manuals for those involved in epilepsy care. Given the complexity, heterogeneity, and pace of research advances in the epilepsy domain, such an ontology must be collaboratively developed by key stakeholders in the epilepsy community and experts in knowledge engineering and computer science. PMID:22765502

  17. From Classification to Epilepsy Ontology and Informatics

    PubMed Central

    Zhang, Guo-Qiang; Sahoo, Satya S; Lhatoo, Samden D

    2012-01-01

    Summary The 2010 International League Against Epilepsy (ILAE) classification and terminology commission report proposed a much needed departure from previous classifications to incorporate advances in molecular biology, neuroimaging, and genetics. It proposed an interim classification and defined two key requirements that need to be satisfied. The first is the ability to classify epilepsy in dimensions according to a variety of purposes including clinical research, patient care, and drug discovery. The second is the ability of the classification system to evolve with new discoveries. Multi-dimensionality and flexibility are crucial to the success of any future classification. In addition, a successful classification system must play a central role in the rapidly growing field of epilepsy informatics. An epilepsy ontology, based on classification, will allow information systems to facilitate data-intensive studies and provide a proven route to meeting the two foregoing key requirements. Epilepsy ontology will be a structured terminology system that accommodates proposed and evolving ILAE classifications, the NIH/NINDS Common Data Elements, the ICD systems and explicitly specifies all known relationships between epilepsy concepts in a proper framework. This will aid evidence based epilepsy diagnosis, investigation, treatment and research for a diverse community of clinicians and researchers. Benefits range from systematization of electronic patient records to multi-modal data repositories for research and training manuals for those involved in epilepsy care. Given the complexity, heterogeneity and pace of research advances in the epilepsy domain, such an ontology must be collaboratively developed by key stakeholders in the epilepsy community and experts in knowledge engineering and computer science. PMID:22765502

  18. Applications of Ontologies in Knowledge Management Systems

    NASA Astrophysics Data System (ADS)

    Rehman, Zobia; Kifor, Claudiu V.

    2014-12-01

    Enterprises are realizing that their core asset in the 21st century is knowledge. In an organization, knowledge resides in databases, knowledge bases, filing cabinets and people's heads. Organizational knowledge is distributed in nature, and its poor management causes repetition of activities across the enterprise. To get true benefits from this asset, it is important for an organization to "know what they know". That is why many organizations are investing heavily in managing their knowledge. Artificial intelligence techniques have contributed substantially to organizational knowledge management. In this article we review the applications of ontologies in the knowledge management realm.

  19. An ontology design pattern for surface water features

    USGS Publications Warehouse

    Sinha, Gaurav; Mark, David; Kolas, Dave; Varanka, Dalia; Romero, Boleslo E.; Feng, Chen-Chieh; Usery, E. Lynn; Liebermann, Joshua; Sorokine, Alexandre

    2014-01-01

    Surface water is a primary concept of human experience, but such concepts are captured in cultures and languages in many different ways. Still, many commonalities exist due to the physical basis of many of the properties and categories. An abstract ontology of surface water features based only on those physical properties of landscape features has the best potential for serving as a foundational domain ontology for other, more context-dependent ontologies. The Surface Water ontology design pattern was developed both for domain knowledge distillation and to serve as a conceptual building block for more complex or specialized surface water ontologies. A fundamental distinction is made in this ontology between landscape features that act as containers (e.g., stream channels, basins) and the bodies of water (e.g., rivers, lakes) that occupy those containers. The semantics of concave (container) landforms are specified in a Dry module and those of contained bodies of water in a Wet module. The pattern is implemented in OWL, but Description Logic axioms and a detailed explanation are provided in this paper. The OWL ontology will be an important contribution to the Semantic Web vocabulary for annotating surface water feature datasets. Also provided is a discussion of why there is a need to complement the pattern with other ontologies, especially the previously developed Surface Network pattern. Finally, the practical value of the pattern in semantic querying of surface water datasets is illustrated through an annotated geospatial dataset and sample queries using the classes of the Surface Water pattern.
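
    Purely as an illustration of the container/occupant split described above (the class and property names here are invented and are not the published pattern), the distinction can be written as Description Logic axioms in LaTeX notation:

        \mathit{StreamChannel} \sqsubseteq \mathit{Container}
        \mathit{River} \sqsubseteq \mathit{WaterBody} \sqcap \exists\, \mathit{occupies}.\mathit{StreamChannel}
        \mathit{Container} \sqcap \mathit{WaterBody} \sqsubseteq \bot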

  20. The Rise of Ontologies or the Reinvention of Classification.

    ERIC Educational Resources Information Center

    Soergel, Dagobert

    1999-01-01

    Classifications/ontologies, thesauri, and dictionaries serve many functions, which are summarized in this article. As a result of this multiplicity of functions, classifications--often called ontologies--are developed in many communities of research and practice. Unfortunately, there is little communication and mutual learning; thus, efforts are…

  1. War of Ontology Worlds: Mathematics, Computer Code, or Esperanto?

    PubMed Central

    Rzhetsky, Andrey; Evans, James A.

    2011-01-01

    The use of structured knowledge representations—ontologies and terminologies—has become standard in biomedicine. Definitions of ontologies vary widely, as do the values and philosophies that underlie them. In seeking to make these views explicit, we conducted and summarized interviews with a dozen leading ontologists. Their views clustered into three broad perspectives that we summarize as mathematics, computer code, and Esperanto. Ontology as mathematics puts the ultimate premium on rigor and logic, symmetry and consistency of representation across scientific subfields, and the inclusion of only established, non-contradictory knowledge. Ontology as computer code focuses on utility and cultivates diversity, fitting ontologies to their purpose. Like computer languages C++, Prolog, and HTML, the code perspective holds that diverse applications warrant custom designed ontologies. Ontology as Esperanto focuses on facilitating cross-disciplinary communication, knowledge cross-referencing, and computation across datasets from diverse communities. We show how these views align with classical divides in science and suggest how a synthesis of their concerns could strengthen the next generation of biomedical ontologies. PMID:21980276

  2. Linking Assessment and Instruction Using Ontologies. CSE Technical Report 693

    ERIC Educational Resources Information Center

    Chung, Gregory K. W. K.; Delacruz, Girlie C.; Dionne, Gary B.; Bewley, William L.

    2006-01-01

    In this study we report on a test of a method that uses ontologies to individualize instruction by directly linking assessment results to the delivery of relevant content. Our sample was 2nd Lieutenants undergoing entry-level training on rifle marksmanship. Ontologies are explicit expressions of the concepts in a domain, the links among the…

  3. An ontology of astronomical object types for the Virtual Observatory

    NASA Astrophysics Data System (ADS)

    Derriere, Sebastian; Richard, André; Preite-Martinez, Andrea

    2007-08-01

    The Semantic Web and ontologies are emerging technologies that enable advanced knowledge management and sharing. Their application to astronomy can offer new ways of sharing information between astronomers, but also between machines or software components, and can allow inference engines to perform reasoning on an astronomical knowledge base. The first examples of astronomy-related ontologies are being developed in the European VOTech project.

  4. Developing a Diagnosis Aiding Ontology Based on Hysteroscopy Image Processing

    NASA Astrophysics Data System (ADS)

    Poulos, Marios; Korfiatis, Nikolaos

    In this paper we describe an ontology design process that introduces the steps and mechanisms required to create and develop an ontology able to represent and describe the contents and attributes of hysteroscopy images, as well as their relationships, thus providing useful ground for the development of tools that support medical diagnosis by physicians.

  5. Research on spatio-temporal ontology based on description logic

    NASA Astrophysics Data System (ADS)

    Huang, Yongqi; Ding, Zhimin; Zhao, Zhui; Ouyang, Fucheng

    2008-10-01

    DL, short for Description Logic, aims at a balance between expressive power and reasoning complexity. Users can adopt DL to write clear, formalized concept descriptions for a domain model, which gives the ontology description a well-defined syntax and semantics and helps to resolve the problem of ontology-based spatio-temporal reasoning. This paper first studies the basic theory of DL and the relationship between DL and OWL. By analyzing the spatio-temporal concepts and relationships of spatio-temporal GIS, the purpose of this paper is to adopt a DL-based ontology language to express a spatio-temporal ontology, and to employ a suitable ontology-building tool to build it. Existing spatio-temporal ontologies based on first-order predicate logic need to be transformed into spatio-temporal ontologies based on DL so as to make the best of existing research results. This paper also investigates the translation of relationships between DL and first-order predicate logic.

  6. Ontology alignment architecture for semantic sensor Web integration.

    PubMed

    Fernandez, Susel; Marsa-Maestre, Ivan; Velasco, Juan R; Alarcos, Bernardo

    2013-01-01

    Sensor networks are a concept that has become very popular in data acquisition and processing for multiple applications in different fields such as industry, medicine, home automation, environmental detection, etc. Today, with the proliferation of small communication devices with sensors that collect environmental data, semantic Web technologies are becoming closely related to sensor networks. The linking of elements from Semantic Web technologies with sensor networks has been called the Semantic Sensor Web and has among its main features the use of ontologies. One of the key challenges of using ontologies in sensor networks is to provide mechanisms to integrate and exchange knowledge from heterogeneous sources (that is, dealing with semantic heterogeneity). Ontology alignment is the process of bringing ontologies into mutual agreement by the automatic discovery of mappings between related concepts. This paper presents a system for ontology alignment in the Semantic Sensor Web which uses fuzzy logic techniques to combine similarity measures between entities of different ontologies. The proposed approach focuses on two key elements: the terminological similarity, which takes into account the linguistic and semantic information of the context of the entity's names, and the structural similarity, based on both the internal and relational structure of the concepts. This work has been validated using sensor network ontologies and the Ontology Alignment Evaluation Initiative (OAEI) tests. The results show that the proposed techniques outperform previous approaches in terms of precision and recall. PMID:24051523
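
    As a toy illustration of the kind of fuzzy combination described above (not the system's actual rule base), the Python sketch below merges a terminological and a structural similarity score with a probabilistic-sum t-conorm and accepts a candidate correspondence when the result clears a threshold; all values are hypothetical.

        def fuzzy_or(a, b):
            # Probabilistic sum, a common fuzzy OR (t-conorm) on [0, 1].
            return a + b - a * b

        def accept_mapping(term_sim, struct_sim, threshold=0.75):
            # Combine the two similarity measures and decide on the candidate.
            return fuzzy_or(term_sim, struct_sim) >= threshold

        print(accept_mapping(0.6, 0.5))   # 0.6 + 0.5 - 0.30 = 0.80 -> True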

  7. Ontology Alignment Architecture for Semantic Sensor Web Integration

    PubMed Central

    Fernandez, Susel; Marsa-Maestre, Ivan; Velasco, Juan R.; Alarcos, Bernardo

    2013-01-01

    Sensor networks are a concept that has become very popular in data acquisition and processing for multiple applications in different fields such as industry, medicine, home automation, environmental detection, etc. Today, with the proliferation of small communication devices with sensors that collect environmental data, semantic Web technologies are becoming closely related to sensor networks. The linking of elements from Semantic Web technologies with sensor networks has been called the Semantic Sensor Web and has among its main features the use of ontologies. One of the key challenges of using ontologies in sensor networks is to provide mechanisms to integrate and exchange knowledge from heterogeneous sources (that is, dealing with semantic heterogeneity). Ontology alignment is the process of bringing ontologies into mutual agreement by the automatic discovery of mappings between related concepts. This paper presents a system for ontology alignment in the Semantic Sensor Web which uses fuzzy logic techniques to combine similarity measures between entities of different ontologies. The proposed approach focuses on two key elements: the terminological similarity, which takes into account the linguistic and semantic information of the context of the entity's names, and the structural similarity, based on both the internal and relational structure of the concepts. This work has been validated using sensor network ontologies and the Ontology Alignment Evaluation Initiative (OAEI) tests. The results show that the proposed techniques outperform previous approaches in terms of precision and recall. PMID:24051523

  8. Disease Ontology: a backbone for disease semantic integration

    PubMed Central

    Schriml, Lynn Marie; Arze, Cesar; Nadendla, Suvarna; Chang, Yu-Wei Wayne; Mazaitis, Mark; Felix, Victor; Feng, Gang; Kibbe, Warren Alden

    2012-01-01

    The Disease Ontology (DO) database (http://disease-ontology.org) represents a comprehensive knowledge base of 8043 inherited, developmental and acquired human diseases (DO version 3, revision 2510). The DO web browser has been designed for speed, efficiency and robustness through the use of a graph database. Full-text contextual searching functionality using Lucene allows the querying of name, synonym, definition, DOID and cross-reference (xrefs) with complex Boolean search strings. The DO semantically integrates disease and medical vocabularies through extensive cross mapping and integration of MeSH, ICD, NCI's thesaurus, SNOMED CT and OMIM disease-specific terms and identifiers. The DO is utilized for disease annotation by major biomedical databases (e.g. Array Express, NIF, IEDB), as a standard representation of human disease in biomedical ontologies (e.g. IDO, Cell line ontology, NIFSTD ontology, Experimental Factor Ontology, Influenza Ontology), and as an ontological cross mappings resource between DO, MeSH and OMIM (e.g. GeneWiki). The DO project (http://diseaseontology.sf.net) has been incorporated into open source tools (e.g. Gene Answers, FunDO) to connect gene and disease biomedical data through the lens of human disease. The next iteration of the DO web browser will integrate DO's extended relations and logical definition representation along with these biomedical resource cross-mappings. PMID:22080554

  9. Automatic Background Knowledge Selection for Matching Biomedical Ontologies

    PubMed Central

    Faria, Daniel; Pesquita, Catia; Santos, Emanuel; Cruz, Isabel F.; Couto, Francisco M.

    2014-01-01

    Ontology matching is a growing field of research that is of critical importance for the semantic web initiative. The use of background knowledge for ontology matching is often a key factor for success, particularly in complex and lexically rich domains such as the life sciences. However, in most ontology matching systems, the background knowledge sources are either predefined by the system or have to be provided by the user. In this paper, we present a novel methodology for automatically selecting background knowledge sources for any given ontologies to match. This methodology measures the usefulness of each background knowledge source by assessing the fraction of classes mapped through it over those mapped directly, which we call the mapping gain. We implemented this methodology in the AgreementMakerLight ontology matching framework, and evaluate it using the benchmark biomedical ontology matching tasks from the Ontology Alignment Evaluation Initiative (OAEI) 2013. In each matching problem, our methodology consistently identified the sources of background knowledge that led to the highest improvements over the baseline alignment (i.e., without background knowledge). Furthermore, our proposed mapping gain parameter is strongly correlated with the F-measure of the produced alignments, thus making it a good estimator for ontology matching techniques based on background knowledge. PMID:25379899
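
    Under one plausible reading of the definition above, the mapping gain of a background knowledge source compares the classes it allows to be mapped beyond the direct alignment with the classes mapped directly. The Python sketch below encodes that reading; the exact accounting in AgreementMakerLight may differ.

        def mapping_gain(mapped_with_background, mapped_directly):
            # Classes mapped only thanks to the background source, relative to
            # the classes already mapped by the direct (baseline) matcher.
            gained = mapped_with_background - mapped_directly
            if not mapped_directly:
                return float("inf")
            return len(gained) / len(mapped_directly)

        direct = {"C1", "C2", "C3", "C4"}
        with_bk = direct | {"C5", "C6"}
        print(mapping_gain(with_bk, direct))   # 2 / 4 = 0.5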

  10. Automatic background knowledge selection for matching biomedical ontologies.

    PubMed

    Faria, Daniel; Pesquita, Catia; Santos, Emanuel; Cruz, Isabel F; Couto, Francisco M

    2014-01-01

    Ontology matching is a growing field of research that is of critical importance for the semantic web initiative. The use of background knowledge for ontology matching is often a key factor for success, particularly in complex and lexically rich domains such as the life sciences. However, in most ontology matching systems, the background knowledge sources are either predefined by the system or have to be provided by the user. In this paper, we present a novel methodology for automatically selecting background knowledge sources for any given ontologies to match. This methodology measures the usefulness of each background knowledge source by assessing the fraction of classes mapped through it over those mapped directly, which we call the mapping gain. We implemented this methodology in the AgreementMakerLight ontology matching framework, and evaluate it using the benchmark biomedical ontology matching tasks from the Ontology Alignment Evaluation Initiative (OAEI) 2013. In each matching problem, our methodology consistently identified the sources of background knowledge that led to the highest improvements over the baseline alignment (i.e., without background knowledge). Furthermore, our proposed mapping gain parameter is strongly correlated with the F-measure of the produced alignments, thus making it a good estimator for ontology matching techniques based on background knowledge. PMID:25379899

  11. IDEF5 Ontology Description Capture Method: Concept Paper

    NASA Technical Reports Server (NTRS)

    Menzel, Christopher P.; Mayer, Richard J.

    1990-01-01

    The results of research towards an ontology capture method referred to as IDEF5 are presented. Viewed simply as the study of what exists in a domain, ontology is an activity that can be understood to be at work across the full range of human inquiry, prompted by the persistent effort to understand the world in which it has found itself - and which it has helped to shape. In the context of information management, ontology is the task of extracting the structure of a given engineering, manufacturing, business, or logistical domain and storing it in a usable representational medium. A key to effective integration is a system ontology that can be accessed and modified across domains and which captures common features of the overall system relevant to the goals of the disparate domains. If the focus is on information integration, then the strongest motivation for ontology comes from the need to support data sharing and function interoperability. In the correct architecture, an enterprise ontology base would allow the construction of an integrated environment in which legacy systems appear to be open-architecture integrated resources. If the focus is on system/software development, then support for the rapid acquisition of reliable systems is perhaps the strongest motivation for ontology. Finally, ontological analysis was demonstrated to be an effective first step in the construction of robust knowledge-based systems.

  12. Ontology Extraction Tools: An Empirical Study with Educators

    ERIC Educational Resources Information Center

    Hatala, M.; Gasevic, D.; Siadaty, M.; Jovanovic, J.; Torniai, C.

    2012-01-01

    Recent research in Technology-Enhanced Learning (TEL) demonstrated several important benefits that semantic technologies can bring to the TEL domain. An underlying assumption for most of these research efforts is the existence of a domain ontology. The second unspoken assumption follows that educators will build domain ontologies for their…

  13. Ontology-Based Annotation of Learning Object Content

    ERIC Educational Resources Information Center

    Gasevic, Dragan; Jovanovic, Jelena; Devedzic, Vladan

    2007-01-01

    The paper proposes a framework for building ontology-aware learning object (LO) content. Previously ontologies were exclusively employed for enriching LOs' metadata. Although such an approach is useful, as it improves retrieval of relevant LOs from LO repositories, it does not enable one to reuse components of a LO, nor to incorporate an explicit…

  14. ExO: An Ontology for Exposure Science

    EPA Science Inventory

    An ontology is a formal representation of knowledge within a domain and typically consists of classes, the properties of those classes, and the relationships between them. Ontologies are critically important for specifying data of interest in a consistent manner, thereby enablin...

  15. An Agent-Based Data Mining System for Ontology Evolution

    NASA Astrophysics Data System (ADS)

    Hadzic, Maja; Dillon, Darshan

    We have developed an evidence-based mental health ontological model that represents mental health in multiple dimensions. The ongoing addition of new mental health knowledge requires a continual update of the Mental Health Ontology. In this paper, we describe how the ontology evolution can be realized using a multi-agent system in combination with data mining algorithms. We use the TICSA methodology to design this multi-agent system, which is composed of four different types of agents: an Information agent, a Data Warehouse agent, Data Mining agents and an Ontology agent. We use UML 2.1 sequence diagrams to model the collaborative nature of the agents and a UML 2.1 composite structure diagram to model the structure of individual agents. The Mental Health Ontology has the potential to underpin various mental health research experiments of a collaborative nature which are greatly needed in times of increasing mental distress and illness.

  16. A broken symmetry ontology: Quantum mechanics as a broken symmetry

    SciTech Connect

    Buschmann, J.E.

    1988-01-01

    The author proposes a new broken symmetry ontology to be used to analyze the quantum domain. This ontology is motivated and grounded in a critical epistemological analysis, and an analysis of the basic role of symmetry in physics. Concurrently, he is led to consider nonheterogeneous systems, whose logical state space contains equivalence relations not associated with the causal relation. This allows him to find a generalized principle of symmetry and generalized symmetry-conservation formalisms. In particular, he clarifies the role of Noether's theorem in field theory. He shows how a broken symmetry ontology already operates in a description of the weak interactions. Finally, by showing how a broken symmetry ontology operates in the quantum domain, he accounts for the interpretational problem and the essential incompleteness of quantum mechanics. He proposes that the broken symmetry underlying this ontological domain is broken dilation invariance.

  17. Quality of Computationally Inferred Gene Ontology Annotations

    PubMed Central

    Škunca, Nives; Altenhoff, Adrian; Dessimoz, Christophe

    2012-01-01

    Gene Ontology (GO) has established itself as the undisputed standard for protein function annotation. Most annotations are inferred electronically, i.e. without individual curator supervision, but they are widely considered unreliable. At the same time, we crucially depend on those automated annotations, as most newly sequenced genomes are non-model organisms. Here, we introduce a methodology to systematically and quantitatively evaluate electronic annotations. By exploiting changes in successive releases of the UniProt Gene Ontology Annotation database, we assessed the quality of electronic annotations in terms of specificity, reliability, and coverage. Overall, we not only found that electronic annotations have significantly improved in recent years, but also that their reliability now rivals that of annotations inferred by curators when they use evidence other than experiments from primary literature. This work provides the means to identify the subset of electronic annotations that can be relied upon—an important outcome given that >98% of all annotations are inferred without direct curation. PMID:22693439

  18. Heidegger, ontological death, and the healing professions.

    PubMed

    Aho, Kevin A

    2016-03-01

    In Being and Time, Martin Heidegger introduces a unique interpretation of death as a kind of world-collapse or breakdown of meaning that strips away our ability to understand and make sense of who we are. This is an 'ontological death' in the sense that we cannot be anything because the intelligible world that we draw on to fashion our identities and sustain our sense of self has lost all significance. On this account, death is not only an event that we can physiologically live through; it can happen numerous times throughout the finite span of our lives. This paper draws on Arthur Frank's (At the will of the body: reflections on illness. Houghton, Boston, 1991) narrative of critical illness to concretize the experience of 'ontological death' and illuminate the unique challenges it poses for health care professionals. I turn to Heidegger's conception of 'resoluteness' (Entschlossenheit) to address these challenges, arguing for the need of health care professionals to help establish a discursive context whereby the critically ill can begin to meaningfully express and interpret their experience of self-loss in a way that acknowledges the structural vulnerability of their own identities and is flexible enough to let go of those that have lost their significance or viability. PMID:25845817

  19. Spatial cyberinfrastructures, ontologies, and the humanities

    PubMed Central

    Sieber, Renee E.; Wellen, Christopher C.; Jin, Yuan

    2011-01-01

    We report on research into building a cyberinfrastructure for Chinese biographical and geographic data. Our cyberinfrastructure contains (i) the McGill-Harvard-Yenching Library Ming Qing Women's Writings database (MQWW), the only online database on historical Chinese women's writings, (ii) the China Biographical Database, the authority for Chinese historical people, and (iii) the China Historical Geographical Information System, one of the first historical geographic information systems. Key to this integration is that linked databases retain separate identities as bases of knowledge, while they possess sufficient semantic interoperability to allow for multidatabase concepts and to support cross-database queries on an ad hoc basis. Computational ontologies create underlying semantics for database access. This paper focuses on the spatial component in a humanities cyberinfrastructure, which includes issues of conflicting data, heterogeneous data models, disambiguation, and geographic scale. First, we describe the methodology for integrating the databases. Then we detail the system architecture, which includes a tier of ontologies and schema. We describe the user interface and applications that allow for cross-database queries. For instance, users should be able to analyze the data, examine hypotheses on spatial and temporal relationships, and generate historical maps with datasets from MQWW for research, teaching, and publication on Chinese women writers, their familial relations, publishing venues, and the literary and social communities. Last, we discuss the social side of cyberinfrastructure development, as people are considered to be as critical as the technical components for its success. PMID:21444819

  20. Representing Kidney Development Using the Gene Ontology

    PubMed Central

    Alam-Faruque, Yasmin; Hill, David P.; Dimmer, Emily C.; Harris, Midori A.; Foulger, Rebecca E.; Tweedie, Susan; Attrill, Helen; Howe, Douglas G.; Thomas, Stephen Randall; Davidson, Duncan; Woolf, Adrian S.; Blake, Judith A.; Mungall, Christopher J.; O’Donovan, Claire; Apweiler, Rolf; Huntley, Rachael P.

    2014-01-01

    Gene Ontology (GO) provides dynamic controlled vocabularies to aid in the description of the functional biological attributes and subcellular locations of gene products from all taxonomic groups (www.geneontology.org). Here we describe collaboration between the renal biomedical research community and the GO Consortium to improve the quality and quantity of GO terms describing renal development. In the associated annotation activity, the new and revised terms were associated with gene products involved in renal development and function. This project resulted in a total of 522 GO terms being added to the ontology and the creation of approximately 9,600 kidney-related GO term associations to 940 UniProt Knowledgebase (UniProtKB) entries, covering 66 taxonomic groups. We demonstrate the impact of these improvements on the interpretation of GO term analyses performed on genes differentially expressed in kidney glomeruli affected by diabetic nephropathy. In summary, we have produced a resource that can be utilized in the interpretation of data from small- and large-scale experiments investigating molecular mechanisms of kidney function and development and thereby help towards alleviating renal disease. PMID:24941002

  1. Web information retrieval based on ontology

    NASA Astrophysics Data System (ADS)

    Zhang, Jian

    2013-03-01

    The purpose of Information Retrieval (IR) is to find a set of documents that are relevant to a specific information need of a user. The traditional Information Retrieval model commonly used in commercial search engines is based on keyword indexing and Boolean logic queries. One big drawback of traditional information retrieval is that it typically retrieves information without an explicitly defined domain of interest to the users, so that a lot of irrelevant information is returned to users, who are then burdened with picking useful answers out of these irrelevant results. In order to tackle this issue, many semantic web information retrieval models have been proposed recently. The main advantage of the Semantic Web is that it enhances search mechanisms with the use of ontologies. In this paper, we present our approach to personalizing a web search engine based on ontology. In addition, key techniques are also discussed in our paper. Compared to previous research, our work concentrates on semantic similarity and the whole process, including query submission and information annotation.

  2. Ontology-Based Analysis of Microarray Data.

    PubMed

    Giuseppe, Agapito; Milano, Marianna

    2016-01-01

    The importance of semantic-based methods and algorithms for the analysis and management of biological data is growing for two main reasons: from the biological side, the knowledge contained in ontologies is more and more accurate and complete; from the computational side, recent algorithms are making valuable use of such knowledge. Here we focus on semantic-based management and analysis of protein interaction networks, referring to all approaches for the analysis of protein-protein interaction data that use knowledge encoded in biological ontologies. Semantic approaches for studying high-throughput data have been largely used in the past to mine genomic and expression data. Recently, the emergence of network approaches for investigating molecular machineries has stimulated, in parallel, the introduction of semantic-based techniques for the analysis and management of network data. The application of these computational approaches to the study of microarray data can broaden their application scenario and simultaneously help the understanding of disease development and progression. PMID:25971913

  3. A Formal Ontology of Subcellular Neuroanatomy

    PubMed Central

    Larson, Stephen D.; Fong, Lisa L.; Gupta, Amarnath; Condit, Christopher; Bug, William J.; Martone, Maryann E.

    2007-01-01

    The complexity of the nervous system requires high-resolution microscopy to resolve the detailed 3D structure of nerve cells and supracellular domains. The analysis of such imaging data to extract cellular surfaces and cell components often requires the combination of expert human knowledge with carefully engineered software tools. In an effort to make better tools to assist humans in this endeavor, create a more accessible and permanent record of their data, and to aid the process of constructing complex and detailed computational models, we have created a core of formalized knowledge about the structure of the nervous system and have integrated that core into several software applications. In this paper, we describe the structure and content of a formal ontology whose scope is the subcellular anatomy of the nervous system (SAO), covering nerve cells, their parts, and interactions between these parts. Many applications of this ontology to image annotation, content-based retrieval of structural data, and integration of shared data across scales and researchers are also described. PMID:18974798

  4. Multicriteria analysis of ontologically represented information

    NASA Astrophysics Data System (ADS)

    Wasielewska, K.; Ganzha, M.; Paprzycki, M.; Bǎdicǎ, C.; Ivanovic, M.; Lirkov, I.

    2014-11-01

    Our current work concerns the development of a decision support system for the software selection problem. The main idea is to utilize expert knowledge to help the user in selecting the best software / method / computational resource to solve a computational problem. Obviously, this involves multicriterial decision making, and the key open question is which method to choose. The context of the work is provided by the Agents in Grid (AiG) project, where the software selection (and thus the multicriterial analysis) is to be realized when all information concerning the problem, the hardware and the software is ontologically represented. Initially, we considered the Analytic Hierarchy Process (AHP), which is well suited for hierarchical data structures (e.g., those formulated in terms of ontologies). However, due to its well-known shortcomings, we have decided to extend our search for the multicriterial analysis method best suited for the problem in question. In this paper we report the results of our search, which involved: (i) TOPSIS (Technique for Order Preference by Similarity to Ideal Solution), (ii) PROMETHEE, and (iii) GRIP (Generalized Regression with Intensities of Preference). We also briefly argue why other methods have not been considered as valuable candidates.
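
    For readers unfamiliar with TOPSIS, mentioned above, the following self-contained Python sketch shows the standard steps of the method on an invented decision matrix (alternatives in rows, benefit criteria in columns); it is illustrative only and not code from the AiG project.

        import numpy as np

        def topsis(matrix, weights):
            # Normalize each criterion column, apply the weights, then score each
            # alternative by its relative closeness to the ideal solution.
            m = np.asarray(matrix, dtype=float)
            v = m / np.sqrt((m ** 2).sum(axis=0)) * np.asarray(weights, dtype=float)
            ideal, anti = v.max(axis=0), v.min(axis=0)   # benefit criteria assumed
            d_best = np.sqrt(((v - ideal) ** 2).sum(axis=1))
            d_worst = np.sqrt(((v - anti) ** 2).sum(axis=1))
            return d_worst / (d_best + d_worst)          # higher is better

        # Three candidate software offers scored on speed, reliability, support.
        closeness = topsis([[250, 6, 7], [200, 7, 8], [300, 5, 6]], [0.5, 0.3, 0.2])
        print(closeness.argsort()[::-1])                 # ranking, best first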

  5. Discovering Diabetes Complications: an Ontology Based Model

    PubMed Central

    Daghistani, Tahani; Shammari, Riyad Al; Razzak, Muhammad Imran

    2015-01-01

    Background: Diabetes is a serious disease that has spread dramatically around the world. Every diabetes patient carries an average level of risk of experiencing complications. Taking advantage of recorded information to build an ontology as an information technology solution will help to predict which patients have an average risk level for a certain complication. It is also helpful for searching and presenting a patient's history with respect to different risk factors. Discovering diabetes complications could be useful for preventing or delaying them. Method: We designed an ontology-based model, using adult diabetes patients' data, to discover the rules linking diabetes with its complications as disease-to-disease relationships. Result: Various rules between different risk factors of diabetes patients and certain complications were generated. Furthermore, new complications (diseases) might be discovered as a new finding of this study, and discovering diabetes complications could be useful for preventing or delaying them. Conclusion: The system can identify patients who are suffering from certain risk factors, such as a high body mass index (obesity), and start a control and maintenance plan. PMID:26862251

  6. Ontology patterns for complex topographic feature types

    USGS Publications Warehouse

    Varanka, Dalia E.

    2011-01-01

    Complex feature types are defined as integrated relations between basic features for a shared meaning or concept. The shared semantic concept is difficult to define in commonly used geographic information systems (GIS) and remote sensing technologies. The role of spatial relations between complex feature parts was recognized in early GIS literature, but had limited representation in the feature or coverage data models of GIS. Spatial relations are more explicitly specified in semantic technology. In this paper, semantics for topographic feature ontology design patterns (ODPs) are developed as data models for the representation of complex features. In the context of topographic processes, component assemblages are supported by resource systems and are found on local landscapes. The topographic ontology is organized across six thematic modules that can account for basic feature types, resource systems, and landscape types. Types of complex feature attributes include location, generative processes, and physical description. Node/edge networks model standard spatial relations and relations specific to topographic science to represent complex features. To demonstrate these concepts, data from The National Map of the U.S. Geological Survey were converted and assembled into ODPs.

  7. Semi-automated ontology generation within OBO-Edit

    PubMed Central

    Wächter, Thomas; Schroeder, Michael

    2010-01-01

    Motivation: Ontologies and taxonomies have proven highly beneficial for biocuration. The Open Biomedical Ontology (OBO) Foundry alone lists over 90 ontologies mainly built with OBO-Edit. Creating and maintaining such ontologies is a labour-intensive, difficult, manual process. Automating parts of it is of great importance for the further development of ontologies and for biocuration. Results: We have developed the Dresden Ontology Generator for Directed Acyclic Graphs (DOG4DAG), a system which supports the creation and extension of OBO ontologies by semi-automatically generating terms, definitions and parent–child relations from text in PubMed, the web and PDF repositories. DOG4DAG is seamlessly integrated into OBO-Edit. It generates terms by identifying statistically significant noun phrases in text. For definitions and parent–child relations it employs pattern-based web searches. We systematically evaluate each generation step using manually validated benchmarks. The term generation leads to high-quality terms also found in manually created ontologies. Up to 78% of definitions are valid and up to 54% of child–ancestor relations can be retrieved. There is no other validated system that achieves comparable results. By combining the prediction of high-quality terms, definitions and parent–child relations with the ontology editor OBO-Edit we contribute a thoroughly validated tool for all OBO ontology engineers. Availability: DOG4DAG is available within OBO-Edit 2.1 at http://www.oboedit.org Contact: thomas.waechter@biotec.tu-dresden.de; Supplementary Information: Supplementary data are available at Bioinformatics online. PMID:20529942

  8. Understanding and using the meaning of statements in a bio-ontology: recasting the Gene Ontology in OWL.

    PubMed

    Aranguren, Mikel Egaña; Bechhofer, Sean; Lord, Phillip; Sattler, Ulrike; Stevens, Robert

    2007-01-01

    The bio-ontology community falls into two camps: first we have biology domain experts, who actually hold the knowledge we wish to capture in ontologies; second, we have ontology specialists, who hold knowledge about techniques and best practice on ontology development. In the bio-ontology domain, these two camps have often come into conflict, especially where pragmatism comes into conflict with perceived best practice. One of these areas is the insistence of computer scientists on a well-defined semantic basis for the Knowledge Representation language being used. In this article, we will first describe why this community is so insistent. Second, we will illustrate this by examining the semantics of the Web Ontology Language and the semantics placed on the Directed Acyclic Graph as used by the Gene Ontology. Finally we will reconcile the two representations, including the broader Open Biomedical Ontologies format. The ability to exchange between the two representations means that we can capitalise on the features of both languages. Such utility can only arise by the understanding of the semantics of the languages being used. By this illustration of the usefulness of a clear, well-defined language semantics, we wish to promote a wider understanding of the computer science perspective amongst potential users within the biological community. PMID:17311682

  9. Understanding and using the meaning of statements in a bio-ontology: recasting the Gene Ontology in OWL

    PubMed Central

    Aranguren, Mikel Egaña; Bechhofer, Sean; Lord, Phillip; Sattler, Ulrike; Stevens, Robert

    2007-01-01

    The bio-ontology community falls into two camps: first we have biology domain experts, who actually hold the knowledge we wish to capture in ontologies; second, we have ontology specialists, who hold knowledge about techniques and best practice on ontology development. In the bio-ontology domain, these two camps have often come into conflict, especially where pragmatism comes into conflict with perceived best practice. One of these areas is the insistence of computer scientists on a well-defined semantic basis for the Knowledge Representation language being used. In this article, we will first describe why this community is so insistent. Second, we will illustrate this by examining the semantics of the Web Ontology Language and the semantics placed on the Directed Acyclic Graph as used by the Gene Ontology. Finally we will reconcile the two representations, including the broader Open Biomedical Ontologies format. The ability to exchange between the two representations means that we can capitalise on the features of both languages. Such utility can only arise by the understanding of the semantics of the languages being used. By this illustration of the usefulness of a clear, well-defined language semantics, we wish to promote a wider understanding of the computer science perspective amongst potential users within the biological community. PMID:17311682

  10. Using analytic hierarchy process approach in ontological multicriterial decision making - Preliminary considerations

    NASA Astrophysics Data System (ADS)

    Wasielewska, K.; Ganzha, M.

    2012-10-01

    In this paper we consider combining ontologically demarcated information with Saaty's Analytic Hierarchy Process (AHP) [1] for the multicriterial assessment of offers during contract negotiations. The context for the proposal is provided by the Agents in Grid project (AiG; [2]), which aims at the development of an agent-based infrastructure for efficient resource management in the Grid. In the AiG project, software agents representing users can either (1) join a team and earn money, or (2) find a team to execute a job. Moreover, agents form teams, whose managers negotiate the terms of potential collaboration with clients and workers. Here, ontologically described contracts (Service Level Agreements) are the results of autonomous multiround negotiations. Therefore, given the relatively complex nature of the negotiated contracts, multicriterial assessment of proposals plays a crucial role. The AHP method is based on pairwise comparisons of criteria and relies on the judgement of a panel of experts. It measures how well an offer serves the objective of a decision maker. In this paper, we propose how the AHP method can be used to assess ontologically described contract proposals.
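
    The AHP priority-weight step described above is easy to illustrate. The following is a minimal NumPy sketch, not code from the AiG project: the three offer criteria, the pairwise comparison values and the two candidate offers are invented for illustration. Weights come from the principal eigenvector of the comparison matrix, and a consistency ratio checks the coherence of the expert judgements.

```python
import numpy as np

# Hypothetical pairwise comparison matrix for three offer criteria
# (e.g., price, CPU speed, availability); entry [i, j] says how much
# more important criterion i is than criterion j on Saaty's 1-9 scale.
A = np.array([
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 2.0],
    [1/5, 1/2, 1.0],
])

# Priority weights are taken from the principal eigenvector of A.
eigenvalues, eigenvectors = np.linalg.eig(A)
principal = np.argmax(eigenvalues.real)
weights = np.abs(eigenvectors[:, principal].real)
weights /= weights.sum()

# Consistency ratio checks whether the judgements are coherent
# (Saaty suggests CR < 0.1); 0.58 is the random index for n = 3.
n = A.shape[0]
lambda_max = eigenvalues.real[principal]
ci = (lambda_max - n) / (n - 1)
cr = ci / 0.58

print("criterion weights:", weights.round(3))
print("consistency ratio:", round(cr, 3))

# Score each offer as the weighted sum of its normalised criterion values.
offers = np.array([
    [0.2, 0.9, 0.7],   # offer 1
    [0.8, 0.4, 0.6],   # offer 2
])
print("offer scores:", offers @ weights)
```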

  11. Constructing a Geology Ontology Using a Relational Database

    NASA Astrophysics Data System (ADS)

    Hou, W.; Yang, L.; Yin, S.; Ye, J.; Clarke, K.

    2013-12-01

    In the geology community, the creation of a common geology ontology has become a useful means to solve problems of data integration, knowledge transformation and the interoperation of multi-source, heterogeneous and multiple-scale geological data. Currently, human-computer interaction methods and relational database-based methods are the primary ontology construction methods. Some human-computer interaction methods, such as the Geo-rule based method, the ontology life cycle method and the module design method, have been proposed for applied geological ontologies. Essentially, the relational database-based method is a reverse engineering of abstracted semantic information from an existing database. The key is to construct rules for the transformation of database entities into the ontology. Relative to the human-computer interaction method, relational database-based methods can use existing resources and the stated semantic relationships among geological entities. However, two problems challenge their development and application. One is the transformation of multiple inheritances and nested relationships and their representation in an ontology. The other is that most of these methods do not measure the semantic retention of the transformation process. In this study, we focused on constructing a rule set to convert the semantics in a geological database into a geological ontology. According to the relational schema of a geological database, a conversion approach is presented to convert a geological spatial database to an OWL-based geological ontology, which is based on identifying semantics such as entities, relationships, inheritance relationships, nested relationships and cluster relationships. The semantic integrity of the transformation was verified using an inverse mapping process. In a geological ontology, inheritance and union operations between superclasses and subclasses were used to represent the nested relationships in a geochronology and the multiple inheritances
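
    To make the kind of table-to-ontology rules discussed above concrete, here is a small, hypothetical sketch using the rdflib library. The geological tables, columns and foreign key (StratigraphicUnit, Fault, cutBy) are invented, and the rule set is a simplified stand-in for the one developed in the study, not a reproduction of it.

```python
from rdflib import Graph, Namespace
from rdflib.namespace import RDF, RDFS, OWL

# Hypothetical target namespace; the study's actual schema is not reproduced here.
GEO = Namespace("http://example.org/geology#")

g = Graph()
g.bind("geo", GEO)

# Rule 1: each table becomes an OWL class.
tables = {"StratigraphicUnit": ["name", "age_ma"], "Fault": ["name", "dip_deg"]}
for table, columns in tables.items():
    cls = GEO[table]
    g.add((cls, RDF.type, OWL.Class))
    # Rule 2: each column becomes a datatype property with the class as domain.
    for col in columns:
        prop = GEO[f"{table}_{col}"]
        g.add((prop, RDF.type, OWL.DatatypeProperty))
        g.add((prop, RDFS.domain, cls))

# Rule 3: a foreign key becomes an object property linking the two classes.
fk = GEO["cutBy"]  # e.g., StratigraphicUnit.fault_id -> Fault
g.add((fk, RDF.type, OWL.ObjectProperty))
g.add((fk, RDFS.domain, GEO.StratigraphicUnit))
g.add((fk, RDFS.range, GEO.Fault))

# Rule 4: a type/subtype relation recorded in the database becomes rdfs:subClassOf.
g.add((GEO.Formation, RDF.type, OWL.Class))
g.add((GEO.Formation, RDFS.subClassOf, GEO.StratigraphicUnit))

print(g.serialize(format="turtle"))
```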

  12. Ion Channel ElectroPhysiology Ontology (ICEPO) – a case study of text mining assisted ontology development

    PubMed Central

    Elayavilli, Ravikumar Komandur; Liu, Hongfang

    2016-01-01

    Background Computational modeling of biological cascades is of great interest to quantitative biologists, and biomedical text has been a rich source of quantitative information. Gathering quantitative parameters and values from biomedical text is a significant challenge in the early steps of computational modeling, as it involves huge manual effort. While automatically extracting such quantitative information from biomedical text may offer some relief, the lack of an ontological representation for a subdomain is an impediment to normalizing textual extractions to a standard representation, which may render them less meaningful to domain experts. Methods In this work, we propose a rule-based approach to automatically extract relations involving quantitative data from biomedical text describing ion channel electrophysiology. We further translated the quantitative assertions extracted through text mining into a formal representation that may help in constructing an ontology for ion channel events, again using a rule-based approach. We have developed the Ion Channel ElectroPhysiology Ontology (ICEPO) by integrating the information represented in closely related ontologies, such as the Cell Physiology Ontology (CPO) and the Cardiac Electro Physiology Ontology (CPEO), with the knowledge provided by domain experts. Results The rule-based system achieved an overall F-measure of 68.93% in extracting the quantitative data assertions on an independently annotated blind data set. We further made an initial attempt at formalizing the quantitative data assertions extracted from the biomedical text into a formal representation that offers the potential to facilitate the integration of text mining into the ontological workflow, a novel aspect of this study. Conclusions This work is a case study where we created a platform that provides formal interaction between ontology development and text mining. We have achieved partial success in extracting quantitative assertions from the biomedical text
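
    As a rough illustration of what a rule in such a system might look like, the following sketch uses a single regular-expression rule to pull (parameter, value, unit) triples out of a sentence. The parameter names, units and example sentence are hypothetical and far simpler than ICEPO's actual rule set.

```python
import re

# A tiny, hypothetical rule: capture "<value> <unit>" pairs attached to a
# named electrophysiological parameter.
PATTERN = re.compile(
    r"(?P<param>half-activation voltage|time constant|conductance)\s+"
    r"(?:of|was|is|=)?\s*"
    r"(?P<value>[-+]?\d+(?:\.\d+)?)\s*"
    r"(?P<unit>mV|ms|pS|nS)",
    re.IGNORECASE,
)

def extract_quantities(sentence: str):
    """Return (parameter, value, unit) assertions found in one sentence."""
    return [(m.group("param"), float(m.group("value")), m.group("unit"))
            for m in PATTERN.finditer(sentence)]

text = ("The half-activation voltage was -32.5 mV, "
        "and the deactivation time constant was 4.2 ms.")
print(extract_quantities(text))
# [('half-activation voltage', -32.5, 'mV'), ('time constant', 4.2, 'ms')]
```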

  13. Semantics in support of biodiversity knowledge discovery: an introduction to the biological collections ontology and related ontologies.

    PubMed

    Walls, Ramona L; Deck, John; Guralnick, Robert; Baskauf, Steve; Beaman, Reed; Blum, Stanley; Bowers, Shawn; Buttigieg, Pier Luigi; Davies, Neil; Endresen, Dag; Gandolfo, Maria Alejandra; Hanner, Robert; Janning, Alyssa; Krishtalka, Leonard; Matsunaga, Andréa; Midford, Peter; Morrison, Norman; Ó Tuama, Éamonn; Schildhauer, Mark; Smith, Barry; Stucky, Brian J; Thomer, Andrea; Wieczorek, John; Whitacre, Jamie; Wooley, John

    2014-01-01

    The study of biodiversity spans many disciplines and includes data pertaining to species distributions and abundances, genetic sequences, trait measurements, and ecological niches, complemented by information on collection and measurement protocols. A review of the current landscape of metadata standards and ontologies in biodiversity science suggests that existing standards such as the Darwin Core terminology are inadequate for describing biodiversity data in a semantically meaningful and computationally useful way. Existing ontologies, such as the Gene Ontology and others in the Open Biological and Biomedical Ontologies (OBO) Foundry library, provide a semantic structure but lack many of the necessary terms to describe biodiversity data in all its dimensions. In this paper, we describe the motivation for and ongoing development of a new Biological Collections Ontology, the Environment Ontology, and the Population and Community Ontology. These ontologies share the aim of improving data aggregation and integration across the biodiversity domain and can be used to describe physical samples and sampling processes (for example, collection, extraction, and preservation techniques), as well as biodiversity observations that involve no physical sampling. Together they encompass studies of: 1) individual organisms, including voucher specimens from ecological studies and museum specimens, 2) bulk or environmental samples (e.g., gut contents, soil, water) that include DNA, other molecules, and potentially many organisms, especially microbes, and 3) survey-based ecological observations. We discuss how these ontologies can be applied to biodiversity use cases that span genetic, organismal, and ecosystem levels of organization. We argue that if adopted as a standard and rigorously applied and enriched by the biodiversity community, these ontologies would significantly reduce barriers to data discovery, integration, and exchange among biodiversity resources and researchers

  14. Semantics in Support of Biodiversity Knowledge Discovery: An Introduction to the Biological Collections Ontology and Related Ontologies

    PubMed Central

    Baskauf, Steve; Blum, Stanley; Bowers, Shawn; Davies, Neil; Endresen, Dag; Gandolfo, Maria Alejandra; Hanner, Robert; Janning, Alyssa; Krishtalka, Leonard; Matsunaga, Andréa; Midford, Peter; Tuama, Éamonn Ó.; Schildhauer, Mark; Smith, Barry; Stucky, Brian J.; Thomer, Andrea; Wieczorek, John; Whitacre, Jamie; Wooley, John

    2014-01-01

    The study of biodiversity spans many disciplines and includes data pertaining to species distributions and abundances, genetic sequences, trait measurements, and ecological niches, complemented by information on collection and measurement protocols. A review of the current landscape of metadata standards and ontologies in biodiversity science suggests that existing standards such as the Darwin Core terminology are inadequate for describing biodiversity data in a semantically meaningful and computationally useful way. Existing ontologies, such as the Gene Ontology and others in the Open Biological and Biomedical Ontologies (OBO) Foundry library, provide a semantic structure but lack many of the necessary terms to describe biodiversity data in all its dimensions. In this paper, we describe the motivation for and ongoing development of a new Biological Collections Ontology, the Environment Ontology, and the Population and Community Ontology. These ontologies share the aim of improving data aggregation and integration across the biodiversity domain and can be used to describe physical samples and sampling processes (for example, collection, extraction, and preservation techniques), as well as biodiversity observations that involve no physical sampling. Together they encompass studies of: 1) individual organisms, including voucher specimens from ecological studies and museum specimens, 2) bulk or environmental samples (e.g., gut contents, soil, water) that include DNA, other molecules, and potentially many organisms, especially microbes, and 3) survey-based ecological observations. We discuss how these ontologies can be applied to biodiversity use cases that span genetic, organismal, and ecosystem levels of organization. We argue that if adopted as a standard and rigorously applied and enriched by the biodiversity community, these ontologies would significantly reduce barriers to data discovery, integration, and exchange among biodiversity resources and researchers

  15. Representations of spacetime: Formalism and ontological commitment

    NASA Astrophysics Data System (ADS)

    Bain, Jonathan Stanley

    This dissertation consists of two parts. The first is on the relation between formalism and ontological commitment in the context of theories of spacetime, and the second is on scientific realism. The first part begins with a look at how the substantivalist/relationist debate over the ontological status of spacetime has been influenced by a particular mathematical formalism, that of tensor analysis on differential manifolds (TADM). This formalism has motivated the substantivalist position known as manifold substantivalism. Chapter 1 focuses on the hole argument which maintains that manifold substantivalism is incompatible with determinism. I claim that the realist motivations underlying manifold substantivalism can be upheld, and the hole argument avoided, by adopting structural realism with respect to spacetime. In this context, this is the claim that it is the structure that spacetime points enter into that warrants belief and not the points themselves. In Chapter 2, an elimination principle is defined by means of which a distinction can be made between surplus structure and essential structure with respect to formulations of a theory in two distinct mathematical formulations and some prior ontological commitments. This principle is then used to demonstrate that manifold points may be considered surplus structure in the formulation of field theories. This suggests that, if we are disposed to read field theories literally, then, at most, it should be the essential structure common to all alternative formulations of such theories that should be taken literally. I also investigate how the adoption of alternative formalisms informs other issues in the philosophy of spacetime. Chapter 3 offers a realist position which takes a semantic moral from the preceding investigation and an epistemic moral from work done on reliability. The semantic moral advises us to read only the essential structure of our theories literally. The epistemic moral shows us that such structure

  16. Real-time look-up table-based color correction for still image stabilization of digital cameras without using frame memory

    NASA Astrophysics Data System (ADS)

    Luo, Lin-Bo; An, Sang-Woo; Wang, Chang-Shuai; Li, Ying-Chun; Chong, Jong-Wha

    2012-09-01

    Digital cameras usually decrease exposure time to capture motion-blur-free images. However, this operation will generate an under-exposed image with a low-budget complementary metal-oxide semiconductor image sensor (CIS). Conventional color correction algorithms can efficiently correct under-exposed images; however, they are generally not performed in real time and need at least one frame memory if they are implemented by hardware. The authors propose a real-time look-up table-based color correction method that corrects under-exposed images with hardware without using frame memory. The method utilizes histogram matching of two preview images, which are exposed for a long and short time, respectively, to construct an improved look-up table (ILUT) and then corrects the captured under-exposed image in real time. Because the ILUT is calculated in real time before processing the captured image, this method does not require frame memory to buffer image data, and therefore can greatly save the cost of CIS. This method not only supports single image capture, but also bracketing to capture three images at a time. The proposed method was implemented by hardware description language and verified by a field-programmable gate array with a 5 M CIS. Simulations show that the system can perform in real time with a low cost and can correct the color of under-exposed images well.
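
    The histogram-matching step at the heart of the improved look-up table (ILUT) can be sketched in software as follows. This is a purely illustrative Python/NumPy version: the paper's contribution is a frame-memory-free hardware implementation, which this sketch does not reproduce, and the function names and synthetic preview images are invented for the example.

```python
import numpy as np

def build_ilut(short_exposure: np.ndarray, long_exposure: np.ndarray) -> np.ndarray:
    """Build a per-channel 256-entry LUT mapping the tone distribution of the
    short (under-exposed) preview onto that of the long-exposure preview,
    via classic histogram matching (CDF matching)."""
    lut = np.zeros((3, 256), dtype=np.uint8)
    for c in range(3):
        src_hist = np.bincount(short_exposure[..., c].ravel(), minlength=256)
        ref_hist = np.bincount(long_exposure[..., c].ravel(), minlength=256)
        src_cdf = np.cumsum(src_hist) / src_hist.sum()
        ref_cdf = np.cumsum(ref_hist) / ref_hist.sum()
        # For each source level, find the reference level with the closest CDF value.
        lut[c] = np.searchsorted(ref_cdf, src_cdf, side="left").clip(0, 255)
    return lut

def apply_ilut(image: np.ndarray, lut: np.ndarray) -> np.ndarray:
    """Correct a captured under-exposed image with the precomputed LUT."""
    out = np.empty_like(image)
    for c in range(3):
        out[..., c] = lut[c][image[..., c]]
    return out

# Synthetic example: an under-exposed frame brightened toward the long exposure.
rng = np.random.default_rng(0)
long_prev = rng.integers(0, 256, size=(120, 160, 3), dtype=np.uint8)
short_prev = (long_prev * 0.4).astype(np.uint8)   # simulated under-exposure
captured = (long_prev * 0.4).astype(np.uint8)     # the actual under-exposed shot
corrected = apply_ilut(captured, build_ilut(short_prev, long_prev))
```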

  17. Self-Supervised Chinese Ontology Learning from Online Encyclopedias

    PubMed Central

    Shao, Zhiqing; Ruan, Tong

    2014-01-01

    Constructing an ontology manually is a time-consuming, error-prone, and tedious task. We present SSCO, a self-supervised learning based Chinese ontology, which contains about 255 thousand concepts, 5 million entities, and 40 million facts. We explore the three largest online Chinese encyclopedias for ontology learning and describe how to transfer the structured knowledge in encyclopedias, including article titles, category labels, redirection pages, taxonomy systems, and InfoBox modules, into ontological form. In order to avoid the errors in encyclopedias and enrich the learnt ontology, we also apply some machine learning based methods. First, we show statistically and experimentally that the self-supervised machine learning method is practicable for Chinese relation extraction (at least for synonymy and hyponymy) and train some self-supervised models (SVMs and CRFs) for synonymy extraction, concept-subconcept relation extraction, and concept-instance relation extraction; the advantage of our methods is that all training examples are automatically generated from the structural information of the encyclopedias and a few general heuristic rules. Finally, we evaluate SSCO in two aspects, scale and precision; manual evaluation results show that the ontology has excellent precision, and high coverage is concluded by comparing SSCO with other famous ontologies and knowledge bases; the experiment results also indicate that the self-supervised models obviously enrich SSCO. PMID:24715819

  18. Concept Learning for Achieving Personalized Ontologies: An Active Learning Approach

    NASA Astrophysics Data System (ADS)

    Şensoy, Murat; Yolum, Pinar

    In many multiagent approaches, it is usual to assume the existence of a common ontology among agents. However, in dynamic systems, the existence of such an ontology is unrealistic and its maintenance is cumbersome. The burden of maintaining a common ontology can be alleviated by enabling agents to evolve their ontologies individually. However, with different ontologies, agents are likely to run into communication problems since their vocabularies differ from each other. Therefore, to achieve personalized ontologies, agents must have a means to understand the concepts used by others. Consequently, this paper proposes an approach that enables agents to teach each other concepts from their ontologies using examples. Unlike other concept learning approaches, our approach enables the learner to elicit the most informative examples interactively from the teacher. Hence, the learner participates in the learning process actively. We empirically compare the proposed approach with previous concept learning approaches. Our experiments show that, using the proposed approach, agents can learn new concepts successfully and with fewer examples.
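
    The benefit of letting the learner choose its own examples shows up even in a toy setting. The sketch below is a hypothetical, single-attribute illustration (a hidden price threshold standing in for a concept), not the paper's multiagent protocol: by always querying the most informative example, the learner needs only logarithmically many labelled examples from the teacher.

```python
# Toy active concept learning: the teacher holds a hidden threshold concept
# ("items cheaper than T belong to the concept"); the learner narrows its
# version space (lo, hi] by always asking about the most informative price.
TEACHER_THRESHOLD = 730          # hidden from the learner

def teacher_label(price: int) -> bool:
    return price < TEACHER_THRESHOLD

lo, hi = 0, 2000                 # learner's initial version space for the threshold
queries = 0
while hi - lo > 1:
    probe = (lo + hi) // 2       # most informative example: halves the uncertainty
    queries += 1
    if teacher_label(probe):
        lo = probe               # threshold must lie above this positive example
    else:
        hi = probe               # threshold must lie at or below this negative example

print(f"learned threshold in ({lo}, {hi}] after {queries} queries")
```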

  19. Semi Automatic Ontology Instantiation in the domain of Risk Management

    NASA Astrophysics Data System (ADS)

    Makki, Jawad; Alquier, Anne-Marie; Prince, Violaine

    One of the challenging tasks in the context of Ontological Engineering is to automatically or semi-automatically support the process of Ontology Learning and Ontology Population from semi-structured documents (texts). In this paper we describe a Semi-Automatic Ontology Instantiation method from natural language text in the domain of Risk Management. This method is composed of three steps: 1) annotation with part-of-speech tags, 2) semantic relation instance extraction, and 3) the ontology instantiation process. It is based on combined NLP techniques, with human intervention between steps 2 and 3 for control and validation. Since it relies heavily on linguistic rather than domain knowledge, it is not domain dependent, which favours portability between the different fields of risk management application. The proposed methodology uses the ontology of the PRIMA project (supported by the European Community) as a generic domain ontology and populates it via an available corpus. A first validation of the approach is done through an experiment with Chemical Fact Sheets from the Environmental Protection Agency.

  20. Self-supervised Chinese ontology learning from online encyclopedias.

    PubMed

    Hu, Fanghuai; Shao, Zhiqing; Ruan, Tong

    2014-01-01

    Constructing an ontology manually is a time-consuming, error-prone, and tedious task. We present SSCO, a self-supervised learning based Chinese ontology, which contains about 255 thousand concepts, 5 million entities, and 40 million facts. We explore the three largest online Chinese encyclopedias for ontology learning and describe how to transfer the structured knowledge in encyclopedias, including article titles, category labels, redirection pages, taxonomy systems, and InfoBox modules, into ontological form. In order to avoid the errors in encyclopedias and enrich the learnt ontology, we also apply some machine learning based methods. First, we show statistically and experimentally that the self-supervised machine learning method is practicable for Chinese relation extraction (at least for synonymy and hyponymy) and train some self-supervised models (SVMs and CRFs) for synonymy extraction, concept-subconcept relation extraction, and concept-instance relation extraction; the advantage of our methods is that all training examples are automatically generated from the structural information of the encyclopedias and a few general heuristic rules. Finally, we evaluate SSCO in two aspects, scale and precision; manual evaluation results show that the ontology has excellent precision, and high coverage is concluded by comparing SSCO with other famous ontologies and knowledge bases; the experiment results also indicate that the self-supervised models obviously enrich SSCO. PMID:24715819

  1. Inexact Matching of Ontology Graphs Using Expectation-Maximization

    PubMed Central

    Doshi, Prashant; Kolli, Ravikanth; Thomas, Christopher

    2009-01-01

    We present a new method for mapping ontology schemas that address similar domains. The problem of ontology matching is crucial since we are witnessing a decentralized development and publication of ontological data. We formulate the problem of inferring a match between two ontologies as a maximum likelihood problem, and solve it using the technique of expectation-maximization (EM). Specifically, we adopt directed graphs as our model for ontology schemas and use a generalized version of EM to arrive at a map between the nodes of the graphs. We exploit the structural, lexical and instance similarity between the graphs, and differ from previous approaches in the way we utilize them to arrive at a possibly inexact match. Inexact matching is the process of finding the best possible match between two graphs when exact matching is not possible or is computationally difficult. In order to scale the method to large ontologies, we identify the computational bottlenecks and adapt the generalized EM by using a memory-bounded partitioning scheme. We provide comparative experimental results in support of our method on two well-known ontology alignment benchmarks and discuss their implications. PMID:20160892
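
    The following toy sketch conveys only the general flavour of such iterative soft matching: match probabilities are initialised from lexical similarity and repeatedly blended with structural support from the nodes' parents. It is a hypothetical simplification written for this summary, not the authors' generalized EM with instance similarity and memory-bounded partitioning, and the two small vehicle taxonomies are invented.

```python
from difflib import SequenceMatcher
import numpy as np

def lexical_sim(a: str, b: str) -> float:
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def match_ontologies(nodes1, nodes2, edges1, edges2, iterations=10, alpha=0.5):
    """Iteratively refine a soft correspondence matrix P[i, j] ~ P(node i maps to j):
    start from lexical similarity and repeatedly mix in structural support from
    the (already likely) matches of the nodes' parents."""
    parents1 = {c: p for p, c in edges1}   # child -> parent (tree-shaped toy schemas)
    parents2 = {c: p for p, c in edges2}
    idx1 = {n: i for i, n in enumerate(nodes1)}
    idx2 = {n: j for j, n in enumerate(nodes2)}

    P = np.array([[lexical_sim(a, b) for b in nodes2] for a in nodes1]) + 1e-6
    P /= P.sum(axis=1, keepdims=True)

    for _ in range(iterations):
        structural = np.zeros_like(P)
        for a in nodes1:
            for b in nodes2:
                pa, pb = parents1.get(a), parents2.get(b)
                if pa is not None and pb is not None:
                    structural[idx1[a], idx2[b]] = P[idx1[pa], idx2[pb]]
        P = alpha * P + (1 - alpha) * structural      # blend lexical and structural evidence
        P /= P.sum(axis=1, keepdims=True) + 1e-12     # renormalise rows
    return {a: nodes2[int(np.argmax(P[idx1[a]]))] for a in nodes1}

nodes1 = ["Thing", "Vehicle", "Car", "Truck"]
edges1 = [("Thing", "Vehicle"), ("Vehicle", "Car"), ("Vehicle", "Truck")]
nodes2 = ["Entity", "Automobile", "Sedan", "Lorry"]
edges2 = [("Entity", "Automobile"), ("Automobile", "Sedan"), ("Automobile", "Lorry")]
print(match_ontologies(nodes1, nodes2, edges1, edges2))
```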

  2. A unified anatomy ontology of the vertebrate skeletal system.

    PubMed

    Dahdul, Wasila M; Balhoff, James P; Blackburn, David C; Diehl, Alexander D; Haendel, Melissa A; Hall, Brian K; Lapp, Hilmar; Lundberg, John G; Mungall, Christopher J; Ringwald, Martin; Segerdell, Erik; Van Slyke, Ceri E; Vickaryous, Matthew K; Westerfield, Monte; Mabee, Paula M

    2012-01-01

    The skeleton is of fundamental importance in research in comparative vertebrate morphology, paleontology, biomechanics, developmental biology, and systematics. Motivated by research questions that require computational access to and comparative reasoning across the diverse skeletal phenotypes of vertebrates, we developed a module of anatomical concepts for the skeletal system, the Vertebrate Skeletal Anatomy Ontology (VSAO), to accommodate and unify the existing skeletal terminologies for the species-specific (mouse, the frog Xenopus, zebrafish) and multispecies (teleost, amphibian) vertebrate anatomy ontologies. Previous differences between these terminologies prevented even simple queries across databases pertaining to vertebrate morphology. This module of upper-level and specific skeletal terms currently includes 223 defined terms and 179 synonyms that integrate skeletal cells, tissues, biological processes, organs (skeletal elements such as bones and cartilages), and subdivisions of the skeletal system. The VSAO is designed to integrate with other ontologies, including the Common Anatomy Reference Ontology (CARO), Gene Ontology (GO), Uberon, and Cell Ontology (CL), and it is freely available to the community to be updated with additional terms required for research. Its structure accommodates anatomical variation among vertebrate species in development, structure, and composition. Annotation of diverse vertebrate phenotypes with this ontology will enable novel inquiries across the full spectrum of phenotypic diversity. PMID:23251424

  3. A Unified Anatomy Ontology of the Vertebrate Skeletal System

    PubMed Central

    Dahdul, Wasila M.; Balhoff, James P.; Blackburn, David C.; Diehl, Alexander D.; Haendel, Melissa A.; Hall, Brian K.; Lapp, Hilmar; Lundberg, John G.; Mungall, Christopher J.; Ringwald, Martin; Segerdell, Erik; Van Slyke, Ceri E.; Vickaryous, Matthew K.; Westerfield, Monte; Mabee, Paula M.

    2012-01-01

    The skeleton is of fundamental importance in research in comparative vertebrate morphology, paleontology, biomechanics, developmental biology, and systematics. Motivated by research questions that require computational access to and comparative reasoning across the diverse skeletal phenotypes of vertebrates, we developed a module of anatomical concepts for the skeletal system, the Vertebrate Skeletal Anatomy Ontology (VSAO), to accommodate and unify the existing skeletal terminologies for the species-specific (mouse, the frog Xenopus, zebrafish) and multispecies (teleost, amphibian) vertebrate anatomy ontologies. Previous differences between these terminologies prevented even simple queries across databases pertaining to vertebrate morphology. This module of upper-level and specific skeletal terms currently includes 223 defined terms and 179 synonyms that integrate skeletal cells, tissues, biological processes, organs (skeletal elements such as bones and cartilages), and subdivisions of the skeletal system. The VSAO is designed to integrate with other ontologies, including the Common Anatomy Reference Ontology (CARO), Gene Ontology (GO), Uberon, and Cell Ontology (CL), and it is freely available to the community to be updated with additional terms required for research. Its structure accommodates anatomical variation among vertebrate species in development, structure, and composition. Annotation of diverse vertebrate phenotypes with this ontology will enable novel inquiries across the full spectrum of phenotypic diversity. PMID:23251424

  4. The Domain Shared by Computational and Digital Ontology: A Phenomenological Exploration and Analysis

    ERIC Educational Resources Information Center

    Compton, Bradley Wendell

    2009-01-01

    The purpose of this dissertation is to explore and analyze a domain of research thought to be shared by two areas of philosophy: computational and digital ontology. Computational ontology is philosophy used to develop information systems also called computational ontologies. Digital ontology is philosophy dealing with our understanding of Being…

  5. Dovetailing biology and chemistry: integrating the Gene Ontology with the ChEBI chemical ontology

    PubMed Central

    2013-01-01

    Background The Gene Ontology (GO) facilitates the description of the action of gene products in a biological context. Many GO terms refer to chemical entities that participate in biological processes. To facilitate accurate and consistent systems-wide biological representation, it is necessary to integrate the chemical view of these entities with the biological view of GO functions and processes. We describe a collaborative effort between the GO and the Chemical Entities of Biological Interest (ChEBI) ontology developers to ensure that the representation of chemicals in the GO is both internally consistent and in alignment with the chemical expertise captured in ChEBI. Results We have examined and integrated the ChEBI structural hierarchy into the GO resource through computationally-assisted manual curation of both GO and ChEBI. Our work has resulted in the creation of computable definitions of GO terms that contain fully defined semantic relationships to corresponding chemical terms in ChEBI. Conclusions The set of logical definitions using both the GO and ChEBI has already been used to automate aspects of GO development and has the potential to allow the integration of data across the domains of biology and chemistry. These logical definitions are available as an extended version of the ontology from http://purl.obolibrary.org/obo/go/extensions/go-plus.owl. PMID:23895341

  6. The Application of Ontological Methods toward Coastal Restoration

    NASA Astrophysics Data System (ADS)

    Ramachandran, R.; Movva, S.; Hardin, D.

    2007-12-01

    At the fall 2006 AGU meeting the Information Technology and Systems Center at the University of Alabama in Huntsville debuted a tool for ontology based search and resource aggregation called Noesis. Since that time Noesis, with a new ontology for seagrass habitats in the Gulf of Mexico, has been utilized to support evaluations of potential seagrass restoration sites. The seagrass ontology was generated from a standard stressor conceptual model description for five species of seagrass common to the Northern Gulf of Mexico. Coupling the seagrass ontology with the existing atmospheric science ontology allowed scientists to locate and retrieve substantial information about the seagrass habitat as well as stressors that impact the habitat induced by climate change and short term atmospheric phenomena. A domain specific catalog of seagrass resources was constructed and an application ontology developed that mapped the keywords of the catalog to the combined (atmospheric and seagrass) ontologies of Noesis. Noesis uses domain ontologies to help the user scope the search queries to ensure that the search results are both accurate and complete. The domain ontologies guide the user to refine their search query and thereby reduce the user's burden of experimenting with different search strings. Semantics are captured by refining the query terms to cover synonyms, specializations, generalizations and related concepts. As a resource aggregator Noesis categorizes search results from different online resources such as education materials, publications, datasets, web search engines that might be of interest to the user. This presentation will give an overview of Noesis and describe how it has been applied to coastal restoration investigations.

  7. Exact score distribution computation for ontological similarity searches

    PubMed Central

    2011-01-01

    Background Semantic similarity searches in ontologies are an important component of many bioinformatic algorithms, e.g., finding functionally related proteins with the Gene Ontology or phenotypically similar diseases with the Human Phenotype Ontology (HPO). We have recently shown that the performance of semantic similarity searches can be improved by ranking results according to the probability of obtaining a given score at random rather than by the scores themselves. However, to date, there are no algorithms for computing the exact distribution of semantic similarity scores, which is necessary for computing the exact P-value of a given score. Results In this paper we consider the exact computation of score distributions for similarity searches in ontologies, and introduce a simple null hypothesis which can be used to compute a P-value for the statistical significance of similarity scores. We concentrate on measures based on Resnik's definition of ontological similarity. A new algorithm is proposed that collapses subgraphs of the ontology graph and thereby allows fast score distribution computation. The new algorithm is several orders of magnitude faster than the naive approach, as we demonstrate by computing score distributions for similarity searches in the HPO. It is shown that exact P-value calculation improves clinical diagnosis using the HPO compared to approaches based on sampling. Conclusions The new algorithm enables for the first time exact P-value calculation via exact score distribution computation for ontology similarity searches. The approach is applicable to any ontology for which the annotation-propagation rule holds and can improve any bioinformatic method that makes only use of the raw similarity scores. The algorithm was implemented in Java, supports any ontology in OBO format, and is available for non-commercial and academic usage under: https://compbio.charite.de/svn/hpo/trunk/src/tools/significance/ PMID:22078312
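
    For a tiny, invented ontology the null score distribution can be enumerated directly, which is enough to illustrate the Resnik measure and an exact P-value; the paper's algorithm is needed precisely because such brute-force enumeration does not scale to ontologies like the HPO. The DAG, annotation counts and term names below are hypothetical.

```python
import itertools
import math
from collections import defaultdict

# Toy DAG (child -> parents) and per-term annotation counts, invented for illustration.
parents = {"A": [], "B": ["A"], "C": ["A"], "D": ["B"], "E": ["B", "C"]}
direct_annotations = {"A": 0, "B": 1, "C": 2, "D": 3, "E": 4}

def ancestors(term):
    """The term plus all of its ancestors (annotation-propagation rule)."""
    seen, stack = set(), [term]
    while stack:
        t = stack.pop()
        if t not in seen:
            seen.add(t)
            stack.extend(parents[t])
    return seen

# Propagated counts: a term's count includes every annotation below it.
counts = defaultdict(int)
for term, n in direct_annotations.items():
    for anc in ancestors(term):
        counts[anc] += n
total = counts["A"]                                   # "A" is the root of the toy DAG

ic = {t: -math.log(counts[t] / total) for t in parents}   # information content

def resnik(t1, t2):
    """IC of the most informative common ancestor of t1 and t2."""
    common = ancestors(t1) & ancestors(t2)
    return max(ic[t] for t in common)

# Exact null score distribution: enumerate every ordered term pair (feasible only
# because the ontology is tiny).
scores = sorted(resnik(a, b) for a, b in itertools.product(parents, repeat=2))

def p_value(observed):
    return sum(1 for s in scores if s >= observed) / len(scores)

print("Resnik(D, E) =", round(resnik("D", "E"), 3),
      " P-value =", round(p_value(resnik("D", "E")), 3))
```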

  8. Fundamental physical theories: Mathematical structures grounded on a primitive ontology

    NASA Astrophysics Data System (ADS)

    Allori, Valia

    In my dissertation I analyze the structure of fundamental physical theories. I start with an analysis of what an adequate primitive ontology is, discussing the measurement problem in quantum mechanics and its solutions. It is commonly said that these theories have little in common. I argue instead that the moral of the measurement problem is that the wave function cannot represent physical objects, and that a common structure between these solutions can be recognized: each of them is about a clear three-dimensional primitive ontology that evolves according to a law determined by the wave function. The primitive ontology is what matter is made of, while the wave function tells the matter how to move. One might think that what is important in the notion of primitive ontology is its three-dimensionality. If so, in a theory like classical electrodynamics electromagnetic fields would be part of the primitive ontology. I argue that, reflecting on what the purpose of a fundamental physical theory is, namely to explain the behavior of objects in three-dimensional space, one can recognize that a fundamental physical theory has a particular architecture. If so, electromagnetic fields play a different role in the theory than the particles and therefore should be considered, like the wave function, as part of the law. Therefore, we can characterize the general structure of a fundamental physical theory as a mathematical structure grounded on a primitive ontology. I explore this idea to better understand theories like classical mechanics and relativity, emphasizing that primitive ontology is crucial in the process of building new theories, being fundamental in identifying the symmetries. Finally, I analyze what it means to explain the world around us in terms of the notion of primitive ontology in the case of regularities of statistical character. Here is where the notion of typicality comes into play: we have explained a phenomenon if the typical histories of the primitive

  9. A Personalist Ontological Approach to Synthetic Biology.

    PubMed

    Gómez-Tatay, Lucía; Hernández-Andreu, José Miguel; Aznar, Justo

    2016-07-01

    Although synthetic biology is a promising discipline, it also raises serious ethical questions that must be addressed in order to prevent unwanted consequences and to ensure that its progress leads toward the good of all. Questions arise about the role of this discipline in a possible redefinition of the concept of life and its creation. With regard to the products of synthetic biology, the moral status that they should be given as well as the ethically correct way to behave towards them are not clear. Moreover, risks that could result from a misuse of this technology or from an accidental release of synthetic organisms into the environment cannot be ignored; concerns about biosecurity and biosafety appear. Here we discuss these and other questions from a personalist ontological framework, which defends human life as an essential value and proposes a set of principles to ensure the safeguarding of this and other values that are based on it. PMID:26644292

  10. Using ontology for domain specific information retrieval

    NASA Astrophysics Data System (ADS)

    Shashirekha, H. L.; Murali, S.; Nagabhushan, P.

    2010-02-01

    This paper presents a system for retrieving information from a domain-specific document collection made up of data-rich, unnatural-language text documents. Instead of conventional keyword-based retrieval, our system makes use of a domain ontology to retrieve information from a collection of documents. The system addresses the problem of representing unnatural-language text documents and constructing a classifier model that helps in the efficient retrieval of relevant information. A query to this system may be either key phrases expressed as concepts or a domain-specific unnatural-language text document. The classifier used in this system can also be used to assign multiple labels to previously unseen text documents belonging to the same domain. An empirical evaluation of the system is conducted on the domain of text documents describing classified matrimonial advertisements to determine its performance.
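
    A minimal sketch of concept-based (rather than keyword-based) retrieval in this spirit is shown below. The miniature ontology, the surface forms and the two advertisement snippets are entirely hypothetical and only echo the matrimonial-advertisement domain mentioned above; the paper's classifier model is not reproduced here.

```python
# Hypothetical miniature domain ontology: concept -> surface forms found in ads.
ontology = {
    "Profession:Engineer": {"engineer", "b.e.", "b.tech", "software professional"},
    "Profession:Doctor": {"doctor", "mbbs", "physician"},
    "Diet:Vegetarian": {"vegetarian", "veg", "pure veg"},
}

documents = {
    "ad1": "wanted groom b.tech software professional pure veg family",
    "ad2": "mbbs doctor bride, wheatish complexion",
}

def concepts_in(text):
    """Map a data-rich, unnatural-language document to the ontology concepts it mentions."""
    text = text.lower()
    return {c for c, forms in ontology.items() if any(f in text for f in forms)}

index = {doc_id: concepts_in(text) for doc_id, text in documents.items()}

def retrieve(query):
    """Return documents sharing at least one concept with the query, best first."""
    q = concepts_in(query)
    ranked = sorted(((len(q & cs), doc_id) for doc_id, cs in index.items()), reverse=True)
    return [doc_id for overlap, doc_id in ranked if overlap > 0]

print(retrieve("looking for a vegetarian engineer"))   # -> ['ad1']
```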

  11. Misinterpretive phenomenology: Heidegger, ontology and nursing research.

    PubMed

    Paley, J

    1998-04-01

    This paper argues that Heidegger's phenomenology does not have the methodological implications usually ascribed to it in nursing literature. The Heidegger of Being and Time is not in any sense antagonistic to science, nor does he think that everydayness is more authentic, more genuine, than scientific enquiry or theoretical cognition. It is true that social science must rest on interpretive foundations, acknowledging the self-interpreting nature of human beings, but it does not follow from this that hermeneutics exhausts all the possibilities. Positivist approaches to social science are certainly inconsistent with Heidegger's ontology, but realist approaches are not and structuration theory, in particular, can be seen as a sociological translation of his ideas. Social enquiry in nursing is not therefore confined to studies of lived experience. Indeed, lived experience research constitutes not a realization, but rather a betrayal, of Heidegger's phenomenology, being thoroughly Cartesian in spirit. PMID:9578213

  12. Arthrogryposis as a Syndrome: Gene Ontology Analysis.

    PubMed

    Hall, Judith G; Kiefer, Jeff

    2016-07-01

    Arthrogryposis by definition has multiple congenital contractures. All types of arthrogryposis have decreased in utero fetal movement. Because so many things are involved in normal fetal movement, there are many causes and processes that can go awry. In this era of molecular genetics, we have tried to place the known mutated genes seen in genetic forms of arthrogryposis into biological processes or cellular functions as defined by gene ontology. We hope this leads to better identification of all interacting pathways and processes involved in the development of fetal movement in order to improve diagnosis of the genetic forms of arthrogryposis, to lead to the development of molecular therapies, and to help better define the natural history of various types of arthrogryposis. PMID:27587986

  13. Clustering of gene ontology terms in genomes.

    PubMed

    Tiirikka, Timo; Siermala, Markku; Vihinen, Mauno

    2014-10-25

    Although protein-coding genes occupy only a small fraction of genomes in higher species, they are not randomly distributed within or between chromosomes. Clustering of genes with related function(s) and/or characteristics has been evident at several different levels. To study how common the clustering of functionally related genes is, and in what kinds of functions the end products of these genes are involved, we collected gene ontology (GO) terms for complete genomes and developed a method to detect previously undefined gene clustering. Exhaustive analysis was performed for seven widely studied species ranging from human to Escherichia coli. To overcome problems related to varying gene lengths and densities, a novel method was developed in which a fixed number of genes was analyzed irrespective of the genome span covered. Statistically very significant GO term clustering was apparent in all the investigated genomes. The analysis window, which ranged from 5 to 50 consecutive genes, revealed extensive GO term clusters for genes with widely varying functions. Here, the most interesting and significant results are discussed and the complete dataset for each analyzed species is available at the GOme database at http://bioinf.uta.fi/GOme. The results indicated that clusters of genes with related functions are very common, not only in bacteria, in which operons are frequent, but also in all the studied species irrespective of how complex they are. There are some differences between species, but in all of them GO term clusters are common and of widely differing sizes. The presented method can be applied to analyze any genome or part of a genome for which descriptive features are available, and thus is not restricted to ontology terms. This method can also be applied to investigate gene and protein expression patterns. The results pave the way for further studies of mechanisms that shape genome structure and evolutionary forces related to them. PMID:24995610
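
    A fixed-gene-count sliding window of this kind is straightforward to sketch. The example below is hypothetical (ten invented genes, three GO terms, a window of five genes) and scores each window with a hypergeometric test from SciPy; it illustrates only the windowing idea, not the paper's full statistics.

```python
from scipy.stats import hypergeom

# Hypothetical ordered list of genes along a chromosome, each with its GO terms.
genes = [
    ("g1", {"GO:0006412"}), ("g2", {"GO:0006412"}), ("g3", {"GO:0006412"}),
    ("g4", {"GO:0008152"}), ("g5", {"GO:0006412"}), ("g6", {"GO:0008152"}),
    ("g7", {"GO:0005975"}), ("g8", {"GO:0008152"}), ("g9", {"GO:0005975"}),
    ("g10", {"GO:0006412"}),
]

WINDOW = 5  # fixed number of consecutive genes, independent of the genomic span

def window_enrichment(genes, term, window=WINDOW):
    """Hypergeometric P-value of the term's most enriched window of genes."""
    total = len(genes)
    term_total = sum(term in gos for _, gos in genes)
    best = 1.0
    for start in range(total - window + 1):
        hits = sum(term in gos for _, gos in genes[start:start + window])
        # P(X >= hits) when drawing `window` genes from `total` with `term_total` successes.
        p = hypergeom.sf(hits - 1, total, term_total, window)
        best = min(best, p)
    return best

print("GO:0006412 best window P =", round(window_enrichment(genes, "GO:0006412"), 4))
```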

  14. Computational algorithms to predict Gene Ontology annotations

    PubMed Central

    2015-01-01

    Background Gene function annotations, which are associations between a gene and a term of a controlled vocabulary describing gene functional features, are of paramount importance in modern biology. Datasets of these annotations, such as the ones provided by the Gene Ontology Consortium, are used to design novel biological experiments and interpret their results. Despite their importance, these sources of information have some known issues. They are incomplete, since biological knowledge is far from definitive and rapidly evolves, and some erroneous annotations may be present. Since the curation of novel annotations is a costly procedure, in both economic and time terms, computational tools that can reliably predict likely annotations, and thus quicken the discovery of new gene annotations, are very useful. Methods We used a set of computational algorithms and weighting schemes to infer novel gene annotations from a set of known ones. We used the latent semantic analysis approach, implementing two popular algorithms (Latent Semantic Indexing and Probabilistic Latent Semantic Analysis), and propose a novel method, the Semantic IMproved Latent Semantic Analysis, which adds a clustering step on the set of considered genes. Furthermore, we propose improving these algorithms by weighting the annotations in the input set. Results We tested our methods and their weighted variants on the Gene Ontology annotation sets of the genes of three model organisms (Bos taurus, Danio rerio and Drosophila melanogaster). The methods showed their ability to predict novel gene annotations, and the weighting procedures proved to lead to a valuable improvement, although the obtained results vary according to the dimension of the input annotation set and the considered algorithm. Conclusions Out of the three considered methods, the Semantic IMproved Latent Semantic Analysis is the one that provides better results. In particular, when coupled with a proper
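
    The basic Latent Semantic Indexing step underlying these methods can be sketched with a truncated SVD, as below. The gene x term matrix, gene names and GO term identifiers are invented, and the sketch omits the weighting schemes, the probabilistic variant and the clustering step that distinguish the paper's methods.

```python
import numpy as np

# Hypothetical binary gene x GO-term annotation matrix (1 = known annotation).
genes = ["geneA", "geneB", "geneC", "geneD"]
terms = ["GO:1", "GO:2", "GO:3", "GO:4", "GO:5"]
A = np.array([
    [1, 1, 0, 0, 1],
    [1, 1, 0, 0, 0],
    [0, 0, 1, 1, 0],
    [0, 0, 1, 1, 1],
], dtype=float)

# Latent Semantic Indexing: a truncated SVD keeps the k strongest latent factors;
# the low-rank reconstruction assigns non-zero scores to plausible missing annotations.
k = 2
U, s, Vt = np.linalg.svd(A, full_matrices=False)
A_hat = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

# Candidate novel annotations: highest-scoring cells that are 0 in the input matrix.
candidates = [(genes[i], terms[j], round(A_hat[i, j], 3))
              for i in range(A.shape[0]) for j in range(A.shape[1]) if A[i, j] == 0]
candidates.sort(key=lambda x: -x[2])
print(candidates[:3])
```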

  15. Controlled vocabularies and ontologies in proteomics: overview, principles and practice.

    PubMed

    Mayer, Gerhard; Jones, Andrew R; Binz, Pierre-Alain; Deutsch, Eric W; Orchard, Sandra; Montecchi-Palazzi, Luisa; Vizcaíno, Juan Antonio; Hermjakob, Henning; Oveillero, David; Julian, Randall; Stephan, Christian; Meyer, Helmut E; Eisenacher, Martin

    2014-01-01

    This paper focuses on the use of controlled vocabularies (CVs) and ontologies especially in the area of proteomics, primarily related to the work of the Proteomics Standards Initiative (PSI). It describes the relevant proteomics standard formats and the ontologies used within them. Software and tools for working with these ontology files are also discussed. The article also examines the "mapping files" used to ensure correct controlled vocabulary terms that are placed within PSI standards and the fulfillment of the MIAPE (Minimum Information about a Proteomics Experiment) requirements. This article is part of a Special Issue entitled: Computational Proteomics in the Post-Identification Era. Guest Editors: Martin Eisenacher and Christian Stephan. PMID:23429179

  16. On Constructing, Grouping and Using Topical Ontology for Semantic Matching

    NASA Astrophysics Data System (ADS)

    Tang, Yan; de Baer, Peter; Zhao, Gang; Meersman, Robert

    An ontology topic is used to group concepts from different contexts (or even from different domain ontologies). This paper presents a pattern-driven modeling methodology for constructing and grouping topics in an ontology (the PAD-ON methodology), which is used for matching similarities between competences in the human resource management (HRM) domain. The methodology is supported by a tool called PAD-ON. This paper reports our recent achievements in the EC Prolix project. The approach is applied to the training processes at British Telecom as a test bed.

  17. Simplified ontologies allowing comparison of developmental mammalian gene expression

    PubMed Central

    Kruger, Adele; Hofmann, Oliver; Carninci, Piero; Hayashizaki, Yoshihide; Hide, Winston

    2007-01-01

    Model organisms represent an important resource for understanding the fundamental aspects of mammalian biology. Mapping of biological phenomena between model organisms is complex and if it is to be meaningful, a simplified representation can be a powerful means for comparison. The Developmental eVOC ontologies presented here are simplified orthogonal ontologies describing the temporal and spatial distribution of developmental human and mouse anatomy. We demonstrate the ontologies by identifying genes showing a bias for developmental brain expression in human and mouse. PMID:17961239

  18. Reuse of termino-ontological resources and text corpora for building a multilingual domain ontology: an application to Alzheimer's disease.

    PubMed

    Dramé, Khadim; Diallo, Gayo; Delva, Fleur; Dartigues, Jean François; Mouillet, Evelyne; Salamon, Roger; Mougin, Fleur

    2014-04-01

    Ontologies are useful tools for sharing and exchanging knowledge. However ontology construction is complex and often time consuming. In this paper, we present a method for building a bilingual domain ontology from textual and termino-ontological resources intended for semantic annotation and information retrieval of textual documents. This method combines two approaches: ontology learning from texts and the reuse of existing terminological resources. It consists of four steps: (i) term extraction from domain specific corpora (in French and English) using textual analysis tools, (ii) clustering of terms into concepts organized according to the UMLS Metathesaurus, (iii) ontology enrichment through the alignment of French and English terms using parallel corpora and the integration of new concepts, (iv) refinement and validation of results by domain experts. These validated results are formalized into a domain ontology dedicated to Alzheimer's disease and related syndromes which is available online (http://lesim.isped.u-bordeaux2.fr/SemBiP/ressources/ontoAD.owl). The latter currently includes 5765 concepts linked by 7499 taxonomic relationships and 10,889 non-taxonomic relationships. Among these results, 439 concepts absent from the UMLS were created and 608 new synonymous French terms were added. The proposed method is sufficiently flexible to be applied to other domains. PMID:24382429

  19. Proposed actions are no actions: re-modeling an ontology design pattern with a realist top-level ontology

    PubMed Central

    2012-01-01

    Background Ontology Design Patterns (ODPs) are representational artifacts devised to offer solutions for recurring ontology design problems. They promise to enhance the ontology building process in terms of flexibility, re-usability and expansion, and to make the result of ontology engineering more predictable. In this paper, we analyze ODP repositories and investigate their relation to upper-level ontologies. In particular, we compare the BioTop upper ontology to the Action ODP from NeOn, an ODP repository. In view of the differences in the respective approaches, we investigate whether the Action ODP can be embedded into BioTop. We demonstrate that this requires re-interpreting the meaning of classes of the NeOn Action ODP in the light of the precepts of realist ontologies. Results As a result, the re-design required clarifying the ontological commitment of the ODP classes by assigning them to top-level categories. Thus, ambiguous definitions are avoided. Classes of real entities are clearly distinguished from classes of information artifacts. The proposed approach avoids the commitment to the existence of unclear future entities which underlies the NeOn Action ODP. Our re-design is parsimonious in the sense that existing BioTop content proved to be largely sufficient to define the different types of actions and plans. Conclusions The proposed model demonstrates that an expressive upper-level ontology provides enough resources and expressivity to represent even complex ODPs, here shown with the different flavors of Action as proposed in the NeOn ODP. The advantage of ODP inclusion into a top-level ontology is the given predetermined dependency of each class, an existing backbone structure and well-defined relations. Our comparison shows that the use of some ODPs is more likely to cause problems for ontology developers, rather than to guide them. Besides the structural properties, the explanation of classification results was particularly hard to grasp for

  20. Missing the (question) mark? What is a turn to ontology?

    PubMed

    Woolgar, Steve; Lezaun, Javier

    2015-06-01

    Our introductory essay in this journal's 2013 Special Issue on the 'turn to ontology' examined the shift from epistemology to ontology in science and technology studies and explored the implications of the notion of enactment. Three responses to that Special Issue argue that (1) there is no fundamental qualitative difference between the ontological turn and social constructivism, (2) we need to be wary of overly generic use of the term 'ontology' and (3) the language of 'turns' imposes constraints on the richness and diversity of science and technology studies. In this brief reply, we show how each of those critiques varies in its commitment to circumspection about making objective determinations of reality and to resisting reification. We illustrate our point by considering overlapping discussions in anthropology. This brings out the crucial difference between the science and technology studies slogan 'it could be otherwise' and the multinaturalist motto 'it actually is otherwise'. PMID:26477203

  1. ICD-11 and SNOMED CT Common Ontology: circulatory system.

    PubMed

    Rodrigues, Jean-Marie; Schulz, Stefan; Rector, Alan; Spackman, Kent; Millar, Jane; Campbell, James; Ustün, Bedirhan; Chute, Christopher G; Solbrig, Harold; Della Mea, Vincenzo; Persson, Kristina Brand

    2014-01-01

    The improvement of semantic interoperability between data in electronic health records and aggregated data for health statistics requires efforts to carefully align the two domain terminologies ICD and SNOMED CT. Both represent a new generation of ontology-based terminologies and classifications. The proposed alignment of these two systems and, in consequence, the validity of their cross-utilisation, requires a specific resource, named Common Ontology. We present the ICD-11 SNOMED CT Common Ontology building process including: a) the principles proposed for aligning the two systems with the help of a common model of meaning, b) the design of this common ontology, and c) preliminary results of the application to the diseases of the circulatory system. PMID:25160347

  2. Towards an ontological theory of substance intolerance and hypersensitivity.

    PubMed

    Hogan, William R

    2011-02-01

    A proper ontological treatment of intolerance--including hypersensitivity--to various substances is critical to patient care and research. However, existing methods and standards for documenting these conditions have flaws that inhibit these goals, especially translational research that bridges the two activities. In response, I outline a realist approach to the ontology of substance intolerance, including hypersensitivity conditions. I defend a view of these conditions as a subtype of disease. Specifically, a substance intolerance is a disease whose pathological process(es) are realized upon exposure to a quantity of substance of a particular type, and this quantity would normally not cause the realization of the pathological process(es). To develop this theory, it was necessary to build pieces of a theory of pathological processes. Overall, however, the framework of the Ontology for General Medical Science (which uses Basic Formal Ontology as its uppermost level) was a more-than-adequate foundation on which to build the theory. PMID:20152933

  3. The unexpected high practical value of medical ontologies.

    PubMed

    Pinciroli, Francesco; Pisanelli, Domenico M

    2006-01-01

    Ontology is no longer a mere research topic, but its relevance has been recognized in several practical fields. Current applications areas include natural language translation, e-commerce, geographic information systems, legal information systems and biology and medicine. It is the backbone of solid and effective applications in health care and can help to build more powerful and more interoperable medical information systems. The design and implementation of ontologies in medicine is mainly focused on the re-organization of medical terminologies. This is obviously a difficult task and requires a deep analysis of the structure and the concepts of such terminologies, in order to define domain ontologies able to provide both flexibility and consistency to medical information systems. The aim of this special issue of Computers in Biology and Medicine is to report the current evolution of research in biomedical ontologies, presenting both papers devoted to methodological issues and works with a more applicative emphasis. PMID:16182274

  4. The environment ontology: contextualising biological and biomedical entities

    PubMed Central

    2013-01-01

    As biological and biomedical research increasingly reference the environmental context of the biological entities under study, the need for formalisation and standardisation of environment descriptors is growing. The Environment Ontology (ENVO; http://www.environmentontology.org) is a community-led, open project which seeks to provide an ontology for specifying a wide range of environments relevant to multiple life science disciplines and, through an open participation model, to accommodate the terminological requirements of all those needing to annotate data using ontology classes. This paper summarises ENVO’s motivation, content, structure, adoption, and governance approach. The ontology is available from http://purl.obolibrary.org/obo/envo.owl - an OBO format version is also available by switching the file suffix to “obo”. PMID:24330602
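
    Since the record above points to the OWL release at http://purl.obolibrary.org/obo/envo.owl, a minimal sketch of programmatic access is shown below; it assumes Python with the rdflib package and network access, and the label listing is purely illustrative rather than part of ENVO's documentation.

      # Minimal sketch (assumptions: rdflib installed, PURL reachable):
      # load the ENVO OWL release and print class labels.
      from rdflib import Graph, RDF, RDFS, OWL

      g = Graph()
      g.parse("http://purl.obolibrary.org/obo/envo.owl")  # RDF/XML auto-detected

      # Iterate over owl:Class declarations and print their rdfs:label annotations.
      for cls in g.subjects(RDF.type, OWL.Class):
          for label in g.objects(cls, RDFS.label):
              print(cls, label)
              break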

  5. Semantator: annotating clinical narratives with semantic web ontologies.

    PubMed

    Song, Dezhao; Chute, Christopher G; Tao, Cui

    2012-01-01

    To facilitate clinical research, clinical data needs to be stored in a machine processable and understandable way. Manually annotating clinical data is time consuming. Automatic approaches (e.g., Natural Language Processing systems) have been adopted to convert such data into structured formats; however, the quality of such automatically extracted data may not always be satisfactory. In this paper, we propose Semantator, a semi-automatic tool for document annotation with Semantic Web ontologies. With a loaded free text document and an ontology, Semantator supports the creation/deletion of ontology instances for any document fragment, linking/disconnecting instances with the properties in the ontology, and also enables automatic annotation by connecting to the NCBO annotator and cTAKES. By representing annotations in Semantic Web standards, Semantator supports reasoning based upon the underlying semantics of the owl:disjointWith and owl:equivalentClass predicates. We present discussions based on user experiences of using Semantator. PMID:22779043
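
    As a rough illustration of the kind of output such semantic annotation produces (this is not Semantator's actual API, which is a Protégé plugin), the sketch below uses Python and rdflib to create an ontology instance for a document fragment and link it to another instance through an object property; the namespace, class and property names are hypothetical.

      # Hedged sketch: a text-span annotation expressed as Semantic Web triples.
      # The namespace, class and property names are hypothetical examples.
      from rdflib import Graph, Namespace, Literal, RDF, RDFS

      EX = Namespace("http://example.org/clinical#")
      g = Graph()
      g.bind("ex", EX)

      # An instance created for the document fragment "type 2 diabetes".
      annotation = EX["annotation_001"]
      g.add((annotation, RDF.type, EX.Disease))
      g.add((annotation, RDFS.label, Literal("type 2 diabetes")))
      g.add((annotation, EX.coversTextSpan, Literal("chars 120-135")))

      # Link the instance to a patient instance via an object property.
      g.add((EX["patient_42"], EX.hasCondition, annotation))

      print(g.serialize(format="turtle"))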

  6. Semantator: Annotating Clinical Narratives with Semantic Web Ontologies

    PubMed Central

    Song, Dezhao; Chute, Christopher G.; Tao, Cui

    2012-01-01

    To facilitate clinical research, clinical data needs to be stored in a machine processable and understandable way. Manually annotating clinical data is time consuming. Automatic approaches (e.g., Natural Language Processing systems) have been adopted to convert such data into structured formats; however, the quality of such automatically extracted data may not always be satisfactory. In this paper, we propose Semantator, a semi-automatic tool for document annotation with Semantic Web ontologies. With a loaded free text document and an ontology, Semantator supports the creation/deletion of ontology instances for any document fragment, linking/disconnecting instances with the properties in the ontology, and also enables automatic annotation by connecting to the NCBO annotator and cTAKES. By representing annotations in Semantic Web standards, Semantator supports reasoning based upon the underlying semantics of the owl:disjointWith and owl:equivalentClass predicates. We present discussions based on user experiences of using Semantator. PMID:22779043

  7. Fuzzy ontologies for semantic interpretation of remotely sensed images

    NASA Astrophysics Data System (ADS)

    Djerriri, Khelifa; Malki, Mimoun

    2015-10-01

    Object-based image classification consists of assigning objects that share similar attributes to object categories. To perform such a task, the remote sensing expert uses personal knowledge, which is rarely formalized. Ontologies have been proposed as a solution for representing domain knowledge agreed upon by domain experts in a formal and machine-readable language. Classical ontology languages are not appropriate for dealing with imprecision or vagueness in knowledge. Fortunately, Description Logics for the Semantic Web have been enhanced by various approaches to handle such knowledge. This paper presents the extension of traditional ontology-based interpretation with a fuzzy ontology of the main land-cover classes (vegetation, built-up areas, water bodies, shadow, clouds, forests) for objects in Landsat-8 OLI scenes. A good classification of image objects was obtained, and the results highlight the potential of the method to be replicated over time and space in the perspective of transferability of the procedure.
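
    To make the idea of graded (fuzzy) class membership concrete, the following sketch, which is not the authors' implementation, assigns an image object membership degrees in hypothetical land-cover classes using trapezoidal membership functions over object attributes such as NDVI and brightness; all thresholds are invented for illustration.

      # Illustrative sketch of fuzzy membership for land-cover classes.
      # Membership functions and thresholds are hypothetical, not from the paper.
      def trapezoid(x, a, b, c, d):
          """Trapezoidal membership: 0 outside (a, d), 1 on [b, c], linear in between."""
          if x <= a or x >= d:
              return 0.0
          if b <= x <= c:
              return 1.0
          if a < x < b:
              return (x - a) / (b - a)
          return (d - x) / (d - c)

      def classify(ndvi, brightness):
          """Return membership degrees of an image object in each (hypothetical) class."""
          return {
              "vegetation": trapezoid(ndvi, 0.2, 0.4, 1.0, 1.1),
              "water_body": trapezoid(ndvi, -1.1, -1.0, -0.1, 0.05),
              "built_up": min(trapezoid(ndvi, -0.1, 0.0, 0.15, 0.3),
                              trapezoid(brightness, 0.3, 0.5, 1.0, 1.1)),
          }

      print(classify(ndvi=0.55, brightness=0.2))  # vegetation membership dominates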

  8. CLOnE: Controlled Language for Ontology Editing

    NASA Astrophysics Data System (ADS)

    Funk, Adam; Tablan, Valentin; Bontcheva, Kalina; Cunningham, Hamish; Davis, Brian; Handschuh, Siegfried

    This paper presents a controlled language for ontology editing and a software implementation, based partly on standard NLP tools, for processing that language and manipulating an ontology. The input sentences are analysed deterministically and compositionally with respect to a given ontology, which the software consults in order to interpret the input's semantics; this allows the user to learn fewer syntactic structures since some of them can be used to refer to either classes or instances, for example. A repeated-measures, task-based evaluation has been carried out in comparison with a well-known ontology editor; our software received favourable results for basic tasks. The paper also discusses work in progress and future plans for developing this language and tool.
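
    CLOnE itself is implemented on top of GATE with a grammar of controlled-English sentence patterns; as a loose illustration of the general idea rather than of CLOnE's actual grammar, the sketch below matches one hypothetical sentence pattern and turns it into an rdfs:subClassOf axiom with Python and rdflib.

      # Loose illustration of controlled-language ontology editing.
      # The sentence pattern and namespace are hypothetical, not CLOnE's grammar.
      import re
      from rdflib import Graph, Namespace, RDF, RDFS, OWL

      EX = Namespace("http://example.org/onto#")
      g = Graph()
      PATTERN = re.compile(r"^(?P<sub>\w+) is a type of (?P<sup>\w+)\.$", re.IGNORECASE)

      def apply_sentence(sentence):
          """Parse one controlled-language sentence and add the corresponding axiom."""
          match = PATTERN.match(sentence.strip())
          if match is None:
              raise ValueError("Sentence not in the controlled language: " + sentence)
          sub = EX[match["sub"].capitalize()]
          sup = EX[match["sup"].capitalize()]
          for cls in (sub, sup):
              g.add((cls, RDF.type, OWL.Class))
          g.add((sub, RDFS.subClassOf, sup))

      apply_sentence("Retriever is a type of Dog.")
      print(g.serialize(format="turtle"))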

  9. Mapping the UMLS Semantic Network into general ontologies.

    PubMed

    Burgun, A; Bodenreider, O

    2001-01-01

    In this study, we analyzed the compatibility between an ontology of the biomedical domain (the UMLS Semantic Network) and two other ontologies: the Upper Cyc Ontology (UCO) and WordNet. 1) We manually mapped UMLS Semantic Types to UCO. One fifth of the UMLS Semantic Types had exact mapping to UCO types. UCO provides generic concepts and a structure that relies on a larger number of categories, despite its lack of depth in the biomedical domain. 2) We compared semantic classes in the UMLS and WordNet. 2% of the UMLS concepts from the Health Disorder class were present in WordNet, and compatibility between classes was 48%. WordNet, as a general language-oriented ontology is a source of lay knowledge, particularly important for consumer health applications. PMID:11833483

  10. Towards an Ontological Theory of Substance Intolerance and Hypersensitivity

    PubMed Central

    Hogan, William R.

    2010-01-01

    A proper ontological treatment of intolerance—including hypersensitivity—to various substances is critical to patient care and research. However, existing methods and standards for documenting these conditions have flaws that inhibit these goals, especially translational research that bridges the two activities. In response, I outline a realist approach to the ontology of substance intolerance, including hypersensitivity conditions. I defend a view of these conditions as a subtype of disease. Specifically, a substance intolerance is a disease whose pathological process(es) are realized upon exposure to a quantity of substance of a particular type, and this quantity would normally not cause the realization of the pathological process(es). To develop this theory, it was necessary to build pieces of a theory of pathological processes. Overall, however, the framework of the Ontology for General Medical Science (which uses Basic Formal Ontology as its uppermost level) was a more-than-adequate foundation on which to build the theory. PMID:20152933

  11. An ontology approach to comparative phenomics in plants

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Plant phenotypes (observable characteristics) are described using many different formats and specialized vocabularies or "ontologies". Similar phenotypes in different species may be given different names. These differences in terms complicate phenotype comparisons across species. This research descr...

  12. Ontological leveling and elicitation for complex industrial transactions

    SciTech Connect

    Phillips, L.R.; Goldsmith, S.Y.; Spires, S.V.

    1998-11-01

    The authors present an agent-oriented mechanism that uses a central ontology as a means to conduct complex distributed transactions. This is done by instantiating a template object motivated solely by the ontology, then automatically and explicitly linking each template element to an independently constructed interface component. Validation information is attached directly to the links so that the agent need not know a priori the semantics of data validity, merely how to execute a general validation process to satisfy the conditions given in the link. Ontological leveling is critical: all terms presented to informants must be semantically coherent within the central ontology. To illustrate this approach in an industrial setting, they discuss an existing implementation that conducted international commercial transactions on the World-Wide Web. Agents operating within a federated architecture construct, populate by Web-based elicitation, and manipulate a distributed composite transaction object to effect transport of goods over the US/Mexico border.
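
    The mechanism described above predates today's Semantic Web tooling, but the central idea, template elements whose links to interface components carry their own validation information, can be sketched roughly as follows in Python; the element names and validation rules are hypothetical.

      # Rough sketch: template elements linked to interface components, with
      # validation logic carried on the link rather than hard-coded in the agent.
      # All element names and validation rules are hypothetical.
      from dataclasses import dataclass
      from typing import Callable, Dict, List

      @dataclass
      class ElementLink:
          ontology_term: str               # term from the central ontology
          interface_field: str             # independently constructed form field
          validate: Callable[[str], bool]  # validation attached to the link

      transaction_template = [
          ElementLink("shipment:grossWeightKg", "weight_field",
                      lambda v: v.replace(".", "", 1).isdigit() and float(v) > 0),
          ElementLink("shipment:originCountry", "origin_field",
                      lambda v: len(v) == 2 and v.isalpha()),
      ]

      def invalid_terms(values: Dict[str, str]) -> List[str]:
          """Return ontology terms whose elicited values fail link validation."""
          return [link.ontology_term for link in transaction_template
                  if not link.validate(values.get(link.interface_field, ""))]

      print(invalid_terms({"weight_field": "120.5", "origin_field": "MX"}))  # -> []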

  13. Geo-Ontology: Empowering new Discoveries in Earth Sciences

    NASA Astrophysics Data System (ADS)

    Sinha, A.; Lin, K.; Raskin, R.; Barnes, C.; McGuinness, D.; Najdi, J.

    2005-12-01

    The rapid growth of data-rich resources associated with Earth and other planetary studies, including maps created by in-situ and remote sensing techniques, as well as spatial and aspatial relational databases, is driving new requirements for an information infrastructure that will facilitate scientific discovery. Ongoing research suggests that an ontology-based framework will facilitate registration, management, integration and analysis of databases and other data objects in a web-based environment. For earth scientists, ontologies can be viewed as a representation paradigm that can be used to capture formal declarative specifications of geologic objects, phenomena, and their interrelationships (e.g. subclass, part of, above, etc.). Ontologies may be used to capture classification schemes such as those for minerals, rocks, geologic time scale, or geologic structures, and thereby provide an organizational structure for automatically classifying earth science data. This is only possible because ontologies contain explicit definitions of terms used by scientists to associate meaning to the data or relationships between datasets. Ongoing development and growth of an ontology-based framework for the solid earth requires utilization of existing community-accepted high level ontologies such as SWEET (Semantic Web for Earth and Environmental Terminology) and NADM (North American Geological Data Model). The high level SWEET ontology contains formal definitions for terms used in earth and space sciences, and it encodes structure that recognizes the spatial distribution of earth environments (earth realm) and the interfaces between different realms. These earth realms have associated properties with appropriate units and provide an extensible upper level terminology. Extension of these concepts to high-resolution ontologies where data reside is well underway. For example, we have developed new ontology-based packages containing Planetary Materials (elements, isotopes, rocks

  14. Ontology-Based Modelling of Ocean Satellite Images

    NASA Astrophysics Data System (ADS)

    Almendros-Jiménez, Jesús M.; Piedra, José A.; Cantón, Manuel

    In this paper we will define an ontology about the semantic content of ocean satellite images in which we are able to represent types of ocean structures, spatial and morphological concepts, and knowledge about measures of temperature and chlorophyll concentration, among others. Such an ontology will provide the basis of a classification system based on the low-level features of images. We have tested our approach using the Protégé semantic web tool.

  15. HuPSON: the human physiology simulation ontology

    PubMed Central

    2013-01-01

    Background Large biomedical simulation initiatives, such as the Virtual Physiological Human (VPH), are substantially dependent on controlled vocabularies to facilitate the exchange of information, of data and of models. Hindering these initiatives is a lack of a comprehensive ontology that covers the essential concepts of the simulation domain. Results We propose a first version of a newly constructed ontology, HuPSON, as a basis for shared semantics and interoperability of simulations, of models, of algorithms and of other resources in this domain. The ontology is based on the Basic Formal Ontology, and adheres to the MIREOT principles; the constructed ontology has been evaluated via structural features, competency questions and use case scenarios. The ontology is freely available at: http://www.scai.fraunhofer.de/en/business-research-areas/bioinformatics/downloads.html (owl files) and http://bishop.scai.fraunhofer.de/scaiview/ (browser). Conclusions HuPSON provides a framework for a) annotating simulation experiments, b) retrieving relevant information that is required for modelling, c) enabling interoperability of algorithmic approaches used in biomedical simulation, d) comparing simulation results and e) linking knowledge-based approaches to simulation-based approaches. It is meant to foster a more rapid uptake of semantic technologies in the modelling and simulation domain, with particular focus on the VPH domain. PMID:24267822

  16. Knowledge Discovery from Biomedical Ontologies in Cross Domains

    PubMed Central

    Shen, Feichen; Lee, Yugyung

    2016-01-01

    In recent years, there has been an increasing demand for the sharing and integration of medical data in biomedical research. In order to improve a health care system, it is necessary to support the integration of data by facilitating semantic interoperability systems and practices. Semantic interoperability is difficult to achieve in these systems as the conceptual models underlying datasets are not fully exploited. In this paper, we propose a semantic framework, called Medical Knowledge Discovery and Data Mining (MedKDD), that aims to build a topic hierarchy and serve the semantic interoperability between different ontologies. For this purpose, we focus on the discovery of semantic patterns about the association of relations in the heterogeneous information network representing different types of objects and relationships in multiple biological ontologies, and on the creation of a topic hierarchy through the analysis of the discovered patterns. These patterns are used to cluster heterogeneous information networks into a set of smaller topic graphs in a hierarchical manner and then to conduct cross-domain knowledge discovery from the multiple biological ontologies. These patterns thus make a greater contribution to knowledge discovery across multiple ontologies. We have demonstrated cross-domain knowledge discovery in the MedKDD framework using a case study with 9 primary biological ontologies from Bio2RDF and compared it with the cross-domain query processing approach SLAP. We have confirmed the effectiveness of the MedKDD framework in knowledge discovery from multiple medical ontologies. PMID:27548262

  17. Indivisibility, Complementarity and Ontology: A Bohrian Interpretation of Quantum Mechanics

    NASA Astrophysics Data System (ADS)

    Roldán-Charria, Jairo

    2014-12-01

    The interpretation of quantum mechanics presented in this paper is inspired by two ideas that are fundamental in Bohr's writings: indivisibility and complementarity. Further basic assumptions of the proposed interpretation are completeness, universality and conceptual economy. In the interpretation, decoherence plays a fundamental role for the understanding of measurement. A general and precise conception of complementarity is proposed. It is fundamental in this interpretation to make a distinction between ontological reality, constituted by everything that does not depend at all on the collectivity of human beings, nor on their decisions or limitations, nor on their existence, and empirical reality constituted by everything that not being ontological is, however, intersubjective. According to the proposed interpretation, neither the dynamical properties, nor the constitutive properties of microsystems like mass, charge and spin, are ontological. The properties of macroscopic systems and space-time are also considered to belong to empirical reality. The acceptance of the above mentioned conclusion does not imply a total rejection of the notion of ontological reality. In the paper, utilizing the Aristotelian ideas of general cause and potentiality, a relation between ontological reality and empirical reality is proposed. Some glimpses of ontological reality, in the form of what can be said about it, are finally presented.

  18. Biomedical imaging ontologies: A survey and proposal for future work

    PubMed Central

    Smith, Barry; Arabandi, Sivaram; Brochhausen, Mathias; Calhoun, Michael; Ciccarese, Paolo; Doyle, Scott; Gibaud, Bernard; Goldberg, Ilya; Kahn, Charles E.; Overton, James; Tomaszewski, John; Gurcan, Metin

    2015-01-01

    Background: Ontology is one strategy for promoting interoperability of heterogeneous data through consistent tagging. An ontology is a controlled structured vocabulary consisting of general terms (such as “cell” or “image” or “tissue” or “microscope”) that form the basis for such tagging. These terms are designed to represent the types of entities in the domain of reality that the ontology has been devised to capture; the terms are provided with logical definitions thereby also supporting reasoning over the tagged data. Aim: This paper provides a survey of the biomedical imaging ontologies that have been developed thus far. It outlines the challenges, particularly faced by ontologies in the fields of histopathological imaging and image analysis, and suggests a strategy for addressing these challenges in the example domain of quantitative histopathology imaging. Results and Conclusions: The ultimate goal is to support the multiscale understanding of disease that comes from using interoperable ontologies to integrate imaging data with clinical and genomics data. PMID:26167381

  19. A task-based approach for Gene Ontology evaluation

    PubMed Central

    2013-01-01

    Background The Gene Ontology and its associated annotations are critical tools for interpreting lists of genes. Here, we introduce a method for evaluating the Gene Ontology annotations and structure based on the impact they have on gene set enrichment analysis, along with an example implementation. This task-based approach yields quantitative assessments grounded in experimental data and anchored tightly to the primary use of the annotations. Results Applied to specific areas of biological interest, our framework allowed us to understand the progress of annotation and structural ontology changes from 2004 to 2012. Our framework was also able to determine that the quality of annotations and structure in the area under test have been improving in their ability to recall underlying biological traits. Furthermore, we were able to distinguish between the impact of changes to the annotation sets and ontology structure. Conclusion Our framework and implementation lay the groundwork for a powerful tool in evaluating the usefulness of the Gene Ontology. We demonstrate both the flexibility and the power of this approach in evaluating the current and past state of the Gene Ontology as well as its applicability in developing new methods for creating gene annotations. PMID:23734599
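
    The core calculation that such a task-based evaluation exercises is ordinary term enrichment; a minimal sketch of that calculation, using a hypergeometric test with invented gene counts, is given below (it assumes Python with scipy installed and is not the authors' implementation).

      # Minimal sketch of the enrichment calculation underlying gene set
      # enrichment analysis. Gene counts are invented; scipy is assumed.
      from scipy.stats import hypergeom

      population = 20000   # annotated genes in the background set
      term_genes = 150     # genes annotated to the GO term under test
      study_set = 300      # genes in the experimental hit list
      overlap = 12         # hit-list genes annotated to the term

      # P(X >= overlap) when drawing study_set genes without replacement.
      p_value = hypergeom.sf(overlap - 1, population, term_genes, study_set)
      print("enrichment p-value:", p_value)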

  20. Knowledge Discovery from Biomedical Ontologies in Cross Domains.

    PubMed

    Shen, Feichen; Lee, Yugyung

    2016-01-01

    In recent years, there has been an increasing demand for the sharing and integration of medical data in biomedical research. In order to improve a health care system, it is necessary to support the integration of data by facilitating semantic interoperability systems and practices. Semantic interoperability is difficult to achieve in these systems as the conceptual models underlying datasets are not fully exploited. In this paper, we propose a semantic framework, called Medical Knowledge Discovery and Data Mining (MedKDD), that aims to build a topic hierarchy and serve the semantic interoperability between different ontologies. For this purpose, we focus on the discovery of semantic patterns about the association of relations in the heterogeneous information network representing different types of objects and relationships in multiple biological ontologies, and on the creation of a topic hierarchy through the analysis of the discovered patterns. These patterns are used to cluster heterogeneous information networks into a set of smaller topic graphs in a hierarchical manner and then to conduct cross-domain knowledge discovery from the multiple biological ontologies. These patterns thus make a greater contribution to knowledge discovery across multiple ontologies. We have demonstrated cross-domain knowledge discovery in the MedKDD framework using a case study with 9 primary biological ontologies from Bio2RDF and compared it with the cross-domain query processing approach SLAP. We have confirmed the effectiveness of the MedKDD framework in knowledge discovery from multiple medical ontologies. PMID:27548262

  1. Structuring an event ontology for disease outbreak detection

    PubMed Central

    Kawazoe, Ai; Chanlekha, Hutchatai; Shigematsu, Mika; Collier, Nigel

    2008-01-01

    Background This paper describes the design of an event ontology being developed for application in the machine understanding of infectious disease-related events reported in natural language text. This event ontology is designed to support timely detection of disease outbreaks and rapid judgment of their alerting status by 1) bridging a gap between layman's language used in disease outbreak reports and public health experts' deep knowledge, and 2) making multi-lingual information available. Construction and content This event ontology integrates a model of experts' knowledge for disease surveillance, and at the same time sets of linguistic expressions which denote disease-related events, and formal definitions of events. In this ontology, rather general event classes, which are suitable for application to language-oriented tasks such as recognition of event expressions, are placed on the upper-level, and more specific events of the experts' interest are in the lower level. Each class is related to other classes which represent participants of events, and linked with multi-lingual synonym sets and axioms. Conclusions We consider that the design of the event ontology and the methodology introduced in this paper are applicable to other domains which require integration of natural language information and machine support for experts to assess them. The first version of the ontology, with about 40 concepts, will be available in March 2008. PMID:18426553

  2. Building a global normalized ontology for integrating geographic data sources

    NASA Astrophysics Data System (ADS)

    Buccella, Agustina; Cechich, Alejandra; Gendarmi, Domenico; Lanubile, Filippo; Semeraro, Giovanni; Colagrossi, Attilio

    2011-07-01

    Nowadays, the proliferation of geographic information systems has caused great interest in integration. However, an integration process is not as simple as joining several systems, since any effort at information sharing runs into the problem of semantic heterogeneity, which requires the identification and representation of all semantics useful in performing schema integration. On several research lines, including research on geographic information system integration, ontologies have been introduced to facilitate knowledge sharing among various agents. Particularly, one of the aspects of ontology sharing is performing some sort of mapping between ontology constructs. Further, some research suggests that we should also be able to combine ontologies where the product of this combination will be, at the very least, the intersection of the two given ontologies. However, few approaches built integrations upon standard and normalized information, which might improve accuracy of mappings and therefore commitment and understandability of the integration. In this work, we propose a novel system (called GeoMergeP) to integrate geographic sources by formalizing their information as normalized ontologies. Our integral merging process—including structural, syntactic and semantic aspects—assists users in finding the more suitable correspondences. The system has been empirically tested in the context of projects of the Italian Institute for Environmental Protection and Research (ISPRA, ex APAT), providing a consistent and complete integration of their sources.

  3. Suggesting Missing Relations in Biomedical Ontologies Based on Lexical Regularities.

    PubMed

    Quesada-Martínez, Manuel; Fernández-Breis, Jesualdo Tomás; Karlsson, Daniel

    2016-01-01

    The number of biomedical ontologies has increased significantly in recent years. Many such ontologies are the result of efforts of communities of domain experts and ontology engineers. The development and application of quality assurance (QA) methods should help these communities to develop ontologies that are useful for both humans and machines. According to previous studies, biomedical ontologies are rich in natural language content, but most of them are not so rich in axiomatic terms. Here, we are interested in studying the relation between content in natural language and content in axiomatic form. Analysing the labels of the classes permits the identification of lexical regularities (LRs), which are sets of words shared by the labels of different classes. Our assumption is that the classes exhibiting an LR should be logically related through axioms, and this assumption is used to propose an algorithm to detect missing relations in the ontology. Here, we analyse a lexical regularity of SNOMED CT, congenital stenosis, which is reported as problematic by the SNOMED CT maintenance team. PMID:27577409
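
    The notion of a lexical regularity can be made concrete with a small sketch that groups classes by word sequences shared across their labels; the labels below are invented examples in the spirit of the congenital stenosis case mentioned above, and the grouping is only a starting point for proposing missing axioms.

      # Sketch: find word bigrams shared by labels of different classes,
      # i.e. candidate lexical regularities. The labels are invented examples.
      from collections import defaultdict

      labels = {
          "C1": "Congenital stenosis of aortic valve",
          "C2": "Congenital stenosis of larynx",
          "C3": "Acquired stenosis of larynx",
          "C4": "Congenital malformation of heart",
      }

      regularities = defaultdict(set)
      for cls, label in labels.items():
          words = label.lower().split()
          for i in range(len(words) - 1):
              regularities[" ".join(words[i:i + 2])].add(cls)

      # Keep only word sequences shared by at least two classes.
      for phrase, classes in sorted(regularities.items()):
          if len(classes) >= 2:
              print(phrase, "->", sorted(classes))
      # "congenital stenosis" groups C1 and C2: candidates for a shared axiom.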

  4. Effects of Guideline-Based Training on the Quality of Formal Ontologies: A Randomized Controlled Trial

    PubMed Central

    Boeker, Martin; Jansen, Ludger; Grewe, Niels; Röhl, Johannes; Schober, Daniel; Seddig-Raufie, Djamila; Schulz, Stefan

    2013-01-01

    Background The importance of ontologies in the biomedical domain is generally recognized. However, their quality is often too poor for large-scale use in critical applications, at least partially due to insufficient training of ontology developers. Objective To show the efficacy of guideline-based ontology development training on the performance of ontology developers. The hypothesis was that students who received training on top-level ontologies and design patterns perform better than those who only received training in the basic principles of formal ontology engineering. Methods A curriculum was implemented based on a guideline for ontology design. A randomized controlled trial on the efficacy of this curriculum was performed with 24 students from bioinformatics and related fields. After joint training on the fundamentals of ontology development the students were randomly allocated to two groups. During the intervention, each group received training on different topics in ontology development. In the assessment phase, all students were asked to solve modeling problems on topics taught differentially in the intervention phase. Primary outcome was the similarity of the students’ ontology artefacts compared with gold standard ontologies developed by the authors before the experiment; secondary outcome was the intra-group similarity of group members’ ontologies. Results The experiment showed no significant effect of the guideline-based training on the performance of ontology developers (a) the ontologies developed after specific training were only slightly but not significantly closer to the gold standard ontologies than the ontologies developed without prior specific training; (b) although significant differences for certain ontologies were detected, the intra-group similarity was not consistently influenced in one direction by the differential training. Conclusion Methodologically limited, this study cannot be interpreted as a general failure of a guideline

  5. Retrieval of aerosol optical depth from surface solar radiation measurements using machine learning algorithms, non-linear regression and a radiative transfer-based look-up table

    NASA Astrophysics Data System (ADS)

    Huttunen, Jani; Kokkola, Harri; Mielonen, Tero; Esa Juhani Mononen, Mika; Lipponen, Antti; Reunanen, Juha; Vilhelm Lindfors, Anders; Mikkonen, Santtu; Erkki Juhani Lehtinen, Kari; Kouremeti, Natalia; Bais, Alkiviadis; Niska, Harri; Arola, Antti

    2016-07-01

    In order to have a good estimate of the current forcing by anthropogenic aerosols, knowledge on past aerosol levels is needed. Aerosol optical depth (AOD) is a good measure for aerosol loading. However, dedicated measurements of AOD are only available from the 1990s onward. One option to lengthen the AOD time series beyond the 1990s is to retrieve AOD from surface solar radiation (SSR) measurements taken with pyranometers. In this work, we have evaluated several inversion methods designed for this task. We compared a look-up table method based on radiative transfer modelling, a non-linear regression method and four machine learning methods (Gaussian process, neural network, random forest and support vector machine) with AOD observations carried out with a sun photometer at an Aerosol Robotic Network (AERONET) site in Thessaloniki, Greece. Our results show that most of the machine learning methods produce AOD estimates comparable to the look-up table and non-linear regression methods. All of the applied methods produced AOD values that corresponded well to the AERONET observations with the lowest correlation coefficient value being 0.87 for the random forest method. While many of the methods tended to slightly overestimate low AODs and underestimate high AODs, neural network and support vector machine showed overall better correspondence for the whole AOD range. The differences in producing both ends of the AOD range seem to be caused by differences in the aerosol composition. High AODs were in most cases those with high water vapour content which might affect the aerosol single scattering albedo (SSA) through uptake of water into aerosols. Our study indicates that machine learning methods benefit from the fact that they do not constrain the aerosol SSA in the retrieval, whereas the LUT method assumes a constant value for it. This would also mean that machine learning methods could have potential in reproducing AOD from SSR even though SSA would have changed during
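
    Several of the compared methods are standard regression models; as a generic illustration of that step (not the authors' configuration or data), a random-forest fit of AOD on SSR-derived features with scikit-learn could look like the sketch below, which runs on synthetic data.

      # Generic sketch of one of the compared approaches (random forest),
      # fitted on synthetic data; this is not the authors' configuration.
      import numpy as np
      from sklearn.ensemble import RandomForestRegressor
      from sklearn.model_selection import train_test_split

      rng = np.random.default_rng(0)
      n = 2000
      ssr = rng.uniform(100, 1000, n)        # surface solar radiation, W/m^2
      solar_zenith = rng.uniform(10, 70, n)  # degrees
      water_vapour = rng.uniform(0.5, 4.0, n)
      # Synthetic "true" AOD, loosely decreasing with SSR (illustrative only).
      aod = 1.5 - 0.001 * ssr + 0.05 * water_vapour + rng.normal(0, 0.05, n)

      X = np.column_stack([ssr, solar_zenith, water_vapour])
      X_train, X_test, y_train, y_test = train_test_split(X, aod, random_state=0)

      model = RandomForestRegressor(n_estimators=200, random_state=0)
      model.fit(X_train, y_train)
      print("R^2 on held-out data:", round(model.score(X_test, y_test), 3))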

  6. Determining Fitness-For-Use of Ontologies Through Change Management, Versioning and Publication Best Practices

    NASA Astrophysics Data System (ADS)

    West, P.; Zednik, S.; Fu, L.; Ma, X.; Fox, P. A.

    2015-12-01

    There is a large and growing number of domain ontologies available for researchers to leverage in their applications. When evaluating the use of an ontology it is important to not only consider whether the concepts and relationships defined in the ontology meet the requirements for purpose of use, but also how the change management, versioning and publication practices followed by the ontology publishers affect the maturity, stability, and long-term fitness-for-use of the ontology. In this presentation we share our experiences and a list of best practices we have developed when determining fitness for use of existing ontologies, and the process we follow when developing of our own ontologies and extensions to existing ontologies. Our experience covers domains such as solar terrestrial physics, geophysics and oceanography; and the use of general purpose ontologies such as those with representations of people, organizations, data catalogs, observations and measurements and provenance. We will cover how we determine ontology scope, manage ontology change, specify ontology version, and what best practices we follow for ontology publication and use. The implications of following these best practices is that the ontologies we use and develop are mature, stable, have a well-defined scope, and are published in accordance with linked data principles.

  7. Ontological aspects of the Casimir Effect

    NASA Astrophysics Data System (ADS)

    Simpson, William M. R.

    2014-11-01

    The role of the vacuum, in the Casimir Effect, is a matter of some dispute: the Casimir force has been variously described as a phenomenon resulting "from the alteration, by the boundaries, of the zero-point electromagnetic energy" (Bordag, Mohideen, & Mostepanenko, 2001), or a "van der Waals force between the metal plates" that can be "computed without reference to zero point energies" (Jaffe, 2005). Neither of these descriptions is grounded in a consistently quantum mechanical treatment of matter interacting with the electromagnetic field. However, the Casimir Effect has been canonically described within the framework of macroscopic quantum electrodynamics (Philbin, 2010). On this general account, the force is seen to arise due to the coupling of fluctuating currents to the zero-point radiation, and it is in this restricted sense that the phenomenon requires the existence of zero-point fields. The conflicting descriptions of the Casimir Effect, on the other hand, appear to arise from ontologies in which an unwarranted metaphysical priority is assigned either to the matter or the fields, and this may have a direct bearing on the problem of the cosmological constant.

  8. Spatial relation query based on geographic ontology

    NASA Astrophysics Data System (ADS)

    Du, Chong; Xu, Jun; Zhang, Jing; Si, Wangli; Liu, Bao; Zhang, Dapeng

    2010-11-01

    The description of a spatial relation is the reflection of human's cognition of spatial objects. It is not only affected by topology and metric, but also affected by geographic semantics, such as the categories of geographic entities and contexts. Currently, the researches about language aspects of spatial relations mostly focus on natural-language formalization, parsing of query sentences, and natural-language query interface. However, geographic objects are not simple geometric points, lines or polygons. In order to get a sound answer according with human cognition in spatial relation queries, we have to take geographic semantics into account. In this paper, the functions of natural-language spatial terms are designed based on previous work on natural-language formalization and human-subject tests. Then, the paper builds a geographic knowledge base based on geographic ontology using Protégé for discriminating geographic semantics. Finally, using the geographic knowledge in the knowledge base, a prototype of a query system is implemented on GIS platform.

  9. Bohmian mechanics without wave function ontology

    NASA Astrophysics Data System (ADS)

    Solé, Albert

    2013-11-01

    In this paper, I critically assess different interpretations of Bohmian mechanics that are not committed to an ontology based on the wave function being an actual physical object that inhabits configuration space. More specifically, my aim is to explore the connection between the denial of configuration space realism and another interpretive debate that is specific to Bohmian mechanics: the quantum potential versus guidance approaches. Whereas defenders of the quantum potential approach to the theory claim that Bohmian mechanics is better formulated as quasi-Newtonian, via the postulation of forces proportional to acceleration; advocates of the guidance approach defend the notion that the theory is essentially first-order and incorporates some concepts akin to those of Aristotelian physics. Here I analyze whether the desideratum of an interpretation of Bohmian mechanics that is both explanatorily adequate and not committed to configuration space realism favors one of these two approaches to the theory over the other. Contrary to some recent claims in the literature, I argue that the quasi-Newtonian approach based on the idea of a quantum potential does not come out the winner.

  10. Evaluating the Good Ontology Design Guideline (GoodOD) with the Ontology Quality Requirements and Evaluation Method and Metrics (OQuaRE)

    PubMed Central

    Duque-Ramos, Astrid; Boeker, Martin; Jansen, Ludger; Schulz, Stefan; Iniesta, Miguela; Fernández-Breis, Jesualdo Tomás

    2014-01-01

    Objective To (1) evaluate the GoodOD guideline for ontology development by applying the OQuaRE evaluation method and metrics to the ontology artefacts that were produced by students in a randomized controlled trial, and (2) informally compare the OQuaRE evaluation method with gold standard and competency questions based evaluation methods, respectively. Background In the last decades many methods for ontology construction and ontology evaluation have been proposed. However, none of them has become a standard and there is no empirical evidence of comparative evaluation of such methods. This paper brings together GoodOD and OQuaRE. GoodOD is a guideline for developing robust ontologies. It was previously evaluated in a randomized controlled trial employing metrics based on gold standard ontologies and competency questions as outcome parameters. OQuaRE is a method for ontology quality evaluation which adapts the SQuaRE standard for software product quality to ontologies and has been successfully used for evaluating the quality of ontologies. Methods In this paper, we evaluate the effect of training in ontology construction based on the GoodOD guideline within the OQuaRE quality evaluation framework and compare the results with those obtained for the previous studies based on the same data. Results Our results show a significant effect of the GoodOD training over developed ontologies by topics: (a) a highly significant effect was detected in three topics from the analysis of the ontologies of untrained and trained students; (b) both positive and negative training effects with respect to the gold standard were found for five topics. Conclusion The GoodOD guideline had a significant effect over the quality of the ontologies developed. Our results show that GoodOD ontologies can be effectively evaluated using OQuaRE and that OQuaRE is able to provide additional useful information about the quality of the GoodOD ontologies. PMID:25148262

  11. An Ontology Design Pattern for Surface Water Features

    SciTech Connect

    Sinha, Gaurav; Mark, David; Kolas, Dave; Varanka, Dalia; Romero, Boleslo E; Feng, Chen-Chieh; Usery, Lynn; Liebermann, Joshua; Sorokine, Alexandre

    2014-01-01

    Surface water is a primary concept of human experience but concepts are captured in cultures and languages in many different ways. Still, many commonalities can be found due to the physical basis of many of the properties and categories. An abstract ontology of surface water features based only on those physical properties of landscape features has the best potential for serving as a foundational domain ontology. It can then be used to systematically incorporate concepts that are specific to a culture, language, or scientific domain. The Surface Water ontology design pattern was developed both for domain knowledge distillation and to serve as a conceptual building-block for more complex surface water ontologies. A fundamental distinction is made in this ontology between landscape features that act as containers (e.g., stream channels, basins) and the bodies of water (e.g., rivers, lakes) that occupy those containers. The semantics of concave (container) landforms are specified in a Dry module and the semantics of contained bodies of water in a Wet module. The pattern is implemented in OWL, but Description Logic axioms and a detailed explanation are provided. The OWL ontology will be an important contribution to Semantic Web vocabulary for annotating surface water feature datasets. A discussion about why there is a need to complement the pattern with other ontologies, especially the previously developed Surface Network pattern, is also provided. Finally, the practical value of the pattern in semantic querying of surface water datasets is illustrated through a few queries and annotated geospatial datasets.
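
    Because the record above stresses the pattern's value for semantic querying, a hedged sketch of such a query issued through Python and rdflib is shown below; the file name and the class and property IRIs are placeholders, not the published pattern's actual vocabulary.

      # Hedged sketch of a semantic query over a surface-water dataset that
      # distinguishes container landforms from the bodies of water occupying
      # them. The file name and IRIs are placeholders, not the pattern's own.
      from rdflib import Graph

      g = Graph()
      g.parse("annotated_hydro_features.ttl", format="turtle")  # hypothetical file

      query = """
      PREFIX sw: <http://example.org/surface-water#>
      SELECT ?body ?container WHERE {
          ?body a sw:BodyOfWater ;
                sw:occupies ?container .
          ?container a sw:Channel .
      }
      """
      for body, container in g.query(query):
          print(body, "occupies", container)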

  12. Evolving BioAssay Ontology (BAO): modularization, integration and applications

    PubMed Central

    2014-01-01

    The lack of established standards to describe and annotate biological assays and screening outcomes in the domain of drug and chemical probe discovery is a severe limitation to utilize public and proprietary drug screening data to their maximum potential. We have created the BioAssay Ontology (BAO) project (http://bioassayontology.org) to develop common reference metadata terms and definitions required for describing relevant information of low-and high-throughput drug and probe screening assays and results. The main objectives of BAO are to enable effective integration, aggregation, retrieval, and analyses of drug screening data. Since we first released BAO on the BioPortal in 2010 we have considerably expanded and enhanced BAO and we have applied the ontology in several internal and external collaborative projects, for example the BioAssay Research Database (BARD). We describe the evolution of BAO with a design that enables modeling complex assays including profile and panel assays such as those in the Library of Integrated Network-based Cellular Signatures (LINCS). One of the critical questions in evolving BAO is the following: how can we provide a way to efficiently reuse and share among various research projects specific parts of our ontologies without violating the integrity of the ontology and without creating redundancies. This paper provides a comprehensive answer to this question with a description of a methodology for ontology modularization using a layered architecture. Our modularization approach defines several distinct BAO components and separates internal from external modules and domain-level from structural components. This approach facilitates the generation/extraction of derived ontologies (or perspectives) that can suit particular use cases or software applications. We describe the evolution of BAO related to its formal structures, engineering approaches, and content to enable modeling of complex assays and integration with other ontologies and

  13. Evolving BioAssay Ontology (BAO): modularization, integration and applications.

    PubMed

    Abeyruwan, Saminda; Vempati, Uma D; Küçük-McGinty, Hande; Visser, Ubbo; Koleti, Amar; Mir, Ahsan; Sakurai, Kunie; Chung, Caty; Bittker, Joshua A; Clemons, Paul A; Brudz, Steve; Siripala, Anosha; Morales, Arturo J; Romacker, Martin; Twomey, David; Bureeva, Svetlana; Lemmon, Vance; Schürer, Stephan C

    2014-01-01

    The lack of established standards to describe and annotate biological assays and screening outcomes in the domain of drug and chemical probe discovery is a severe limitation to utilize public and proprietary drug screening data to their maximum potential. We have created the BioAssay Ontology (BAO) project (http://bioassayontology.org) to develop common reference metadata terms and definitions required for describing relevant information of low-and high-throughput drug and probe screening assays and results. The main objectives of BAO are to enable effective integration, aggregation, retrieval, and analyses of drug screening data. Since we first released BAO on the BioPortal in 2010 we have considerably expanded and enhanced BAO and we have applied the ontology in several internal and external collaborative projects, for example the BioAssay Research Database (BARD). We describe the evolution of BAO with a design that enables modeling complex assays including profile and panel assays such as those in the Library of Integrated Network-based Cellular Signatures (LINCS). One of the critical questions in evolving BAO is the following: how can we provide a way to efficiently reuse and share among various research projects specific parts of our ontologies without violating the integrity of the ontology and without creating redundancies. This paper provides a comprehensive answer to this question with a description of a methodology for ontology modularization using a layered architecture. Our modularization approach defines several distinct BAO components and separates internal from external modules and domain-level from structural components. This approach facilitates the generation/extraction of derived ontologies (or perspectives) that can suit particular use cases or software applications. We describe the evolution of BAO related to its formal structures, engineering approaches, and content to enable modeling of complex assays and integration with other ontologies and

  14. Representing Ontogeny Through Ontology: A Developmental Biologist’s Guide to The Gene Ontology

    PubMed Central

    Hill, David P.; Berardini, Tanya Z.; Howe, Douglas G.; Van Auken, Kimberly M.

    2010-01-01

    Developmental biology, like many other areas of biology, has undergone a dramatic shift in the perspective from which developmental processes are viewed. Instead of focusing on the actions of a handful of genes or functional RNAs, we now consider the interactions of large functional gene networks and study how these complex systems orchestrate the unfolding of an organism, from gametes to adult. Developmental biologists are beginning to realize that understanding ontogeny on this scale requires the utilization of computational methods to capture, store and represent the knowledge we have about the underlying processes. Here we review the use of the Gene Ontology (GO) to study developmental biology. We describe the organization and structure of the GO and illustrate some of the ways we use it to capture the current understanding of many common developmental processes. We also discuss ways in which gene product annotations using the GO have been used to ask and answer developmental questions in a variety of model developmental systems. We provide suggestions as to how the GO might be used in more powerful ways to address questions about development. Our goal is to provide developmental biologists with enough background about the GO that they can begin to think about how they might use the ontology efficiently and in the most powerful ways possible. PMID:19921742

  15. A study on heterogeneous distributed spatial information platform based on semantic Web services

    NASA Astrophysics Data System (ADS)

    Peng, Shuang-yun; Yang, Kun; Xu, Quan-li; Huang, Bang-mei

    2008-10-01

    With the development of Semantic Web technology, ontology-based spatial information services are an effective way to share and interoperate heterogeneous information resources in a distributed network environment. This paper discusses spatial information sharing and interoperability within a Semantic Web Services architecture. By using ontologies to record spatial information in a shared knowledge system, implicit and hidden semantic information is expressed explicitly and formally, which provides the prerequisite for spatial information sharing and interoperability. Semantic Web Services technology is then used to parse the ontologies and build services intelligently in the network environment, forming a network of services. In order to realize practical applications of spatial information sharing and interoperation in different branches of the CDC system, a prototype system for HIV/AIDS information sharing based on geo-ontology has also been developed using the methods described above.

  16. Proceedings of a Sickle Cell Disease Ontology workshop - Towards the first comprehensive ontology for Sickle Cell Disease.

    PubMed

    Mulder, Nicola; Nembaware, Victoria; Adekile, Adekunle; Anie, Kofi A; Inusa, Baba; Brown, Biobele; Campbell, Andrew; Chinenere, Furahini; Chunda-Liyoka, Catherine; Derebail, Vimal K; Geard, Amy; Ghedira, Kais; Hamilton, Carol M; Hanchard, Neil A; Haendel, Melissa; Huggins, Wayne; Ibrahim, Muntaser; Jupp, Simon; Kamga, Karen Kengne; Knight-Madden, Jennifer; Lopez-Sall, Philomène; Mbiyavanga, Mamana; Munube, Deogratias; Nirenberg, Damian; Nnodu, Obiageli; Ofori-Acquah, Solomon Fiifi; Ohene-Frempong, Kwaku; Opap, Kenneth Babu; Panji, Sumir; Park, Miriam; Pule, Gift; Royal, Charmaine; Sangeda, Raphael; Tayo, Bamidele; Treadwell, Marsha; Tshilolo, Léon; Wonkam, Ambroise

    2016-06-01

    Sickle cell disease (SCD) is a debilitating single gene disorder caused by a single point mutation that results in physical deformation (i.e. sickling) of erythrocytes at reduced oxygen tensions. Up to 75% of SCD in newborns world-wide occurs in sub-Saharan Africa, where neonatal and childhood mortality from sickle cell related complications is high. While SCD research across the globe is tackling the disease on multiple fronts, advances have yet to significantly impact on the health and quality of life of SCD patients, due to lack of coordination of these disparate efforts. Ensuring data across studies is directly comparable through standardization is a necessary step towards realizing this goal. Such a standardization requires the development and implementation of a disease-specific ontology for SCD that is applicable globally. Ontology development is best achieved by bringing together experts in the domain to contribute their knowledge. The SCD community and H3ABioNet members joined forces at a recent SCD Ontology workshop to develop an ontology covering aspects of SCD under the classes: phenotype, diagnostics, therapeutics, quality of life, disease modifiers and disease stage. The aim of the workshop was for participants to contribute their expertise to development of the structure and contents of the SCD ontology. Here we describe the proceedings of the Sickle Cell Disease Ontology Workshop held in Cape Town South Africa in February 2016 and its outcomes. The objective of the workshop was to bring together experts in SCD from around the world to contribute their expertise to the development of various aspects of the SCD ontology. PMID:27354937

  17. Development and Evaluation of an Ontology for Guiding Appropriate Antibiotic Prescribing

    PubMed Central

    Furuya, E. Yoko; Kuperman, Gilad J.; Cimino, James J.; Bakken, Suzanne

    2011-01-01

    Objectives To develop and apply formal ontology creation methods to the domain of antimicrobial prescribing and to formally evaluate the resulting ontology through intrinsic and extrinsic evaluation studies. Methods We extended existing ontology development methods to create the ontology and implemented the ontology using Protégé-OWL. Correctness of the ontology was assessed using a set of ontology design principles and domain expert review via the laddering technique. We created three artifacts to support the extrinsic evaluation (set of prescribing rules, alerts and an ontology-driven alert module, and a patient database) and evaluated the usefulness of the ontology for performing knowledge management tasks to maintain the ontology and for generating alerts to guide antibiotic prescribing. Results The ontology includes 199 classes, 10 properties, and 1,636 description logic restrictions. Twenty-three Semantic Web Rule Language rules were written to generate three prescribing alerts: 1) antibiotic-microorganism mismatch alert; 2) medication-allergy alert; and 3) non-recommended empiric antibiotic therapy alert. The evaluation studies confirmed the correctness of the ontology, usefulness of the ontology for representing and maintaining antimicrobial treatment knowledge rules, and usefulness of the ontology for generating alerts to provide feedback to clinicians during antibiotic prescribing. Conclusions This study contributes to the understanding of ontology development and evaluation methods and addresses one knowledge gap related to using ontologies as a clinical decision support system component—a need for formal ontology evaluation methods to measure their quality from the perspective of their intrinsic characteristics and their usefulness for specific tasks. PMID:22019377
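
    The rules in the study are encoded in SWRL over the OWL ontology; purely to illustrate the shape of one such check, the antibiotic-microorganism mismatch alert, the sketch below uses plain Python with an invented coverage table rather than the ontology itself.

      # Illustrative sketch of an antibiotic-microorganism mismatch check.
      # The coverage table is invented; the study encodes this knowledge as
      # SWRL rules over an OWL ontology rather than as a Python dictionary.
      from typing import Optional

      COVERAGE = {
          "vancomycin": {"staphylococcus aureus", "enterococcus faecalis"},
          "ceftriaxone": {"escherichia coli", "streptococcus pneumoniae"},
      }

      def mismatch_alert(antibiotic: str, organism: str) -> Optional[str]:
          """Return an alert message if the antibiotic is not expected to cover the organism."""
          covered = COVERAGE.get(antibiotic.lower(), set())
          if organism.lower() not in covered:
              return ("Alert: " + antibiotic + " is not expected to cover "
                      + organism + "; consider alternative therapy.")
          return None

      print(mismatch_alert("ceftriaxone", "Enterococcus faecalis"))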

  18. Describing the Breakbone Fever: IDODEN, an Ontology for Dengue Fever

    PubMed Central

    Mitraka, Elvira; Topalis, Pantelis; Dritsou, Vicky; Dialynas, Emmanuel; Louis, Christos

    2015-01-01

    Background Ontologies represent powerful tools in information technology because they enhance interoperability and facilitate, among other things, the construction of optimized search engines. To address the need to expand the toolbox available for the control and prevention of vector-borne diseases we embarked on the construction of specific ontologies. We present here IDODEN, an ontology that describes dengue fever, one of the globally most important diseases that are transmitted by mosquitoes. Methodology/Principal Findings We constructed IDODEN using open source software, and modeled it on IDOMAL, the malaria ontology developed previously. IDODEN covers all aspects of dengue fever, such as disease biology, epidemiology and clinical features. Moreover, it covers all facets of dengue entomology. IDODEN, which is freely available, can now be used for the annotation of dengue-related data and, in addition to its use for modeling, it can be utilized for the construction of other dedicated IT tools such as decision support systems. Conclusions/Significance The availability of the dengue ontology will enable databases hosting dengue-associated data and decision-support systems for that disease to perform most efficiently and to link their own data to those stored in other independent repositories, in an architecture- and software-independent manner. PMID:25646954

  19. The Plant Ontology: A Tool for Plant Genomics.

    PubMed

    Cooper, Laurel; Jaiswal, Pankaj

    2016-01-01

    The use of controlled, structured vocabularies (ontologies) has become a critical tool for scientists in the post-genomic era of massive datasets. Adoption and integration of common vocabularies and annotation practices enables cross-species comparative analyses and increases data sharing and reusability. The Plant Ontology (PO; http://www.plantontology.org/ ) describes plant anatomy, morphology, and the stages of plant development, and offers a database of plant genomics annotations associated to the PO terms. The scope of the PO has grown from its original design covering only rice, maize, and Arabidopsis, and now includes terms to describe all green plants from angiosperms to green algae.This chapter introduces how the PO and other related ontologies are constructed and organized, including languages and software used for ontology development, and provides an overview of the key features. Detailed instructions illustrate how to search and browse the PO database and access the associated annotation data. Users are encouraged to provide input on the ontology through the online term request form and contribute datasets for integration in the PO database. PMID:26519402

  20. Threat assessment using visual hierarchy and conceptual firearms ontology

    NASA Astrophysics Data System (ADS)

    Arslan, Abdullah N.; Hempelmann, Christian F.; Attardo, Salvatore; Blount, Grady Price; Sirakov, Nikolay Metodiev

    2015-05-01

    This work continues earlier research that established and explored the links between a visual hierarchy and a conceptual ontology of firearms for the purpose of threat assessment. The previous study used geometrical information to find a target in the visual hierarchy and, through the links with the conceptual ontology, to derive high-level information that was used to assess a potential threat. Multiple improvements and new contributions are reported. The theoretical basis of the geometric feature extraction method was improved in terms of accuracy. The sample space used for validation is expanded from 31 to 153 firearms. Thus, a new, larger and more accurate sequence of visual hierarchies was generated using a modified Gonzalez' clustering algorithm. The conceptual ontology is elaborated as well, and more links were created between the two kinds of hierarchies (visual and conceptual). The threat assessment equation is refined around ammunition-related properties and uses high-level information from the conceptual hierarchy. The experiments performed on weapon identification and threat assessment showed that our system recognized 100% of the cases in which a weapon already belongs to the ontology and, in 90.8% of the cases, determined the correct third ancestor (level concept) when the weapon is unknown to the ontology. To validate the accuracy of identification for a very large data set, we calculated intervals of confidence for our system.

  1. Enabling Enrichment Analysis with the Human Disease Ontology

    PubMed Central

    LePendu, Paea; Musen, Mark A.; Shah, Nigam H.

    2012-01-01

    Advanced statistical methods used to analyze high-throughput data such as gene-expression assays result in long lists of “significant genes.” One way to gain insight into the significance of altered expression levels is to determine whether Gene Ontology (GO) terms associated with a particular biological process, molecular function, or cellular component are over- or under-represented in the set of genes deemed significant. This process, referred to as enrichment analysis, profiles a gene-set, and is widely used to make sense of the results of high-throughput experiments. Our goal is to develop and apply general enrichment analysis methods to profile other sets of interest, such as patient cohorts from the electronic medical record, using a variety of ontologies including SNOMED CT, MedDRA, RxNorm, and others. Although it is possible to perform enrichment analysis using ontologies other than the GO, a key pre-requisite is the availability of a background set of annotations to enable the enrichment calculation. In the case of the GO, this background set is provided by the Gene Ontology Annotations. In the current work, we describe: (i) a general method that uses hand-curated GO annotations as a starting point for creating background datasets for enrichment analysis using other ontologies; and (ii) a gene–disease background annotation set—that enables disease-based enrichment—to demonstrate feasibility of our method. PMID:21550421
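
    As a concrete illustration of the enrichment calculation described above, the following minimal sketch tests whether an annotation term is over-represented in a study set relative to a background annotation set, using a one-sided Fisher's exact test from SciPy. The function and argument names are our own illustration, not the authors' implementation.

        from scipy.stats import fisher_exact

        def enrichment_pvalue(study_set, background, annotated):
            """One-sided test: is the annotated class over-represented in study_set?"""
            study_set, background, annotated = map(set, (study_set, background, annotated))
            a = len(study_set & annotated)                 # significant and annotated
            b = len(study_set - annotated)                 # significant, not annotated
            c = len((background - study_set) & annotated)  # not significant, annotated
            d = len((background - study_set) - annotated)  # neither
            _, p = fisher_exact([[a, b], [c, d]], alternative="greater")
            return p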

  2. Advancing Science through Mining Libraries, Ontologies, and Communities*

    PubMed Central

    Evans, James A.; Rzhetsky, Andrey

    2011-01-01

    Life scientists today cannot hope to read everything relevant to their research. Emerging text-mining tools can help by identifying topics and distilling statements from books and articles with increased accuracy. Researchers often organize these statements into ontologies, consistent systems of reality claims. Like scientific thinking and interchange, however, text-mined information (even when accurately captured) is complex, redundant, sometimes incoherent, and often contradictory: it is rooted in a mixture of only partially consistent ontologies. We review work that models scientific reason and suggest how computational reasoning across ontologies and the broader distribution of textual statements can assess the certainty of statements and the process by which statements become certain. With the emergence of digitized data regarding networks of scientific authorship, institutions, and resources, we explore the possibility of accounting for social dependences and cultural biases in reasoning models. Computational reasoning is starting to fill out ontologies and flag internal inconsistencies in several areas of bioscience. In the not too distant future, scientists may be able to use statements and rich models of the processes that produced them to identify underexplored areas, resurrect forgotten findings and ideas, deconvolute the spaghetti of underlying ontologies, and synthesize novel knowledge and hypotheses. PMID:21566119

  3. Advancing science through mining libraries, ontologies, and communities.

    PubMed

    Evans, James A; Rzhetsky, Andrey

    2011-07-01

    Life scientists today cannot hope to read everything relevant to their research. Emerging text-mining tools can help by identifying topics and distilling statements from books and articles with increased accuracy. Researchers often organize these statements into ontologies, consistent systems of reality claims. Like scientific thinking and interchange, however, text-mined information (even when accurately captured) is complex, redundant, sometimes incoherent, and often contradictory: it is rooted in a mixture of only partially consistent ontologies. We review work that models scientific reason and suggest how computational reasoning across ontologies and the broader distribution of textual statements can assess the certainty of statements and the process by which statements become certain. With the emergence of digitized data regarding networks of scientific authorship, institutions, and resources, we explore the possibility of accounting for social dependences and cultural biases in reasoning models. Computational reasoning is starting to fill out ontologies and flag internal inconsistencies in several areas of bioscience. In the not too distant future, scientists may be able to use statements and rich models of the processes that produced them to identify underexplored areas, resurrect forgotten findings and ideas, deconvolute the spaghetti of underlying ontologies, and synthesize novel knowledge and hypotheses. PMID:21566119

  4. Ontology Driven Development and Science Information System Interoperability

    NASA Astrophysics Data System (ADS)

    Hughes, J. S.; Crichton, D. J.; Joyner, R. S.; Rye, E. D.; Pds4 Data Standards Team Leads

    2010-12-01

    A domain ontology can be used to drive the development of a science information system and enable system interoperability and science data correlation. A domain ontology defines the data structures, the metadata for the science interpretation of the data, and the metadata that describes the context within which the data was captured, processed, and archived. In addition, the ontology defines the organization of the data and their relationships. These definitions can be used to configure a registry-based information system from generic system components, generate schemas for data labeling and validation, and write standards documents for a variety of audiences. The resulting information system catalogs and tracks ingested data and allows the periodic harvesting of the registered metadata for sophisticated web-based search applications. An independent ontology and the data-driven paradigm also allow the domain’s information model to evolve independently of the system’s infrastructure. The Planetary Data System (PDS) is executing a plan to move the PDS to a fully online, federated system. This plan addresses new demands on the system, including increasing data volumes, greater data complexity, and a growing number of missions. This poster provides an overview of the planetary science ontology and the data-driven paradigm being used to develop the PDS 2010 information system.

  5. The NIFSTD and BIRNLex vocabularies: building comprehensive ontologies for neuroscience.

    PubMed

    Bug, William J; Ascoli, Giorgio A; Grethe, Jeffrey S; Gupta, Amarnath; Fennema-Notestine, Christine; Laird, Angela R; Larson, Stephen D; Rubin, Daniel; Shepherd, Gordon M; Turner, Jessica A; Martone, Maryann E

    2008-09-01

    A critical component of the Neuroscience Information Framework (NIF) project is a consistent, flexible terminology for describing and retrieving neuroscience-relevant resources. Although the original NIF specification called for a loosely structured controlled vocabulary for describing neuroscience resources, as the NIF system evolved, the requirement for a formally structured ontology for neuroscience with sufficient granularity to describe and access a diverse collection of information became obvious. This requirement led to the NIF standardized (NIFSTD) ontology, a comprehensive collection of common neuroscience domain terminologies woven into an ontologically consistent, unified representation of the biomedical domains typically used to describe neuroscience data (e.g., anatomy, cell types, techniques), as well as digital resources (tools, databases) being created throughout the neuroscience community. NIFSTD builds upon a structure established by the BIRNLex, a lexicon of concepts covering clinical neuroimaging research developed by the Biomedical Informatics Research Network (BIRN) project. Each distinct domain module is represented using the Web Ontology Language (OWL). As much as has been practical, NIFSTD reuses existing community ontologies that cover the required biomedical domains, building the more specific concepts required to annotate NIF resources. By following this principle, an extensive vocabulary was assembled in a relatively short period of time for NIF information annotation, organization, and retrieval, in a form that promotes easy extension and modification. We report here on the structure of the NIFSTD, and its predecessor BIRNLex, the principles followed in its construction and provide examples of its use within NIF. PMID:18975148

  6. A Unified Framework for Biomedical Terminologies and Ontologies

    PubMed Central

    Ceusters, Werner; Smith, Barry

    2011-01-01

    The goal of the OBO (Open Biomedical Ontologies) Foundry initiative is to create and maintain an evolving collection of non-overlapping interoperable ontologies that will offer unambiguous representations of the types of entities in biological and biomedical reality. These ontologies are designed to serve non-redundant annotation of data and scientific text. To achieve these ends, the Foundry imposes strict requirements upon the ontologies eligible for inclusion. While these requirements are not met by most existing biomedical terminologies, the latter may nonetheless support the Foundry’s goal of consistent and non-redundant annotation if appropriate mappings of data annotated with their aid can be achieved. To construct such mappings in reliable fashion, however, it is necessary to analyze terminological resources from an ontologically realistic perspective in such a way as to identify the exact import of the ‘concepts’ and associated terms which they contain. We propose a framework for such analysis that is designed to maximize the degree to which legacy terminologies and the data coded with their aid can be successfully used for information-driven clinical and translational research. PMID:20841844

  7. An integrated pharmacokinetics ontology and corpus for text mining

    PubMed Central

    2013-01-01

    Background Drug pharmacokinetics parameters, drug interaction parameters, and pharmacogenetics data have been unevenly collected in different databases and published extensively in the literature. Without an appropriate pharmacokinetics ontology and a well annotated pharmacokinetics corpus, it will be difficult to develop text mining tools for pharmacokinetics data collection from the literature and pharmacokinetics data integration from multiple databases. Description A comprehensive pharmacokinetics ontology was constructed. It can annotate all aspects of in vitro pharmacokinetics experiments and in vivo pharmacokinetics studies. It covers all drug-metabolizing enzymes and transporters. Using our pharmacokinetics ontology, a PK-corpus was constructed to represent four classes of pharmacokinetics abstracts: in vivo pharmacokinetics studies, in vivo pharmacogenetic studies, in vivo drug interaction studies, and in vitro drug interaction studies. A novel hierarchical three-level annotation scheme was proposed and implemented to tag key terms, drug interaction sentences, and drug interaction pairs. The utility of the pharmacokinetics ontology was demonstrated by annotating three pharmacokinetics studies, and the utility of the PK-corpus was demonstrated by a drug interaction extraction text mining analysis. Conclusions The pharmacokinetics ontology annotates both in vitro pharmacokinetics experiments and in vivo pharmacokinetics studies. The PK-corpus is a highly valuable resource for the text mining of pharmacokinetics parameters and drug interactions. PMID:23374886

  8. A Separable, Dynamically Local Ontological Model of Quantum Mechanics

    NASA Astrophysics Data System (ADS)

    Pienaar, Jacques

    2016-01-01

    A model of reality is called separable if the state of a composite system is equal to the union of the states of its parts, located in different regions of space. Spekkens has argued that it is trivial to reproduce the predictions of quantum mechanics using a separable ontological model, provided one allows for arbitrary violations of `dynamical locality'. However, since dynamical locality is strictly weaker than local causality, this leaves open the question of whether an ontological model for quantum mechanics can be both separable and dynamically local. We answer this question in the affirmative, using an ontological model based on previous work by Deutsch and Hayden. Although the original formulation of the model avoids Bell's theorem by denying that measurements result in single, definite outcomes, we show that the model can alternatively be cast in the framework of ontological models, where Bell's theorem does apply. We find that the resulting model violates local causality, but satisfies both separability and dynamical locality, making it a candidate for the `most local' ontological model of quantum mechanics.

  9. Data Ontology and an Information System Realization for Web-Based Management of Image Measurements

    PubMed Central

    Prodanov, Dimiter

    2011-01-01

    Image acquisition, processing, and quantification of objects (morphometry) require the integration of data inputs and outputs originating from heterogeneous sources. Managing the data exchange along this workflow in a systematic manner poses several challenges, notably the description of the heterogeneous metadata and the interoperability between the software tools used. The use of integrated software solutions for morphometry and management of imaging data, in combination with ontologies, can reduce metadata loss and greatly facilitate subsequent data analysis. This paper presents an integrated information system called LabIS. The system has two objectives: (i) to automate the storage, annotation, and querying of image measurements and (ii) to provide means for data sharing with third-party applications that consume measurement data using open-standard communication protocols. LabIS implements a 3-tier architecture with a relational database back-end, an application-logic middle tier realizing a web-based user interface for reporting and annotation, and a web-service communication layer. The image processing and morphometry functionality is backed by interoperability with ImageJ, a public-domain image processing program, via integrated clients. Instrumental for the latter was the construction of a data ontology representing the common measurement data model. LabIS supports user profiling and can store arbitrary types of measurements, regions of interest, calibrations, and ImageJ settings. Interpretation of the stored measurements is facilitated by atlas mapping and ontology-based markup. The system can be used as an experimental workflow management tool allowing for description and reporting of the performed experiments. LabIS can also be used as a measurement repository that can be transparently accessed by computational environments such as Matlab. Finally, the system can be used as a data sharing tool. PMID:22275893

  10. The EMBRACE web service collection

    PubMed Central

    Pettifer, Steve; Ison, Jon; Kalaš, Matúš; Thorne, Dave; McDermott, Philip; Jonassen, Inge; Liaquat, Ali; Fernández, José M.; Rodriguez, Jose M.; Partners, INB-; Pisano, David G.; Blanchet, Christophe; Uludag, Mahmut; Rice, Peter; Bartaseviciute, Edita; Rapacki, Kristoffer; Hekkelman, Maarten; Sand, Olivier; Stockinger, Heinz; Clegg, Andrew B.; Bongcam-Rudloff, Erik; Salzemann, Jean; Breton, Vincent; Attwood, Teresa K.; Cameron, Graham; Vriend, Gert

    2010-01-01

    The EMBRACE (European Model for Bioinformatics Research and Community Education) web service collection is the culmination of a 5-year project that set out to investigate issues involved in developing and deploying web services for use in the life sciences. The project concluded that, in order for web services to achieve widespread adoption, standards must be defined for the choice of web service technology and for semantically annotating both service function and the data exchanged, and a mechanism for discovering services must be provided. Building on this, the project developed: EDAM, an ontology for describing life science web services; BioXSD, a schema for exchanging data between services; and a centralized registry (http://www.embraceregistry.net) that collects together around 1000 services developed by the consortium partners. This article presents the current status of the collection and its associated recommendations and standards definitions. PMID:20462862

  11. DeMO: An Ontology for Discrete-event Modeling and Simulation

    PubMed Central

    Silver, Gregory A; Miller, John A; Hybinette, Maria; Baramidze, Gregory; York, William S

    2011-01-01

    Several fields have created ontologies for their subdomains. For example, the biological sciences have developed extensive ontologies such as the Gene Ontology, which is considered a great success. Ontologies could provide similar advantages to the Modeling and Simulation community. They provide a way to establish common vocabularies and capture knowledge about a particular domain with community-wide agreement. Ontologies can support significantly improved (semantic) search and browsing, integration of heterogeneous information sources, and improved knowledge discovery capabilities. This paper discusses the design and development of an ontology for Modeling and Simulation called the Discrete-event Modeling Ontology (DeMO), and it presents prototype applications that demonstrate various uses and benefits that such an ontology may provide to the Modeling and Simulation community. PMID:22919114

  12. OWL 2 learn profile: an ontology sublanguage for the learning domain.

    PubMed

    Heiyanthuduwage, Sudath R; Schwitter, Rolf; Orgun, Mehmet A

    2016-01-01

    Many experimental ontologies have been developed for the learning domain for use at different institutions. These ontologies include different OWL/OWL 2 (Web Ontology Language) constructors. However, it is not clear which OWL 2 constructors are the most appropriate ones for designing ontologies for the learning domain. It is possible that the constructors used in these learning domain ontologies match one of the three standard OWL 2 profiles (sublanguages). To investigate whether this is the case, we have analysed a corpus of 14 ontologies designed for the learning domain. We have also compared the constructors used in these ontologies with those of the OWL 2 RL profile, one of the OWL 2 standard profiles. The results of our analysis suggest that the OWL 2 constructors used in these ontologies do not exactly match the standard OWL 2 RL profile, but form a subset of that profile which we call OWL 2 Learn. PMID:27066328
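
    A rough way to reproduce this kind of constructor census is sketched below: it tallies how often terms from the OWL namespace appear as predicates or objects across a set of ontology files. This is only an approximation of the paper's analysis, assuming the ontologies are available locally in RDF/XML; the file names are hypothetical.

        from collections import Counter
        from rdflib import Graph
        from rdflib.namespace import OWL

        def owl_constructor_usage(paths):
            """Tally OWL-namespace terms used as predicates or objects."""
            counts = Counter()
            owl_ns = str(OWL)
            for path in paths:
                g = Graph()
                g.parse(path, format="xml")  # assumes RDF/XML serialization
                for _, p, o in g:
                    for term in (p, o):
                        if str(term).startswith(owl_ns):
                            counts["owl:" + str(term)[len(owl_ns):]] += 1
            return counts

        # e.g. owl_constructor_usage(["learning-onto-01.owl", "learning-onto-02.owl"])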

  13. Visualization and Ontology of Geospatial Intelligence

    NASA Astrophysics Data System (ADS)

    Chan, Yupo

    Recent events have deepened our conviction that many human endeavors are best described in a geospatial context. This is evidenced in the prevalence of location-based services, as afforded by the ubiquitous cell phone usage. It is also manifested by the popularity of such internet engines as Google Earth. As we commute to work, travel on business or pleasure, we make decisions based on the geospatial information provided by such location-based services. When corporations devise their business plans, they also rely heavily on such geospatial data. By definition, local, state and federal governments provide services according to geographic boundaries. One estimate suggests that 85 percent of data contain spatial attributes.

  14. Development of an Ontology to Recommend Exercises from Conceptual Maps.

    PubMed

    Ito, Márcia; Ciriaco Pereira, Débora Lina N

    2015-01-01

    The recommendation of exercise plans requires several variables to be considered (e.g., the patient's conditions and preferences), which are normally complex to analyze. To facilitate this analysis, we proposed the creation of an ontology to assist professionals in recommending exercises. We interviewed two experts, which resulted in an IDEF diagram and a conceptual map. The conceptual map proved to be the representation through which the experts gained more understanding, compared with the IDEF diagram. In addition, we used the conceptual map to validate the formal structure of the experts' ideas. From the conceptual map we created an ontology that is currently under review. After this, we plan to incorporate the ontology into a decision support system that will assist professionals in recommending exercises for their patients. PMID:26262394

  15. Matching Patient Records to Clinical Trials Using Ontologies

    NASA Astrophysics Data System (ADS)

    Patel, Chintan; Cimino, James; Dolby, Julian; Fokoue, Achille; Kalyanpur, Aditya; Kershenbaum, Aaron; Ma, Li; Schonberg, Edith; Srinivas, Kavitha

    This paper describes a large case study that explores the applicability of ontology reasoning to problems in the medical domain. We investigate whether it is possible to use such reasoning to automate common clinical tasks that are currently labor intensive and error prone, and focus our case study on improving cohort selection for clinical trials. An obstacle to automating such clinical tasks is the need to bridge the semantic gulf between raw patient data, such as laboratory tests or specific medications, and the way a clinician interprets this data. Our key insight is that matching patients to clinical trials can be formulated as a problem of semantic retrieval. We describe the technical challenges to building a realistic case study, which include problems related to scalability, the integration of large ontologies, and dealing with noisy, inconsistent data. Our solution is based on the SNOMED CT® ontology, and scales to one year of patient records (approx. 240,000 patients).

  16. Ontology-based federated data access to human studies information.

    PubMed

    Sim, Ida; Carini, Simona; Tu, Samson W; Detwiler, Landon T; Brinkley, James; Mollah, Shamim A; Burke, Karl; Lehmann, Harold P; Chakraborty, Swati; Wittkowski, Knut M; Pollock, Brad H; Johnson, Thomas M; Huser, Vojtech

    2012-01-01

    Human studies are one of the most valuable sources of knowledge in biomedical research, but data about their design and results are currently widely dispersed in siloed systems. Federation of these data is needed to facilitate large-scale data analysis to realize the goals of evidence-based medicine. The Human Studies Database project has developed an informatics infrastructure for federated query of human studies databases, using a generalizable approach to ontology-based data access. Our approach has three main components. First, the Ontology of Clinical Research (OCRe) provides the reference semantics. Second, a data model, automatically derived from OCRe into XSD, maintains semantic synchrony of the underlying representations while facilitating data acquisition using common XML technologies. Finally, the Query Integrator issues queries distributed over the data, OCRe, and other ontologies such as SNOMED in BioPortal. We report on a demonstration of this infrastructure on data acquired from institutional systems and from ClinicalTrials.gov. PMID:23304360

  17. Ontology-Based Federated Data Access to Human Studies Information

    PubMed Central

    Sim, Ida; Carini, Simona; Tu, Samson W.; Detwiler, Landon T.; Brinkley, James; Mollah, Shamim A.; Burke, Karl; Lehmann, Harold P.; Chakraborty, Swati; Wittkowski, Knut M.; Pollock, Brad H.; Johnson, Thomas M.; Huser, Vojtech

    2012-01-01

    Human studies are one of the most valuable sources of knowledge in biomedical research, but data about their design and results are currently widely dispersed in siloed systems. Federation of these data is needed to facilitate large-scale data analysis to realize the goals of evidence-based medicine. The Human Studies Database project has developed an informatics infrastructure for federated query of human studies databases, using a generalizable approach to ontology-based data access. Our approach has three main components. First, the Ontology of Clinical Research (OCRe) provides the reference semantics. Second, a data model, automatically derived from OCRe into XSD, maintains semantic synchrony of the underlying representations while facilitating data acquisition using common XML technologies. Finally, the Query Integrator issues queries distributed over the data, OCRe, and other ontologies such as SNOMED in BioPortal. We report on a demonstration of this infrastructure on data acquired from institutional systems and from ClinicalTrials.gov. PMID:23304360

  18. Ontological Labels for Automated Location of Anatomical Shape Differences

    PubMed Central

    Steinert-Threlkeld, Shane; Ardekani, Siamak; Mejino, Jose L.V.; Detwiler, Landon Todd; Brinkley, James F.; Halle, Michael; Kikinis, Ron; Winslow, Raimond L.; Miller, Michael I.; Ratnanather, J. Tilak

    2012-01-01

    A method for automated location of shape differences in diseased anatomical structures via high resolution biomedical atlases annotated with labels from formal ontologies is described. In particular, a high resolution magnetic resonance image of the myocardium of the human left ventricle was segmented and annotated with structural terms from an extracted subset of the Foundational Model of Anatomy ontology. The atlas was registered to the end systole template of a previous study of left ventricular remodeling in cardiomyopathy using a diffeomorphic registration algorithm. The previous study used thresholding and visual inspection to locate a region of statistical significance which distinguished patients with ischemic cardiomyopathy from those with nonischemic cardiomyopathy. Using semantic technologies and the deformed annotated atlas, this location was more precisely found. Although this study used only a cardiac atlas, it provides a proof-of-concept that ontologically labeled biomedical atlases of any anatomical structure can be used to automate location-based inferences. PMID:22490168

  19. OntoSoft: An Ontology for Capturing Scientific Software Metadata

    NASA Astrophysics Data System (ADS)

    Gil, Y.

    2015-12-01

    We have developed OntoSoft, an ontology to describe metadata for scientific software. The ontology is designed considering how scientists would approach the reuse and sharing of software. This includes supporting a scientist to: 1) identify software, 2) understand and assess software, 3) execute software, 4) get support for the software, 5) do research with the software, and 6) update the software. The ontology is available in OWL and contains more than fifty terms. We have used OntoSoft to structure the OntoSoft software registry for geosciences, and to develop user interfaces to capture its metadata. OntoSoft is part of the NSF EarthCube initiative and contributes to its vision of scientific knowledge sharing, in this case about scientific software.

  20. Intelligent E-Learning Systems: Automatic Construction of Ontologies

    NASA Astrophysics Data System (ADS)

    Peso, Jesús del; de Arriaga, Fernando

    2008-05-01

    During the last years, a new generation of Intelligent E-Learning Systems (ILS) has emerged with enhanced functionality due, mainly, to influences from Distributed Artificial Intelligence, the use of cognitive modelling, the extensive use of the Internet, and new educational ideas such as student-centered education and Knowledge Management. The automatic construction of ontologies provides a means of automatically updating the knowledge bases of the respective ILS and of increasing interoperability and communication among systems sharing the same ontology. The paper presents a new approach, able to produce ontologies from a small number of documents such as those obtained from the Internet, without the assistance of large corpora, by using simple syntactic rules and some semantic information. The method is independent of the natural language used. The use of a multi-agent system increases the flexibility and capability of the method. Although the method can easily be improved, the results obtained so far are promising.

  1. Grinder Variant System Design and Implementation Based on Ontology

    NASA Astrophysics Data System (ADS)

    Yang, G. H.; Zhang, T. P.

    To improve the efficiency of product design and knowledge reuse across heterogeneous systems, this paper introduces the concept of ontology into product variant design, using grinding machine design as an example. The extensive experience and accumulated knowledge of product design are thereby shared and reused. Ontology knowledge such as variant design features and parameters is formalized precisely, the software Protégé 4.3 is applied to construct the ontology model, and reasoning is run on the model data. The result is a complete intelligent variant design system for the product, which effectively addresses the problem of repeated design and greatly shortens the product development cycle.

  2. Ontology Development and Evolution in the Accident Investigation Domain

    NASA Technical Reports Server (NTRS)

    Carvalho, Robert; Berrios, Dan; Williams, James

    2004-01-01

    InvestigationOrganizer (IO) is a collaborative semantic web system designed to support the conduct of mishap investigations. IO provides a common repository for a wide range of mishap-related information, allowing investigators to integrate evidence, causal models, and investigation results. IO has been used to support investigations ranging from a small property damage case to the loss of the Space Shuttle Columbia. Through IO's use in these investigations, we have learned significant lessons about the application of ontologies and semantic systems to solving real-world problems. This paper will describe the development of the ontology within IO, from the initial development, through its growth in response to user requests during use in investigations, to the recent work that was done to control the results of that growth. This paper will also describe the lessons learned from this experience and how they may apply to the implementation of future ontologies and semantic systems.

  3. Mapping the entangled ontology of science teachers' lived experience

    NASA Astrophysics Data System (ADS)

    Daugbjerg, Peer S.; de Freitas, Elizabeth; Valero, Paola

    2015-09-01

    In this paper we investigate how the bodily activity of teaching, along with the embodied aspect of lived experience, relates to science teachers' ways of dealing with bodies as living organisms, which are both the subject matter and the site or vehicle of learning. More precisely, the following questions are pursued: (1) In what ways do primary science teachers refer to the lived and living body in teaching and learning? (2) In what ways do primary science teachers tap into past experiences in which the body figured prominently in order to teach students about living organisms? We draw on the relational ontology and intra-action of Karen Barad (J Women Cult Soc 28(3): 801, 2003) as she argues for a "relational ontology" that sees a relation as a dynamic, flowing entanglement of matter and meaning. We combine this with the materialist phenomenological studies of embodiment by SungWon Hwang and Wolff-Michael Roth (Scientific and mathematical bodies, Sense Publishers, Rotterdam, 2011), as they address how teachers and students are present in the classroom with/in their "living and lived bodies". Our aim is to use theoretical insights from these two different but complementary approaches to map the embodiment of teachers' experiences and actions. We build our understanding of experience on the work of John Dewey (Experience and education, Simon & Schuster, New York, 1938) and also Jean Clandinin and Michael Connelly (Handbook of qualitative research, Sage Publications, California, 2000), leading us to propose three dimensions: settings, relations and continuity. This means that bodies and settings are mutually entailed in the present relation, and furthermore that the past as well as the present of these bodies and settings—their continuity—is also part of the present relation. We analyse the entanglement of lived experience and embodied teaching using these three proposed dimensions of experience. Analysing interviews and observations of three Danish

  4. An Ontology for the Discovery of Time-series Data

    NASA Astrophysics Data System (ADS)

    Hooper, R. P.; Choi, Y.; Piasecki, M.; Zaslavsky, I.; Valentine, D. W.; Whitenack, T.

    2010-12-01

    An ontology was developed to enable a single-dimensional keyword search of time-series data collected at fixed points, such as stream gage records, water quality observations, or repeated biological measurements collected at fixed stations. The hierarchical levels were developed to allow navigation from general concepts to more specific ones, terminating in a leaf concept, which is the specific property measured. For example, the concept “nutrient” has child concepts of “nitrogen”, “phosphorus”, and “carbon”; each of these child concepts is then broken down into the actual constituent measured (e.g., “total kjeldahl nitrogen” or “nitrate + nitrite”). In this way, a non-expert user can find all nutrients containing nitrogen without knowing all the species measured, but an expert user can go immediately to the compound of interest. In addition, a property, such as dissolved silica, can appear as a leaf concept under nutrients or weathering products. This flexibility allows users from various disciplines to find properties of interest. The ontology can be viewed at http://water.sdsc.edu/hiscentral/startree.aspx. Properties measured by various data publishers (e.g., universities and government agencies) are tagged with leaf concepts from this ontology. A discovery client, HydroDesktop, creates a search request by defining the spatial and temporal extent of interest and a keyword taken from the discovery ontology. Metadata returned from the catalog describes the time series that meet the specified search criteria. This ontology is considered to be an initial description of physical, chemical and biological properties measured in water and suspended sediment. Future plans call for creating a moderated forum for the scientific community to add to and to modify this ontology. Further information for the Hydrologic Information Systems project, of which this is a part, is available at http://his.cuahsi.org.
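
    The general-to-specific navigation described above can be pictured with a small sketch: a toy fragment of the hierarchy (concept names follow the abstract's example; the structure is otherwise illustrative, not the published ontology) is expanded from a keyword down to the leaf concepts that tag actual time series.

        # Toy fragment of a discovery hierarchy; structure is illustrative only.
        NARROWER = {
            "nutrient": ["nitrogen", "phosphorus", "carbon"],
            "nitrogen": ["total kjeldahl nitrogen", "nitrate + nitrite"],
        }

        def leaf_concepts(concept, hierarchy=NARROWER):
            """Expand a keyword to every leaf (measured property) beneath it."""
            children = hierarchy.get(concept, [])
            if not children:
                return {concept}
            leaves = set()
            for child in children:
                leaves |= leaf_concepts(child, hierarchy)
            return leaves

        # A search for "nutrient" expands to all leaf properties before the
        # metadata catalog is queried.
        print(leaf_concepts("nutrient"))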

  5. The mouse pathology ontology, MPATH; structure and applications

    PubMed Central

    2013-01-01

    Background The capture and use of disease-related anatomic pathology data for both model organism phenotyping and human clinical practice requires a relatively simple nomenclature and coding system that can be integrated into data collection platforms (such as computerized medical record-keeping systems) to enable the pathologist to rapidly screen and accurately record observations. The MPATH ontology was originally constructed in 2000 by a committee of pathologists for the annotation of rodent histopathology images, but is now widely used for coding and analysis of disease and phenotype data for rodents, humans and zebrafish. Construction and content MPATH is divided into two main branches describing pathological processes and structures based on traditional histopathological principles. It does not aim to include definitive diagnoses, which would generally be regarded as disease concepts. It contains 888 core pathology terms in an almost exclusively is_a hierarchy nine layers deep. Currently, 86% of the terms have textual definitions and contain relationships as well as logical axioms to other ontologies such as the Gene Ontology. Application and utility MPATH was originally devised for the annotation of histopathological images from mice but is now being used much more widely in the recording of diagnostic and phenotypic data from both mice and humans, and in the construction of logical definitions for phenotype and disease ontologies. We discuss the use of MPATH to generate cross-products with qualifiers derived from a subset of the Phenotype and Trait Ontology (PATO) and its application to large-scale high-throughput phenotyping studies. MPATH provides a largely species-agnostic ontology for the descriptions of anatomic pathology, which can be applied to most amniotes and is now finding extensive use in species other than mice. It enables investigators to interrogate large datasets at a variety of depths, use semantic analysis to identify the relations between

  6. An ontological model of the practice transformation process.

    PubMed

    Sen, Arun; Sinha, Atish P

    2016-06-01

    Patient-centered medical home is defined as an approach for providing comprehensive primary care that facilitates partnerships between individual patients and their personal providers. The current state of the practice transformation process is ad hoc and no methodological basis exists for transforming a practice into a patient-centered medical home. Practices and hospitals somehow accomplish the transformation and send the transformation information to a certification agency, such as the National Committee for Quality Assurance, completely ignoring the development and maintenance of the processes that keep the medical home concept alive. Many recent studies point out that such a transformation is hard as it requires an ambitious whole-practice reengineering and redesign. As a result, the practices suffer change fatigue in getting the transformation done. In this paper, we focus on the complexities of the practice transformation process and present a robust ontological model for practice transformation. The objective of the model is to create an understanding of the practice transformation process in terms of key process areas and their activities. We describe how our ontology captures the knowledge of the practice transformation process, elicited from domain experts, and also discuss how, in the future, that knowledge could be diffused across stakeholders in a healthcare organization. Our research is the first effort in practice transformation process modeling. To build an ontological model for practice transformation, we adopt the Methontology approach. Based on the literature, we first identify the key process areas essential for a practice transformation process to achieve certification status. Next, we develop the practice transformation ontology by creating key activities and precedence relationships among the key process areas using process maturity concepts. At each step, we employ a panel of domain experts to verify the intermediate representations of the

  7. PragmatiX: An Interactive Tool for Visualizing the Creation Process Behind Collaboratively Engineered Ontologies.

    PubMed

    Walk, Simon; Pöschko, Jan; Strohmaier, Markus; Andrews, Keith; Tudorache, Tania; Noy, Natalya F; Nyulas, Csongor; Musen, Mark A

    2013-01-01

    With the emergence of tools for collaborative ontology engineering, more and more data about the creation process behind collaborative construction of ontologies is becoming available. Today, collaborative ontology engineering tools such as Collaborative Protégé offer rich and structured logs of changes, thereby opening up new challenges and opportunities to study and analyze the creation of collaboratively constructed ontologies. While there exists a plethora of visualization tools for ontologies, they have primarily been built to visualize aspects of the final product (the ontology) and not the collaborative processes behind its construction (e.g. the changes made by contributors over time). To the best of our knowledge, there exists no ontology visualization tool today that focuses primarily on visualizing the history behind collaboratively constructed ontologies. Since the ontology engineering process can influence the quality of the final ontology, we believe that visualizing process data represents an important stepping-stone towards better understanding and managing the collaborative construction of ontologies in the future. In this application paper, we present a tool - PragmatiX - which taps into structured change logs provided by tools such as Collaborative Protégé to visualize various pragmatic aspects of collaborative ontology engineering. The tool is aimed at managers and leaders of collaborative ontology engineering projects to help them in monitoring progress, in exploring issues and problems, and in tracking quality-related issues such as overrides and coordination among contributors. The paper makes the following contributions: (i) we present PragmatiX, a tool for visualizing the creation process behind collaboratively constructed ontologies; (ii) we illustrate the functionality and generality of the tool by applying it to structured logs of changes of two large collaborative ontology-engineering projects; and (iii) we conduct a heuristic evaluation

  8. Measuring the level of activity in community built bio-ontologies.

    PubMed

    Malone, James; Stevens, Robert

    2013-02-01

    In this paper we explore the measurement of activity in ontology projects as an aspect of community ontology building. When choosing whether to use an ontology or whether to participate in its development, having some knowledge of how actively that ontology is developed is an important issue. Our knowledge of biology grows and changes, and an ontology must adapt to keep pace with those changes and also adapt with respect to other ontologies and organisational principles. In essence, we need to know whether there is an 'active' community involved with a project or whether a given ontology is inactive or moribund. We explore the use of additions, deletions and changes to ontology files, the regularity and frequency of releases, and the number of ontology repository updates as the basis for measuring activity in an ontology. We present the results of this study, which show a dramatic range of activity across some of the more prominent community ontologies, illustrating very active and mature efforts through to those which appear to have become dormant for a number of possible reasons. We show that global activity within the community has remained at a similar level over the last 2 years. Measuring additions, deletions and changes, together with release frequency, appears to provide useful metrics of activity and useful pointers towards future behaviour. Measuring who is making edits to ontologies is harder to capture; this raises issues of record keeping in ontology projects and of micro-credit, although we have identified one ontologist who appears influential across many community efforts: a Super-Ontologist. We also discuss confounding factors in our activity metric and discuss how it can be improved and adopted as an assessment criterion for community ontology development. Overall, we show that it is possible to objectively measure the activity in an ontology and to make some prediction about future activity. PMID:22554701
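
    The activity metrics discussed above lend themselves to a very simple computation once per-release edit counts have been extracted from a repository; the sketch below is our own illustration with hypothetical numbers, not the authors' measurement code.

        from datetime import date

        # Hypothetical per-release edit counts for one ontology project.
        releases = [
            {"date": date(2011, 1, 15), "added": 120, "deleted": 8, "changed": 40},
            {"date": date(2011, 6, 2), "added": 35, "deleted": 2, "changed": 110},
            {"date": date(2012, 3, 20), "added": 5, "deleted": 0, "changed": 12},
        ]

        def activity_summary(releases):
            """Total edits, release frequency and a naive active/dormant flag."""
            edits = sum(r["added"] + r["deleted"] + r["changed"] for r in releases)
            span_days = (releases[-1]["date"] - releases[0]["date"]).days or 1
            releases_per_year = 365.0 * (len(releases) - 1) / span_days
            return {
                "total_edits": edits,
                "releases_per_year": round(releases_per_year, 1),
                "active": releases_per_year >= 1 and edits > 0,
            }

        print(activity_summary(releases))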

  9. Towards Ontology as Knowledge Representation for Intellectual Capital Measurement

    NASA Astrophysics Data System (ADS)

    Zadjabbari, B.; Wongthongtham, P.; Dillon, T. S.

    For many years, physical asset indicators were the main evidence of an organization’s successful performance. However, the situation changed with the information technology revolution and the knowledge-based economy. Since the 1980s, business performance has no longer been limited to physical assets; instead, intellectual capital has increasingly played a major role. In this paper, we utilize an ontology as a tool for knowledge representation in the domain of intellectual capital measurement. The ontology classifies ways of measuring intangible capital.

  10. Comparing Drools and ontology reasoning approaches for telecardiology decision support.

    PubMed

    Van Hille, Pascal; Jacques, Julie; Taillard, Julien; Rosier, Arnaud; Delerue, David; Burgun, Anita; Dameron, Olivier

    2012-01-01

    Implantable cardioverter defibrillators can generate numerous alerts. Automatically classifying these alerts according to their severity hinges on the CHA2DS2VASc score. It requires some reasoning capabilities for interpreting the patient's data. We compared two approaches for implementing the reasoning module. One is based on the Drools engine, and the other is based on semantic web formalisms. Both were valid approaches with correct performances. For a broader domain, their limitations are the number and complexity of Drools rules and the performances of ontology-based reasoning, which suggests using the ontology for automatically generating a part of the Drools rules. PMID:22874200
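
    The CHA2DS2-VASc score that the alert classification hinges on is a standard, easily computed rule; the sketch below shows the scoring itself in plain Python. The patient-record field names and the severity cut-off are our own illustration and do not reproduce the paper's Drools rules or its ontology-based reasoning.

        def cha2ds2_vasc(patient):
            """Standard CHA2DS2-VASc stroke-risk score from simple record fields."""
            score = 0
            score += 1 if patient["congestive_heart_failure"] else 0
            score += 1 if patient["hypertension"] else 0
            score += 2 if patient["age"] >= 75 else (1 if patient["age"] >= 65 else 0)
            score += 1 if patient["diabetes"] else 0
            score += 2 if patient["stroke_or_tia_history"] else 0
            score += 1 if patient["vascular_disease"] else 0
            score += 1 if patient["sex"] == "F" else 0
            return score

        example = {
            "congestive_heart_failure": False, "hypertension": True, "age": 78,
            "diabetes": False, "stroke_or_tia_history": True,
            "vascular_disease": False, "sex": "F",
        }
        # The >= 2 cut-off is purely illustrative, not the paper's alert policy.
        alert_severity = "high" if cha2ds2_vasc(example) >= 2 else "low"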

  11. A Simulation Model Articulation of the REA Ontology

    NASA Astrophysics Data System (ADS)

    Laurier, Wim; Poels, Geert

    This paper demonstrates how the REA enterprise ontology can be used to construct simulation models for business processes, value chains and collaboration spaces in supply chains. These models support various high-level and operational management simulation applications, e.g. the analysis of enterprise sustainability and day-to-day planning. First, the basic constructs of the REA ontology and the ExSpect modelling language for simulation are introduced. Second, collaboration space, value chain and business process models and their conceptual dependencies are shown, using the ExSpect language. Third, an exhibit demonstrates the use of value chain models in predicting the financial performance of an enterprise.

  12. ODISEES: Ontology-Driven Interactive Search Environment for Earth Sciences

    NASA Technical Reports Server (NTRS)

    Rutherford, Matthew T.; Huffer, Elisabeth B.; Kusterer, John M.; Quam, Brandi M.

    2015-01-01

    This paper discusses the Ontology-driven Interactive Search Environment for Earth Sciences (ODISEES) project currently being developed to aid researchers attempting to find usable data among an overabundance of closely related data. ODISEES' ontological structure relies on a modular, adaptable concept modeling approach, which allows the domain to be modeled more or less as it is without worrying about terminology or external requirements. In the model, variables are individually assigned semantic content based on the characteristics of the measurements they represent, allowing intuitive discovery and comparison of data without requiring the user to sift through large numbers of data sets and variables to find the desired information.

  13. Effects of student ontological position on cognition of human origins

    NASA Astrophysics Data System (ADS)

    Ervin, Jeremy Alan

    In this study, the narratives from a hermeneutical dialectic cycle of three high school students were analyzed to understand the influences of ontological position on the learning of human origins. The interpretation of the narratives provides the reader an opportunity to consider the learning process from the perspective of worldview and conceptual change theories. Questions guiding this research include: Within a context of a worldview, what is the range of ontological positions among a high school AP biology class? To what extent does ontological position influence the learning of scientific concepts about human origins? If a student's ontological position is contradictory to scientific explanation of human origins, how will learning strategies and motivations change? All consenting students in an AP biology class were interviewed in order to select three students who represented three different ontological positions of a worldview: No Supernatural, Supernatural Without Impact, or Supernatural Impact. The issue of worldview is addressed at length in this work. Consenting students had completed the graduation requirements in biology, but were taking an additional biology course in preparation for college. Enrollment in an AP biology course was assumed to indicate that the selected students have an understanding of the concept of human origins at a comprehensive level, but not necessarily at an apprehension level, both being needed for conceptual change. Examination of the narratives reveals that students may alternate between two ontological positions in order to account for inconsistencies within a situation. This relativity enables the range of ontological positions to vary depending on concepts being considered. Not all Supernatural Impact positions conflict with biological understanding of human origins due to the ability of some to create a dichotomy between religion and school. Any comprehended concepts within this dichotomy lead to plagiaristic knowledge

  14. Taking Uptaking up, or, a Deconstructionist "Ontology of Difference" and a Developmental One

    ERIC Educational Resources Information Center

    Kellogg, David

    2009-01-01

    Not too long ago, Wolff-Michael Roth suggested that this space might be made into a kind of open house. The author of this article wants to use Roth's suggestion to take up his own intriguing editorial on the ontology of difference. The author wants to show that the ontology of difference is nonidentical with the ontology of difference: It can be…

  15. An Approach to Folksonomy-Based Ontology Maintenance for Learning Environments

    ERIC Educational Resources Information Center

    Gasevic, D.; Zouaq, Amal; Torniai, Carlo; Jovanovic, J.; Hatala, Marek

    2011-01-01

    Recent research in learning technologies has demonstrated many promising contributions from the use of ontologies and semantic web technologies for the development of advanced learning environments. In spite of those benefits, ontology development and maintenance remain the key research challenges to be solved before ontology-enhanced learning…

  16. Pedagogically-Driven Ontology Network for Conceptualizing the e-Learning Assessment Domain

    ERIC Educational Resources Information Center

    Romero, Lucila; North, Matthew; Gutiérrez, Milagros; Caliusco, Laura

    2015-01-01

    The use of ontologies as tools to guide the generation, organization and personalization of e-learning content, including e-assessment, has drawn attention of the researchers because ontologies can represent the knowledge of a given domain and researchers use the ontology to reason about it. Although the use of these semantic technologies tends to…

  17. Ontologies for Effective Use of Context in E-Learning Settings

    ERIC Educational Resources Information Center

    Jovanovic, Jelena; Gasevic, Dragan; Knight, Colin; Richards, Griff

    2007-01-01

    This paper presents an ontology-based framework aimed at explicit representation of context-specific metadata derived from the actual usage of learning objects and learning designs. The core part of the proposed framework is a learning object context ontology, that leverages a range of other kinds of learning ontologies (e.g., user modeling…

  18. Using Ontological Engineering to Overcome AI-ED Problems: Contribution, Impact and Perspectives

    ERIC Educational Resources Information Center

    Mizoguchi, Riichiro; Bourdeau, Jacqueline

    2016-01-01

    This article reflects on the ontology engineering methodology discussed by the paper entitled "Using Ontological Engineering to Overcome AI-ED Problems" published in this journal in 2000. We discuss the achievements obtained in the last 10 years, the impact of our work as well as recent trends and perspectives in ontology engineering for…

  19. Re-use of standard ontologies in a water quality vocabulary (Invited)

    NASA Astrophysics Data System (ADS)

    Cox, S. J.; Simons, B.; Yu, J.

    2013-12-01

    Observations provide the key constraints on environmental and earth science investigations. Where an investigation uses data sourced from multiple providers, data fusion depends on the observation classifications being comparable. Standard models for observation metadata are available (ISO 19156) which provide slots for key classifiers, in particular, the observed property and observation procedure. While universal use of common vocabularies might be desirable in achieving interoperability, this is unlikely in practice. However, semantic web vocabularies provide the means for asserting proximity and other relationships between items in different vocabularies, thus enabling mediation as an interoperability solution. Here we report on the development of a vocabulary for water quality observations in which recording relationships with existing vocabularies was a core strategy. The vocabulary is required to enable combination of a number of groundwater, surface water and marine water quality datasets on an ongoing basis. Our vocabulary model is based on the principle that observations generally report values of specific parameters which are defined by combining a number of facets. We start from Quantities, Units, Dimensions and Data Types (QUDT), which is an OWL ontology developed by NASA and TopQuadrant. We extend this with two additional classes, for Observed Property and Identified Object, and two linking properties, which enable us to create an observed property vocabulary for water quality applications. This ontology is comparable with models for observed properties developed as part of OGC's Observations and Measurements v1.0 standard, the INSPIRE Generic Conceptual Model, and may also be compared with the W3C SSN Ontology, which is based on the DOLCE Ultralite upper-ontology. Water quality observations commonly report concentrations of chemicals, both natural and contaminant, so we tie many of the Identified Objects to items from Chemical Entities of Biological Interest (ChEBI).
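
    The facet-composition idea described above (an observed property built from a re-used quantity kind plus an identified chemical object) can be sketched with rdflib as follows. The example namespace, property names, and the particular QUDT and ChEBI identifiers are illustrative assumptions, not the published vocabulary.

        from rdflib import Graph, Literal, Namespace, RDF, RDFS

        QUDT = Namespace("http://qudt.org/schema/qudt/")            # assumed QUDT base
        EX = Namespace("http://example.org/water-quality/")         # hypothetical vocabulary
        CHEBI = Namespace("http://purl.obolibrary.org/obo/CHEBI_")  # ChEBI URI pattern

        g = Graph()
        g.bind("qudt", QUDT)
        g.bind("ex", EX)

        # "Nitrate concentration" as an observed property composed from facets.
        prop = EX["NitrateConcentration"]
        g.add((prop, RDF.type, EX.ObservedProperty))
        g.add((prop, RDFS.label, Literal("nitrate concentration")))
        g.add((prop, EX.quantityKind, QUDT.Concentration))  # assumed QUDT term
        g.add((prop, EX.identifiedObject, CHEBI["17632"]))  # assumed ChEBI id for nitrate

        print(g.serialize(format="turtle"))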

  20. Ontology Language to Support Description of Experiment Control System Semantics, Collaborative Knowledge-Base Design and Ontology Reuse

    SciTech Connect

    Vardan Gyurjyan, D Abbott, G Heyes, E Jastrzembski, B Moffit, C Timmer, E Wolin

    2009-10-01

    In this paper we discuss the control-domain-specific ontology that is built on top of the domain-neutral Resource Description Framework (RDF). Specifically, we will discuss the relevant set of ontology concepts along with the relationships among them in order to describe experiment control components and generic event-based state machines. Control Oriented Ontology Language (COOL) is a meta-data modeling language that provides generic means for representation of physics experiment control processes and components, and their relationships, rules and axioms. It provides a semantic reference frame that is useful for automating the communication of information for configuration, deployment and operation. COOL has been successfully used to develop a complete and dynamic knowledge-base for experiment control systems, developed using the AFECS framework.
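
    The generic event-based state machines mentioned above can be pictured with a small RDF sketch. The namespace and term names below are hypothetical stand-ins for a COOL-like vocabulary; they are not the actual COOL terms.

        from rdflib import Graph, Namespace, RDF

        CTRL = Namespace("http://example.org/cool-sketch#")  # hypothetical vocabulary

        g = Graph()
        g.bind("ctrl", CTRL)

        # A minimal event-based state machine for one control component.
        daq = CTRL["DataAcquisitionComponent"]
        g.add((daq, RDF.type, CTRL.Component))
        for state in ("Configured", "Running", "Paused"):
            g.add((CTRL[state], RDF.type, CTRL.State))

        start = CTRL["StartRun"]
        g.add((start, RDF.type, CTRL.Transition))
        g.add((start, CTRL.triggeredBy, CTRL["StartEvent"]))
        g.add((start, CTRL.fromState, CTRL["Configured"]))
        g.add((start, CTRL.toState, CTRL["Running"]))
        g.add((daq, CTRL.hasTransition, start))

        print(g.serialize(format="turtle"))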