Science.gov

Sample records for ontology lookup service

  1. Simple Lookup Service

    Energy Science and Technology Software Center (ESTSC)

    2013-05-01

    Simple Lookup Service (sLS) is a REST/JSON-based lookup service that allows users to publish information in the form of key-value pairs and to search for the published information. The lookup service supports both pull and push models. This software can be used to create a distributed architecture/cloud.
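    The publish-then-search pattern described above can be sketched in a few lines. This is a minimal in-memory model of a key-value lookup service in the spirit of sLS, not its actual REST API; the class, method names and record fields are illustrative assumptions.

```python
class LookupService:
    """In-memory sketch of a key-value lookup service (illustrative only)."""

    def __init__(self):
        self._records = []

    def publish(self, record):
        """Register a record, i.e. a dict of key-value pairs."""
        self._records.append(dict(record))
        return len(self._records) - 1  # a simple record handle

    def search(self, **criteria):
        """Return all records whose pairs include every given criterion."""
        return [r for r in self._records
                if all(r.get(k) == v for k, v in criteria.items())]

service = LookupService()
service.publish({"type": "service", "name": "perfsonar", "host": "a.example.org"})
service.publish({"type": "host", "name": "a.example.org"})
matches = service.search(type="service")
```

    A real deployment would expose `publish` and `search` as REST endpoints exchanging JSON, and would distinguish the pull model (clients querying) from the push model (publishers renewing their registrations).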

  2. The ontology-based answers (OBA) service: a connector for embedded usage of ontologies in applications.

    PubMed

    Dönitz, Jürgen; Wingender, Edgar

    2012-01-01

    The semantic web depends on the use of ontologies to let electronic systems interpret contextual information. Optimally, the handling and access of ontologies should be completely transparent to the user. As a means to this end, we have developed a service that attempts to bridge the gap between experts in a certain knowledge domain, ontologists, and application developers. The ontology-based answers (OBA) service introduced here can be embedded into custom applications to grant access to the classes of ontologies and their relations, their most important structural features, as well as to information encoded in the relations between ontology classes. Thus computational biologists can benefit from ontologies without detailed knowledge of the respective ontology. The content of ontologies is mapped to a graph of connected objects which is compatible with the object-oriented programming style in Java. Semantic functions implement knowledge about the complex semantics of an ontology beyond the class hierarchy and "partOf" relations. By using these OBA functions an application can, for example, provide a semantic search function, or (in the examples outlined) map an anatomical structure to the organs it belongs to. The semantic functions relieve the application developer of the necessity of acquiring in-depth knowledge about the semantics and curation guidelines of the ontologies used, by implementing the required knowledge themselves. The architecture of the OBA service encapsulates the logic for processing ontologies in order to achieve a separation from the application logic. A public server with the current plugins is available and can be used with the provided connector in a custom application in scenarios analogous to the presented use cases. The server and the client are freely available if a project requires the use of custom plugins or non-public ontologies. The OBA service and further documentation are available at http://www.bioinf.med.uni-goettingen.de/projects/oba. PMID

  3. Research on e-learning services based on ontology theory

    NASA Astrophysics Data System (ADS)

    Liu, Rui

    2013-07-01

    E-learning services can realize network learning resource sharing and interoperability, but they cannot realize automatic discovery, implementation and integration of services. This paper proposes a framework for e-learning services based on ontology; ontology technology is applied to the publication and discovery of e-learning services in order to realize accurate and efficient retrieval and utilization of those services.

  4. The construction and practice of GIS ontology service mechanism

    NASA Astrophysics Data System (ADS)

    Yang, Kun; Wang, Jun; Peng, Shuang-yun; Cheng, Hong-ping

    2005-10-01

    With the development of Semantic Web technology, spatial information services based on ontology are an effective way to share and interoperate heterogeneous information resources in a distributed network environment. Based on a deep analysis of the spatial information service mechanism of geo-ontology, the system construction strategy and the service workflow, and combined with the current mainstream commercial GIS software packages, three system-construction solutions for spatial information sharing and interoperation are proposed in this paper. The different geographic information application systems distributed on the Internet may be integrated dynamically and openly by using one of the three solutions, realizing the sharing and interoperation of heterogeneous spatial information resources in a distributed environment. To realize practical applications of spatial information sharing and interoperation in different branches of the police system, a prototype system for crime case information sharing based on geo-ontology has also been developed using the methods described above.

  5. BioPortal: enhanced functionality via new Web services from the National Center for Biomedical Ontology to access and use ontologies in software applications

    PubMed Central

    Whetzel, Patricia L.; Noy, Natalya F.; Shah, Nigam H.; Alexander, Paul R.; Nyulas, Csongor; Tudorache, Tania; Musen, Mark A.

    2011-01-01

    The National Center for Biomedical Ontology (NCBO) is one of the National Centers for Biomedical Computing funded under the NIH Roadmap Initiative. Contributing to the national computing infrastructure, NCBO has developed BioPortal, a web portal that provides access to a library of biomedical ontologies and terminologies (http://bioportal.bioontology.org) via the NCBO Web services. BioPortal enables community participation in the evaluation and evolution of ontology content by providing features to add mappings between terms, to add comments linked to specific ontology terms and to provide ontology reviews. The NCBO Web services (http://www.bioontology.org/wiki/index.php/NCBO_REST_services) enable this functionality and provide a uniform mechanism to access ontologies from a variety of knowledge representation formats, such as Web Ontology Language (OWL) and Open Biological and Biomedical Ontologies (OBO) format. The Web services provide multi-layered access to the ontology content, from getting all terms in an ontology to retrieving metadata about a term. Users can easily incorporate the NCBO Web services into software applications to generate semantically aware applications and to facilitate structured data collection. PMID:21672956
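    The multi-layered access described above ultimately hands applications a term record in JSON. The sketch below shows the client-side step only: reducing a raw term record to the fields an application typically needs. The response shape is an illustrative assumption, not the exact NCBO payload.

```python
import json

# Hypothetical raw response for a single ontology term, of the general
# kind a REST ontology service returns (fields are assumptions).
sample_response = json.dumps({
    "id": "http://purl.obolibrary.org/obo/GO_0008150",
    "prefLabel": "biological_process",
    "synonyms": ["biological process"],
    "ontology": "GO",
})

def term_summary(raw):
    """Extract the label, IRI and synonyms from a raw term record."""
    data = json.loads(raw)
    return {
        "label": data.get("prefLabel"),
        "iri": data.get("id"),
        "synonyms": data.get("synonyms", []),
    }

summary = term_summary(sample_response)
```

    In an application the raw string would come from an HTTP call to the service rather than from a literal, but the post-processing step is the same.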

  6. Global polar geospatial information service retrieval based on search engine and ontology reasoning

    USGS Publications Warehouse

    Chen, Nengcheng; E, Dongcheng; Di, Liping; Gong, Jianya; Chen, Zeqiang

    2007-01-01

    In order to improve the access precision of polar geospatial information services on the web, a new methodology for retrieving global spatial information services based on geospatial service search and ontology reasoning is proposed: the geospatial service search finds coarse candidate services on the web, and ontology reasoning refines this coarse set. The proposed framework includes standardized distributed geospatial web services, a geospatial service search engine, an extended UDDI registry, and a multi-protocol geospatial information service client. Key technologies addressed include service discovery based on a search engine as well as service ontology modelling and reasoning in the Antarctic geospatial context. Finally, an Antarctic multi-protocol OWS portal prototype based on the proposed methodology is introduced.

  7. Towards a Cross-domain Infrastructure to Support Electronic Identification and Capability Lookup for Cross-border ePrescription/Patient Summary Services.

    PubMed

    Katehakis, Dimitrios G; Masi, Massimiliano; Wisniewski, Francois; Bittins, Sören

    2016-01-01

    Seamless patient identification, as well as locating the capabilities of remote services, are considered to be key enablers for large-scale deployment of facilities to support the delivery of cross-border healthcare. This work highlights challenges investigated within the context of the Electronic Simple European Networked Services (e-SENS) large scale pilot (LSP) project, which aims to assist the deployment of cross-border, digital, public services through generic, re-usable technical components or Building Blocks (BBs). Through the case of the cross-border ePrescription/Patient Summary (eP/PS) service, the paper demonstrates how experience from other domains, with regard to electronic identification (eID) and capability lookup, can be utilized to raise technology readiness levels in disease diagnosis and treatment. The need to consolidate the existing outcomes of non-health-specific BBs is examined, together with related issues that need to be resolved, in order to improve technical certainty and make it easier for citizens who travel to use innovative eHealth services, and potentially share personal health records (PHRs) with other providers abroad, in a regulated manner. PMID:27225571

  8. Towards a Formal Representation of Processes and Objects Regarding the Delivery of Telehealth Services: The Telehealth Ontology (TEON).

    PubMed

    Santana, Filipe; Schulz, Stefan; Campos, Amadeu; Novaes, Magdala A

    2015-01-01

    This study introduces ontological aspects of the Telehealth Ontology (TEON), an ontology that formally represents content concerning the delivery of telehealth services. TEON formally represents the main services, actors and other entity types relevant to telehealth service delivery. TEON uses the upper-level ontology BioTopLite2 and reuses content from the Ontology for Biomedical Investigations (OBI). The services embedded in telehealth services are considered as essential as the common services provided by health-related practices. We envision TEON as a service to support the development of telehealth systems. TEON might also enable the integration of heterogeneous telehealth systems, and provide a basis for automating the processing of telehealth-related content. PMID:26262407

  9. The Design and Engineering of Mobile Data Services: Developing an Ontology Based on Business Model Thinking

    NASA Astrophysics Data System (ADS)

    Al-Debei, Mutaz M.; Fitzgerald, Guy

    This paper addresses the design and engineering problem related to mobile data services. The aim of the research is to inform and advise mobile service design and engineering by looking at this issue from a rigorous and holistic perspective. To this aim, this paper develops an ontology based on business model thinking. The developed ontology identifies four primary dimensions in designing business models of mobile data services: value proposition, value network, value architecture, and value finance. Within these dimensions, 15 key design concepts are identified along with their interrelationships and rules in the telecommunication service business model domain and unambiguous semantics are produced. The developed ontology is of value to academics and practitioners alike, particularly those interested in strategic-oriented IS/IT and business developments in telecommunications. Employing the developed ontology would systemize mobile service engineering functions and make them more manageable, effective, and creative. The research approach to building the mobile service business model ontology essentially follows the design science paradigm. Within this paradigm, we incorporate a number of different research methods, so the employed methodology might be better characterized as a pluralist approach.

  10. The Semantic Retrieval of Spatial Data Service Based on Ontology in SIG

    NASA Astrophysics Data System (ADS)

    Sun, S.; Liu, D.; Li, G.; Yu, W.

    2011-08-01

    The research of SIG (Spatial Information Grid) mainly solves the problem of how to connect different computing resources so that users can use all the resources in the Grid transparently and seamlessly. In SIG, spatial data services are described by several kinds of specifications, which use different meta-information for each kind of service. This kind of standardization cannot resolve the problem of semantic heterogeneity, which may prevent users from obtaining the required resources. This paper attempts to solve two kinds of semantic heterogeneity (name heterogeneity and structure heterogeneity) in ontology-based spatial data service retrieval; in addition, based on the hierarchical subsumption relationships among concepts in the ontology, query words can be expanded so that more resources can be matched and found for the user. These applications of ontology in spatial data resource retrieval help to improve the capability of keyword matching and to find more related resources.
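    The query expansion via subsumption described above can be sketched as follows. The tiny is-a hierarchy and the service records are invented for illustration; the point is that a query for a broad concept also matches services typed with its subconcepts.

```python
# Illustrative child -> parent ("is-a") relations; not a real ontology.
ONTOLOGY = {
    "DEM": "ElevationData",
    "ElevationData": "SpatialData",
    "LandCoverMap": "SpatialData",
}

def subconcepts(concept):
    """All concepts subsumed by `concept`, including itself."""
    found = {concept}
    changed = True
    while changed:
        changed = False
        for child, parent in ONTOLOGY.items():
            if parent in found and child not in found:
                found.add(child)
                changed = True
    return found

def expanded_search(query, services):
    """Match services whose type is the query concept or any subconcept."""
    terms = subconcepts(query)
    return [s for s in services if s["type"] in terms]

services = [
    {"name": "SRTM service", "type": "DEM"},
    {"name": "CORINE service", "type": "LandCoverMap"},
    {"name": "Weather service", "type": "Forecast"},
]
hits = expanded_search("SpatialData", services)
```

    A plain keyword match on "SpatialData" would find nothing here; the expanded query finds both spatial services while still excluding the unrelated one.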

  11. An Ontological Consideration on Essential Properties of the Notion of ``Service"

    NASA Astrophysics Data System (ADS)

    Sumita, Kouhei; Kitamura, Yoshinobu; Sasajima, Munehiko; Takfuji, Sunao; Mizoguchi, Riichiro

    Although many definitions of services have been proposed in Service Science and Service Engineering, the essential characteristics of the notion of "service" remain unclear. In particular, some existing definitions of service are similar to the definition of the function of artifacts, and there is no clear distinction between them. Thus, aiming at an ontological conceptualization of service, we have made an ontological investigation into the distinction between service and artifact function. In this article, we reveal essential properties of service and propose a model and a definition of service. Firstly, we extract 42 properties of service from 15 articles in different disciplines in order to identify fundamental concepts of service. We then show that the notion of function shares the extracted foundational concepts of service, and thus point out the necessity of distinguishing between them. Secondly, we propose a multi-layered model of services based on the conceptualization of goal-oriented effects at the base level and at the upper level. Thirdly, based on the model, we clarify essential properties of service that distinguish it from artifact function. The conceptualization of upper-effects (upper-services) enables us to show that upper-services include various effects such as sales and manufacturing. Lastly, we propose a definition of the notion of service based on the essential properties and show its validity using some examples.

  12. Research of three level match method about semantic web service based on ontology

    NASA Astrophysics Data System (ADS)

    Xiao, Jie; Cai, Fang

    2011-10-01

    An important step in Web service application is the discovery of useful services. Keywords are used for service discovery in traditional technologies such as UDDI and WSDL, with the disadvantages of user intervention, lack of semantic description and low accuracy. To cope with these problems, OWL-S is introduced and extended with QoS attributes to describe the attributes and functions of Web services. A three-level service matching algorithm based on ontology and QoS is proposed in this paper. Our algorithm matches web services by utilizing the service profile and QoS parameters together with the input and output of the service. Simulation results show that it greatly enhances the speed of service matching while also guaranteeing high accuracy.
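    The core of ontology-based matching of this kind is a degree-of-match between a requested concept and an advertised one, with QoS used to rank candidates. The sketch below uses the conventional exact/plug-in/subsumes/fail levels over a toy is-a hierarchy; the concepts, QoS field and ranking rule are illustrative assumptions, not the paper's exact algorithm.

```python
# Toy child -> parent hierarchy (illustrative only).
IS_A = {"SUV": "Car", "Car": "Vehicle", "Sedan": "Car"}

def ancestors(concept):
    """Chain of broader concepts above `concept`."""
    chain = []
    while concept in IS_A:
        concept = IS_A[concept]
        chain.append(concept)
    return chain

def degree_of_match(requested, advertised):
    if requested == advertised:
        return "exact"
    if advertised in ancestors(requested):
        return "plug-in"   # the advertisement is broader than the request
    if requested in ancestors(advertised):
        return "subsumes"  # the request is broader than the advertisement
    return "fail"

def rank(requested, candidates):
    """Order candidates by match level, then by QoS; drop failures."""
    order = {"exact": 3, "plug-in": 2, "subsumes": 1, "fail": 0}
    scored = [(order[degree_of_match(requested, c["output"])], c["qos"], c)
              for c in candidates]
    scored.sort(key=lambda t: (t[0], t[1]), reverse=True)
    return [c for score, qos, c in scored if score > 0]

candidates = [
    {"name": "A", "output": "SUV", "qos": 0.9},
    {"name": "B", "output": "Car", "qos": 0.8},
    {"name": "C", "output": "Boat", "qos": 0.99},
]
ranked = rank("Car", candidates)
```

    Note that the unrelated candidate is discarded despite its high QoS: semantic match level dominates, and QoS only breaks ties within a level.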

  13. Using Ontologies to Formalize Services Specifications in Multi-Agent Systems

    NASA Technical Reports Server (NTRS)

    Breitman, Karin Koogan; Filho, Aluizio Haendchen; Haeusler, Edward Hermann

    2004-01-01

    One key issue in multi-agent systems (MAS) is their ability to interact and exchange information autonomously across applications. To secure agent interoperability, designers must rely on a communication protocol that allows software agents to exchange meaningful information. In this paper we propose using ontologies as such a communication protocol. Ontologies capture the semantics of the operations and services provided by agents, allowing interoperability and information exchange in a MAS. Ontologies are a formal, machine-processable representation that makes it possible to capture the semantics of a domain and to derive meaningful information by way of logical inference. In our proposal we use a formal knowledge representation language (OWL) that translates into Description Logics (a subset of first-order logic), thus eliminating ambiguities and providing a solid base for machine-based inference. The main contribution of this approach is to make the requirements explicit and to centralize the specification in a single document (the ontology itself), while at the same time providing a formal, unambiguous representation that can be processed by automated inference machines.

  14. OLSVis: an animated, interactive visual browser for bio-ontologies

    PubMed Central

    2012-01-01

    Background More than one million terms from biomedical ontologies and controlled vocabularies are available through the Ontology Lookup Service (OLS). Although OLS provides ample possibility for querying and browsing terms, the visualization of parts of the ontology graphs is rather limited and inflexible. Results We created the OLSVis web application, a visualiser for browsing all ontologies available in the OLS database. OLSVis shows customisable subgraphs of the OLS ontologies. Subgraphs are animated via a real-time force-based layout algorithm which is fully interactive: each time the user makes a change, e.g. browsing to a new term, hiding, adding, or dragging terms, the algorithm performs smooth and only essential reorganisations of the graph. This assures an optimal viewing experience, because subsequent screen layouts are not grossly altered, and users can easily navigate through the graph. URL: http://ols.wordvis.com Conclusions The OLSVis web application provides a user-friendly tool to visualise ontologies from the OLS repository. It broadens the possibilities to investigate and select ontology subgraphs through a smooth visualisation method. PMID:22646023
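    The customisable subgraphs described above amount to extracting a bounded neighbourhood around a focus term. The sketch below shows that step as a depth-limited breadth-first traversal; the tiny term graph is invented for illustration and is not OLS content.

```python
from collections import deque

# Illustrative term -> related-terms edges (not real OLS data).
EDGES = {
    "cell": ["membrane", "nucleus"],
    "nucleus": ["chromosome"],
    "membrane": [],
    "chromosome": ["gene"],
    "gene": [],
}

def subgraph(focus, max_depth):
    """Breadth-first neighbourhood of `focus`, limited to `max_depth` hops."""
    seen = {focus: 0}
    queue = deque([focus])
    while queue:
        term = queue.popleft()
        if seen[term] == max_depth:
            continue  # do not expand past the depth limit
        for neighbour in EDGES.get(term, []):
            if neighbour not in seen:
                seen[neighbour] = seen[term] + 1
                queue.append(neighbour)
    return set(seen)

view = subgraph("cell", 2)
```

    A browser like the one described would re-run this extraction as the user navigates and hand the resulting subgraph to a force-based layout, which only needs to reposition the terms that changed.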

  15. Reliability Prediction of Ontology-Based Service Compositions Using Petri Net and Time Series Models

    PubMed Central

    Li, Jia; Xia, Yunni; Luo, Xin

    2014-01-01

    OWL-S, one of the most important Semantic Web service ontologies proposed to date, provides a core ontological framework and guidelines for describing the properties and capabilities of web services in an unambiguous, computer-interpretable form. Predicting the reliability of composite service processes specified in OWL-S allows service users to decide whether a process meets their quantitative quality requirements. In this study, we consider the runtime quality of services to be fluctuating and introduce a dynamic framework to predict the runtime reliability of services specified in OWL-S, employing the non-Markovian stochastic Petri net (NMSPN) and a time series model. The framework includes the following steps: obtaining the historical response-time series of individual service components; fitting these series with an autoregressive moving-average model (ARMA for short) and predicting the future firing rates of service components; mapping the OWL-S process into an NMSPN model; and employing the predicted firing rates as the model input of the NMSPN and calculating the normal completion probability as the reliability estimate. In the case study, a comparison between the static model and our approach based on experimental data is presented, and it is shown that our approach achieves higher prediction accuracy. PMID:24688429
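    One step of the pipeline above, forecasting from a historical response-time series, can be sketched with an AR(1) model, a minimal special case of ARMA. The series values are invented, and the full approach additionally maps the OWL-S process onto an NMSPN, which this sketch does not attempt.

```python
def fit_ar1(series):
    """Least-squares estimate of the model x[t] = a * x[t-1] + b."""
    xs, ys = series[:-1], series[1:]
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    a = cov / var
    b = mean_y - a * mean_x
    return a, b

def forecast_next(series):
    """One-step-ahead prediction from the fitted AR(1) model."""
    a, b = fit_ar1(series)
    return a * series[-1] + b

# Hypothetical response times in ms with a steady upward drift.
history = [100.0, 104.0, 108.0, 112.0, 116.0]
prediction = forecast_next(history)
```

    On this perfectly linear series the fit recovers a = 1, b = 4, so the forecast is 120 ms; a production ARMA fit would of course use a statistics library and handle noise and model order selection.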

  16. Case-based classification alternatives to ontologies for automated web service discovery and integration

    NASA Astrophysics Data System (ADS)

    Ladner, Roy; Warner, Elizabeth; Petry, Fred; Gupta, Kalyan Moy; Moore, Philip; Aha, David W.; Shaw, Kevin

    2006-05-01

    Web Services are becoming the standard technology used to share data for many Navy and other DoD operations. Since Web Services technologies provide for discoverable, self-describing services that conform to common standards, this paradigm holds the promise of an automated capability to obtain and integrate data. However, automated integration of applications to access and retrieve data from heterogeneous sources in a distributed system such as the Internet poses many difficulties. Assimilation of data from Web-based sources means that differences in schema and terminology prevent simple querying and retrieval of data. Thus, machine understanding of the Web Services interface is necessary for automated selection and invocation of the correct service. Service availability is also an issue that needs to be resolved. There have been many advances on ontologies to help resolve these difficulties to support the goal of sharing knowledge for various domains of interest. In this paper we examine the use of case-based classification as an alternative/supplement to using ontologies for resolving several questions related to knowledge sharing. While ontologies encompass a formal definition of a domain of interest, case-based reasoning is a problem solving methodology that retrieves and reuses decisions from stored cases to solve new problems, and case-based classification involves applying this methodology to classification tasks. Our approach generalizes well in sparse data, which characterizes our Web Services application. We present our study as it relates to our work on development of the Advanced MetOc Broker, whose objective is the automated application integration of meteorological and oceanographic (MetOc) Web Services.
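    Case-based classification as described above retrieves the stored cases most similar to a new service description and reuses their labels. The sketch below is a deliberately simple k-nearest-neighbour variant; the feature names, cases and labels ("met" vs "oc") are invented for illustration.

```python
from collections import Counter

# Hypothetical stored cases: (feature dict, class label).
CASES = [
    ({"format": "grib", "theme": "wind"}, "met"),
    ({"format": "grib", "theme": "pressure"}, "met"),
    ({"format": "netcdf", "theme": "salinity"}, "oc"),
    ({"format": "netcdf", "theme": "currents"}, "oc"),
]

def similarity(a, b):
    """Count of matching feature values (a deliberately simple measure)."""
    return sum(1 for k in a if b.get(k) == a[k])

def classify(query, k=3):
    """Label `query` by majority vote among the k most similar cases."""
    ranked = sorted(CASES, key=lambda case: similarity(query, case[0]),
                    reverse=True)
    votes = Counter(label for _, label in ranked[:k])
    return votes.most_common(1)[0][0]

label = classify({"format": "grib", "theme": "temperature"})
```

    The appeal in sparse-data settings is visible even here: a theme never seen before is still classified sensibly because the format feature matches stored cases.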

  17. An ontology-based collaborative service framework for agricultural information

    Technology Transfer Automated Retrieval System (TEKTRAN)

    In recent years, China has developed modern agriculture energetically. An effective information framework is an important way to provide farms with agricultural information services and improve farmer's production technology and their income. The mountain areas in central China are dominated by agri...

  18. Process model-based atomic service discovery and composition of composite semantic web services using web ontology language for services (OWL-S)

    NASA Astrophysics Data System (ADS)

    Paulraj, D.; Swamynathan, S.; Madhaiyan, M.

    2012-11-01

    Web service composition has become indispensable as a single web service cannot satisfy complex functional requirements. Composition of services has received much interest as a means to support business-to-business (B2B) or enterprise application integration. An important component of service composition is the discovery of relevant services. In Semantic Web Services (SWS), service discovery is generally achieved by using the service profile of the Web Ontology Language for Services (OWL-S). The profile of the service is a derived and concise description, but not a functional part of the service. The information contained in the service profile is sufficient for atomic service discovery, but it is not sufficient for the discovery of composite semantic web services (CSWS). The purpose of this article is two-fold: first, to prove that the process model is a better choice than the service profile for service discovery; second, to facilitate the composition of inter-organisational CSWS by proposing a new composition method that uses process ontology. The proposed service composition approach uses an algorithm which performs a fine-grained match at the level of the atomic process rather than at the level of the entire service in a composite semantic web service. Many works in this area have proposed solutions only for the composition of atomic services; this article proposes a solution for the composition of composite semantic web services.

  19. Design of Ontology-Based Sharing Mechanism for Web Services Recommendation Learning Environment

    NASA Astrophysics Data System (ADS)

    Chen, Hong-Ren

    The number of digital learning websites is growing as a result of advances in computer technology and new techniques in web page creation. These sites contain a wide variety of information but may be a source of confusion to learners who fail to find the information they are seeking. This has led to the concept of recommendation services that help learners acquire information and learning resources suited to their requirements. Such learning content, however, cannot easily be reused by other digital learning websites. A successful recommendation service that satisfies a given learner must cooperate with many other digital learning objects to achieve the required relevance. This study proposes using the theory of knowledge construction in ontology to make the sharing and reuse of digital learning resources possible. The learning recommendation system is accompanied by the recommendation of appropriate teaching materials to help learners enhance their learning abilities. A variety of diverse learning components scattered across the Internet can be organized through an ontological process so that learners can use the information by storing, sharing, and reusing it.

  20. An ontology-based semantic configuration approach to constructing Data as a Service for enterprises

    NASA Astrophysics Data System (ADS)

    Cai, Hongming; Xie, Cheng; Jiang, Lihong; Fang, Lu; Huang, Chenxi

    2016-03-01

    To align business strategies with IT systems, enterprises should rapidly implement new applications based on existing information with complex associations, adapting to the continually changing external business environment. Thus, Data as a Service (DaaS) has become an enabling technology for enterprises through information integration and the configuration of existing distributed enterprise systems and heterogeneous data sources. However, business modelling, system configuration and model alignment face challenges at the design and execution stages. To provide a comprehensive solution that facilitates data-centric application design in highly complex and large-scale situations, a configurable ontology-based service integrated platform (COSIP) is proposed to support business modelling, system configuration and execution management. First, a meta-resource model is constructed and used to describe and encapsulate information resources by way of multi-view business modelling. Then, based on ontologies, three semantic configuration patterns, namely composite resource configuration, business scene configuration and runtime environment configuration, are designed to systematically connect business goals with executable applications. Finally, a software architecture based on model-view-controller (MVC) is provided and used to assemble components for software implementation. The result of the case study demonstrates that the proposed approach provides a flexible method of implementing data-centric applications.

  1. Persistent identifiers for web service requests relying on a provenance ontology design pattern

    NASA Astrophysics Data System (ADS)

    Car, Nicholas; Wang, Jingbo; Wyborn, Lesley; Si, Wei

    2016-04-01

    Delivering provenance information for datasets produced from static inputs is relatively straightforward: we represent the processing actions and data flow using provenance ontologies and link to copies of the inputs stored in repositories. If appropriate detail is given, the provenance information can then describe what actions have occurred (transparency) and enable reproducibility. When data generated by a web service, rather than static inputs, is used by a process to create a dataset, we can no longer simply link to data stored in a repository and instead need a sophisticated provenance representation of the web service request itself. A graph-based provenance representation, such as the W3C's PROV standard, can model the web service request both as a single conceptual dataset and as a small workflow with a number of components within the same provenance report. This dual representation does more than allow simplified or detailed views of a dataset's production to be used where appropriate. It also allows persistent identifiers to be assigned to instances of web service requests, thus enabling one form of dynamic data citation, and allows those identifiers to resolve to whatever level of detail implementers think appropriate in order for that web service request to be reproduced. In this presentation we detail our reasoning in representing web service requests as small workflows. In outline, this stems from the idea that web service requests are perdurant things, and in order to most easily persist knowledge of them for provenance, we should represent them as a nexus of relationships between endurant things, such as datasets and knowledge of particular system types, as these endurant things are far easier to persist. We also describe the ontology design pattern that we use to represent workflows in general and how we apply it to different types of web service requests. We give examples of specific web service request instances that were made by systems
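    The dual representation described above can be sketched as a small set of PROV-style triples: the same request appears once as a dataset-like prov:Entity and once as the output of a prov:Activity workflow. The identifiers are invented for illustration; a real system would mint resolvable persistent IDs and use an RDF library.

```python
triples = set()

REQUEST_ID = "ex:request/42"  # hypothetical persistent identifier

# Simplified view: the request result as one conceptual dataset.
triples.add((REQUEST_ID, "rdf:type", "prov:Entity"))

# Detailed view: the request as an activity that used endurant things
# (a service endpoint and a parameter set) and generated the same entity.
triples.add(("ex:activity/42", "rdf:type", "prov:Activity"))
triples.add(("ex:activity/42", "prov:used", "ex:endpoint/wfs"))
triples.add(("ex:activity/42", "prov:used", "ex:params/42"))
triples.add((REQUEST_ID, "prov:wasGeneratedBy", "ex:activity/42"))

def describe(subject):
    """All statements about one node; serves either view of the request."""
    return {(p, o) for s, p, o in triples if s == subject}
```

    Resolving the persistent identifier to just the entity statement gives the simplified view; following prov:wasGeneratedBy into the activity gives the detailed, reproducible one.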

  2. Ontology-aided annotation, visualization, and generalization of geological time-scale information from online geological map services

    NASA Astrophysics Data System (ADS)

    Ma, Xiaogang; Carranza, Emmanuel John M.; Wu, Chonglong; van der Meer, Freek D.

    2012-03-01

    Geological maps are increasingly published and shared online, whereas tools and services supporting information retrieval and knowledge discovery are underdeveloped. In this study, we developed an ontology of the geological time scale by using a Resource Description Framework model to represent the ordinal hierarchical structure of the geological time scale and to encode collected annotations of geological time scale concepts. We also developed an animated graphical view of the developed ontology, along with functions for interaction between the ontology, the animation and online geological maps published as layers of an OGC Web Map Service. The featured functions include automatic annotation of geological time concepts recognized in a geological map, changing layouts in the animation to highlight a concept, showing legends of geological time content in an online map alongside the animation, and filtering out and generalizing geological time features in an online map by operating the map legend shown in the animation. We set up a pilot system and carried out a user survey to test and evaluate the usability and usefulness of the developed ontology, animation and interactive functions. Results of the pilot system and the user survey demonstrate that our work enhances the features of online geological map services and helps users understand and explore the geological time content and features of a geological map.
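    The ordinal hierarchy that drives the annotation and generalization functions above can be sketched as simple broader-than relations. The interval names follow the standard time scale, but this flat dictionary is an illustrative stand-in for the paper's RDF model, not its actual structure.

```python
# Interval -> broader interval ("is part of"), a tiny excerpt.
BROADER = {
    "Holocene": "Quaternary",
    "Pleistocene": "Quaternary",
    "Quaternary": "Cenozoic",
    "Neogene": "Cenozoic",
}

def annotate(concept):
    """Chain of broader intervals for a map-legend concept, most specific first."""
    chain = [concept]
    while chain[-1] in BROADER:
        chain.append(BROADER[chain[-1]])
    return chain
```

    Generalizing a map legend is then a matter of replacing each legend concept with an ancestor from its chain, e.g. collapsing Holocene and Pleistocene features into a single Quaternary layer.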

  3. The @neurIST ontology of intracranial aneurysms: providing terminological services for an integrated IT infrastructure.

    PubMed

    Boeker, Martin; Stenzhorn, Holger; Kumpf, Kai; Bijlenga, Philippe; Schulz, Stefan; Hanser, Susanne

    2007-01-01

    The @neurIST ontology is currently under development within the scope of the European project @neurIST intended to serve as a module in a complex architecture aiming at providing a better understanding and management of intracranial aneurysms and subarachnoid hemorrhages. Due to the integrative structure of the project the ontology needs to represent entities from various disciplines on a large spatial and temporal scale. Initial term acquisition was performed by exploiting a database scaffold, literature analysis and communications with domain experts. The ontology design is based on the DOLCE upper ontology and other existing domain ontologies were linked or partly included whenever appropriate (e.g., the FMA for anatomical entities and the UMLS for definitions and lexical information). About 2300 predominantly medical entities were represented but also a multitude of biomolecular, epidemiological, and hemodynamic entities. The usage of the ontology in the project comprises terminological control, text mining, annotation, and data mediation. PMID:18693797

  4. "You Call This Service?": A Civic Ontology Approach to Evaluating Service-Learning in Diverse Communities

    ERIC Educational Resources Information Center

    Marichal, Jose

    2010-01-01

    This article considers the impact of service-learning in diverse communities on student civic development. A key debate in the literature is whether service-learning in diverse communities fosters student moral/cognitive development or reinforces preexisting stereotypes. This debate has significant implications for student's future civic…

  5. Performing ontology.

    PubMed

    Aspers, Patrik

    2015-06-01

    Ontology, and in particular, the so-called ontological turn, is the topic of a recent themed issue of Social Studies of Science (Volume 43, Issue 3, 2013). Ontology, or metaphysics, is in philosophy concerned with what there is, how it is, and forms of being. But to what is the science and technology studies researcher turning when he or she talks of ontology? It is argued that it is unclear what is gained by arguing that ontology also refers to constructed elements. The 'ontological turn' comes with the risk of creating a pseudo-debate or pseudo-activity, in which energy is used for no end, at the expense of empirical studies. This text rebuts the idea of an ontological turn as foreshadowed in the texts of the themed issue. It argues that there is no fundamental qualitative difference between the ontological turn and what we know as constructivism. PMID:26477201

  6. Quantum ontologies

    SciTech Connect

    Stapp, H.P.

    1988-12-01

    Quantum ontologies are conceptions of the constitution of the universe that are compatible with quantum theory. The ontological orientation is contrasted to the pragmatic orientation of science, and reasons are given for considering quantum ontologies both within science, and in broader contexts. The principal quantum ontologies are described and evaluated. Invited paper at conference: Bell's Theorem, Quantum Theory, and Conceptions of the Universe, George Mason University, October 20-21, 1988. 16 refs.

  7. DEDUCE Clinical Text: An Ontology-based Module to Support Self-Service Clinical Notes Exploration and Cohort Development.

    PubMed

    Roth, Christopher; Rusincovitch, Shelley A; Horvath, Monica M; Brinson, Stephanie; Evans, Steve; Shang, Howard C; Ferranti, Jeffrey M

    2013-01-01

    Large amounts of information, as well as opportunities for informing research, education, and operations, are contained within clinical text such as radiology reports and pathology reports. However, this content is less accessible and harder to leverage than structured, discrete data. We report on an extension to the Duke Enterprise Data Unified Content Explorer (DEDUCE), a self-service query tool developed to provide clinicians and researchers with access to data within the Duke Medicine Enterprise Data Warehouse (EDW). The DEDUCE Clinical Text module supports ontology-based text searching, enhanced filtering capabilities based on document attributes, and integration of clinical text with structured data and cohort development. The module is implemented with open-source tools extensible to other institutions, including a Java-based search engine (Apache Solr) with complementary full-text indexing library (Lucene) employed with a negation engine (NegEx) modified by clinical users to include local domain-specific negation phrases. PMID:24303270
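    The NegEx approach mentioned above can be illustrated with a minimal sketch: a concept counts as negated when a negation phrase appears within a short window of tokens before it. The phrase list and window size here are simplified assumptions for illustration, not DEDUCE's actual configuration:

    ```python
    # Minimal NegEx-style check: a concept is negated if a negation phrase
    # occurs within a few tokens before it. The phrase list and window are
    # simplified assumptions, not the DEDUCE module's actual configuration.

    NEGATION_PHRASES = ["no", "denies", "without", "negative for"]

    def is_negated(sentence, concept, window=5):
        tokens = sentence.lower().split()
        target = concept.lower().split()
        for i in range(len(tokens) - len(target) + 1):
            if tokens[i:i + len(target)] == target:
                preceding = " " + " ".join(tokens[max(0, i - window):i]) + " "
                if any(f" {p} " in preceding for p in NEGATION_PHRASES):
                    return True
        return False

    print(is_negated("Patient denies chest pain on exertion", "chest pain"))  # True
    print(is_negated("Chest pain radiating to left arm", "chest pain"))       # False
    ```

    The real NegEx algorithm also handles post-negation phrases and scope terminators; the point here is only that the phrase list is plain data that local users can extend.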

  8. Leveraging biomedical ontologies and annotation services to organize microbiome data from Mammalian hosts.

    PubMed

    Sarkar, Indra Neil

    2010-01-01

    A better understanding of commensal microbiotic communities ("microbiomes") may provide valuable insights into human health. Towards this goal, an essential step may be the development of approaches to organize data that can enable comparative hypotheses across mammalian microbiomes. The present study explores the feasibility of using existing biomedical informatics resources - especially focusing on those available at the National Center for Biomedical Ontology - to organize microbiome data contained within large sequence repositories, such as GenBank. The results indicate that the Foundational Model of Anatomy and SNOMED CT can be used to organize greater than 90% of the bacterial organisms associated with 10 domesticated mammalian species. These promising findings suggest that the current biomedical informatics infrastructure may be used to organize microbiome data beyond humans. Furthermore, the results identify key concepts that might be organized into a semantic structure for incorporation into subsequent annotations that could facilitate comparative biomedical hypotheses pertaining to human health. PMID:21347072

  9. Design of a Golf Swing Injury Detection and Evaluation open service platform with Ontology-oriented clustering case-based reasoning mechanism.

    PubMed

    Ku, Hao-Hsiang

    2015-01-01

    Nowadays, people can easily use a smartphone to get wanted information and requested services. Hence, this study designs and proposes a Golf Swing Injury Detection and Evaluation open service platform with an Ontology-oriented clustering case-based reasoning mechanism, called GoSIDE, based on Arduino and the Open Service Gateway initiative (OSGi). GoSIDE is a three-tier architecture composed of Mobile Users, Application Servers and a Cloud-based Digital Convergence Server. A mobile user carries a smartphone and Kinect sensors to detect the user's golf swing actions and to interact with iDTV. An application server runs the Intelligent Golf Swing Posture Analysis Model (iGoSPAM) to check a user's golf swing actions and to alert the user when erroneous actions are detected. The Cloud-based Digital Convergence Server provides Ontology-oriented Clustering Case-based Reasoning (CBR) for Quality of Experiences (OCC4QoE), which is designed to deliver QoE services through QoE-based ontology strategies, rules and events for the user. Furthermore, GoSIDE automatically triggers OCC4QoE and delivers popular rules for a new user. Experimental results illustrate that GoSIDE can provide appropriate detections for golfers. Finally, GoSIDE can serve as a reference model for researchers and engineers. PMID:26444809

  10. A piecewise lookup table for calculating nonbonded pairwise atomic interactions.

    PubMed

    Luo, Jinping; Liu, Lijun; Su, Peng; Duan, Pengbo; Lu, Daihui

    2015-11-01

    A critical challenge for molecular dynamics simulations of chemical or biological systems is to improve the calculation efficiency while retaining sufficient accuracy. The main bottleneck in improving the efficiency is the evaluation of nonbonded pairwise interactions. We propose a new piecewise lookup table method for rapid and accurate calculation of interatomic nonbonded pairwise interactions. The piecewise lookup table allows nonuniform assignment of table nodes according to the slope of the potential function and the pair interaction distribution. The proposed method assigns the nodes more reasonably than in general lookup tables, and thus improves the accuracy while requiring fewer nodes. To obtain the same level of accuracy, our piecewise lookup table accelerates the calculation via the efficient usage of cache memory. This new method is straightforward to implement and should be broadly applicable. Graphical Abstract Illustration of piecewise lookup table method. PMID:26481475
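    The nonuniform node assignment described above can be sketched by packing table nodes geometrically so they are densest where a pair potential is steepest, then interpolating linearly between nodes. The Lennard-Jones potential and the geometric spacing rule below are illustrative choices, not the paper's exact assignment scheme:

    ```python
    import bisect
    import math

    # Sketch: nonuniform lookup table for a pairwise potential, with table
    # nodes packed geometrically so they are densest where the function is
    # steepest (small separations). The potential and node-placement rule
    # are illustrative choices, not the paper's exact scheme.

    def lj(r, eps=1.0, sigma=1.0):
        """Lennard-Jones 12-6 pair potential."""
        sr6 = (sigma / r) ** 6
        return 4.0 * eps * (sr6 * sr6 - sr6)

    r_min, r_cut, n = 0.8, 3.0, 256
    # Geometric spacing: dense near r_min, sparse near the cutoff.
    nodes = [r_min * (r_cut / r_min) ** (i / (n - 1)) for i in range(n)]
    values = [lj(r) for r in nodes]

    def lut_eval(r):
        """Linear interpolation on the nonuniform node grid."""
        i = max(0, min(bisect.bisect_right(nodes, r) - 1, n - 2))
        t = (r - nodes[i]) / (nodes[i + 1] - nodes[i])
        return values[i] + t * (values[i + 1] - values[i])

    samples = [0.85, 1.0, 1.122, 1.5, 2.0, 2.9]
    err = max(abs(lut_eval(r) - lj(r)) for r in samples)
    print(f"max sample error: {err:.2e}")
    ```

    With uniform spacing, the same node count would waste resolution on the flat tail of the potential while under-resolving the steep repulsive wall.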

  11. Tool Support for Software Lookup Table Optimization

    DOE PAGES Beta

    Wilcox, Chris; Strout, Michelle Mills; Bieman, James M.

    2011-01-01

    A number of scientific applications are performance-limited by expressions that repeatedly call costly elementary functions. Lookup table (LUT) optimization accelerates the evaluation of such functions by reusing previously computed results. LUT methods can speed up applications that tolerate an approximation of function results, thereby achieving a high level of fuzzy reuse. One problem with LUT optimization is the difficulty of controlling the tradeoff between performance and accuracy. The current practice of manual LUT optimization adds programming effort by requiring extensive experimentation to make this tradeoff, and such hand tuning can obfuscate algorithms. In this paper we describe a methodology and tool implementation to improve the application of software LUT optimization. Our Mesa tool implements source-to-source transformations for C or C++ code to automate the tedious and error-prone aspects of LUT generation such as domain profiling, error analysis, and code generation. We evaluate Mesa with five scientific applications. Our results show a performance improvement of 3.0× and 6.9× for two molecular biology algorithms, 1.4× for a molecular dynamics program, 2.1× to 2.8× for a neural network application, and 4.6× for a hydrology calculation. We find that Mesa enables LUT optimization with more control over accuracy and less effort than manual approaches.

  12. Tool support for software lookup table optimization

    PubMed Central

    Strout, Michelle Mills; Bieman, James M.

    2012-01-01

    A number of scientific applications are performance-limited by expressions that repeatedly call costly elementary functions. Lookup table (LUT) optimization accelerates the evaluation of such functions by reusing previously computed results. LUT methods can speed up applications that tolerate an approximation of function results, thereby achieving a high level of fuzzy reuse. One problem with LUT optimization is the difficulty of controlling the tradeoff between performance and accuracy. The current practice of manual LUT optimization adds programming effort by requiring extensive experimentation to make this tradeoff, and such hand tuning can obfuscate algorithms. In this paper we describe a methodology and tool implementation to improve the application of software LUT optimization. Our Mesa tool implements source-to-source transformations for C or C++ code to automate the tedious and error-prone aspects of LUT generation such as domain profiling, error analysis, and code generation. We evaluate Mesa with five scientific applications. Our results show a performance improvement of 3.0 × and 6.9 × for two molecular biology algorithms, 1.4 × for a molecular dynamics program, 2.1 × to 2.8 × for a neural network application, and 4.6 × for a hydrology calculation. We find that Mesa enables LUT optimization with more control over accuracy and less effort than manual approaches. PMID:24532963

  13. Tool support for software lookup table optimization.

    PubMed

    Wilcox, Chris; Strout, Michelle Mills; Bieman, James M

    2011-12-01

    A number of scientific applications are performance-limited by expressions that repeatedly call costly elementary functions. Lookup table (LUT) optimization accelerates the evaluation of such functions by reusing previously computed results. LUT methods can speed up applications that tolerate an approximation of function results, thereby achieving a high level of fuzzy reuse. One problem with LUT optimization is the difficulty of controlling the tradeoff between performance and accuracy. The current practice of manual LUT optimization adds programming effort by requiring extensive experimentation to make this tradeoff, and such hand tuning can obfuscate algorithms. In this paper we describe a methodology and tool implementation to improve the application of software LUT optimization. Our Mesa tool implements source-to-source transformations for C or C++ code to automate the tedious and error-prone aspects of LUT generation such as domain profiling, error analysis, and code generation. We evaluate Mesa with five scientific applications. Our results show a performance improvement of 3.0 × and 6.9 × for two molecular biology algorithms, 1.4 × for a molecular dynamics program, 2.1 × to 2.8 × for a neural network application, and 4.6 × for a hydrology calculation. We find that Mesa enables LUT optimization with more control over accuracy and less effort than manual approaches. PMID:24532963
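    The performance/accuracy tradeoff that Mesa automates can be seen in miniature by tabulating an elementary function at different table sizes and measuring the worst-case error of linear interpolation. This is a generic sketch of the technique, not Mesa's actual code generation:

    ```python
    import math

    # Generic sketch of the tradeoff Mesa automates: tabulate an elementary
    # function at N nodes and measure the worst-case error of linear
    # interpolation. The error shrinks roughly as O(h^2), so doubling the
    # table size cuts the error by about a factor of four.

    def build_lut(f, lo, hi, n):
        h = (hi - lo) / (n - 1)
        table = [f(lo + i * h) for i in range(n)]
        def approx(x):
            i = min(int((x - lo) / h), n - 2)
            t = (x - (lo + i * h)) / h
            return table[i] + t * (table[i + 1] - table[i])
        return approx

    for n in (16, 32, 64):
        approx = build_lut(math.sin, 0.0, math.pi, n)
        xs = [math.pi * k / 10000 for k in range(10001)]
        err = max(abs(approx(x) - math.sin(x)) for x in xs)
        print(f"N={n:3d}  max error = {err:.2e}")
    ```

    A tool like Mesa performs this error analysis automatically over the profiled input domain and picks a table size that meets a user-specified accuracy bound.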

  14. Extending netCDF and CF conventions to support enhanced Earth Observation Ontology services: the Prod-Trees project

    NASA Astrophysics Data System (ADS)

    Mazzetti, Paolo; Valentin, Bernard; Koubarakis, Manolis; Nativi, Stefano

    2013-04-01

    Access to Earth Observation products remains far from straightforward for end users in most domains. Semantically-enabled search engines, generally accessible through Web portals, have been developed. They allow searching for products by selecting application-specific terms and specifying basic geographical and temporal filtering criteria. Although this mostly suits the needs of the general public, the scientific communities require more advanced and controlled means of finding products. Ranges of validity, traceability (e.g., origin, applied algorithms), accuracy, and uncertainty are concepts that are typically taken into account in research activities. The Prod-Trees (Enriching Earth Observation Ontology Services using Product Trees) project will enhance the CF-netCDF product format and vocabulary to allow storing metadata that better describe the products, and in particular EO products. The project will bring a standardized solution that permits annotating EO products in such a manner that official and third-party software libraries and tools will be able to search for products using advanced tags and controlled parameter names. Annotated EO products will be automatically supported by all compatible software. Because the entire product information will come from the annotations and the standards, there will be no need to integrate extra components and data structures that have not been standardized. In the course of the project, the most important and popular open-source software libraries and tools will be extended to support the proposed extensions of CF-netCDF. The result will be provided back to the respective owners and maintainers to ensure the best dissemination and adoption of the extended format. The project, funded by ESA, started in December 2012 and will end in May 2014. It is coordinated by Space Applications Services, and the Consortium includes CNR-IIA and the National and Kapodistrian University of Athens.
The first activities included
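    The kind of CF-style variable metadata the project builds on can be sketched as attribute dictionaries plus a small validator. Note that `standard_name` and `units` are genuine CF convention attributes, while the `eo_*` names below are invented for illustration and are not the project's actual proposed extension:

    ```python
    # Sketch of CF-style variable metadata plus hypothetical EO-annotation
    # attributes of the kind Prod-Trees proposes. 'standard_name' and
    # 'units' are genuine CF attributes; the 'eo_*' names are invented
    # here for illustration and are NOT the project's actual extension.

    cf_variable = {
        "standard_name": "sea_surface_temperature",
        "units": "K",
        # Hypothetical EO-extension annotations:
        "eo_processing_level": "L2",
        "eo_uncertainty": "0.3 K",
    }

    REQUIRED_CF = ("standard_name", "units")

    def validate(var_attrs, required=REQUIRED_CF):
        """Return the list of missing required CF attributes."""
        return [k for k in required if k not in var_attrs]

    print(validate(cf_variable))        # []
    print(validate({"units": "K"}))     # ['standard_name']
    ```

    Because such annotations live in the file itself, any tool that understands the extended convention can search on them without extra infrastructure, which is the interoperability argument the abstract makes.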

  15. The Ontology for Biomedical Investigations

    PubMed Central

    Bandrowski, Anita; Brinkman, Ryan; Brochhausen, Mathias; Brush, Matthew H.; Chibucos, Marcus C.; Clancy, Kevin; Courtot, Mélanie; Derom, Dirk; Dumontier, Michel; Fan, Liju; Fostel, Jennifer; Fragoso, Gilberto; Gibson, Frank; Gonzalez-Beltran, Alejandra; Haendel, Melissa A.; He, Yongqun; Heiskanen, Mervi; Hernandez-Boussard, Tina; Jensen, Mark; Lin, Yu; Lister, Allyson L.; Lord, Phillip; Malone, James; Manduchi, Elisabetta; McGee, Monnie; Morrison, Norman; Overton, James A.; Parkinson, Helen; Peters, Bjoern; Rocca-Serra, Philippe; Ruttenberg, Alan; Sansone, Susanna-Assunta; Scheuermann, Richard H.; Schober, Daniel; Smith, Barry; Soldatova, Larisa N.; Stoeckert, Christian J.; Taylor, Chris F.; Torniai, Carlo; Turner, Jessica A.; Vita, Randi; Whetzel, Patricia L.; Zheng, Jie

    2016-01-01

    The Ontology for Biomedical Investigations (OBI) is an ontology that provides terms with precisely defined meanings to describe all aspects of how investigations in the biological and medical domains are conducted. OBI re-uses ontologies that provide a representation of biomedical knowledge from the Open Biological and Biomedical Ontologies (OBO) project and adds the ability to describe how this knowledge was derived. We here describe the state of OBI and several applications that are using it, such as adding semantic expressivity to existing databases, building data entry forms, and enabling interoperability between knowledge resources. OBI covers all phases of the investigation process, such as planning, execution and reporting. It represents information and material entities that participate in these processes, as well as roles and functions. Prior to OBI, it was not possible to use a single internally consistent resource that could be applied to multiple types of experiments for these applications. OBI has made this possible by creating terms for entities involved in biological and medical investigations and by importing parts of other biomedical ontologies such as GO, Chemical Entities of Biological Interest (ChEBI) and Phenotype Attribute and Trait Ontology (PATO) without altering their meaning. OBI is being used in a wide range of projects covering genomics, multi-omics, immunology, and catalogs of services. OBI has also spawned other ontologies (Information Artifact Ontology) and methods for importing parts of ontologies (Minimum information to reference an external ontology term (MIREOT)). The OBI project is an open cross-disciplinary collaborative effort, encompassing multiple research communities from around the globe. To date, OBI has created 2366 classes and 40 relations along with textual and formal definitions. 
The OBI Consortium maintains a web resource (http://obi-ontology.org) providing details on the people, policies, and issues being addressed

  16. The Ontology for Biomedical Investigations.

    PubMed

    Bandrowski, Anita; Brinkman, Ryan; Brochhausen, Mathias; Brush, Matthew H; Bug, Bill; Chibucos, Marcus C; Clancy, Kevin; Courtot, Mélanie; Derom, Dirk; Dumontier, Michel; Fan, Liju; Fostel, Jennifer; Fragoso, Gilberto; Gibson, Frank; Gonzalez-Beltran, Alejandra; Haendel, Melissa A; He, Yongqun; Heiskanen, Mervi; Hernandez-Boussard, Tina; Jensen, Mark; Lin, Yu; Lister, Allyson L; Lord, Phillip; Malone, James; Manduchi, Elisabetta; McGee, Monnie; Morrison, Norman; Overton, James A; Parkinson, Helen; Peters, Bjoern; Rocca-Serra, Philippe; Ruttenberg, Alan; Sansone, Susanna-Assunta; Scheuermann, Richard H; Schober, Daniel; Smith, Barry; Soldatova, Larisa N; Stoeckert, Christian J; Taylor, Chris F; Torniai, Carlo; Turner, Jessica A; Vita, Randi; Whetzel, Patricia L; Zheng, Jie

    2016-01-01

    The Ontology for Biomedical Investigations (OBI) is an ontology that provides terms with precisely defined meanings to describe all aspects of how investigations in the biological and medical domains are conducted. OBI re-uses ontologies that provide a representation of biomedical knowledge from the Open Biological and Biomedical Ontologies (OBO) project and adds the ability to describe how this knowledge was derived. We here describe the state of OBI and several applications that are using it, such as adding semantic expressivity to existing databases, building data entry forms, and enabling interoperability between knowledge resources. OBI covers all phases of the investigation process, such as planning, execution and reporting. It represents information and material entities that participate in these processes, as well as roles and functions. Prior to OBI, it was not possible to use a single internally consistent resource that could be applied to multiple types of experiments for these applications. OBI has made this possible by creating terms for entities involved in biological and medical investigations and by importing parts of other biomedical ontologies such as GO, Chemical Entities of Biological Interest (ChEBI) and Phenotype Attribute and Trait Ontology (PATO) without altering their meaning. OBI is being used in a wide range of projects covering genomics, multi-omics, immunology, and catalogs of services. OBI has also spawned other ontologies (Information Artifact Ontology) and methods for importing parts of ontologies (Minimum information to reference an external ontology term (MIREOT)). The OBI project is an open cross-disciplinary collaborative effort, encompassing multiple research communities from around the globe. To date, OBI has created 2366 classes and 40 relations along with textual and formal definitions. 
The OBI Consortium maintains a web resource (http://obi-ontology.org) providing details on the people, policies, and issues being addressed

  17. Ontology Research and Development. Part 1-A Review of Ontology Generation.

    ERIC Educational Resources Information Center

    Ding, Ying; Foo, Schubert

    2002-01-01

    Discusses the role of ontology in knowledge representation, including enabling content-based access, interoperability, communications, and new levels of service on the Semantic Web; reviews current ontology generation studies and projects as well as problems facing such research; and discusses ontology mapping, information extraction, natural…

  18. A Pilot Ontology for Healthcare Quality Indicators.

    PubMed

    White, Pam; Roudsari, Abdul

    2015-01-01

    Computerisation of quality indicators for the English National Health Service currently relies primarily on queries and clinical coding, with little use of ontologies. We created a searchable ontology for a diverse set of healthcare quality indicators. We investigated attributes and relationships in a set of 222 quality indicators, categorised by clinical pathway, inclusion and exclusion criteria and US Institute of Medicine purpose. Our pilot ontology could reduce duplication of effort in healthcare quality monitoring. PMID:26262409

  19. A Table Look-Up Parser in Online ILTS Applications

    ERIC Educational Resources Information Center

    Chen, Liang; Tokuda, Naoyuki; Hou, Pingkui

    2005-01-01

    A simple table look-up parser (TLUP) has been developed for parsing and consequently diagnosing syntactic errors in semi-free formatted learners' input sentences of an intelligent language tutoring system (ILTS). The TLUP finds a parse tree for a correct version of an input sentence, diagnoses syntactic errors of the learner by tracing and…

  20. Marine Planning and Service Platform: specific ontology based semantic search engine serving data management and sustainable development

    NASA Astrophysics Data System (ADS)

    Manzella, Giuseppe M. R.; Bartolini, Andrea; Bustaffa, Franco; D'Angelo, Paolo; De Mattei, Maurizio; Frontini, Francesca; Maltese, Maurizio; Medone, Daniele; Monachini, Monica; Novellino, Antonio; Spada, Andrea

    2016-04-01

    The MAPS (Marine Planning and Service Platform) project aims to build a computer platform supporting a Marine Information and Knowledge System. One of the main objectives of the project is to develop a repository that should gather, classify and structure marine scientific literature and data, thus guaranteeing their accessibility to researchers and institutions by means of standard protocols. In oceanography the cost of data collection is very high, and the new paradigm is to collect once and re-use many times (for re-analysis, marine environment assessment, studies on trends, etc). This concept requires access to quality-controlled data and to information that is provided in reports (grey literature) and/or in relevant scientific literature. Hence, new technology is needed that integrates several disciplines such as data management, information systems, and knowledge management. In one of the most important EC projects on data management, namely SeaDataNet (www.seadatanet.org), an initial example of knowledge management is provided through the Common Data Index, which provides links to data and (eventually) to papers. There are efforts to develop search engines to find authors' contributions to scientific literature or publications. This implies the use of persistent identifiers (such as DOI), as is done in ORCID. However, very few efforts are dedicated to linking publications to the data cited or used, or data that can be of importance for the published studies. This is the objective of MAPS. Full-text technologies are often unsuccessful since they assume the presence of specific keywords in the text; in order to fix this problem, the MAPS project proposes using semantic technologies to retrieve text and data, thus obtaining much more relevant results. The main parts of our design of the search engine are: • Syntactic parser - This module is responsible for the extraction of "rich words" from the text
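    The "rich word" extraction performed by the syntactic-parser module can be sketched as stopword and length filtering over tokenized text. The stopword list and rules below are simplifications assumed for illustration, not the project's actual parser:

    ```python
    import re

    # Sketch of a "rich word" extractor of the kind the syntactic-parser
    # module describes: keep content-bearing tokens by filtering stopwords
    # and very short tokens. The stopword list and rules are illustrative
    # simplifications, not the MAPS project's actual parser.

    STOPWORDS = {"the", "of", "and", "in", "to", "a", "is", "for", "on", "by"}

    def rich_words(text):
        tokens = re.findall(r"[a-z]+", text.lower())
        return [t for t in tokens if t not in STOPWORDS and len(t) > 2]

    print(rich_words("Temperature profiles in the Ligurian Sea"))
    # ['temperature', 'profiles', 'ligurian', 'sea']
    ```

    The extracted terms can then be matched against ontology concepts rather than raw keywords, which is what lets semantic retrieval succeed where exact full-text matching fails.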

  2. Fast Pixel Buffer For Processing With Lookup Tables

    NASA Technical Reports Server (NTRS)

    Fisher, Timothy E.

    1992-01-01

    Proposed scheme for buffering data on intensities of picture elements (pixels) of image increases rate of processing beyond that attainable when data are read, one pixel at a time, from main image memory. Scheme applied in design of specialized image-processing circuitry. Intended to optimize performance of processor in which electronic equivalent of address-lookup table is used to address those pixels in main image memory required for processing.
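    The table-driven pixel processing described above can be sketched in software: compute an output value once for each possible 8-bit intensity, then map whole buffered rows of pixels through the table instead of recomputing per pixel. Gamma correction is an illustrative choice of mapping:

    ```python
    # Sketch of table-driven pixel processing: compute an output value once
    # for each possible 8-bit intensity, then map whole buffered rows of
    # pixels through the table instead of recomputing per pixel. Gamma
    # correction is an illustrative choice of mapping.

    def gamma_lut(gamma=2.2):
        """256-entry table mapping input intensity to gamma-corrected output."""
        return bytes(round(255 * (i / 255) ** (1 / gamma)) for i in range(256))

    def apply_lut(row, lut):
        """Map a row of pixels (any bytes-like buffer) through the table."""
        return bytes(lut[p] for p in row)

    lut = gamma_lut()
    row = bytes([0, 64, 128, 255])
    print(list(apply_lut(row, lut)))
    ```

    The speedup comes from doing the expensive arithmetic 256 times total rather than once per pixel, which is the same economy the hardware scheme exploits.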

  3. Use of the CIM Ontology

    SciTech Connect

    Neumann, Scott; Britton, Jay; Devos, Arnold N.; Widergren, Steven E.

    2006-02-08

    There are many uses for the Common Information Model (CIM), an ontology that is being standardized through Technical Committee 57 of the International Electrotechnical Commission (IEC TC57). The most common uses to date have included application modeling, information exchanges, information management and systems integration. As one should expect, there are many issues that become apparent when the CIM ontology is applied to any one use. Some of these issues are shortcomings within the current draft of the CIM, and others are a consequence of the different ways in which the CIM can be applied using different technologies. As the CIM ontology will and should evolve, there are several dangers that need to be recognized. One is overall consistency and impact upon applications when extending the CIM for a specific need. Another is that a tight coupling of the CIM to specific technologies could limit the value of the CIM in the longer term as an ontology, which becomes a larger issue over time as new technologies emerge. The integration of systems is one specific area of interest for application of the CIM ontology. This is an area dominated by the use of XML for the definition of messages. While this is certainly true when using Enterprise Application Integration (EAI) products, it is even more true with the movement towards the use of Web Services (WS), Service-Oriented Architectures (SOA) and Enterprise Service Buses (ESB) for integration. This general IT industry trend is consistent with trends seen within the IEC TC57 scope of power system management and associated information exchange. The challenge for TC57 is how to best leverage the CIM ontology using the various XML technologies and standards for integration. This paper will provide examples of how the CIM ontology is used and describe some specific issues that should be addressed within the CIM in order to increase its usefulness as an ontology. 
It will also describe some of the issues and challenges that will

  4. Simple Ontology Format (SOFT)

    SciTech Connect

    Sorokine, Alexandre

    2011-10-01

    The Simple Ontology Format (SOFT) library and file format specification provide a set of simple tools for developing and maintaining ontologies. The library, implemented as a Perl module, supports parsing and verification of files in SOFT format, operations on ontologies (adding, removing, or filtering entities), and conversion of ontologies into other formats. SOFT allows users to quickly create ontologies using only a basic text editor, verify them, and portray them in a graph layout system using customized styles.

  5. Datamining with Ontologies.

    PubMed

    Hoehndorf, Robert; Gkoutos, Georgios V; Schofield, Paul N

    2016-01-01

    The use of ontologies has increased rapidly over the past decade and they now provide a key component of most major databases in biology and biomedicine. Consequently, datamining over these databases benefits from considering the specific structure and content of ontologies, and several methods have been developed to use ontologies in datamining applications. Here, we discuss the principles of ontology structure, and datamining methods that rely on ontologies. The impact of these methods in the biological and biomedical sciences has been profound and is likely to increase as more datasets are becoming available using common, shared ontologies. PMID:27115643
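    One structural principle these datamining methods rely on can be shown directly: under the "true-path" rule, an annotation to an ontology class implies annotation to all of its ancestors, so items annotated to sibling terms still match at a shared parent. The tiny is-a fragment below is illustrative, not drawn from any actual ontology release:

    ```python
    # Sketch of why datamining benefits from ontology structure: under the
    # "true-path" rule an annotation to a class implies annotation to every
    # ancestor class, so items annotated to sibling terms still match at a
    # shared parent. The tiny is-a fragment below is illustrative only.

    IS_A = {  # child -> parent
        "apoptosis": "cell death",
        "necrosis": "cell death",
        "cell death": "biological process",
    }

    def with_ancestors(term):
        """Return the term together with all of its is-a ancestors."""
        out = {term}
        while term in IS_A:
            term = IS_A[term]
            out.add(term)
        return out

    def shared_terms(t1, t2):
        """Terms at which two annotations agree once propagated upward."""
        return with_ancestors(t1) & with_ancestors(t2)

    print(sorted(shared_terms("apoptosis", "necrosis")))
    # ['biological process', 'cell death']
    ```

    Measures such as semantic similarity and enrichment analysis build on exactly this upward propagation, which is why they outperform flat keyword matching over annotated databases.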

  6. Research on the complex network of the UNSPSC ontology

    NASA Astrophysics Data System (ADS)

    Xu, Yingying; Zou, Shengrong; Gu, Aihua; Wei, Li; Zhou, Ta

    The UNSPSC ontology mainly applies to the classification system used by e-business and governments for buying products and services worldwide, and supports the logical structure of the classification of products and services. In this paper, techniques from complex network analysis were applied to the structure of the ontology: each ontology concept corresponds to a node of the network, and each relationship between concepts corresponds to an edge. Using existing analysis methods and performance indicators from complex network research, we analyze the degree distribution and community structure of the ontology; this research will help evaluate the concepts of the ontology, classify them, and improve the efficiency of semantic matching.
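    The mapping the paper describes can be sketched in a few lines: ontology concepts become nodes, concept relations become edges, and standard network measures such as the degree distribution follow directly. The tiny taxonomy below is illustrative, not actual UNSPSC content:

    ```python
    from collections import Counter

    # Sketch of the ontology-to-network mapping: concepts become nodes,
    # concept relations become edges, and standard measures (here, the
    # degree distribution) are computed. The tiny taxonomy below is
    # illustrative, not actual UNSPSC content.

    EDGES = [  # (parent concept, child concept)
        ("Products", "IT Equipment"), ("Products", "Office Supplies"),
        ("IT Equipment", "Laptops"), ("IT Equipment", "Printers"),
        ("Office Supplies", "Paper"),
    ]

    degree = Counter()
    for a, b in EDGES:
        degree[a] += 1
        degree[b] += 1

    # Degree distribution: how many nodes have degree k.
    distribution = Counter(degree.values())
    print(dict(degree))
    print(dict(distribution))
    ```

    On a real classification tree the shape of this distribution reveals hub concepts and candidate communities, which is the information the paper uses to evaluate and cluster the ontology.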

  7. Multiple Lookup Table-Based AES Encryption Algorithm Implementation

    NASA Astrophysics Data System (ADS)

    Gong, Jin; Liu, Wenyi; Zhang, Huixin

    A new AES (Advanced Encryption Standard) encryption algorithm implementation is proposed in this paper. It is based on five lookup tables, which are generated from the S-box (the substitution table in AES). The obvious advantages are reducing the code size, improving the implementation efficiency, and helping new learners to understand the AES encryption algorithm and the GF(2^8) multiplication that is necessary to correctly implement AES [1]. This method can be applied on processors with a word length of 32 bits or above, on FPGAs, and elsewhere. Correspondingly, it can be implemented in VHDL, Verilog, VB and other languages.
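    The field arithmetic underlying such tables can be sketched as follows: GF(2^8) multiplication with the AES reduction polynomial, from which multiply-by-2 and multiply-by-3 tables (building blocks of combined encryption tables) are precomputed. The worked product {57}•{83} = {c1} is the example given in FIPS-197; the rest is a generic sketch, not the paper's five-table construction:

    ```python
    # Sketch of the finite-field arithmetic behind AES lookup tables:
    # gmul multiplies in GF(2^8) with the AES reduction polynomial
    # x^8 + x^4 + x^3 + x + 1 (0x11B). From it one can pretabulate the
    # multiply-by-2 and multiply-by-3 tables used to build combined
    # encryption tables. This is a generic sketch, not the paper's
    # five-table construction.

    def gmul(a, b):
        """Multiply a and b in GF(2^8), the AES field."""
        result = 0
        for _ in range(8):
            if b & 1:
                result ^= a
            b >>= 1
            a <<= 1
            if a & 0x100:
                a ^= 0x11B  # reduce modulo the AES polynomial
        return result

    MUL2 = [gmul(x, 2) for x in range(256)]
    MUL3 = [gmul(x, 3) for x in range(256)]

    # Worked example from FIPS-197: {57} x {83} = {c1}.
    print(hex(gmul(0x57, 0x83)))  # 0xc1
    ```

    Combined tables fold these multiplications together with the S-box substitution, so each encryption round reduces to table lookups and XORs.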

  8. THE PLANT ONTOLOGY CONSORTIUM AND PLANT ONTOLOGIES

    Technology Transfer Automated Retrieval System (TEKTRAN)

    The goal of the Plant Ontology™ Consortium is to produce structured controlled vocabularies, arranged in ontologies, that can be applied to plant-based database information even as knowledge of the biology of the relevant plant taxa (e.g., development, anatomy, morphology, genomics, proteomics) is ...

  9. Assessment Applications of Ontologies.

    ERIC Educational Resources Information Center

    Chung, Gregory K. W. K.; Niemi, David; Bewley, William L.

    This paper discusses the use of ontologies and their applications to assessment. An ontology provides a shared and common understanding of a domain that can be communicated among people and computational systems. The ontology captures one or more experts' conceptual representation of a domain expressed in terms of concepts and the relationships…

  10. SWEET- An Upper Level Ontology for Earth System Science

    NASA Astrophysics Data System (ADS)

    Raskin, R.

    2005-12-01

    The Semantic Web for Earth and Environmental Terminology (SWEET) provides a set of upper-level ontologies constituting a concept space of Earth system science. These ontologies can be used, mapped, or extended by developers of specialized domain ontologies. SWEET components are being adopted within a diverse range of applications, including: the Geosciences Network (GEON), the Marine Metadata Initiative (MMI), the Virtual Solar Terrestrial Observatory (VSTO), and the Earth Science Markup Language (ESML). SWEET includes 12 ontologies, decomposed into component parts that can be reassembled to meet the needs of user communities. For example, the Property ontology terms (e.g., temperature, pressure) can be associated with measurable (observable) quantities of a dataset. The Substance ontology provides representations of the substance in which a property is being measured (e.g., air, water, rock). The Earth Realm ontology provides representations for the environmental regions of the Earth (e.g., atmospheric boundary layer, ocean mixed layer). The Data and Service ontology enables representations of how data are captured, stored, and accessed. The Numerics ontology entries represent 2-D and 3-D objects or spatial/temporal entities and relations. The Human Activities ontology captures the human side or applications of Earth science. The Phenomena ontology describes major geophysical or geophysical-related events. All of the ontologies are written in the OWL-DL language to give domain specialists a starting vocabulary, over which layers, synonyms, or extensions can be applied.

  11. A Pipelined IP Address Lookup Module for 100 Gbps Line Rates and beyond

    NASA Astrophysics Data System (ADS)

    Teuchert, Domenic; Hauger, Simon

    New Internet services and technologies call for higher packet switching capacities in the core network. Thus, a performance bottleneck arises at the backbone routers, as forwarding of Internet Protocol (IP) packets requires searching for the most specific entry in a forwarding table that contains up to several hundred thousand address prefixes. The Tree Bitmap algorithm provides a well-balanced solution with respect to storage needs as well as search and update complexity. In this paper, we present a pipelined lookup module based on this algorithm, which allows for easy adaptation to diverse protocol and hardware constraints. We determined the pipelining degree required to achieve the throughput for a 100 Gbps router line card by analyzing a representative sub-unit for various configured sizes. The module supports IPv4 and IPv6 configurations providing this throughput, as we determined that our design achieves a processing rate of 178 million packets per second.
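
    The Tree Bitmap algorithm compresses a prefix trie into multibit nodes encoded with bitmaps; as a hedged illustration of just the longest-prefix-match semantics it accelerates, here is an uncompressed binary-trie sketch (class and field names are invented):

```python
# Minimal longest-prefix-match sketch. Real Tree Bitmap nodes cover several
# trie levels at once and use bitmaps instead of per-bit dictionaries.

class PrefixTrie:
    def __init__(self):
        self.root = {}  # each node: optional '0'/'1' children and a 'hop' entry

    def insert(self, prefix_bits, next_hop):
        node = self.root
        for bit in prefix_bits:
            node = node.setdefault(bit, {})
        node['hop'] = next_hop

    def lookup(self, addr_bits):
        """Return the next hop of the most specific (longest) matching prefix."""
        node, best = self.root, None
        for bit in addr_bits:
            if 'hop' in node:
                best = node['hop']       # remember the best match so far
            if bit not in node:
                return best              # cannot descend further
            node = node[bit]
        return node.get('hop', best)

trie = PrefixTrie()
trie.insert('10', 'A')     # prefix 10/2 -> next hop A
trie.insert('1011', 'B')   # prefix 1011/4 -> next hop B
print(trie.lookup('10110000'))  # B: the longer prefix wins
print(trie.lookup('10010000'))  # A
```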

  12. Efficient Lookup Table Retrievals of Gas Abundance from CRISM Spectra

    NASA Astrophysics Data System (ADS)

    Toigo, A. D.; Smith, M. D.; Seelos, F. P.; CRISM Science; Operations Teams

    2011-12-01

    The Compact Reconnaissance Imaging Spectrometer for Mars (CRISM) instrument on the Mars Reconnaissance Orbiter (MRO) spacecraft has been collecting spectra of Mars in the visible to near-infrared for over 5 years (almost 3 Martian years). Observations consist of image cubes, with two main spectral samplings (approximately 70 and 550 spectral channels) and two main imaging resolutions (approximately 20 and 200 m/pixel). We present retrievals of gas abundances, specifically CO2, H2O, and CO, from spectra collected in all observation modes. The retrievals are efficiently performed using a lookup table, where the strengths of gas absorption features are pre-calculated for an N-dimensional discrete grid of known input parameters (season, location, environment, viewing geometry, etc.) and the one unknown parameter to be retrieved (gas abundance). A reverse interpolation in the lookup table is used to match the observed strength of the gas absorption to the gas abundance. This algorithm is extremely fast compared to traditional radiative transfer computations that seek to iteratively fit calculated results to an observed spectral feature, and can therefore be applied on a pixel-by-pixel basis to the tens of thousands of CRISM images, to examine cross-scene structure as well as to produce climatological averages.
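
    The reverse-interpolation step can be sketched as follows. The grid values below are invented for illustration, and the real retrieval interpolates over an N-dimensional grid of season, geometry, etc., rather than this 1-D example:

```python
# Sketch of "reverse interpolation" in a precomputed lookup table: the table
# maps gas abundance -> absorption band depth, and retrieval inverts it by
# linear interpolation between the two bracketing grid points.

def retrieve_abundance(observed_depth, abundances, depths):
    """Invert a monotonic abundance->band-depth table for one observation."""
    for i in range(len(depths) - 1):
        lo, hi = depths[i], depths[i + 1]
        if lo <= observed_depth <= hi:
            frac = (observed_depth - lo) / (hi - lo)
            return abundances[i] + frac * (abundances[i + 1] - abundances[i])
    raise ValueError("observed depth outside table range")

abund_grid = [0.0, 10.0, 20.0, 30.0]   # hypothetical gas abundances
depth_grid = [0.0, 0.08, 0.15, 0.20]   # corresponding precomputed band depths
print(retrieve_abundance(0.115, abund_grid, depth_grid))  # between 10 and 20
```

    Because the expensive radiative-transfer runs happen once, per-pixel retrieval reduces to this cheap table inversion.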

  13. A "lookup table" schema for synthetic biological patterning.

    PubMed

    Reitz, Frederick B

    2012-05-01

    A schema is proposed by which the three-dimensional structure and temporal development of a biological organism might be encoded and implemented via a genetic "lookup table". In the schema, diffusive morphogen gradients and/or the global concentration of a quickly diffusing signal index sets of kinase genes having promoters with logarithmically diminished affinity for the signal. Specificity of indexing is enhanced via concomitant expression of phosphatases undoing phosphorylation by "neighboring" kinases of greater affinity. Combinations of thus-selected kinases in turn jointly activate, via multiple phosphorylation, a particular enzyme from a virtual, multi-dimensional array thereof, at locations and times specified within the "lookup table". In principle, such a scheme could be employed to specify arbitrary gross anatomy, surface pigmentation, and/or developmental sequencing, extending the burgeoning toolset of the nascent field of synthetic morphology. A model of two-dimensional surface coloration using this scheme is specified, and LabVIEW software for its exploration is described and made available. PMID:22350667

  14. An Ontology of Therapies

    NASA Astrophysics Data System (ADS)

    Eccher, Claudio; Ferro, Antonella; Pisanelli, Domenico M.

    Ontologies are the essential glue for building interoperable systems and the talk of the day in the medical community. In this paper we present the ontology of medical therapies developed in the course of the Oncocure project, aimed at building a guideline-based decision support system integrated with a legacy Electronic Patient Record (EPR). The therapy ontology is based upon the DOLCE top-level ontology. It is our opinion that our ontology, besides constituting a model capturing the precise meaning of therapy-related concepts, can serve several practical purposes: interfacing automatic support systems with a legacy EPR, enabling automatic data analysis, and controlling possible medical errors made during EPR data input.

  15. Simple Ontology Format (SOFT)

    Energy Science and Technology Software Center (ESTSC)

    2011-10-01

    Simple Ontology Format (SOFT) library and file format specification provides a set of simple tools for developing and maintaining ontologies. The library, implemented as a Perl module, supports parsing and verification of files in SOFT format, operations on ontologies (adding, removing, or filtering of entities), and conversion of ontologies into other formats. SOFT allows users to quickly create ontologies using only a basic text editor, verify them, and portray them in a graph layout system using customized styles.

  16. Bringing Ontology to the Gene Ontology

    PubMed Central

    Andersen, William

    2003-01-01

    We present an analysis of some considerations involved in expressing the Gene Ontology (GO) as a machine-processible ontology, reflecting principles of formal ontology. GO is a controlled vocabulary that is intended to facilitate communication between biologists by standardizing usage of terms in database annotations. Making such controlled vocabularies maximally useful in support of bioinformatics applications requires explicating in machine-processible form the implicit background information that enables human users to interpret the meaning of the vocabulary terms. In the case of GO, this process would involve rendering the meanings of GO into a formal (logical) language with the help of domain experts, and adding additional information required to support the chosen formalization. A controlled vocabulary augmented in these ways is commonly called an ontology. In this paper, we make a modest exploration to determine the ontological requirements for this extended version of GO. Using the terms within the three GO hierarchies (molecular function, biological process and cellular component), we investigate the facility with which GO concepts can be ontologized, using available tools from the philosophical and ontological engineering literature. PMID:18629099

  17. Ontology Languages and Engineering

    NASA Astrophysics Data System (ADS)

    Horrocks, Ian

    Ontologies and ontology-based systems are rapidly becoming mainstream technologies, with RDF and OWL now being deployed in diverse application domains, and with major technology vendors starting to augment their existing systems with ontological reasoning. For example, Oracle recently enhanced its well-known database management system with modules that use RDF/OWL ontologies to support "semantic data management", and its product brochure lists numerous application areas that can benefit from this technology, including Enterprise Information Integration, Knowledge Mining, Finance, Compliance Management and Life Science Research. The design of the high-quality ontologies needed to support such applications is, however, still extremely challenging. In this talk I will describe the design of OWL, show how it facilitates the development of ontology engineering tools, describe the increasingly wide range of available tools, and explain how such tools can be used to support the entire ontology life-cycle of design, deployment and maintenance.

  18. Table look-up approach to pattern recognition.

    NASA Technical Reports Server (NTRS)

    Eppler, W. G.; Helmke, C. A.; Evans, R. H.

    1971-01-01

    The table look-up approach is based on prestoring in fast, random-access core memory the desired answer (e.g., crop type) for all combinations of multispectral scanner outputs from selected channels. Specifically, each set of measurements from a given point on the ground is interpreted as the address in core memory where the answer can be retrieved. Substituting this simple retrieval operation for the lengthy computations required by the conventional approach offers two advantages: (1) the processing time is reduced by more than an order of magnitude; (2) the multispectral scanner data can be processed by computers of minimal sophistication, complexity, and cost. These two advantages may make it possible to use an onboard computer to perform the classification function in flight.
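
    The address-formation idea, interpreting a set of quantized channel measurements as a single memory address, can be sketched as follows. The quantization width, channel count, and stand-in classifier are all illustrative:

```python
# Sketch of table look-up classification: pack quantized scanner channels
# into one integer that directly indexes a precomputed class table, so each
# pixel costs one memory read instead of running the classifier.

BITS = 4  # illustrative quantization: 4 bits per channel

def pack(channels):
    """Pack quantized channel values into a single table address."""
    addr = 0
    for value in channels:
        addr = (addr << BITS) | (value & ((1 << BITS) - 1))
    return addr

def classify(channels):
    """Toy stand-in for the expensive per-pixel classifier."""
    return 'wheat' if sum(channels) > 20 else 'soil'

# Precompute the answer for every possible address once, offline.
n_channels = 3
table = [None] * (1 << (BITS * n_channels))
for addr in range(len(table)):
    channels = [(addr >> (BITS * (n_channels - 1 - i))) & ((1 << BITS) - 1)
                for i in range(n_channels)]
    table[addr] = classify(channels)

# Online: one lookup per pixel.
print(table[pack([9, 8, 7])])  # wheat (sum 24 > 20)
print(table[pack([2, 3, 4])])  # soil
```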

  19. Table-lookup algorithms for elementary functions and their error analysis

    SciTech Connect

    Tang, Ping Tak Peter.

    1991-01-01

    Table-lookup algorithms for calculating elementary functions offer superior speed and accuracy when compared with more traditional algorithms. With careful design, we show that it is feasible to implement table-lookup algorithms in hardware. Furthermore, we present a uniform approach to carry out tight error analysis for such implementations. 7 refs.

  20. Cache directory look-up re-use as conflict check mechanism for speculative memory requests

    DOEpatents

    Ohmacht, Martin

    2013-09-10

    In a cache memory, energy and other efficiencies can be realized by saving a result of a cache directory lookup for sequential accesses to a same memory address. Where the cache is a point of coherence for speculative execution in a multiprocessor system, with directory lookups serving as the point of conflict detection, such saving becomes particularly advantageous.
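
    A software analogue of this saving can be sketched as follows; the patent concerns hardware cache directories, so the class, method names, and counter here are purely illustrative:

```python
# Sketch: save the result of the last directory lookup and reuse it when the
# next access targets the same address, so a run of sequential accesses to
# one line costs a single real directory probe.

class Directory:
    def __init__(self):
        self.entries = {}        # address -> coherence state
        self._saved = None       # (address, result) of the last real lookup
        self.real_lookups = 0    # counts actual directory probes

    def lookup(self, addr):
        if self._saved is not None and self._saved[0] == addr:
            return self._saved[1]  # reuse the saved result: no new probe
        self.real_lookups += 1
        result = self.entries.get(addr, 'invalid')
        self._saved = (addr, result)
        return result

d = Directory()
d.entries[0x40] = 'shared'
d.lookup(0x40); d.lookup(0x40); d.lookup(0x40)  # same address three times
print(d.real_lookups)  # 1
```

    In the patented setting the saved lookup doubles as the conflict check for speculative accesses, which is where the extra efficiency comes from.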

  1. A hierarchical P2P overlay network for interest-based media contents lookup

    NASA Astrophysics Data System (ADS)

    Lee, HyunRyong; Kim, JongWon

    2006-10-01

    We propose a P2P (peer-to-peer) overlay architecture, called IGN (interest grouping network), for contents lookup in the DHC (digital home community), which aims to provide a formalized home-network-extended construction of the current P2P file sharing community. The IGN utilizes Chord and the de Bruijn graph for its hierarchical overlay network construction. By combining the two schemes and inheriting their features, the IGN efficiently supports contents lookup. More specifically, by introducing metadata-based lookup keywords, the IGN offers detailed contents lookup that can reflect user interests. Moreover, the IGN tries to reflect the home network environments of the DHC by utilizing the HG (home gateway) of each home network as a participating node of the IGN. Through experimental and analysis results, we show that the IGN is more efficient than Chord, a well-known DHT (distributed hash table)-based lookup protocol.
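
    As a rough illustration of the DHT principle underlying Chord (not the IGN's combined Chord/de Bruijn construction), the following sketch hashes nodes and keys onto a ring and assigns each key to its successor node. Real Chord adds finger tables for logarithmic-hop routing; names and the ring size are invented:

```python
# Consistent-hashing sketch: a key is stored at the first node clockwise
# from the key's position on the hash ring (its "successor").

import hashlib

RING_BITS = 16  # illustrative ring size: identifiers in [0, 2^16)

def ring_hash(name):
    return int(hashlib.sha1(name.encode()).hexdigest(), 16) % (1 << RING_BITS)

def successor(node_ids, key_id):
    """First node at or after key_id on the ring, wrapping around."""
    candidates = sorted(node_ids)
    for n in candidates:
        if n >= key_id:
            return n
    return candidates[0]  # wrap past the top of the ring

nodes = [ring_hash(f"home-gateway-{i}") for i in range(8)]
key = ring_hash("media:title:some-film")
print("key", key, "-> node", successor(nodes, key))
```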

  2. A fast IPv6 route lookup scheme for high-speed optical link

    NASA Astrophysics Data System (ADS)

    Yao, Xingmiao; Li, Lemin

    2004-05-01

    A fast IPv6 route lookup scheme implemented in hardware is proposed in this paper. It supports fast IP address lookup and can insert and delete prefixes effectively. A novel compressed multibit trie algorithm that decreases the memory space occupied and the average searching time is applied. The scheme proposed in this paper is superior to other IPv6 route lookup schemes; for example, by using an SRAM pipeline, a lookup speed of 125×10^6 lookups per second can be realized, satisfying a 40 Gbps optical link rate with only 1.9 Mbytes of memory consumption. As there are no actual IPv6 route prefixes available, we generated various simulated databases with differing prefix-length distributions. Simulation results show that our scheme has reasonable lookup time and memory consumption for all prefix-length distributions.

  3. Kuhn's Ontological Relativism.

    ERIC Educational Resources Information Center

    Sankey, Howard

    2000-01-01

    Discusses Kuhn's model of scientific theory change. Documents Kuhn's move away from conceptual relativism and rational relativism. Provides an analysis of his present ontological form of relativism. (CCM)

  4. Ontology Sparse Vector Learning Algorithm for Ontology Similarity Measuring and Ontology Mapping via ADAL Technology

    NASA Astrophysics Data System (ADS)

    Gao, Wei; Zhu, Linli; Wang, Kaiyun

    2015-12-01

    Ontology, a model of knowledge representation and storage, has had extensive applications in pharmaceutics, social science, chemistry and biology. In the age of "big data", the constructed concepts are often represented as higher-dimensional data by scholars, and thus sparse learning techniques have been introduced into ontology algorithms. In this paper, based on the alternating direction augmented Lagrangian method, we present an ontology optimization algorithm for ontological sparse vector learning, and a fast version of this ontology technology. The optimal sparse vector is obtained by an iterative procedure, and the ontology function is then obtained from the sparse vector. Four simulation experiments show that our ontological sparse vector learning model has a higher precision ratio on plant ontology, humanoid robotics ontology, biology ontology and physics education ontology data for similarity measuring and ontology mapping applications.
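
    A central building block in such l1-regularized (sparse) learning, including augmented-Lagrangian/ADMM-style methods, is the soft-thresholding operator that zeroes small coefficients. This is a minimal sketch of that generic operator, not the paper's full algorithm:

```python
# Soft thresholding (shrinkage): shrink every coefficient toward zero by lam;
# coefficients smaller than lam in magnitude become exactly zero, which is
# what produces the sparse ontology vector.

def soft_threshold(x, lam):
    return [max(abs(v) - lam, 0.0) * (1 if v > 0 else -1) for v in x]

dense = [0.9, -0.05, 0.3, -0.7, 0.01]
sparse = soft_threshold(dense, 0.1)
print(sparse)  # the 2nd and 5th entries are zeroed out
```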

  5. The geographical ontology, LDAP, and the space information semantic grid

    NASA Astrophysics Data System (ADS)

    Cui, Wei; Li, Deren

    2005-10-01

    The purpose of this research is to discuss the development trend and theory of the semantic integration and interoperability of Geography Information Systems in the network age, and to argue that geographic ontology is the natural outcome of the development of semantics-based integration and interoperability of Geography Information Systems. After analyzing the effects of various new technologies, the paper proposes a new family of ontology classes based on the GIS knowledge built here: the basic ontology, the domain ontology, and the application ontology, which are very useful for sharing and transferring semantic information between complicated distributed systems and for object abstraction. The main contributions of the paper are as follows: 1) taking, for the first time, ontology and LDAP (Lightweight Directory Access Protocol) into the creation and optimization of the architecture of the Spatial Information Grid, accelerating the fusion of Geography Information Systems with other domains' information systems; 2) introducing, for the first time, a hybrid method of building geographic ontology that mixes the strengths of independent domain experts and data mining, improving the efficiency of the domain-expert method and building ontologies semi-automatically; 3) implementing, for the first time, the many-to-many relationships of an integrated ontology system through LDAP referral, and creating an ontology-based virtual organization that can provide transparent service to guests.

  6. The Ontology of Disaster.

    ERIC Educational Resources Information Center

    Thompson, Neil

    1995-01-01

    Explores some key existential or ontological concepts to show their applicability to the complex area of disaster impact as it relates to health and social welfare practice. Draws on existentialist philosophy, particularly that of Jean-Paul Sartre, and introduces some key ontological concepts to show how they specifically apply to the experience…

  7. Constructive Ontology Engineering

    ERIC Educational Resources Information Center

    Sousan, William L.

    2010-01-01

    The proliferation of the Semantic Web depends on ontologies for knowledge sharing, semantic annotation, data fusion, and descriptions of data for machine interpretation. However, ontologies are difficult to create and maintain. In addition, their structure and content may vary depending on the application and domain. Several methods described in…

  8. Development of an Adolescent Depression Ontology for Analyzing Social Data.

    PubMed

    Jung, Hyesil; Park, Hyeoun-Ae; Song, Tae-Min; Jeon, Eunjoo; Kim, Ae Ran; Lee, Joo Yun

    2015-01-01

    Depression in adolescence is associated with significant suicidality. Therefore, it is important to detect the risk for depression and provide timely care to adolescents. This study aims to develop an ontology for collecting and analyzing social media data about adolescent depression. The ontology was developed following the 'Ontology Development 101' methodology. Important terms were extracted from several clinical practice guidelines and from postings on social network services. We extracted 777 terms, which were categorized into 'risk factors', 'signs and symptoms', 'screening', 'diagnosis', 'treatment', and 'prevention'. The ontology developed in this study can be used as a framework to understand adolescent depression using unstructured data from social media. PMID:26262398

  9. High speed lookup table approach to radiometric calibration of multispectral image data

    NASA Technical Reports Server (NTRS)

    Kelly, W. L., IV; Meredith, B. D.; Howle, W. M.

    1980-01-01

    A concept for performing radiometric correction of multispectral image data onboard a spacecraft at very high data rates is presented and demonstrated. This concept utilized a lookup table approach, implemented in hardware, to convert the raw sensor data into the desired corrected output data. The digital lookup table memory was interfaced to a microprocessor to allow the data correction function to be completely programmable. Sensor data was processed with this approach at rates equal to the access time of the lookup table memory. This concept offers flexible high speed data processing for a wide range of applications and will benefit from the continuing improvements in performance of digital memories.
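
    The lookup-table correction idea can be sketched as follows, assuming for illustration a simple linear gain/offset calibration per channel (the actual correction function and the numeric values here are invented):

```python
# Sketch of onboard radiometric correction by table lookup: precompute the
# corrected value for every possible raw sensor count once, then correct
# each pixel with a single table index instead of per-pixel arithmetic.

def build_calibration_lut(gain, offset, max_count=255):
    """Precompute corrected radiance for every raw 8-bit sensor count."""
    return [gain * raw + offset for raw in range(max_count + 1)]

lut = build_calibration_lut(gain=0.5, offset=2.0)  # hypothetical channel cal
raw_pixels = [0, 10, 255]
corrected = [lut[p] for p in raw_pixels]  # one memory access per pixel
print(corrected)  # [2.0, 7.0, 129.5]
```

    Because the table is just memory, reloading it (as via the microprocessor interface described above) reprograms the correction without changing the data path.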

  10. Progressive halftone watermarking using multilayer table lookup strategy.

    PubMed

    Guo, Jing-Ming; Lai, Guo-Hung; Wong, Koksheik; Chang, Li-Chung

    2015-07-01

    In this paper, a halftoning-based multilayer watermarking method of low computational complexity is proposed. An additional data-hiding technique is also employed to embed multiple watermarks into the watermark to be embedded, improving security and embedding capacity. At the encoder, the efficient direct binary search method is employed to generate 256 reference tables to ensure that the output is in halftone format. Subsequently, watermarks are embedded via a set of optimized compressed tables with various textural angles for table lookup. At the decoder, the least-mean-square metric is considered to increase the differences among the generated phenotypes of the embedding angles and to reduce the required number of dimensions for each angle. Finally, the naïve Bayes classifier is employed to collect the possibilities of multilayer information for classifying the associated angles to extract the embedded watermarks. These decoded watermarks can be further overlapped to retrieve the additional hidden-layer watermarks. Experimental results show that the proposed method requires only 8.4 ms to embed a watermark into an image of size 512×512, under the 32-bit Windows 7 platform running on an Intel Core i7 (Sandy Bridge) with 4 GB RAM and the Visual Studio 2010 IDE. Finally, only 2 MB is required to store the proposed compressed reference table. PMID:25576570

  11. 3-D lookup: Fast protein structure database searches

    SciTech Connect

    Holm, L.; Sander, C.

    1995-12-31

    There are far fewer classes of three-dimensional protein folds than sequence families, but the problem of detecting three-dimensional similarities is NP-complete. We present a novel heuristic for identifying 3-D similarities between a query structure and the database of known protein structures. Many methods for structure alignment use a bottom-up approach, first identifying local matches and then solving a combinatorial problem in building up larger clusters of matching substructures. Here the top-down approach is to start with the global comparison and select a rough superimposition using a fast 3-D lookup of secondary structure motifs. The superimposition is then extended to an alignment of Cα atoms by an iterative dynamic programming step. An all-against-all comparison of 385 representative proteins (150,000 pair comparisons) took 1 day of computer time on a single R8000 processor. In other words, one query structure is scanned against the database in a matter of minutes. The method is rated at 90% reliability at capturing statistically significant similarities. It is useful as a rapid preprocessor to a comprehensive protein structure database search system.

  12. Application of Ontologies for Big Earth Data

    NASA Astrophysics Data System (ADS)

    Huang, T.; Chang, G.; Armstrong, E. M.; Boening, C.

    2014-12-01

    Connected data is smarter data! Earth Science research infrastructure must do more than just being able to support temporal, geospatial discovery of satellite data. As the Earth Science data archives continue to expand across NASA data centers, the research communities are demanding smarter data services. A successful research infrastructure must be able to present researchers the complete picture, that is, datasets with linked citations, related interdisciplinary data, imageries, current events, social media discussions, and scientific data tools that are relevant to the particular dataset. The popular Semantic Web for Earth and Environmental Terminology (SWEET) ontologies is a collection of ontologies and concepts designed to improve discovery and application of Earth Science data. The SWEET ontologies collection was initially developed to capture the relationships between keywords in the NASA Global Change Master Directory (GCMD). Over the years this popular ontologies collection has expanded to cover over 200 ontologies and 6000 concepts to enable scalable classification of Earth system science concepts and Space science. This presentation discusses the semantic web technologies as the enabling technology for data-intensive science. We will discuss the application of the SWEET ontologies as a critical component in knowledge-driven research infrastructure for some of the recent projects, which include the DARPA Ontological System for Context Artifact and Resources (OSCAR), 2013 NASA ACCESS Virtual Quality Screening Service (VQSS), and the 2013 NASA Sea Level Change Portal (SLCP) projects. The presentation will also discuss the benefits in using semantic web technologies in developing research infrastructure for Big Earth Science Data in an attempt to "accommodate all domains and provide the necessary glue for information to be cross-linked, correlated, and discovered in a semantically rich manner." [1] [1] Savas Parastatidis: A platform for all that we know

  13. Dynamic Generation of Reduced Ontologies to Support Resource Constraints of Mobile Devices

    ERIC Educational Resources Information Center

    Schrimpsher, Dan

    2011-01-01

    As Web Services and the Semantic Web become more important, enabling technologies such as web service ontologies will grow larger. At the same time, use of mobile devices to access web services has doubled in the last year. The ability of these resource constrained devices to download and reason across these ontologies to support service discovery…

  14. Effect of Lookup Aids on Mature Readers' Recall of Technical Text.

    ERIC Educational Resources Information Center

    Blohm, Paul J.

    1987-01-01

    Concludes that despite the potential disruption to the flow of understanding, readers' use of lookups is a necessary and appropriate fixup activity when reading only is inadequate for remedying text confusions. (FL)

  15. Data mining for ontology development.

    SciTech Connect

    Davidson, George S.; Strasburg, Jana; Stampf, David; Neymotin,Lev; Czajkowski, Carl; Shine, Eugene; Bollinger, James; Ghosh, Vinita; Sorokine, Alexandre; Ferrell, Regina; Ward, Richard; Schoenwald, David Alan

    2010-06-01

    A multi-laboratory ontology construction effort during the summer and fall of 2009 prototyped an ontology for counterfeit semiconductor manufacturing. This effort included an ontology development team and an ontology validation methods team. Here the third team of the Ontology Project, the Data Analysis (DA) team, reports on their approaches, the tools they used, and results of mining the literature for terminology pertinent to counterfeit semiconductor manufacturing. A discussion of the value of ontology-based analysis is presented, with insights drawn from other ontology-based methods regularly used in the analysis of genomic experiments. Finally, suggestions for future work are offered.

  16. A Method for Evaluating and Standardizing Ontologies

    ERIC Educational Resources Information Center

    Seyed, Ali Patrice

    2012-01-01

    The Open Biomedical Ontology (OBO) Foundry initiative is a collaborative effort for developing interoperable, science-based ontologies. The Basic Formal Ontology (BFO) serves as the upper ontology for the domain-level ontologies of OBO. BFO is an upper ontology of types as conceived by defenders of realism. Among the ontologies developed for OBO…

  17. Lookup Tables Versus Stacked Rasch Analysis in Comparing Pre- and Postintervention Adult Strabismus-20 Data

    PubMed Central

    Leske, David A.; Hatt, Sarah R.; Liebermann, Laura; Holmes, Jonathan M.

    2016-01-01

    Purpose We compare two methods of analysis for Rasch scoring pre- to postintervention data: Rasch lookup table versus de novo stacked Rasch analysis using the Adult Strabismus-20 (AS-20). Methods One hundred forty-seven subjects completed the AS-20 questionnaire prior to surgery and 6 weeks postoperatively. Subjects were classified 6 weeks postoperatively as “success,” “partial success,” or “failure” based on angle and diplopia status. Postoperative change in AS-20 scores was compared for all four AS-20 domains (self-perception, interactions, reading function, and general function) overall and by success status using two methods: (1) applying historical Rasch threshold measures from lookup tables and (2) performing a stacked de novo Rasch analysis. Change was assessed by analyzing effect size, improvement exceeding 95% limits of agreement (LOA), and score distributions. Results Effect sizes were similar for all AS-20 domains whether obtained from lookup tables or stacked analysis. Similar proportions exceeded 95% LOAs using lookup tables versus stacked analysis. Improvement in median score was observed for all AS-20 domains using lookup tables and stacked analysis (P < 0.0001 for all comparisons). Conclusions The Rasch-scored AS-20 is a responsive and valid instrument designed to measure strabismus-specific health-related quality of life. When analyzing pre- to postoperative change in AS-20 scores, Rasch lookup tables and de novo stacked Rasch analysis yield essentially the same results. Translational Relevance We describe a practical application of lookup tables, allowing the clinician or researcher to score the Rasch-calibrated AS-20 questionnaire without specialized software. PMID:26933524
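
    How a lookup table replaces specialized Rasch software can be sketched as follows. The raw-score-to-measure values below are invented for illustration and are not the published AS-20 calibration:

```python
# Sketch of scoring with a Rasch lookup table: each raw sum score maps to a
# precomputed Rasch measure (in logits), so pre-to-post change is computed
# with two table lookups and a subtraction.

RASCH_LOOKUP = {0: -4.2, 1: -3.1, 2: -2.3, 3: -1.6, 4: -0.9,
                5: -0.2, 6: 0.6, 7: 1.4, 8: 2.4, 9: 3.6, 10: 4.8}

def rasch_change(pre_raw, post_raw, table=RASCH_LOOKUP):
    """Pre-to-post change on the Rasch (logit) scale from raw scores."""
    return table[post_raw] - table[pre_raw]

print(round(rasch_change(3, 7), 2))  # about 3.0 logits of improvement
```

    Note the nonlinearity of the table: equal raw-score gains near the extremes correspond to larger logit changes, which is precisely why raw scores are not compared directly.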

  18. The neurological disease ontology

    PubMed Central

    2013-01-01

    Background We are developing the Neurological Disease Ontology (ND) to provide a framework to enable representation of aspects of neurological diseases that are relevant to their treatment and study. ND is a representational tool that addresses the need for unambiguous annotation, storage, and retrieval of data associated with the treatment and study of neurological diseases. ND is being developed in compliance with the Open Biomedical Ontology Foundry principles and builds upon the paradigm established by the Ontology for General Medical Science (OGMS) for the representation of entities in the domain of disease and medical practice. Initial applications of ND will include the annotation and analysis of large data sets and patient records for Alzheimer’s disease, multiple sclerosis, and stroke. Description ND is implemented in OWL 2 and currently has more than 450 terms that refer to and describe various aspects of neurological diseases. ND directly imports the development version of OGMS, which uses BFO 2. Term development in ND has primarily extended the OGMS terms ‘disease’, ‘diagnosis’, ‘disease course’, and ‘disorder’. We have imported and utilize over 700 classes from related ontology efforts including the Foundational Model of Anatomy, Ontology for Biomedical Investigations, and Protein Ontology. ND terms are annotated with ontology metadata such as a label (term name), term editors, textual definition, definition source, curation status, and alternative terms (synonyms). Many terms have logical definitions in addition to these annotations. Current development has focused on the establishment of the upper-level structure of the ND hierarchy, as well as on the representation of Alzheimer’s disease, multiple sclerosis, and stroke. The ontology is available as a version-controlled file at http://code.google.com/p/neurological-disease-ontology along with a discussion list and an issue tracker. Conclusion ND seeks to provide a formal

  19. Open Biomedical Ontology-based Medline exploration

    PubMed Central

    Xuan, Weijian; Dai, Manhong; Mirel, Barbara; Song, Jean; Athey, Brian; Watson, Stanley J; Meng, Fan

    2009-01-01

    Background Effective Medline database exploration is critical for the understanding of high throughput experimental results and the development of novel hypotheses about the mechanisms underlying the targeted biological processes. While existing solutions enhance Medline exploration through different approaches such as document clustering, network presentations of underlying conceptual relationships and the mapping of search results to MeSH and Gene Ontology trees, we believe the use of multiple ontologies from the Open Biomedical Ontology can greatly help researchers to explore literature from different perspectives as well as to quickly locate the most relevant Medline records for further investigation. Results We developed an ontology-based interactive Medline exploration solution called PubOnto to enable the interactive exploration and filtering of search results through the use of multiple ontologies from the OBO foundry. The PubOnto program is a rich internet application based on the FLEX platform. It contains a number of interactive tools, visualization capabilities, an open service architecture, and a customizable user interface. It is freely accessible at: . PMID:19426463

  20. A RESTful way to Manage Ontologies

    NASA Astrophysics Data System (ADS)

    Lowry, R. K.; Lawrence, B. N.

    2009-04-01

In 2005 BODC implemented the first version of a vocabulary server developed as a contribution to the NERC DataGrid project. Vocabularies were managed within an RDBMS environment and accessed through a SOAP Web Service API. This was designed as a database query interface, with operations targeted at designated database fields and results returned as strings. At the end of 2007 a new version of the server was released, capable of serving thesauri and ontologies as well as vocabularies. The SOAP API functionality was enhanced and the output format changed to XML. In addition, a pseudo-RESTful query interface was developed, directly addressing terms and lists by URLs. This is in full operational use by projects such as SeaDataNet and will run for the foreseeable future. However, operational experience has exposed shortcomings in both the API and its document payload. Other ontology servers, notably at MMI and CSIRO, are coming on-line, making this an opportune time to unify ontology management. This paper presents a RESTful API and payload document schema based on the lessons learned in four years of operational vocabulary serving; it provides full ontology management functionality and has the potential to form the basis for an interoperable network of distributed ontologies.
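The RESTful idea described above, addressing terms and lists directly by URL so that a GET becomes a simple lookup rather than a SOAP query, can be sketched minimally. The URLs and payload fields below are invented for illustration and do not reflect the actual NERC vocabulary server API:

```python
# Hypothetical sketch: vocabulary terms addressed directly by URL path.
# Collection/term identifiers and payload keys are illustrative only.

TERMS = {
    "/collection/P01/current/TEMPPR01": {
        "prefLabel": "Temperature of the water body",
        "inScheme": "/collection/P01/current",
    },
}

def get(url):
    """Resolve a term URL to its payload document, or a 404 tuple."""
    doc = TERMS.get(url)
    if doc is None:
        return 404, None
    return 200, doc

status, doc = get("/collection/P01/current/TEMPPR01")
print(status, doc["prefLabel"])   # 200 Temperature of the water body
print(get("/collection/P01/current/NOSUCH")[0])   # 404
```

The design choice REST makes explicit is that the term identifier *is* the address, so caching, bookmarking, and distribution come for free from HTTP semantics.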

  1. Implementation of an advanced table look-up classifier for large area land-use classification

    NASA Technical Reports Server (NTRS)

    Jones, C.

    1974-01-01

    Software employing Eppler's improved table look-up approach to pattern recognition has been developed, and results from this software are presented. The look-up table for each class is a computer representation of a hyperellipsoid in four dimensional space. During implementation of the software Eppler's look-up procedure was modified to include multiple ranges in order to accommodate hollow regions in the ellipsoids. In a typical ERTS classification run less than 6000 36-bit computer words were required to store tables for 24 classes. Classification results from the improved table look-up are identical with those produced by the conventional method, i.e., by calculation of the maximum likelihood decision rule at the moment of classification. With the new look-up approach an entire ERTS MSS frame can be classified into 24 classes in 1.3 hours, compared to 22.5 hours required by the conventional method. The new software is coded completely in FORTRAN to facilitate transfer to other digital computers.
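The abstract does not reproduce Eppler's tables, but the core idea, precomputing hyperellipsoid membership over a quantized 4-D feature grid so that classification becomes a single table index instead of evaluating the decision rule per pixel, can be sketched as follows. All means, covariances, thresholds, and bin counts here are illustrative, not values from the paper:

```python
import numpy as np

def build_lookup_table(mean, cov, threshold, bins):
    """Precompute a boolean membership table over a quantized 4-D grid:
    a cell is True if its squared Mahalanobis distance to the class mean
    is within the hyperellipsoid threshold."""
    inv_cov = np.linalg.inv(cov)
    axes = [np.arange(b) for b in bins]            # quantized levels per band
    grid = np.stack(np.meshgrid(*axes, indexing="ij"), axis=-1).astype(float)
    diff = grid - mean                             # offset from class mean
    d2 = np.einsum("...i,ij,...j->...", diff, inv_cov, diff)
    return d2 <= threshold                         # inside the ellipsoid?

bins = (8, 8, 8, 8)
mean = np.array([3.0, 4.0, 2.0, 5.0])
cov = np.diag([2.0, 2.0, 1.5, 3.0])
table = build_lookup_table(mean, cov, threshold=6.0, bins=bins)

def classify(pixel, table):
    """Classify a quantized pixel with one table index."""
    return bool(table[tuple(pixel)])

print(classify((3, 4, 2, 5), table))   # pixel at the class mean -> True
print(classify((0, 0, 0, 0), table))   # far from the mean -> False
```

The speedup reported in the abstract comes from exactly this trade: the expensive quadratic form is evaluated once per grid cell at table-build time, and each of the millions of pixels then costs only an index operation.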

  2. Ontologies for molecular biology.

    PubMed

    Schulze-Kremer, S

    1998-01-01

Molecular biology has a communication problem. There are many databases using their own labels and categories for storing data objects, and some using identical labels and categories but with different meanings. A prominent example is the concept "gene", which is used with different semantics by major international genomic databases. Ontologies are one means to provide a semantic repository that systematically orders the relevant concepts in molecular biology and bridges the different notions in various databases, by explicitly specifying the meaning of, and relations between, the fundamental concepts in an application domain. Here, the upper level and a database branch of a prospective ontology for molecular biology (OMB) are presented and compared to other ontologies with respect to their suitability for molecular biology (http://igd.rz-berlin.mpg.de/~www/oe/mbo.html). PMID:9697223

  3. Mechanisms in biomedical ontology

    PubMed Central

    2012-01-01

    The concept of a mechanism has become a standard proposal for explanations in biology. It has been claimed that mechanistic explanations are appropriate for systems biology, because they occupy a middle ground between strict reductionism and holism. Because of their importance in the field a formal ontological description of mechanisms is desirable. The standard philosophical accounts of mechanisms are often ambiguous and lack the clarity that can be provided by a formal-ontological framework. The goal of this paper is to clarify some of these ambiguities and suggest such a framework for mechanisms. Taking some hints from an "ontology of devices" I suggest as a general approach for this task the introduction of functional kinds and functional parts by which the particular relations between a mechanism and its components can be captured. PMID:23046727

  4. Ontological engineering versus metaphysics

    NASA Astrophysics Data System (ADS)

    Tataj, Emanuel; Tomanek, Roman; Mulawka, Jan

    2011-10-01

It has been recognized that ontologies are a semantic version of the World Wide Web and can be found in knowledge-based systems. A recent survey of this field also suggests that practical artificial intelligence systems may be motivated by this research; strong artificial intelligence in particular, as well as the concept of the homo computer, can benefit from their use. The main objective of this contribution is to present and review already-created ontologies and to identify the main advantages such an approach brings to knowledge management systems. We would like to present what ontological engineering borrows from metaphysics and what feedback it can provide to natural language processing, simulation, and modelling. Potential topics of further development from a philosophical point of view are also outlined.

  5. IMGT-ONTOLOGY 2012

    PubMed Central

    Giudicelli, Véronique; Lefranc, Marie-Paule

    2012-01-01

    Immunogenetics is the science that studies the genetics of the immune system and immune responses. Owing to the complexity and diversity of the immune repertoire, immunogenetics represents one of the greatest challenges for data interpretation: a large biological expertise, a considerable effort of standardization and the elaboration of an efficient system for the management of the related knowledge were required. IMGT®, the international ImMunoGeneTics information system® (http://www.imgt.org) has reached that goal through the building of a unique ontology, IMGT-ONTOLOGY, which represents the first ontology for the formal representation of knowledge in immunogenetics and immunoinformatics. IMGT-ONTOLOGY manages the immunogenetics knowledge through diverse facets that rely on the seven axioms of the Formal IMGT-ONTOLOGY or IMGT-Kaleidoscope: “IDENTIFICATION,” “DESCRIPTION,” “CLASSIFICATION,” “NUMEROTATION,” “LOCALIZATION,” “ORIENTATION,” and “OBTENTION.” The concepts of identification, description, classification, and numerotation generated from the axioms led to the elaboration of the IMGT® standards that constitute the IMGT Scientific chart: IMGT® standardized keywords (concepts of identification), IMGT® standardized labels (concepts of description), IMGT® standardized gene and allele nomenclature (concepts of classification) and IMGT unique numbering and IMGT Collier de Perles (concepts of numerotation). IMGT-ONTOLOGY has become the global reference in immunogenetics and immunoinformatics for the knowledge representation of immunoglobulins (IG) or antibodies, T cell receptors (TR), and major histocompatibility (MH) proteins of humans and other vertebrates, proteins of the immunoglobulin superfamily (IgSF) and MH superfamily (MhSF), related proteins of the immune system (RPI) of vertebrates and invertebrates, therapeutic monoclonal antibodies (mAbs), fusion proteins for immune applications (FPIA), and composite proteins for

  6. Ontology development for Sufism domain

    NASA Astrophysics Data System (ADS)

    Iqbal, Rizwan

    2012-01-01

Domain ontology is a descriptive representation of a particular domain: it describes the concepts in the domain and the relationships among those concepts, and organizes them in a hierarchical manner. It is also defined as a structure of knowledge, used as a means of sharing knowledge with the community. An important aspect of using ontologies is to make information retrieval more accurate and efficient. Thousands of domain ontologies from around the world are available online in ontology repositories; repositories such as SWOOGLE currently host over 1000 ontologies covering a wide range of domains. It was found that, to date, no ontology was available covering the domain of "Sufism", a term which refers to Islamic mysticism. This unavailability became the motivating factor for this research, which produced a working "Sufism" domain ontology as well as a framework whose design focuses on resolving the problems experienced while creating the ontology. The development and working of the "Sufism" domain ontology are covered in detail in this research. One reason to choose "Sufism" for ontology creation is the global interest in the topic. This research has also created some individuals that inherit the concepts from the "Sufism" ontology; the creation of individuals helps demonstrate efficient and precise retrieval of data from the ontology. The experiment of creating the "Sufism" domain ontology was carried out in Protégé, an open-source tool for ontology creation and editing.

  7. Ontology development for Sufism domain

    NASA Astrophysics Data System (ADS)

    Iqbal, Rizwan

    2011-12-01

Domain ontology is a descriptive representation of a particular domain: it describes the concepts in the domain and the relationships among those concepts, and organizes them in a hierarchical manner. It is also defined as a structure of knowledge, used as a means of sharing knowledge with the community. An important aspect of using ontologies is to make information retrieval more accurate and efficient. Thousands of domain ontologies from around the world are available online in ontology repositories; repositories such as SWOOGLE currently host over 1000 ontologies covering a wide range of domains. It was found that, to date, no ontology was available covering the domain of "Sufism", a term which refers to Islamic mysticism. This unavailability became the motivating factor for this research, which produced a working "Sufism" domain ontology as well as a framework whose design focuses on resolving the problems experienced while creating the ontology. The development and working of the "Sufism" domain ontology are covered in detail in this research. One reason to choose "Sufism" for ontology creation is the global interest in the topic. This research has also created some individuals that inherit the concepts from the "Sufism" ontology; the creation of individuals helps demonstrate efficient and precise retrieval of data from the ontology. The experiment of creating the "Sufism" domain ontology was carried out in Protégé, an open-source tool for ontology creation and editing.

  8. Using a Foundational Ontology for Reengineering a Software Enterprise Ontology

    NASA Astrophysics Data System (ADS)

    Perini Barcellos, Monalessa; de Almeida Falbo, Ricardo

The knowledge about software organizations is considerably relevant to software engineers. The use of a common vocabulary for representing the useful knowledge about software organizations involved in software projects is important for several reasons, such as to support knowledge reuse and to allow communication and interoperability between tools. Domain ontologies can be used to define a common vocabulary for the sharing and reuse of knowledge about some domain. Foundational ontologies can be used for evaluating and re-designing domain ontologies, giving them real-world semantics. This paper presents an evaluation of a Software Enterprise Ontology that was reengineered using the Unified Foundational Ontology (UFO) as a basis.

  9. Ontology Performance Profiling and Model Examination: First Steps

    NASA Astrophysics Data System (ADS)

    Wang, Taowei David; Parsia, Bijan

    "[Reasoner] performance can be scary, so much so, that we cannot deploy the technology in our products." - Michael Shepard. What are typical OWL users to do when their favorite reasoner never seems to return? In this paper, we present our first steps considering this problem. We describe the challenges and our approach, and present a prototype tool to help users identify reasoner performance bottlenecks with respect to their ontologies. We then describe 4 case studies on synthetic and real-world ontologies. While the anecdotal evidence suggests that the service can be useful for both ontology developers and reasoner implementors, much more is desired.

  10. POSet Ontology Categorizer

    Energy Science and Technology Software Center (ESTSC)

    2005-03-01

POSet Ontology Categorizer (POSOC) V1.0. The POSet Ontology Categorizer (POSOC) software package provides tools for creating and mining poset-structured ontologies, such as the Gene Ontology (GO). Given a list of weighted query items (e.g., genes, proteins, and/or phrases) and one or more focus nodes, POSOC determines the ordered set of GO nodes that summarize the query, based on selections of a scoring function, pseudo-distance measure, specificity level, and cluster determination. The pseudo-distance measures provided are minimum chain length, maximum chain length, average of extreme chain lengths, and average of all chain lengths. A low specificity level, such as -1 or 0, results in a general set of clusters; increasing the specificity results in more specific and lighter clusters. POSOC cluster results can be compared against known results by calculating precision, recall, and f-score for graph neighborhood relationships. This tool has been used in understanding the function of a set of genes, finding similar genes, and annotating new proteins. The POSOC software consists of a set of Java interfaces, classes, and programs that run on Linux or Windows platforms. It incorporates graph classes from OpenJGraph (openjgraph.sourceforge.net).

  11. POSet Ontology Categorizer

    SciTech Connect

    Miniszewski, Sue M.

    2005-03-01

POSet Ontology Categorizer (POSOC) V1.0. The POSet Ontology Categorizer (POSOC) software package provides tools for creating and mining poset-structured ontologies, such as the Gene Ontology (GO). Given a list of weighted query items (e.g., genes, proteins, and/or phrases) and one or more focus nodes, POSOC determines the ordered set of GO nodes that summarize the query, based on selections of a scoring function, pseudo-distance measure, specificity level, and cluster determination. The pseudo-distance measures provided are minimum chain length, maximum chain length, average of extreme chain lengths, and average of all chain lengths. A low specificity level, such as -1 or 0, results in a general set of clusters; increasing the specificity results in more specific and lighter clusters. POSOC cluster results can be compared against known results by calculating precision, recall, and f-score for graph neighborhood relationships. This tool has been used in understanding the function of a set of genes, finding similar genes, and annotating new proteins. The POSOC software consists of a set of Java interfaces, classes, and programs that run on Linux or Windows platforms. It incorporates graph classes from OpenJGraph (openjgraph.sourceforge.net).
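The four chain-length pseudo-distances named in the abstract can be sketched on a toy GO-like DAG. The node names and edges below are invented for illustration; this is not POSOC's actual Java implementation:

```python
# Toy DAG: each node maps to its list of parents (child -> parents).
parents = {
    "leaf": ["mid1", "mid2"],
    "mid1": ["root"],
    "mid2": ["inter"],
    "inter": ["root"],
    "root": [],
}

def all_chain_lengths(node, ancestor, memo=None):
    """Lengths of every directed chain from node up to ancestor."""
    if memo is None:
        memo = {}
    if node == ancestor:
        return [0]
    if node in memo:
        return memo[node]
    lengths = [1 + l
               for p in parents[node]
               for l in all_chain_lengths(p, ancestor, memo)]
    memo[node] = lengths
    return lengths

chains = all_chain_lengths("leaf", "root")
print(sorted(chains))                     # [2, 3]  (two chains to the root)
print(min(chains))                        # minimum chain length: 2
print(max(chains))                        # maximum chain length: 3
print((min(chains) + max(chains)) / 2)    # average of extreme chain lengths
print(sum(chains) / len(chains))          # average of all chain lengths
```

In a poset-structured ontology a node can reach an ancestor along several chains, which is exactly why four distinct pseudo-distances are sensible choices rather than a single path length.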

  12. Dahlbeck and Pure Ontology

    ERIC Educational Resources Information Center

    Mackenzie, Jim

    2016-01-01

    This article responds to Johan Dahlbeck's "Towards a pure ontology: Children's bodies and morality" ["Educational Philosophy and Theory," vol. 46 (1), 2014, pp. 8-23 (EJ1026561)]. His arguments from Nietzsche and Spinoza do not carry the weight he supposes, and the conclusions he draws from them about pedagogy would be…

  13. The Drosophila anatomy ontology

    PubMed Central

    2013-01-01

    Background Anatomy ontologies are query-able classifications of anatomical structures. They provide a widely-used means for standardising the annotation of phenotypes and expression in both human-readable and programmatically accessible forms. They are also frequently used to group annotations in biologically meaningful ways. Accurate annotation requires clear textual definitions for terms, ideally accompanied by images. Accurate grouping and fruitful programmatic usage requires high-quality formal definitions that can be used to automate classification and check for errors. The Drosophila anatomy ontology (DAO) consists of over 8000 classes with broad coverage of Drosophila anatomy. It has been used extensively for annotation by a range of resources, but until recently it was poorly formalised and had few textual definitions. Results We have transformed the DAO into an ontology rich in formal and textual definitions in which the majority of classifications are automated and extensive error checking ensures quality. Here we present an overview of the content of the DAO, the patterns used in its formalisation, and the various uses it has been put to. Conclusions As a result of the work described here, the DAO provides a high-quality, queryable reference for the wild-type anatomy of Drosophila melanogaster and a set of terms to annotate data related to that anatomy. Extensive, well referenced textual definitions make it both a reliable and useful reference and ensure accurate use in annotation. Wide use of formal axioms allows a large proportion of classification to be automated and the use of consistency checking to eliminate errors. This increased formalisation has resulted in significant improvements to the completeness and accuracy of classification. The broad use of both formal and informal definitions make further development of the ontology sustainable and scalable. The patterns of formalisation used in the DAO are likely to be useful to developers of other

  14. Benchmarking Ontologies: Bigger or Better?

    PubMed Central

    Yao, Lixia; Divoli, Anna; Mayzus, Ilya; Evans, James A.; Rzhetsky, Andrey

    2011-01-01

    A scientific ontology is a formal representation of knowledge within a domain, typically including central concepts, their properties, and relations. With the rise of computers and high-throughput data collection, ontologies have become essential to data mining and sharing across communities in the biomedical sciences. Powerful approaches exist for testing the internal consistency of an ontology, but not for assessing the fidelity of its domain representation. We introduce a family of metrics that describe the breadth and depth with which an ontology represents its knowledge domain. We then test these metrics using (1) four of the most common medical ontologies with respect to a corpus of medical documents and (2) seven of the most popular English thesauri with respect to three corpora that sample language from medicine, news, and novels. Here we show that our approach captures the quality of ontological representation and guides efforts to narrow the breach between ontology and collective discourse within a domain. Our results also demonstrate key features of medical ontologies, English thesauri, and discourse from different domains. Medical ontologies have a small intersection, as do English thesauri. Moreover, dialects characteristic of distinct domains vary strikingly as many of the same words are used quite differently in medicine, news, and novels. As ontologies are intended to mirror the state of knowledge, our methods to tighten the fit between ontology and domain will increase their relevance for new areas of biomedical science and improve the accuracy and power of inferences computed across them. PMID:21249231

  15. 40 CFR Table Nn-2 to Subpart Hh of... - Lookup Default Values for Calculation Methodology 2 of This Subpart

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 40 Protection of Environment 22 2013-07-01 2013-07-01 false Lookup Default Values for Calculation Methodology 2 of This Subpart NN Table NN-2 to Subpart HH of Part 98 Protection of Environment ENVIRONMENTAL... Waste Landfills Pt. 98, Subpt. NN, Table NN-2 Table NN-2 to Subpart HH of Part 98—Lookup Default...

  16. 40 CFR Table Nn-2 to Subpart Hh of... - Lookup Default Values for Calculation Methodology 2 of This Subpart

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 40 Protection of Environment 22 2012-07-01 2012-07-01 false Lookup Default Values for Calculation Methodology 2 of This Subpart NN Table NN-2 to Subpart HH of Part 98 Protection of Environment ENVIRONMENTAL... Waste Landfills Pt. 98, Subpt. NN, Table NN-2 Table NN-2 to Subpart HH of Part 98—Lookup Default...

  17. Does Look-up Frequency Help Reading Comprehension of EFL Learners? Two Empirical Studies of Electronic Dictionaries

    ERIC Educational Resources Information Center

    Koyama, Toshiko; Takeuchi, Osamu

    2007-01-01

    Two empirical studies were conducted in which the differences in Japanese EFL learners' look-up behavior between hand-held electronic dictionaries (EDs) and printed dictionaries (PDs) were investigated. We focus here on the relation between learners' look-up frequency and degree of reading comprehension of the text. In the first study, a total of…

  18. Improving the dictionary lookup approach for disease normalization using enhanced dictionary and query expansion.

    PubMed

    Jonnagaddala, Jitendra; Jue, Toni Rose; Chang, Nai-Wen; Dai, Hong-Jie

    2016-01-01

The rapidly increasing biomedical literature calls for automatic approaches to the recognition and normalization of disease mentions, in order to increase the precision and effectiveness of disease-based information retrieval. A variety of methods have been proposed to deal with the problem of disease named entity recognition and normalization. Among the proposed methods, conditional random fields (CRFs) and dictionary lookup are widely used for named entity recognition and normalization, respectively. We herein developed a CRF-based model to allow automated recognition of disease mentions, and studied the effect of various techniques on improving the normalization results based on the dictionary lookup approach. The dataset from the BioCreative V CDR track was used to report the performance of the developed normalization methods and to compare them with other existing dictionary-lookup-based normalization methods. The best configuration achieved an F-measure of 0.77 for disease normalization, outperforming the best dictionary-lookup-based baseline method studied in this work by an F-measure of 0.13. Database URL: https://github.com/TCRNBioinformatics/DiseaseExtract. PMID:27504009
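Dictionary lookup with query expansion, the normalization strategy the abstract studies, can be sketched minimally. The dictionary entries, abbreviation table, and expansion rules below are illustrative stand-ins, not the authors' actual resources:

```python
import re

# Hypothetical concept dictionary: surface form -> concept identifier.
dictionary = {
    "type 2 diabetes mellitus": "MESH:D003924",
    "breast cancer": "MESH:D001943",
    "renal failure": "MESH:D051437",
}

# Hypothetical abbreviation table used during query expansion.
ABBREV = {"t2dm": "type 2 diabetes mellitus"}

def expand(mention):
    """Generate query variants: lowercasing, abbreviation expansion,
    and naive de-pluralization (e.g. 'cancers' -> 'cancer')."""
    m = mention.lower().strip()
    variants = [m]
    if m in ABBREV:
        variants.append(ABBREV[m])
    variants.append(re.sub(r"s\b", "", m))   # strip word-final 's'
    return variants

def normalize(mention):
    """Return the first concept ID matched by any expanded variant."""
    for v in expand(mention):
        if v in dictionary:
            return dictionary[v]
    return None                              # unmappable mention

print(normalize("T2DM"))            # matched via abbreviation expansion
print(normalize("Breast cancers"))  # matched via de-pluralization
print(normalize("unknown thing"))   # None: no variant in the dictionary
```

The point of expansion is that an exact-match dictionary is brittle; each cheap variant generated at query time recovers mentions the raw lookup would miss.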

  19. A hybrid table look-up method for H.264/AVC coeff_token decoding

    NASA Astrophysics Data System (ADS)

    Liu, Suhua; Zhang, Yixiong; Lu, Min; Tang, Biyu

    2011-10-01

In this paper, a hybrid table look-up method for H.264 coeff_token decoding is presented. In the proposed method the probabilities of codewords of various lengths are analyzed, and based on these statistics a hybrid look-up table is constructed. In the coeff_token decoding process, a few bits are first read from the bit-stream; if a matching codeword is found in the first look-up table, further look-up steps are skipped. Otherwise, more bits are read and looked up in the second table, which is built upon the number of leading zeros before the first one bit. Experimental results on the RealView RTSM Emulation Baseboard (ARM926) show that the proposed method speeds up CAVLD in H.264 by about 8% with more efficient memory utilization, compared to the prefix-based decoding method. Compared with the hashing-based pattern-search method adopted in the newest version of FFMPEG, the proposed method reduces memory space by about 77%.
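The two-level structure described in the abstract, a small first table that resolves short frequent codewords in one probe, with a fallback for longer codes, can be sketched on a toy prefix-free codebook. The codes below are illustrative, not the actual H.264 coeff_token tables, and the second stage is simplified to a linear scan standing in for the leading-zeros table:

```python
FIRST_BITS = 3
# Toy prefix-free codebook: bit pattern -> symbol.
CODES = {"1": "A", "01": "B", "0000": "C", "00011": "D", "00010": "E"}

# First-level table indexed by the next FIRST_BITS bits: an entry holds
# (symbol, code length) when a codeword of length <= FIRST_BITS matches.
first = [None] * (1 << FIRST_BITS)
for code, sym in CODES.items():
    if len(code) <= FIRST_BITS:
        pad = FIRST_BITS - len(code)
        for tail in range(1 << pad):              # every completion of the code
            first[(int(code, 2) << pad) | tail] = (sym, len(code))

def decode(bits):
    """Decode a bit-string; the fallback runs only on a first-table miss."""
    out, pos = [], 0
    while pos < len(bits):
        chunk = bits[pos:pos + FIRST_BITS].ljust(FIRST_BITS, "0")
        hit = first[int(chunk, 2)]
        if hit:                                   # frequent short code: 1 probe
            sym, length = hit
        else:                                     # rare long code: fallback
            sym, length = next((s, len(c)) for c, s in CODES.items()
                               if bits.startswith(c, pos))
        out.append(sym)
        pos += length
    return out

print(decode("101" + "0000" + "00010"))   # ['A', 'B', 'C', 'E']
```

The memory saving in such designs comes from keeping the first table tiny (2^FIRST_BITS entries) while only the infrequent long codewords pay for the slower second stage.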

  20. Simulation of the Hermes Lead Glass Calorimeter using a Look-Up Table

    SciTech Connect

    Vandenbroucke, A.; Miller, C. A.

    2006-10-27

    This contribution describes the Monte Carlo simulation of the Hermes Electromagnetic Lead-Glass Calorimeter. The simulation is based on the GEANT3 simulation package in combination with a Look-Up Table. Details of the simulation as well as a comparison with experimental data are reported.

  1. The Comparative "Look-Up" Ability of Script Readers on Television

    ERIC Educational Resources Information Center

    Austin, Henry R.; Donaghy, William C.

    1970-01-01

Reports on the results of a number of tests designed to compare the abilities of readers to look up from their scripts as they read to a TV camera, and to attempt to correlate variation in look-up ability with other silent and oral reading parameters. (Author/AA)

  2. Time and space efficient method-lookup for object-oriented programs

    SciTech Connect

    Muthukrishnan, S.; Mueller, M.

    1996-12-31

    Object-oriented languages (OOLs) are becoming increasingly popular in software development. The modular units in such languages are abstract data types called classes, comprising data and functions (or selectors in the OOL parlance); each selector has possibly multiple implementations (or methods in OOL parlance) each in a different class. These languages support reusability of code/functions by allowing a class to inherit methods from its superclass in a hierarchical arrangement of the various classes. Therefore, when a selector s is invoked in a class c, the relevant method for s inherited by c has to be determined. That is the fundamental problem of method-lookup in object-oriented programs. Since nearly every statement of such programs calls for a method-lookup, efficient support of OOLs crucially relies on the method-lookup mechanism. The challenge in implementing the method-lookup, as it turns out, is to use only a reasonable amount of table-space while keeping the query time down. Substantial research has gone into achieving improved space vs time trade-off in practice.
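The method-lookup problem and its space-versus-time trade-off described above can be sketched concretely. The class hierarchy, selector names, and method labels below are invented for illustration:

```python
# Toy single-inheritance hierarchy: class -> superclass (None at the root).
hierarchy = {"Object": None, "Stream": "Object", "FileStream": "Stream"}

# Methods defined directly in each class: (class, selector) -> method.
methods = {
    ("Object", "printOn"): "Object::printOn",
    ("Stream", "next"): "Stream::next",
    ("FileStream", "close"): "FileStream::close",
}

def lookup(cls, selector):
    """Dynamic lookup: walk the superclass chain until a method is found."""
    while cls is not None:
        if (cls, selector) in methods:
            return methods[(cls, selector)]
        cls = hierarchy[cls]
    return None                         # selector not understood

# Space-for-time trade-off: flatten inheritance into one dispatch table so
# every lookup is a single probe, at the cost of |classes| * |selectors|
# entries -- exactly the table-space pressure the abstract describes.
selectors = {s for _, s in methods}
dispatch = {(c, s): lookup(c, s) for c in hierarchy for s in selectors}

print(lookup("FileStream", "printOn"))    # inherited from Object
print(dispatch[("FileStream", "next")])   # single-probe dispatch
```

Real implementations sit between these extremes (compressed dispatch tables, inline caches) to keep both the table space and the per-call cost reasonable.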

  3. The Dictionary Look-up Behavior of Hong Kong Students: A Large-Scale Survey.

    ERIC Educational Resources Information Center

    Fan, May Y.

    2000-01-01

    Investigates the look-up behavior of bilingualized dictionaries of Hong Kong (China) students focusing on dictionary information frequency usage and the perceptions of the usefulness of such information. Indicates that students in general make limited use of the bilingualized dictionary, while more proficient students use the dictionary more…

  4. Cache directory lookup reader set encoding for partial cache line speculation support

    DOEpatents

    Gara, Alan; Ohmacht, Martin

    2014-10-21

    In a multiprocessor system, with conflict checking implemented in a directory lookup of a shared cache memory, a reader set encoding permits dynamic recordation of read accesses. The reader set encoding includes an indication of a portion of a line read, for instance by indicating boundaries of read accesses. Different encodings may apply to different types of speculative execution.

  5. Improving the dictionary lookup approach for disease normalization using enhanced dictionary and query expansion

    PubMed Central

    Jonnagaddala, Jitendra; Jue, Toni Rose; Chang, Nai-Wen; Dai, Hong-Jie

    2016-01-01

The rapidly increasing biomedical literature calls for automatic approaches to the recognition and normalization of disease mentions, in order to increase the precision and effectiveness of disease-based information retrieval. A variety of methods have been proposed to deal with the problem of disease named entity recognition and normalization. Among the proposed methods, conditional random fields (CRFs) and dictionary lookup are widely used for named entity recognition and normalization, respectively. We herein developed a CRF-based model to allow automated recognition of disease mentions, and studied the effect of various techniques on improving the normalization results based on the dictionary lookup approach. The dataset from the BioCreative V CDR track was used to report the performance of the developed normalization methods and to compare them with other existing dictionary-lookup-based normalization methods. The best configuration achieved an F-measure of 0.77 for disease normalization, outperforming the best dictionary-lookup-based baseline method studied in this work by an F-measure of 0.13. Database URL: https://github.com/TCRNBioinformatics/DiseaseExtract PMID:27504009

  6. Rehabilitation robotics ontology on the cloud.

    PubMed

    Dogmus, Zeynep; Papantoniou, Agis; Kilinc, Muhammed; Yildirim, Sibel A; Erdem, Esra; Patoglu, Volkan

    2013-06-01

We introduce the first formal rehabilitation robotics ontology, called RehabRobo-Onto, to represent information about rehabilitation robots and their properties, and a software system, RehabRobo-Query, to facilitate access to this ontology. RehabRobo-Query is made available on the cloud, utilizing Amazon Web Services, so that 1) rehabilitation robot designers around the world can add/modify information about their robots in RehabRobo-Onto, and 2) rehabilitation robot designers and physical medicine experts around the world can access the knowledge in RehabRobo-Onto by means of questions about robots, posed in natural language, with the guide of the intelligent user interface of RehabRobo-Query. The ontology system consisting of RehabRobo-Onto and RehabRobo-Query is of great value to robot designers as well as physical therapists and medical doctors. On the one hand, robot designers can access various properties of the existing robots and the related publications to further improve the state of the art. On the other hand, physical therapists and medical doctors can utilize the ontology to compare rehabilitation robots and identify the ones that best cover their needs, or to evaluate the effects of various devices for targeted joint exercises on patients with specific disorders. PMID:24187234

  7. Ontology Mappings to Improve Learning Resource Search

    ERIC Educational Resources Information Center

    Gasevic, Dragan; Hatala, Marek

    2006-01-01

    This paper proposes an ontology mapping-based framework that allows searching for learning resources using multiple ontologies. The present applications of ontologies in e-learning use various ontologies (e.g., domain, curriculum, context), but they do not offer a solution for interoperating e-learning systems based on different ontologies. The…

  8. An Ontology for Software Engineering Education

    ERIC Educational Resources Information Center

    Ling, Thong Chee; Jusoh, Yusmadi Yah; Adbullah, Rusli; Alwi, Nor Hayati

    2013-01-01

    Software agents communicate using ontology. It is important to build an ontology for a specific domain such as Software Engineering Education. Building an ontology from scratch is not only hard, but also incurs much time and cost. This study aims to propose an ontology through adaptation of the existing ontology which is originally built based on a…

  9. A full-spectrum k-distribution look-up table for radiative transfer in nonhomogeneous gaseous media

    NASA Astrophysics Data System (ADS)

    Wang, Chaojun; Ge, Wenjun; Modest, Michael F.; He, Boshu

    2016-01-01

    A full-spectrum k-distribution (FSK) look-up table has been constructed for gas mixtures within a certain range of thermodynamic states for three species, i.e., CO2, H2O and CO. The k-distribution of a mixture is assembled directly from the summation of the linear absorption coefficients of the three species. The systematic approach to generating the table, including the generation of the pressure-based absorption coefficient and the generation of the k-distribution, is discussed. To efficiently obtain accurate k-values for arbitrary thermodynamic states from tabulated values, a 6-D linear interpolation method is employed. A large number of radiative heat transfer calculations have been carried out to test the accuracy of the FSK look-up table. Results show that using the FSK look-up table provides excellent accuracy compared to the exact results. Without the time-consuming process of assembling k-distributions from individual species plus mixing, using the FSK look-up table can save considerable computational cost. To evaluate the accuracy as well as the efficiency of the FSK look-up table, radiative heat transfer in a scaled Sandia D flame is calculated to compare the CPU execution time of the FSK method based on the narrow-band database, correlations, and the look-up table. Results show that the FSK look-up table provides a computationally cheap alternative without much sacrifice in accuracy.
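
    The 6-D linear interpolation step generalizes ordinary multilinear interpolation of a tabulated function. As a sketch, the 2-D (bilinear) case is shown below on an invented grid; the real table would combine the 2^6 surrounding grid points of a six-dimensional thermodynamic state in the same way.

```python
import bisect

# Bilinear interpolation sketch: the tabulated function is sampled on a
# rectangular grid, and an arbitrary query point is evaluated from the
# 2**d surrounding grid points (d = 2 here, d = 6 in the FSK table).
# Grid and sample values below are illustrative, not from the paper.

def bilinear(xs, ys, table, x, y):
    i = min(bisect.bisect_right(xs, x), len(xs) - 1) - 1
    j = min(bisect.bisect_right(ys, y), len(ys) - 1) - 1
    tx = (x - xs[i]) / (xs[i + 1] - xs[i])   # fractional position in cell
    ty = (y - ys[j]) / (ys[j + 1] - ys[j])
    return ((1 - tx) * (1 - ty) * table[i][j]
            + tx * (1 - ty) * table[i + 1][j]
            + (1 - tx) * ty * table[i][j + 1]
            + tx * ty * table[i + 1][j + 1])

xs, ys = [0.0, 1.0, 2.0], [0.0, 1.0]
table = [[0.0, 1.0], [2.0, 3.0], [4.0, 5.0]]   # samples of f(x, y) = 2x + y
print(bilinear(xs, ys, table, 0.5, 0.5))       # 1.5 (exact for a linear f)
```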

  10. Spectral Retrieval of Latent Heating Profiles from TRMM PR Data: Comparison of Look-Up Tables

    NASA Technical Reports Server (NTRS)

    Shige, Shoichi; Takayabu, Yukari N.; Tao, Wei-Kuo; Johnson, Daniel E.; Shie, Chung-Lin

    2003-01-01

    The primary goal of the Tropical Rainfall Measuring Mission (TRMM) is to use the information about distributions of precipitation to determine the four-dimensional (i.e., temporal and spatial) patterns of latent heating over the whole tropical region. The Spectral Latent Heating (SLH) algorithm has been developed to estimate latent heating profiles for the TRMM Precipitation Radar (PR) with a cloud-resolving model (CRM). The method uses CRM-generated heating profile look-up tables for the three rain types: convective, shallow stratiform, and anvil rain (deep stratiform with a melting level). For convective and shallow stratiform regions, the look-up table refers to the precipitation top height (PTH). For the anvil region, on the other hand, the look-up table refers to the precipitation rate at the melting level instead of PTH. For global applications, it is necessary to examine the universality of the look-up table. In this paper, we compare the look-up tables produced from numerical simulations of cloud ensembles forced with the Tropical Ocean Global Atmosphere (TOGA) Coupled Atmosphere-Ocean Response Experiment (COARE) data and the GARP Atlantic Tropical Experiment (GATE) data. There are some notable differences between the TOGA-COARE table and the GATE table, especially for the convective heating. First, there is a larger number of deep convective profiles in the TOGA-COARE table than in the GATE table, mainly due to the differences in SST. Second, shallow convective heating is stronger in the TOGA-COARE table than in the GATE table. This might be attributable to the difference in the strength of the low-level inversions. Third, altitudes of convective heating maxima are higher in the TOGA-COARE table than in the GATE table. Levels of convective heating maxima are located just below the melting level, because warm-rain processes are prevalent in tropical oceanic convective systems. Differences in levels of convective heating maxima probably reflect

  11. The ontology of biological taxa

    PubMed Central

    Schulz, Stefan; Stenzhorn, Holger; Boeker, Martin

    2008-01-01

    Motivation: The classification of biological entities in terms of species and taxa is an important endeavor in biology. Although a large number of the statements encoded in current biomedical ontologies are taxon-dependent, there is no obvious or standard way of introducing taxon information into an integrative ontology architecture, supposedly because of ongoing controversies about the ontological nature of species and taxa. Results: In this article, we discuss different approaches on how to represent biological taxa using existing standards for biomedical ontologies such as the description logic OWL DL and the Open Biomedical Ontologies Relation Ontology. We demonstrate how hidden ambiguities of the species concept can be dealt with and existing controversies can be overcome. A novel approach is to envisage taxon information as qualities that inhere in biological organisms, organism parts and populations. Availability: The presented methodology has been implemented in the domain top-level ontology BioTop, openly accessible at http://purl.org/biotop. BioTop may help to improve the logical and ontological rigor of biomedical ontologies and further provides a clear architectural principle to deal with biological taxa information. Contact: stschulz@uni-freiburg.de PMID:18586729

  12. A Distributed Look-up Architecture for Text Mining Applications using MapReduce.

    PubMed

    Balkir, Atilla Soner; Foster, Ian; Rzhetsky, Andrey

    2011-11-01

    Text mining applications typically involve statistical models that require accessing and updating model parameters in an iterative fashion. With the growing size of the data, such models become extremely parameter rich, and naive parallel implementations fail to address the scalability problem of maintaining a distributed look-up table that maps model parameters to their values. We evaluate several existing alternatives to provide coordination among worker nodes in Hadoop [11] clusters, and suggest a new multi-layered look-up architecture that is specifically optimized for certain problem domains. Our solution exploits the power-law distribution characteristics of the phrase or n-gram counts in large corpora while utilizing a Bloom Filter [2], in-memory cache, and an HBase [12] cluster at varying levels of abstraction. PMID:25356441
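
    The layered design the authors describe (a Bloom filter in front of an in-memory cache in front of a slower backing store) can be sketched as follows. The Bloom filter and the dictionary standing in for the HBase cluster are simplified stand-ins, and all parameters are illustrative.

```python
import hashlib

class BloomFilter:
    """Tiny Bloom filter: k hash positions over an m-bit integer."""
    def __init__(self, m=1024, k=3):
        self.m, self.k, self.bits = m, k, 0

    def _positions(self, item):
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.m

    def add(self, item):
        for p in self._positions(item):
            self.bits |= 1 << p

    def __contains__(self, item):
        return all(self.bits >> p & 1 for p in self._positions(item))

class LayeredLookup:
    """Bloom filter -> in-memory cache -> slow backing store."""
    def __init__(self, store):
        self.store, self.cache, self.bloom = store, {}, BloomFilter()
        for key in store:                  # index all known keys up front
            self.bloom.add(key)

    def get(self, key):
        if key not in self.bloom:          # definite miss: skip the store
            return None
        if key in self.cache:              # frequent n-grams stay in memory
            return self.cache[key]
        value = self.store.get(key)        # stands in for an HBase read
        if value is not None:
            self.cache[key] = value
        return value

counts = {"the cat": 42, "cat sat": 7}     # toy n-gram counts
lookup = LayeredLookup(counts)
print(lookup.get("the cat"))               # 42, now cached in memory
print(lookup.get("zzz qqq"))               # None, absent from the store
```

    The power-law skew the paper exploits means a small cache absorbs most hits, while the Bloom filter cheaply rejects the long tail of absent keys.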

  13. Decomposition, lookup, and recombination: MEG evidence for the full decomposition model of complex visual word recognition.

    PubMed

    Fruchter, Joseph; Marantz, Alec

    2015-04-01

    There is much evidence that visual recognition of morphologically complex words (e.g., teacher) proceeds via a decompositional route, first involving recognition of their component morphemes (teach + -er). According to the Full Decomposition model, after the visual decomposition stage, followed by morpheme lookup, there is a final "recombination" stage, in which the decomposed morphemes are combined and the well-formedness of the complex form is evaluated. Here, we use MEG to provide evidence for the temporally-differentiated stages of this model. First, we demonstrate an early effect of derivational family entropy, corresponding to the stem lookup stage; this is followed by a surface frequency effect, corresponding to the later recombination stage. We also demonstrate a late effect of a novel statistical measure, semantic coherence, which quantifies the gradient semantic well-formedness of complex words. Our findings illustrate the usefulness of corpus measures in investigating the component processes within visual word recognition. PMID:25797098

  14. A Distributed Look-up Architecture for Text Mining Applications using MapReduce

    PubMed Central

    Balkir, Atilla Soner; Foster, Ian; Rzhetsky, Andrey

    2011-01-01

    Text mining applications typically involve statistical models that require accessing and updating model parameters in an iterative fashion. With the growing size of the data, such models become extremely parameter rich, and naive parallel implementations fail to address the scalability problem of maintaining a distributed look-up table that maps model parameters to their values. We evaluate several existing alternatives to provide coordination among worker nodes in Hadoop [11] clusters, and suggest a new multi-layered look-up architecture that is specifically optimized for certain problem domains. Our solution exploits the power-law distribution characteristics of the phrase or n-gram counts in large corpora while utilizing a Bloom Filter [2], in-memory cache, and an HBase [12] cluster at varying levels of abstraction. PMID:25356441

  15. A microprocessor-based table lookup approach for magnetic bearing linearization

    NASA Technical Reports Server (NTRS)

    Groom, N. J.; Miller, J. B.

    1981-01-01

    An approach for producing a linear transfer characteristic between force command and force output of a magnetic bearing actuator without flux biasing is presented. The approach is microprocessor based and uses a table lookup to generate drive signals for the magnetic bearing power driver. An experimental test setup used to demonstrate the feasibility of the approach is described, and test results are presented. The test setup contains bearing elements similar to those used in a laboratory model annular momentum control device.
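
    The table-lookup linearization idea can be sketched for a hypothetical single-coil actuator with force = K * current^2 (the report's actual actuator model and table contents are not given here): the table stores the inverse mapping from commanded force to drive signal, and the microprocessor only interpolates at run time.

```python
import bisect, math

# Hypothetical single-coil model: force = K * current**2, so the drive
# signal that yields a commanded force F is sqrt(F / K). The table
# precomputes this inverse; run-time work is a lookup plus one linear
# interpolation. Gain and command range are illustrative.

K = 2.0                                   # illustrative force gain
FORCES = [i * 0.5 for i in range(21)]     # command range 0 .. 10 N
CURRENTS = [math.sqrt(f / K) for f in FORCES]

def drive_signal(force_cmd):
    """Interpolate the precomputed force -> current table."""
    force_cmd = min(max(force_cmd, FORCES[0]), FORCES[-1])   # clamp
    j = min(bisect.bisect_left(FORCES, force_cmd), len(FORCES) - 1)
    if FORCES[j] == force_cmd:
        return CURRENTS[j]
    f0, f1 = FORCES[j - 1], FORCES[j]
    c0, c1 = CURRENTS[j - 1], CURRENTS[j]
    return c0 + (c1 - c0) * (force_cmd - f0) / (f1 - f0)

# The table makes force output ~linear in the force command:
print(round(K * drive_signal(8.0) ** 2, 3))   # 8.0
```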

  16. A fast chaotic cryptographic scheme with dynamic look-up table

    NASA Astrophysics Data System (ADS)

    Wong, K. W.

    2002-06-01

    We propose a fast chaotic cryptographic scheme based on iterating a logistic map. In particular, no random numbers need to be generated and the look-up table used in the cryptographic process is updated dynamically. Simulation results show that the proposed method leads to a substantial reduction in the encryption and decryption time. As a result, chaotic cryptography becomes more practical for the secure transmission of large multimedia files over public data communication networks.
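
    As a toy illustration of the underlying mechanism (iterating a logistic map to derive cipher material) rather than the published scheme itself, the sketch below XORs data with bytes taken from logistic-map iterates. It omits the dynamic look-up table and offers no real security.

```python
# Illustrative sketch only: a toy stream cipher driven by logistic-map
# iterates x -> r*x*(1-x). This is NOT Wong's published scheme; the key,
# map parameter r, and transient length are arbitrary choices.

def logistic_stream(key, n, r=3.99):
    x = key
    for _ in range(100):              # discard transient iterates
        x = r * x * (1 - x)
    out = []
    for _ in range(n):
        x = r * x * (1 - x)           # one chaotic iteration per byte
        out.append(int(x * 256) & 0xFF)
    return bytes(out)

def crypt(data, key):
    """XOR with the keystream; the same call decrypts."""
    ks = logistic_stream(key, len(data))
    return bytes(b ^ k for b, k in zip(data, ks))

msg = b"chaotic cryptography"
ct = crypt(msg, key=0.3141592653)
print(crypt(ct, key=0.3141592653) == msg)   # True: XOR round-trips
```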

  17. GeoSciGraph: An Ontological Framework for EarthCube Semantic Infrastructure

    NASA Astrophysics Data System (ADS)

    Gupta, A.; Schachne, A.; Condit, C.; Valentine, D.; Richard, S.; Zaslavsky, I.

    2015-12-01

    The CINERGI (Community Inventory of EarthCube Resources for Geosciences Interoperability) project compiles an inventory of a wide variety of earth science resources including documents, catalogs, vocabularies, data models, data services, process models, information repositories, domain-specific ontologies, etc., developed by research groups and data practitioners. We have developed a multidisciplinary semantic framework called GeoSciGraph for semantic integration of earth science resources. An integrated ontology is constructed with Basic Formal Ontology (BFO) as its upper ontology and currently ingests multiple component ontologies including the SWEET ontology, GeoSciML's lithology ontology, the Tematres controlled vocabulary server, GeoNames, GCMD vocabularies on equipment, platforms and institutions, a software ontology, the CUAHSI hydrology vocabulary, the environmental ontology (ENVO) and several more. These ontologies are connected through bridging axioms; GeoSciGraph identifies lexically close terms and creates equivalence-class or subclass relationships between them after human verification. GeoSciGraph allows a community to create community-specific customizations of the integrated ontology. GeoSciGraph uses Neo4j, a graph database that can hold several billion concepts and relationships. GeoSciGraph provides a number of REST services that can be called by other software modules like the CINERGI information augmentation pipeline. 1) Vocabulary services are used to find exact and approximate terms, term categories (community-provided clusters of terms, e.g., measurement-related terms or environmental-material-related terms), synonyms, term definitions and annotations. 2) Lexical services are used for text parsing to find entities, which can then be included into the ontology by a domain expert. 3) Graph services provide the ability to perform traversal-centric operations, e.g., finding paths and neighborhoods, which can be used to perform ontological operations like

  18. Generation of Look-Up Tables for Dynamic Job Shop Scheduling Decision Support Tool

    NASA Astrophysics Data System (ADS)

    Oktaviandri, Muchamad; Hassan, Adnan; Mohd Shaharoun, Awaluddin

    2016-02-01

    The majority of existing scheduling techniques are based on static demand and deterministic processing times, while most job shop scheduling problems are concerned with dynamic demand and stochastic processing times. As a consequence, the solutions obtained from traditional scheduling techniques are ineffective wherever changes occur in the system. Therefore, this research intends to develop a decision support tool (DST) based on a promising artificial intelligence approach that is able to accommodate the dynamics that regularly occur in job shop scheduling problems. The DST was designed through three phases, i.e. (i) look-up table generation, (ii) inverse model development and (iii) integration of DST components. This paper reports the generation of look-up tables for various scenarios as part of the development of the DST. A discrete event simulation model was used to compare the performance of the SPT, EDD, FCFS, S/OPN and Slack rules; the best-performing rules for each performance measure (mean flow time, mean tardiness and mean lateness) and each job order requirement (inter-arrival time, due date tightness and setup time ratio) were compiled into look-up tables. The well-known 6/6/J/Cmax problem from Muth and Thompson (1963) was used as a case study. In the future, the performance measures of various scheduling scenarios and the job order requirements will be mapped using an ANN inverse model.
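
    The resulting look-up table can be pictured as a mapping from job-order attributes to the best dispatching rule per performance measure: simulation runs fill the table offline, and the decision support tool answers queries by lookup instead of re-simulating. All entries below are invented placeholders, not results from the study.

```python
# Hedged sketch of a scheduling look-up table: keys are discretized
# job-order attributes, values give the best dispatching rule for each
# performance measure. Entries are illustrative placeholders only.

LOOKUP = {
    # (inter-arrival, due-date tightness, setup ratio)
    ("short", "tight", "low"):  {"mean_flow_time": "SPT",
                                 "mean_tardiness": "EDD"},
    ("short", "loose", "low"):  {"mean_flow_time": "SPT",
                                 "mean_tardiness": "SPT"},
    ("long",  "tight", "high"): {"mean_flow_time": "FCFS",
                                 "mean_tardiness": "S/OPN"},
}

def best_rule(inter_arrival, tightness, setup_ratio, measure):
    """Return the tabulated best rule, or None for an unseen scenario."""
    scenario = LOOKUP.get((inter_arrival, tightness, setup_ratio))
    return scenario[measure] if scenario else None

print(best_rule("short", "tight", "low", "mean_tardiness"))   # EDD
```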

  19. Ontological turns, turnoffs and roundabouts.

    PubMed

    Sismondo, Sergio

    2015-06-01

    There has been much talk of an 'ontological turn' in Science and Technology Studies. This commentary explores some recent work on multiple and historical ontologies, especially articles published in this journal, against a background of constructivism. It can be tempting to read an ontological turn as based on and promoting a version of perspectivism, but that reading is inadequate to the scholarly work and opens multiple ontologies to serious criticisms. Instead, we should read our ontological turn or turns as being about multiplicities of practices and the ways in which these practices shape the material world. Ontologies arise out of practices through which people engage with things; the practices are fundamental and the ontologies derivative. The purchase in this move comes from the elucidating power of the verbs that scholars use to analyze relations of practices and objects--which turn out to be specific cases of constructivist verbs. The difference between this ontological turn and constructivist work in Science and Technology Studies appears to be a matter of emphases found useful for different purposes. PMID:26477200

  20. Ontology through a Mindfulness Process

    ERIC Educational Resources Information Center

    Bearance, Deborah; Holmes, Kimberley

    2015-01-01

    Traditionally, when ontology is taught in a graduate studies course on social research, there is a tendency for this concept to be examined through the process of lectures and readings. Such an approach often leaves graduate students to grapple with a personal embodiment of this concept and to comprehend how ontology can ground their research.…

  1. Ontology-based geospatial data query and integration

    USGS Publications Warehouse

    Zhao, T.; Zhang, C.; Wei, M.; Peng, Z.-R.

    2008-01-01

    Geospatial data sharing is an increasingly important subject as large amounts of data are produced by a variety of sources, stored in incompatible formats, and accessed through different GIS applications. Past efforts to enable sharing have produced standardized data formats such as GML and data access protocols such as the Web Feature Service (WFS). While these standards help enable client applications to gain access to heterogeneous data stored in different formats from diverse sources, the usability of the access is limited due to the lack of data semantics encoded in the WFS feature types. Past research has used ontology languages to describe the semantics of geospatial data, but ontology-based queries cannot be applied directly to legacy data stored in databases or shapefiles, or to feature data in WFS services. This paper presents a method to enable ontology queries on spatial data available from WFS services and on data stored in databases. We do not create ontology instances explicitly and thus avoid the problems of data replication. Instead, user queries are rewritten to WFS getFeature requests and SQL queries to databases. The method also has the benefit of being able to utilize existing tools for databases, WFS, and GML while enabling queries based on ontology semantics. © 2008 Springer-Verlag Berlin Heidelberg.
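
    The query-rewriting idea (no materialized ontology instances; an ontology query is translated on the fly into SQL over the legacy table its class is mapped to) can be sketched with a toy class-to-table mapping. All class, property, table and column names below are invented for illustration.

```python
# Toy rewriting sketch: an ontology class query becomes a SQL SELECT
# over the mapped legacy table. The mapping and all names are invented;
# the paper additionally rewrites to WFS getFeature requests.

MAPPING = {
    "hydro:River": ("rivers", {"hydro:name": "river_name",
                               "hydro:lengthKm": "length_km"}),
}

def rewrite(cls, ont_filter=None):
    """Translate an ontology class query (optionally filtered) to SQL."""
    table, cols = MAPPING[cls]
    sql = f"SELECT {', '.join(cols.values())} FROM {table}"
    if ont_filter:
        prop, op, val = ont_filter               # ontology-level predicate
        sql += f" WHERE {cols[prop]} {op} {val}" # mapped to a column test
    return sql

print(rewrite("hydro:River", ("hydro:lengthKm", ">", 100)))
```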

  2. 40 CFR Table Nn-2 to Subpart Hh of... - Lookup Default Values for Calculation Methodology 2 of This Subpart

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) MANDATORY GREENHOUSE GAS REPORTING Municipal Solid Waste Landfills Pt. 98, Subpt. NN, Table NN-2 Table NN-2 to Subpart HH of Part 98—Lookup Default...

  3. Solar-Terrestrial Ontology Development

    NASA Astrophysics Data System (ADS)

    McGuinness, D.; Fox, P.; Middleton, D.; Garcia, J.; Cinquni, L.; West, P.; Darnell, J. A.; Benedict, J.

    2005-12-01

    The development of an interdisciplinary virtual observatory (the Virtual Solar-Terrestrial Observatory; VSTO) as a scalable environment for searching, integrating, and analyzing databases distributed over the Internet requires a higher level of semantic interoperability than heretofore required by most (if not all) distributed data systems or discipline-specific virtual observatories. The formalization of semantics using ontologies and their encodings for the internet (e.g. OWL, the Web Ontology Language), as well as the use of accompanying tools, such as reasoning, inference and explanation, opens up both a substantial leap in options for interoperability and a need for formal development principles to guide ontology development and use within modern, multi-tiered network data environments. In this presentation, we outline the formal methodologies we utilize in the VSTO project, the currently developed use cases, and our ontologies and their relation to existing ontologies (such as SWEET).

  4. Keyword Ontology Development for Discovering Hydrologic Data

    NASA Astrophysics Data System (ADS)

    Piasecki, Michael; Hooper, Rick; Choi, Yoori

    2010-05-01

    Service (USGS) National Water Information System (NWIS) and the Environmental Protection Agency's STORET data system. In order to avoid overwhelming returns when searching for more general concepts, the ontology's upper layers (called navigation layers) cannot be used to search for data, which in turn prompts the need to identify general groupings of data, such as Biological, Chemical, or Physical data groups, which must then be further subdivided in a cascading fashion all the way down to the leaf levels. This classification is not straightforward, however, and poses much potential for discussion. Finally, it is important to identify the dimensionality of the ontology, i.e., whether a keyword contains only the property measured (e.g., "temperature") or the medium and the property ("air temperature").

  5. Estimating attenuation and propagation of noise bands from a distant source using the lookup program and data base

    NASA Astrophysics Data System (ADS)

    White, Michael J.

    1994-10-01

    Unavoidable noise generated by military activities can disturb the surrounding community and become a source of complaint. Military planners must quickly and accurately predict noise levels at distant points from various sound sources to manage noisy operations on a daily basis. This study developed the Lookup computer program and data base to provide rapid estimates of outdoor noise levels from a variety of sound sources. Lookup accesses a data base of archived results (requiring about 5 MB disk space) from typical situations rather than performing fresh calculations for each consultation. Initial timing tests show that Lookup can predict the sound levels from a noise source at distances up to 20 km in 1 second on a DOS-compatible personal computer (PC). This report includes the Lookup program source code, and describes the required input for the program, the contents of the archival data base, and the program output. Lookup was written to compile with MS-Fortran, and will run under DOS on any IBM compatible with 640k random access memory. Lookup also conforms to ANSI 1978 standard Fortran and will run under the Unix operating system.

  6. An ontology of scientific experiments.

    PubMed

    Soldatova, Larisa N; King, Ross D

    2006-12-22

    The formal description of experiments for efficient analysis, annotation and sharing of results is a fundamental part of the practice of science. Ontologies are required to achieve this objective. A few subject-specific ontologies of experiments currently exist. However, despite the unity of scientific experimentation, no general ontology of experiments exists. We propose the ontology EXPO to meet this need. EXPO links the SUMO (the Suggested Upper Merged Ontology) with subject-specific ontologies of experiments by formalizing the generic concepts of experimental design, methodology and results representation. EXPO is expressed in the W3C standard ontology language OWL-DL. We demonstrate the utility of EXPO and its ability to describe different experimental domains, by applying it to two experiments: one in high-energy physics and the other in phylogenetics. The use of EXPO made the goals and structure of these experiments more explicit, revealed ambiguities, and highlighted an unexpected similarity. We conclude that EXPO is of general value in describing experiments and a step towards the formalization of science. PMID:17015305

  7. Gene Ontology Consortium: going forward.

    PubMed

    2015-01-01

    The Gene Ontology (GO; http://www.geneontology.org) is a community-based bioinformatics resource that supplies information about gene product function using ontologies to represent biological knowledge. Here we describe improvements and expansions to several branches of the ontology, as well as updates that have allowed us to more efficiently disseminate the GO and capture feedback from the research community. The Gene Ontology Consortium (GOC) has expanded areas of the ontology such as cilia-related terms, cell-cycle terms and multicellular organism processes. We have also implemented new tools for generating ontology terms based on a set of logical rules making use of templates, and we have made efforts to increase our use of logical definitions. The GOC has a new and improved web site summarizing new developments and documentation, serving as a portal to GO data. Users can perform GO enrichment analysis, and search the GO for terms, annotations to gene products, and associated metadata across multiple species using the all-new AmiGO 2 browser. We encourage and welcome the input of the research community in all biological areas in our continued effort to improve the Gene Ontology. PMID:25428369

  8. Gene Ontology Consortium: going forward

    PubMed Central

    2015-01-01

    The Gene Ontology (GO; http://www.geneontology.org) is a community-based bioinformatics resource that supplies information about gene product function using ontologies to represent biological knowledge. Here we describe improvements and expansions to several branches of the ontology, as well as updates that have allowed us to more efficiently disseminate the GO and capture feedback from the research community. The Gene Ontology Consortium (GOC) has expanded areas of the ontology such as cilia-related terms, cell-cycle terms and multicellular organism processes. We have also implemented new tools for generating ontology terms based on a set of logical rules making use of templates, and we have made efforts to increase our use of logical definitions. The GOC has a new and improved web site summarizing new developments and documentation, serving as a portal to GO data. Users can perform GO enrichment analysis, and search the GO for terms, annotations to gene products, and associated metadata across multiple species using the all-new AmiGO 2 browser. We encourage and welcome the input of the research community in all biological areas in our continued effort to improve the Gene Ontology. PMID:25428369

  9. Ontology Research and Development. Part 2 - A Review of Ontology Mapping and Evolving.

    ERIC Educational Resources Information Center

    Ding, Ying; Foo, Schubert

    2002-01-01

    Reviews ontology research and development, specifically ontology mapping and evolving. Highlights include an overview of ontology mapping projects; maintaining existing ontologies and extending them as appropriate when new information or knowledge is acquired; and ontology's role and the future of the World Wide Web, or Semantic Web. (Contains 55…

  10. Ontology-Oriented Programming for Biomedical Informatics.

    PubMed

    Lamy, Jean-Baptiste

    2016-01-01

    Ontologies are now widely used in the biomedical domain. However, it is difficult to manipulate ontologies in a computer program and, consequently, it is not easy to integrate ontologies with databases or websites. Two main approaches have been proposed for accessing ontologies in a computer program: traditional API (Application Programming Interface) and ontology-oriented programming, either static or dynamic. In this paper, we will review these approaches and discuss their appropriateness for biomedical ontologies. We will also present an experience feedback about the integration of an ontology in a computer software during the VIIIP research project. Finally, we will present OwlReady, the solution we developed. PMID:27071878

  11. Approach for ontological modeling of database schema for the generation of semantic knowledge on the web

    NASA Astrophysics Data System (ADS)

    Rozeva, Anna

    2015-11-01

    Currently there is a large quantity of content on web pages that is generated from relational databases. Conceptual domain models provide for the integration of heterogeneous content on the semantic level. The use of an ontology as the conceptual model of relational data sources makes them available to web agents and services and provides for the employment of ontological techniques for data access, navigation and reasoning. The achievement of interoperability between relational databases and ontologies enriches the web with semantic knowledge. The establishment of a semantic database conceptual model based on ontology facilitates the development of data integration systems that use the ontology as a unified global view. An approach for the generation of an ontologically based conceptual model is presented. The ontology representing the database schema is obtained by matching schema elements to ontology concepts. An algorithm for the matching process is designed. An infrastructure for the inclusion of mediation between database and ontology, bridging legacy data with formal semantic meaning, is presented. An implementation of the knowledge modeling approach on a sample database is performed.
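
    The schema-to-ontology matching step can be sketched with a standard-library string-similarity measure: each schema element is matched to the lexically closest concept label, with unmatched elements left for a human to resolve. The concept labels and similarity threshold below are illustrative, not the paper's algorithm.

```python
import difflib

# Sketch of lexical schema-to-ontology matching: a table or column name
# is matched to the closest ontology concept label above a similarity
# cutoff. Labels and threshold are illustrative placeholders.

CONCEPTS = ["Person", "Address", "Organization", "EmailAddress"]

def match(schema_element, threshold=0.6):
    """Return the closest concept label, or None if nothing is close."""
    best = difflib.get_close_matches(schema_element, CONCEPTS,
                                     n=1, cutoff=threshold)
    return best[0] if best else None

print(match("person"))   # Person
print(match("zzzz"))     # None: left for a human expert to map
```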

  12. An Ontology Infrastructure for an E-Learning Scenario

    ERIC Educational Resources Information Center

    Guo, Wen-Ying; Chen, De-Ren

    2007-01-01

    Selecting appropriate learning services for a learner from a large number of heterogeneous knowledge sources is a complex and challenging task. This article illustrates and discusses how Semantic Web technologies such as RDF [resource description framework] and ontology can be applied to e-learning systems to help the learner in selecting an…

  13. The SWAN Scientific Discourse Ontology

    PubMed Central

    Ciccarese, Paolo; Wu, Elizabeth; Kinoshita, June; Wong, Gwendolyn T.; Ocana, Marco; Ruttenberg, Alan

    2015-01-01

    SWAN (Semantic Web Application in Neuromedicine) is a project to construct a semantically-organized, community-curated, distributed knowledge base of Theory, Evidence, and Discussion in biomedicine. Unlike Wikipedia and similar approaches, SWAN’s ontology is designed to represent and foreground both harmonizing and contradictory assertions within the total community discourse. Releases of the software, content and ontology will be initially by and for the Alzheimer Disease (AD) research community, with the obvious potential for extension into other disease research areas. The Alzheimer Research Forum, a 4,000-member web community for AD researchers, will host SWAN’s initial public release, currently scheduled for late 2007. This paper presents the current version of SWAN’s ontology of scientific discourse and presents our current thinking about its evolution including extensions and alignment with related communities, projects and ontologies. PMID:18583197

  14. An improved lookup protocol model for peer-to-peer networks

    NASA Astrophysics Data System (ADS)

    Fan, Wei; Ye, Dongfen

    2011-12-01

    With the development of peer-to-peer (P2P) technology, file sharing is becoming the hottest, fastest growing application on the Internet. Although we can benefit from different protocols separately, our research shows that if there exists a proper model, most of the seemingly different protocols can be classified into the same framework. In this paper, we propose an improved Chord algorithm based on a binary tree for P2P networks. We perform extensive simulations to study the proposed protocol. The results show that the improved Chord reduces the average lookup path length without increasing the joining and departing complexity.
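
    For context, a Chord-style lookup maps keys and nodes onto the same identifier ring and routes each key to its successor node. The sketch below walks successors linearly instead of using finger tables (or the paper's binary-tree improvement), so it shows only the basic key-to-node mapping; the ring size and node names are illustrative.

```python
import hashlib

# Simplified Chord-style lookup on an m-bit identifier ring. Real Chord
# uses finger tables for O(log N) routing hops; this sketch scans the
# sorted ring, which suffices to show how a key finds its home node.

M = 6                                   # 2**6 = 64 identifier slots

def chord_id(name):
    """Hash a name onto the identifier ring."""
    digest = hashlib.sha1(name.encode()).digest()
    return int.from_bytes(digest, "big") % (2 ** M)

def successor(key_id, node_ids):
    """First node clockwise from key_id on the ring (with wrap-around)."""
    ring = sorted(node_ids)
    for nid in ring:
        if nid >= key_id:
            return nid
    return ring[0]                      # wrap past the top of the ring

nodes = sorted(chord_id(f"node{i}") for i in range(5))
key = chord_id("some-file.mp3")
print(successor(key, nodes) in nodes)   # True: key routed to a live node
```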

  15. Spatial frequency sampling look-up table method for computer-generated hologram

    NASA Astrophysics Data System (ADS)

    Zhao, Kai; Huang, Yingqing; Jiang, Xiaoyu; Yan, Xingpeng

    2016-04-01

    A spatial frequency sampling look-up table method is proposed to generate a hologram. The three-dimensional (3-D) scene is sampled as several intensity images by computer rendering. Each object point on the rendered images has a defined spatial frequency. The basis terms for calculating fringe patterns are precomputed and stored in a table to improve the calculation speed. Both numerical simulations and optical experiments are performed. The results show that the proposed approach can easily realize color reconstructions of a 3-D scene with a low computation cost. The occlusion effects and depth information are all provided accurately.

  16. SWEET 2.1 Ontologies

    NASA Astrophysics Data System (ADS)

    Raskin, R. G.

    2010-12-01

    The Semantic Web for Earth and Environmental Terminology (SWEET) ontologies represent a mid- to upper-level concept space for all of Earth and Planetary Science and associated data and applications. The latest version (2.1) has been reorganized to improve long-term maintainability. Accompanying the ontologies is a mapping to the CF Standard Name Table and the GCMD Science Keywords. As a higher-level concept space, terms can be readily mapped across these vocabularies through the intermediate use of SWEET.

  17. Semantic enrichment for medical ontologies.

    PubMed

    Lee, Yugyung; Geller, James

    2006-04-01

    The Unified Medical Language System (UMLS) contains two separate but interconnected knowledge structures, the Semantic Network (upper level) and the Metathesaurus (lower level). In this paper, we examine how the use of such a two-level structure in the medical field has led to notable advances in terminologies and ontologies. However, most ontologies and terminologies do not have such a two-level structure. Therefore, we present a method, called semantic enrichment, which generates a two-level ontology from a given one-level terminology and an auxiliary two-level ontology. During semantic enrichment, concepts of the one-level terminology are assigned to semantic types, which are the building blocks of the upper level of the auxiliary two-level ontology. The result of this process is the desired new two-level ontology. We discuss the semantic enrichment of two example terminologies and how we approach the implementation of semantic enrichment in the medical domain. This implementation performs a major part of the semantic enrichment process on the medical terminologies, with difficult cases left to a human expert. PMID:16185937
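The enrichment step described above can be sketched minimally: each term of a flat (one-level) terminology is assigned to the semantic types of an auxiliary upper level, and unresolved terms are queued for a human expert. The terms and types below are hypothetical toy stand-ins, not UMLS content.

```python
# Hypothetical upper level: semantic types and the concepts known to belong to them.
semantic_types = {
    "Pharmacologic Substance": {"aspirin", "ibuprofen"},
    "Disease or Syndrome": {"migraine", "arthritis"},
}

# Hypothetical one-level terminology to be enriched.
flat_terminology = ["aspirin", "migraine", "ibuprofen", "vitamin d"]

def enrich(terms, upper_level):
    """Assign each term to semantic types; unresolved terms go to an expert queue."""
    assignments, needs_expert = {}, []
    for term in terms:
        types = {st for st, members in upper_level.items() if term in members}
        if types:
            assignments[term] = types
        else:
            needs_expert.append(term)  # difficult cases left to a human expert
    return assignments, needs_expert

assigned, queue = enrich(flat_terminology, semantic_types)
```

The result pairs every resolvable term with an upper-level type, yielding the desired two-level structure, while the expert queue mirrors the paper's manual fallback.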

  18. Ontology Based Quality Evaluation for Spatial Data

    NASA Astrophysics Data System (ADS)

    Yılmaz, C.; Cömert, Ç.

    2015-08-01

    Many institutions will be providing data to the National Spatial Data Infrastructure (NSDI). The current technical background of the NSDI is based on syntactic web services; it is expected that this will be replaced by semantic web services. The quality of the data provided is important in terms of the decision-making process and the accuracy of transactions. Therefore, the data quality needs to be tested. This topic has been neglected in Turkey. Data quality control for the NSDI may be done by private or public "data accreditation" institutions. A methodology is required for data quality evaluation. There are studies on data quality, including ISO standards, academic studies, and software to evaluate spatial data quality. The ISO 19157 standard defines the data quality elements. Proprietary software such as 1Spatial's 1Validate and ESRI's Data Reviewer offer quality evaluation based on their own classifications of rules. Commonly, rule-based approaches are used for geospatial data quality checks. In this study, we look for the technical components to devise and implement a rule-based approach with ontologies, using free and open source software, in a semantic web context. The semantic web uses ontologies to deliver well-defined web resources and make them accessible to end users and processes. We have created an ontology conforming to the geospatial data and defined sample rules to show how to test data with respect to data quality elements, including attribute, topo-semantic, and geometrical consistency, using free and open source software. To test data against the rules, sample GeoSPARQL queries associated with the specifications are created.
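A rule-based consistency check of this kind can be illustrated without a GIS stack. The sketch below uses plain Python tuples in place of an RDF graph and shows the rule as it might read in SPARQL; the `ex:` names and the lane-count rule are invented for illustration (a real implementation would run GeoSPARQL against a triple store).

```python
# A handful of triples standing in for an RDF graph of road features.
# (In practice these would live in a triple store queried via (Geo)SPARQL.)
triples = [
    ("ex:road1", "rdf:type", "ex:Road"),
    ("ex:road1", "ex:laneCount", 2),
    ("ex:road2", "rdf:type", "ex:Road"),
    ("ex:road2", "ex:laneCount", 0),
]

# The attribute-consistency rule, as it might read in SPARQL (illustrative only):
#   SELECT ?f WHERE { ?f a ex:Road ; ex:laneCount ?n . FILTER (?n <= 0) }
def attribute_consistency_violations(graph):
    """Return features typed ex:Road whose lane count is missing or non-positive."""
    roads = {s for s, p, o in graph if p == "rdf:type" and o == "ex:Road"}
    lanes = {s: o for s, p, o in graph if p == "ex:laneCount"}
    return sorted(f for f in roads if lanes.get(f, 0) <= 0)

violations = attribute_consistency_violations(triples)
```

Encoding each quality element (attribute, topo-semantic, geometrical consistency) as such a query keeps the rules declarative and separable from the data.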

  19. Spatial Data Integration Using Ontology-Based Approach

    NASA Astrophysics Data System (ADS)

    Hasani, S.; Sadeghi-Niaraki, A.; Jelokhani-Niaraki, M.

    2015-12-01

    In today's world, the necessity of spatial data for various organizations is becoming so crucial that many of these organizations have begun to produce spatial data themselves. In some circumstances, the need to obtain integrated data in real time requires a sustainable mechanism for real-time integration. A case in point is disaster management, which requires obtaining real-time data from various sources of information. One of the problematic challenges in such situations is the high degree of heterogeneity between different organizations' data. To solve this issue, we introduce an ontology-based method to provide sharing and integration capabilities for existing databases. In addition to resolving semantic heterogeneity, our proposed method also provides better access to information. Our approach consists of three steps. In the first step, the objects in a relational database are identified, the semantic relationships between them are modelled, and subsequently the ontology of each database is created. In the second step, the relevant ontology is inserted into the database, and the relationship of each ontology class is inserted into a newly created column in the database tables. The last step consists of a platform based on service-oriented architecture, which allows integration of the data using the concept of ontology mapping. The proposed approach, in addition to being fast and low cost, makes the process of data integration easy; the data remain unchanged, thus taking advantage of the legacy applications provided.

  20. Full-spectrum k-distribution look-up table for nonhomogeneous gas-soot mixtures

    NASA Astrophysics Data System (ADS)

    Wang, Chaojun; Modest, Michael F.; He, Boshu

    2016-06-01

    Full-spectrum k-distribution (FSK) look-up tables provide great accuracy combined with outstanding numerical efficiency for the evaluation of radiative transfer in nonhomogeneous gaseous media. However, previously published tables cannot be used for the gas-soot mixtures found in most combustion scenarios, since it is impossible to assemble k-distributions for a gas mixed with nongray absorbing particles from gas-only full-spectrum k-distributions. Consequently, a new FSK look-up table has been constructed by optimizing the table recently published by the authors and then adding one soot volume fraction to the optimized table. The optimization scheme comprises two steps: (1) direct calculation of the nongray stretching factors (a-values) from the k-distributions (k-values) rather than tabulating them; (2) deletion of unnecessary mole fractions at many thermodynamic states. Results show that after optimization the size of the new table is reduced from 5 GB (including the k-values and a-values for gases only) to 3.2 GB (including the k-values for both gases and soot), while both accuracy and efficiency remain the same. Two scaled flames are used to validate the new table, which reproduces the benchmark results with excellent accuracy at low computational cost for both gas mixtures and gas-soot mixtures.

  1. A region segmentation based algorithm for building a crystal position lookup table in a scintillation detector

    NASA Astrophysics Data System (ADS)

    Wang, Hai-Peng; Yun, Ming-Kai; Liu, Shuang-Quan; Fan, Xin; Cao, Xue-Xiang; Chai, Pei; Shan, Bao-Ci

    2015-03-01

    In a scintillation detector, scintillation crystals are typically made into a 2-dimensional modular array. The location of an incident gamma-ray needs to be calibrated due to spatial response nonlinearity. Generally, position histograms, the characteristic flood response of scintillation detectors, are used for position calibration. In this paper, a position calibration method based on a crystal position lookup table, which maps the inaccurate location calculated by Anger logic to the exact hit crystal position, is proposed. Firstly, the position histogram is preprocessed, e.g. by noise reduction and image enhancement. Then the processed position histogram is segmented into disconnected regions, and crystal marking points are labeled by finding the centroids of the regions. Finally, crystal boundaries are determined and the crystal position lookup table is generated. The scheme is evaluated on the whole-body positron emission tomography (PET) scanner and the breast-dedicated single photon emission computed tomography scanner developed by the Institute of High Energy Physics, Chinese Academy of Sciences. The results demonstrate that the algorithm is accurate, efficient, robust, and applicable to any configuration of scintillation detector. Supported by National Natural Science Foundation of China (81101175) and XIE Jia-Lin Foundation of Institute of High Energy Physics (Y3546360U2)
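The three steps above (segment the flood histogram, find region centroids, assign every pixel to its nearest crystal) can be sketched as follows. The tiny histogram, the 4-connected BFS labeling, and the nearest-centroid assignment are simplifying assumptions for illustration, not the paper's exact algorithm.

```python
from collections import deque

# A toy 2-D "position histogram": nonzero pixels form two crystal response blobs.
# (Real flood maps are denoised and enhanced first, per the preprocessing step.)
H = [
    [0, 5, 5, 0, 0, 0],
    [0, 6, 7, 0, 8, 9],
    [0, 0, 0, 0, 9, 8],
    [0, 0, 0, 0, 0, 0],
]

def label_regions(img):
    """4-connected component labeling of nonzero pixels (BFS flood fill)."""
    rows, cols = len(img), len(img[0])
    labels = [[0] * cols for _ in range(rows)]
    next_label = 0
    for r in range(rows):
        for c in range(cols):
            if img[r][c] and not labels[r][c]:
                next_label += 1
                labels[r][c] = next_label
                q = deque([(r, c)])
                while q:
                    y, x = q.popleft()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < rows and 0 <= nx < cols \
                                and img[ny][nx] and not labels[ny][nx]:
                            labels[ny][nx] = next_label
                            q.append((ny, nx))
    return labels, next_label

def centroids(img, labels, n):
    """Intensity-weighted centroid of each labeled region (the crystal marker)."""
    sums = [[0.0, 0.0, 0.0] for _ in range(n)]  # sum_y, sum_x, sum_w per region
    for r, row in enumerate(img):
        for c, w in enumerate(row):
            if w:
                k = labels[r][c] - 1
                sums[k][0] += r * w
                sums[k][1] += c * w
                sums[k][2] += w
    return [(sy / sw, sx / sw) for sy, sx, sw in sums]

def build_lookup(shape, marks):
    """Assign every pixel to the nearest crystal centroid: the position LUT."""
    rows, cols = shape
    return [[min(range(len(marks)),
                 key=lambda k: (r - marks[k][0]) ** 2 + (c - marks[k][1]) ** 2)
             for c in range(cols)] for r in range(rows)]

labels, n = label_regions(H)
marks = centroids(H, labels, n)
lut = build_lookup((len(H), len(H[0])), marks)
```

At run time, an event's Anger-logic coordinate is simply indexed into `lut` to recover the crystal that was actually hit.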

  2. Fast thumbnail generation for MPEG video by using a multiple-symbol lookup table

    NASA Astrophysics Data System (ADS)

    Kim, Myounghoon; Lee, Hoonjae; Yoon, Ja-Cheon; Kim, Hyeokman; Sull, Sanghoon

    2009-03-01

    A novel method using a multiple-symbol lookup table (mLUT) is proposed to fast-skip the ac coefficients (codewords) not needed to construct a dc image from MPEG-1/2 video streams, resulting in fast thumbnail generation. For MPEG-1/2 video streams, thumbnail generation schemes usually extract dc images directly in the compressed domain, where a dc image is constructed using a dc coefficient and a few ac coefficients from among the discrete cosine transform (DCT) coefficients. However, all codewords for DCT coefficients must be fully decoded whether or not they are needed to generate a dc image, since the bit length of a codeword coded with variable-length coding (VLC) cannot be determined until the previous VLC codeword has been decoded. Thus, a method using an mLUT designed to fast-skip unnecessary DCT coefficients when constructing a dc image is proposed, significantly reducing the number of table lookups (LUT count) for variable-length decoding of codewords. Experimental results show that the proposed method significantly improves performance by reducing the LUT count by 50%.
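The core mLUT idea, decoding several variable-length symbols per table access instead of one, can be sketched with a toy prefix-free code. The code table and window width below are hypothetical, not the MPEG-2 DCT coefficient tables.

```python
# Toy prefix-free VLC code (hypothetical; not the actual MPEG-2 tables).
CODES = {"0": "A", "10": "B", "11": "C"}
W = 4  # width of the lookup window in bits

def build_mlut(codes, width):
    """For every width-bit pattern, pre-decode all complete codewords inside it."""
    table = {}
    for v in range(2 ** width):
        bits = format(v, "0{}b".format(width))
        symbols, pos = [], 0
        while pos < width:
            for cw, sym in codes.items():
                if bits.startswith(cw, pos):
                    symbols.append(sym)
                    pos += len(cw)
                    break
            else:
                break  # remaining bits are an incomplete codeword
        table[bits] = (tuple(symbols), pos)  # decoded symbols, bits consumed
    return table

def decode(bits, table, width, codes):
    """One table lookup yields several symbols at once (the mLUT speed-up)."""
    out, i, n = [], 0, len(bits)
    while i < n:
        window = bits[i:i + width]
        if len(window) == width:
            symbols, used = table[window]
            out.extend(symbols)
            i += used
        else:
            # Tail shorter than the window: decode it codeword by codeword.
            pos = 0
            while pos < len(window):
                for cw, sym in codes.items():
                    if window.startswith(cw, pos):
                        out.append(sym)
                        pos += len(cw)
                        break
                else:
                    break
            i = n
    return out

MLUT = build_mlut(CODES, W)
decoded = decode("01011010", MLUT, W, CODES)  # encodes "0"+"10"+"11"+"0"+"10"
```

Because each window resolves multiple codewords, the number of table accesses drops roughly in proportion to the average codewords per window, which is the effect the paper quantifies as a 50% LUT-count reduction.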

  3. On the look-up tables for the critical heat flux in tubes (history and problems)

    SciTech Connect

    Kirillov, P.L.; Smogalev, I.P.

    1995-09-01

    The complexity of the critical heat flux (CHF) problem for boiling in channels is caused by the large number of variable factors and the variety of two-phase flows. The existence of several hundred correlations for the prediction of CHF demonstrates the unsatisfactory state of this problem. Phenomenological CHF models can provide only qualitative predictions of CHF, primarily in annular-dispersed flow. CHF look-up tables, which cover the results of numerous experiments, have received more recognition in the last 15 years. These tables are based on the statistical averaging of CHF values for each range of pressure, mass flux, and quality. CHF values for regions where no experimental data are available are obtained by extrapolation. Correcting these tables to account for the diameter effect is a complicated problem. There are ranges of conditions where simple correlations cannot produce reliable results; therefore, the diameter effect on CHF needs additional study. Modifying look-up table data for CHF in tubes to predict CHF in rod bundles requires a method that takes into account the nonuniformity of quality over a rod-bundle cross section.
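A look-up table of this kind is typically queried by interpolating between tabulated (pressure, mass flux, quality) grid points. The sketch below shows plain trilinear interpolation over a miniature table; all numbers are invented for illustration and are not CHF data.

```python
from bisect import bisect_right

# Hypothetical miniature CHF table: axes are pressure [MPa],
# mass flux [kg/(m^2 s)], and quality [-]; values are CHF in kW/m^2.
# Every number below is made up purely for illustration.
P_AX = [1.0, 5.0, 10.0]
G_AX = [500.0, 1000.0]
X_AX = [0.0, 0.5]
CHF = [  # CHF[i][j][k] at (P_AX[i], G_AX[j], X_AX[k])
    [[4000.0, 2500.0], [5000.0, 3200.0]],
    [[3500.0, 2200.0], [4500.0, 2900.0]],
    [[3000.0, 1900.0], [4000.0, 2600.0]],
]

def _bracket(axis, v):
    """Index i and weight t so that v = axis[i]*(1-t) + axis[i+1]*t (clamped)."""
    i = min(max(bisect_right(axis, v) - 1, 0), len(axis) - 2)
    t = (v - axis[i]) / (axis[i + 1] - axis[i])
    return i, min(max(t, 0.0), 1.0)

def lerp(a, b, t):
    return a * (1 - t) + b * t

def chf_lookup(p, g, x):
    """Trilinear interpolation in the (pressure, mass flux, quality) table."""
    (i, tp), (j, tg), (k, tx) = _bracket(P_AX, p), _bracket(G_AX, g), _bracket(X_AX, x)
    c00 = lerp(CHF[i][j][k], CHF[i + 1][j][k], tp)          # at (g_j, x_k)
    c01 = lerp(CHF[i][j][k + 1], CHF[i + 1][j][k + 1], tp)  # at (g_j, x_k+1)
    c10 = lerp(CHF[i][j + 1][k], CHF[i + 1][j + 1][k], tp)
    c11 = lerp(CHF[i][j + 1][k + 1], CHF[i + 1][j + 1][k + 1], tp)
    return lerp(lerp(c00, c01, tx), lerp(c10, c11, tx), tg)
```

The clamping in `_bracket` corresponds to the extrapolation issue the abstract raises: outside the measured ranges the table can only extend its edge values, which is exactly where reliability degrades.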

  4. An ontological knowledge framework for adaptive medical workflow.

    PubMed

    Dang, Jiangbo; Hedayati, Amir; Hampel, Ken; Toklu, Candemir

    2008-10-01

    As emerging technologies, the semantic Web and SOA (Service-Oriented Architecture) allow a BPMS (Business Process Management System) to automate business processes that can be described as services, which in turn can be used to wrap existing enterprise applications. A BPMS provides tools and methodologies to compose Web services that can be executed as business processes and monitored by BPM (Business Process Management) consoles. Ontologies are a formal, declarative knowledge representation model; they provide a foundation upon which machine-understandable knowledge can be obtained and, as a result, make machine intelligence possible. Healthcare systems can adopt these technologies to become ubiquitous, adaptive, and intelligent, and thereby serve patients better. This paper presents an ontological knowledge framework that covers the healthcare domains a hospital encompasses, from medical and administrative tasks to hospital assets, medical insurance, patient records, drugs, and regulations. Our ontology thus makes our vision of personalized healthcare possible by capturing all necessary knowledge for a complex personalized healthcare scenario involving patient care, insurance policies, drug prescriptions, and compliance. For example, our ontology enables a workflow management system to allow users, from physicians to administrative assistants, to manage and even create context-aware new medical workflows and execute them on the fly. PMID:18602872

  5. Research on land registration procedure ontology of China

    NASA Astrophysics Data System (ADS)

    Zhao, Zhongjun; Du, Qingyun; Zhang, Weiwei; Liu, Tao

    2009-10-01

    Land registration is a public act that records the state-owned land use right, collective land ownership, collective land use right, land mortgage, servitude, and other land rights requiring registration according to laws and regulations in land registration books. Land registration is one of the important government affairs, so it is very important to standardize, optimize, and humanize the process of land registration. The management work of an organization is realized through a variety of workflows. Process knowledge is in essence a kind of methodological knowledge: a system comprising core knowledge and relational knowledge. In this paper, ontology is introduced into the field of land registration and management, with the aims of optimizing the flow of land registration, promoting automation and intelligent service in land registration affairs, and providing humanized, intelligent service for multiple types of users. The paper builds a land registration procedure ontology by defining the ontology's key concepts, which represent the various processes of land registration, and by mapping those processes to OWL-S. The land registration procedure ontology should be the starting point and basis of the corresponding Web service.

  6. Differentiated Services: A New Reference Model.

    ERIC Educational Resources Information Center

    Whitson, William L.

    1995-01-01

    Examines advantages and disadvantages of the traditional model of undifferentiated service versus an alternative model of differentiated services, which includes directions and general information; technical assistance, "information lookup" for the client, research consultation, and library instruction. Suggests each service should fit staff…

  7. How granularity issues concern biomedical ontology integration.

    PubMed

    Schulz, Stefan; Boeker, Martin; Stenzhorn, Holger

    2008-01-01

    The application of upper ontologies has been repeatedly advocated for supporting interoperability between domain ontologies in order to facilitate shared data use both within and across disciplines. We have developed BioTop as a top-domain ontology to integrate more specialized ontologies in the biomolecular and biomedical domain. In this paper, we report on concrete integration problems of this ontology with the domain-independent Basic Formal Ontology (BFO) concerning the issue of fiat and aggregated objects in the context of different granularity levels. We conclude that the third BFO level must be ignored in order not to obviate cross-granularity integration. PMID:18487840

  8. A Foundational Approach to Designing Geoscience Ontologies

    NASA Astrophysics Data System (ADS)

    Brodaric, B.

    2009-05-01

    E-science systems are increasingly deploying ontologies to aid online geoscience research. Geoscience ontologies are typically constructed independently by isolated individuals or groups who tend to follow few design principles. This limits the usability of the ontologies due to conceptualizations that are vague, conflicting, or narrow. Advances in foundational ontologies and formal engineering approaches offer promising solutions, but these advanced techniques have had limited application in the geosciences. This paper develops a design approach for geoscience ontologies by extending aspects of the DOLCE foundational ontology and the OntoClean method. Geoscience examples will be presented to demonstrate the feasibility of the approach.

  9. A look-up table based approach to characterize crystal twinning for synchrotron X-ray Laue microdiffraction scans

    PubMed Central

    Li, Yao; Wan, Liang; Chen, Kai

    2015-01-01

    An automated method has been developed to characterize the type and spatial distribution of twinning in crystal orientation maps from synchrotron X-ray Laue microdiffraction results. The method relies on a look-up table approach. Taking into account the twin axis and twin plane for plausible rotation and reflection twins, respectively, and the point group symmetry operations for a specific crystal, a look-up table listing crystal-specific rotation angle–axis pairs, which reveal the orientation relationship between the twin and the parent lattice, is generated. By comparing these theoretical twin–parent orientation relationships in the look-up table with the measured misorientations, twin boundaries are mapped automatically from Laue microdiffraction raster scans with thousands of data points. Taking advantage of the high orientation resolution of the Laue microdiffraction method, this automated approach is also applicable to differentiating twinning elements among multiple twinning modes in any crystal system. PMID:26089764
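The table-matching step might be sketched as below, under the simplifying assumption that orientations are already reduced so that the point-group symmetry operations can be ignored (the real method applies them explicitly). The Sigma-3 entry, a 60 degree rotation about <111> for fcc crystals, is a standard twin law used here purely as an example.

```python
import math

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def transpose(a):
    return [list(row) for row in zip(*a)]

def axis_angle_rotation(axis, angle_deg):
    """Rotation matrix about a (normalized) axis by an angle: Rodrigues' formula."""
    n = math.sqrt(sum(c * c for c in axis))
    x, y, z = (c / n for c in axis)
    a = math.radians(angle_deg)
    c, s, t = math.cos(a), math.sin(a), 1 - math.cos(a)
    return [[t * x * x + c,     t * x * y - s * z, t * x * z + s * y],
            [t * x * y + s * z, t * y * y + c,     t * y * z - s * x],
            [t * x * z - s * y, t * y * z + s * x, t * z * z + c]]

def misorientation_angle(g1, g2):
    """Rotation angle between two orientation matrices, from the trace of g1 g2^T."""
    dg = matmul(g1, transpose(g2))
    tr = dg[0][0] + dg[1][1] + dg[2][2]
    return math.degrees(math.acos(max(-1.0, min(1.0, (tr - 1) / 2))))

# Look-up table of candidate twin laws as (angle [deg], axis) pairs.
TWIN_TABLE = {"sigma3_fcc": (60.0, (1, 1, 1))}

def classify_boundary(g1, g2, tol_deg=1.0):
    """Name the twin law whose angle matches the measured misorientation."""
    angle = misorientation_angle(g1, g2)
    for name, (ref_angle, _axis) in TWIN_TABLE.items():
        if abs(angle - ref_angle) < tol_deg:
            return name
    return None

identity = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
twin = axis_angle_rotation((1, 1, 1), 60.0)
```

The tight tolerance is what the abstract's "high orientation resolution" buys: Laue microdiffraction resolves misorientations finely enough to discriminate between twin laws whose angle-axis pairs are close together.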

  10. Ontology-based approach for managing personal health and wellness information.

    PubMed

    Sachinopoulou, Anna; Leppänen, Juha; Kaijanranta, Hannu; Lähteenmäki, Jaakko

    2007-01-01

    This paper describes a new approach for collecting and sharing personal health and wellness information. The approach is based on a Personal Health Record (PHR) including both clinical and non-clinical data. The PHR is located on a network server referred to as the Common Server. The overall service architecture for providing anonymous and private access to the PHR is described. Semantic interoperability is based on an ontology collection and the usage of OID (Object Identifier) codes. The formal (upper) ontology combines a set of domain ontologies representing different aspects of personal health and wellness. The ontology collection emphasizes wellness aspects, while clinical data is modelled by using OID references to existing vocabularies. The modular ontology approach enables distributed management and expansion of the data model. PMID:18002328

  11. Hydrologic Ontology for the Web

    NASA Astrophysics Data System (ADS)

    Bermudez, L. E.; Piasecki, M.

    2003-12-01

    This poster presents the conceptual development of a Hydrologic Ontology for the Web (HOW) that will facilitate data sharing among the hydrologic community. Hydrologic data is difficult to share because of the predicted vast increase in data volume, the availability of new measurement technologies, and the heterogeneity of the information systems used to produce, store, retrieve, and use the data. The augmented capacity of the Internet and the technologies recommended by the W3C, as well as metadata standards, provide sophisticated means to make data more usable and systems more integrated. Standard metadata is commonly used to solve interoperability issues. For the hydrologic field an explicit metadata standard does not exist, but one could be created by extending metadata standards such as FGDC-STD-001-1998 or ISO 19115. Standard metadata defines a set of elements required to describe data in a consistent manner, and their domains are sometimes restricted to a finite set of values or a controlled vocabulary (e.g. code lists in ISO/DIS 19115). This controlled vocabulary is domain specific, varying from one information community to another, allowing dissimilar descriptions of similar data sets. This issue is sometimes called semantic non-interoperability or semantic heterogeneity, and it is usually the main problem when sharing data. Explicit domain ontologies could be created to provide semantic interoperability among heterogeneous information communities. Domain ontologies supply the values for the restricted domains of some elements in the metadata set and the semantic mapping to other domain ontologies. To achieve interoperability between applications that exchange machine-understandable information on the Web, metadata is expressed using the Resource Description Framework (RDF) and domain ontologies are expressed using the Web Ontology Language (OWL), which is also based on RDF. A specific OWL ontology for hydrology is HOW. 
HOW presents, using a formal syntax, the

  12. An ontology for sensor networks

    NASA Astrophysics Data System (ADS)

    Compton, Michael; Neuhaus, Holger; Bermudez, Luis; Cox, Simon

    2010-05-01

    Sensors and networks of sensors are important ways of monitoring and digitizing reality. As the number and size of sensor networks grow, so too does the amount of data collected. Users of such networks typically need to discover the sensors and data that fit their needs without necessarily understanding the complexities of the network itself. The burden on users is eased if the network and its data are expressed in terms of concepts familiar to the users and their job functions, rather than in terms of the network or how it was designed. Furthermore, the task of collecting and combining data from multiple sensor networks is made easier if metadata about the data and the networks is stored in formats and conceptual models that are amenable to machine reasoning and inference. While the OGC's (Open Geospatial Consortium) SWE (Sensor Web Enablement) standards provide for the description of and access to data and metadata for sensors, they do not provide facilities for abstraction, categorization, and reasoning consistent with standard technologies. Once sensors and networks are described using rich semantics (that is, by using logic to describe the sensors, the domain of interest, and the measurements), reasoning and classification can be used to analyse and categorise data, relate measurements with similar information content, and manage, query and task sensors. This will enable new types of automated processing and logical assurance built on OGC standards. The W3C SSN-XG (Semantic Sensor Networks Incubator Group) is producing a generic ontology to describe sensors, their environment and the measurements they make. The ontology provides definitions for the structure of sensors and observations, leaving the details of the observed domain unspecified. This allows abstract representations of real world entities, which are not observed directly but through their observable qualities. Domain semantics, units of measurement, time and time series, and location and mobility

  13. Complex Topographic Feature Ontology Patterns

    USGS Publications Warehouse

    Varanka, Dalia E.; Jerris, Thomas J.

    2015-01-01

    Semantic ontologies are examined as effective data models for the representation of complex topographic feature types. Complex feature types are viewed as integrated relations between basic features for a basic purpose. In the context of topographic science, such component assemblages are supported by resource systems and found on the local landscape. Ontologies are organized within six thematic modules of a domain ontology called Topography that includes within its sphere basic feature types, resource systems, and landscape types. Context is constructed not only as a spatial and temporal setting, but a setting also based on environmental processes. Types of spatial relations that exist between components include location, generative processes, and description. An example is offered in a complex feature type ‘mine.’ The identification and extraction of complex feature types are an area for future research.

  14. IDOMAL: the malaria ontology revisited

    PubMed Central

    2013-01-01

    Background With about half a billion cases, of which nearly one million are fatal, malaria constitutes one of the major infectious diseases worldwide. A recently revived effort to eliminate the disease also focuses on IT resources for its efficient control, which prominently includes the control of the mosquito vectors that transmit the Plasmodium pathogens. As part of this effort, IDOMAL has been developed and is continually being updated. Findings In addition to improvements to IDOMAL’s structure and the correction of some inaccuracies, there were major subdomain additions, such as a section on natural products and remedies, and the import from other, higher-order ontologies of several terms, which were merged with IDOMAL terms. Effort was put into rendering IDOMAL fully compatible as an extension of IDO, the Infectious Disease Ontology. The difficulty in fully reaching that target lies in the inherent differences between vector-borne diseases and “classical” infectious diseases, which make it necessary to specifically adjust the ontology’s architecture to encompass vectors and their populations. Conclusions In addition to a higher coverage of domain-specific terms and optimized usage by databases and decision-support systems, the new version of IDOMAL described here allows for more cross-talk between it and other ontologies, in particular IDO. The malaria ontology is available for download at the OBO Foundry (http://www.obofoundry.org/cgi-bin/detail.cgi?id=malaria_ontology) and the NCBO BioPortal (http://bioportal.bioontology.org/ontologies/1311). PMID:24034841

  15. CLO: The cell line ontology

    PubMed Central

    2014-01-01

    Background Cell lines have been widely used in biomedical research. The community-based Cell Line Ontology (CLO) is a member of the OBO Foundry library that covers the domain of cell lines. Since its publication two years ago, significant updates have been made, including new groups joining the CLO consortium, new cell line cells, upper level alignment with the Cell Ontology (CL) and the Ontology for Biomedical Investigation, and logical extensions. Construction and content Collaboration among the CLO, CL, and OBI has established consensus definitions of cell line-specific terms such as ‘cell line’, ‘cell line cell’, ‘cell line culturing’, and ‘mortal’ vs. ‘immortal cell line cell’. A cell line is a genetically stable cultured cell population that contains individual cell line cells. The hierarchical structure of the CLO is built based on the hierarchy of the in vivo cell types defined in CL and tissue types (from which cell line cells are derived) defined in the UBERON cross-species anatomy ontology. The new hierarchical structure makes it easier to browse, query, and perform automated classification. We have recently added classes representing more than 2,000 cell line cells from the RIKEN BRC Cell Bank to CLO. Overall, the CLO now contains ~38,000 classes of specific cell line cells derived from over 200 in vivo cell types from various organisms. Utility and discussion The CLO has been applied to different biomedical research studies. Example case studies include annotation and analysis of EBI ArrayExpress data, bioassays, and host-vaccine/pathogen interaction. CLO’s utility goes beyond a catalogue of cell line types. The alignment of the CLO with related ontologies combined with the use of ontological reasoners will support sophisticated inferencing to advance translational informatics development. PMID:25852852

  16. Design of schistosomiasis ontology (IDOSCHISTO) extending the infectious disease ontology.

    PubMed

    Camara, Gaoussou; Despres, Sylvie; Djedidi, Rim; Lo, Moussa

    2013-01-01

    Epidemiological monitoring of the spread of schistosomiasis brings together many practitioners working at different levels of granularity (biology, host individual, host population) who have different perspectives (biological, clinical, and epidemiological) on the same phenomenon. The biological perspective deals with pathogens (e.g. life cycle) or physiopathology, while the clinical perspective deals with hosts (e.g. healthy or infected host, diagnosis, treatment, etc.). In the epidemiological perspective, corresponding to the host-population level of granularity, the schistosomiasis disease is characterized according to the way (causes, risk factors, etc.) it spreads in the population over space and time. In this paper we provide an ontological analysis and design for schistosomiasis domain knowledge and spreading dynamics. IDOSCHISTO, the schistosomiasis ontology, is designed as an extension of the Infectious Disease Ontology (IDO). This ontology aims to support the schistosomiasis monitoring process during a spreading crisis by enabling data integration and semantic interoperability, for collaborative work on the one hand and for risk analysis and decision making on the other. PMID:23920598

  17. COMPASS: A Geospatial Knowledge Infrastructure Managed with Ontologies

    NASA Astrophysics Data System (ADS)

    Stock, K.

    2009-04-01

    Dr Kristin Stock (Allworlds Geothinking, United Kingdom; EDINA, University of Edinburgh, United Kingdom; Centre for Geospatial Science, University of Nottingham, United Kingdom). The research and decision-making process in any discipline is supported by a vast quantity and diversity of scientific resources, including journal articles, scientific models, scientific theories, data sets, and web services that implement scientific models or provide other functionality. Improved discovery of and access to these scientific resources has the potential to make the process of using and developing scientific knowledge more effective and efficient. Current scientific research or decision making that relies on scientific resources requires an extensive search for relevant resources. Published journal papers may be discovered using web searches on the basis of words that appear in the title or metadata, but this approach is limited by the need to select the appropriate words, and it does not identify articles that may be of interest because they use a similar approach, methodology, or technique but are in a different discipline, or that are likely to be helpful despite not sharing the same keywords. The COMPASS project is developing a knowledge infrastructure that is intended to enhance the user experience in discovering scientific resources. This is being achieved with an approach that uses ontologies to manage the knowledge infrastructure in two ways: 1. A set of ontologies describes the resources in the knowledge infrastructure (for example, publications and web services) in terms of the domain concepts to which they relate, the scientific theories and models that they depend on, and the characteristics of the resources themselves. These ontologies are provided to users either directly or with assisted search tools to aid them in the discovery process. OWL-S ontologies are being used to describe web

  18. Controlled Vocabularies, Mini Ontologies and Interoperability (Invited)

    NASA Astrophysics Data System (ADS)

    King, T. A.; Walker, R. J.; Roberts, D.; Thieman, J.; Ritschel, B.; Cecconi, B.; Genot, V. N.

    2013-12-01

    Interoperability has been an elusive goal, but in recent years advances have been made using controlled vocabularies, mini-ontologies and a lot of collaboration. This has led to increased interoperability between disciplines in the U.S. and between international projects. We discuss the successful pattern followed by SPASE, IVOA and IPDA to achieve this new level of international interoperability. A key aspect of the pattern is open standards and open participation with interoperability achieved with shared services, public APIs, standard formats and open access to data. Many of these standards are expressed as controlled vocabularies and mini ontologies. To illustrate the pattern we look at SPASE related efforts and participation of North America's Heliophysics Data Environment and CDPP; Europe's Cluster Active Archive, IMPEx, EuroPlanet, ESPAS and HELIO; and Japan's magnetospheric missions. Each participating project has its own life cycle and successful standards development must always take this into account. A major challenge for sustained collaboration and interoperability is the limited lifespan of many of the participating projects. Innovative approaches and new tools and frameworks are often developed as competitively selected, limited term projects, but for sustainable interoperability successful approaches need to become part of a long term infrastructure. This is being encouraged and achieved in many domains and we are entering a golden age of interoperability.

  19. Gene Ontology annotations and resources.

    PubMed

    Blake, J A; Dolan, M; Drabkin, H; Hill, D P; Li, Ni; Sitnikov, D; Bridges, S; Burgess, S; Buza, T; McCarthy, F; Peddinti, D; Pillai, L; Carbon, S; Dietze, H; Ireland, A; Lewis, S E; Mungall, C J; Gaudet, P; Chrisholm, R L; Fey, P; Kibbe, W A; Basu, S; Siegele, D A; McIntosh, B K; Renfro, D P; Zweifel, A E; Hu, J C; Brown, N H; Tweedie, S; Alam-Faruque, Y; Apweiler, R; Auchinchloss, A; Axelsen, K; Bely, B; Blatter, M -C; Bonilla, C; Bouguerleret, L; Boutet, E; Breuza, L; Bridge, A; Chan, W M; Chavali, G; Coudert, E; Dimmer, E; Estreicher, A; Famiglietti, L; Feuermann, M; Gos, A; Gruaz-Gumowski, N; Hieta, R; Hinz, C; Hulo, C; Huntley, R; James, J; Jungo, F; Keller, G; Laiho, K; Legge, D; Lemercier, P; Lieberherr, D; Magrane, M; Martin, M J; Masson, P; Mutowo-Muellenet, P; O'Donovan, C; Pedruzzi, I; Pichler, K; Poggioli, D; Porras Millán, P; Poux, S; Rivoire, C; Roechert, B; Sawford, T; Schneider, M; Stutz, A; Sundaram, S; Tognolli, M; Xenarios, I; Foulgar, R; Lomax, J; Roncaglia, P; Khodiyar, V K; Lovering, R C; Talmud, P J; Chibucos, M; Giglio, M Gwinn; Chang, H -Y; Hunter, S; McAnulla, C; Mitchell, A; Sangrador, A; Stephan, R; Harris, M A; Oliver, S G; Rutherford, K; Wood, V; Bahler, J; Lock, A; Kersey, P J; McDowall, D M; Staines, D M; Dwinell, M; Shimoyama, M; Laulederkind, S; Hayman, T; Wang, S -J; Petri, V; Lowry, T; D'Eustachio, P; Matthews, L; Balakrishnan, R; Binkley, G; Cherry, J M; Costanzo, M C; Dwight, S S; Engel, S R; Fisk, D G; Hitz, B C; Hong, E L; Karra, K; Miyasato, S R; Nash, R S; Park, J; Skrzypek, M S; Weng, S; Wong, E D; Berardini, T Z; Huala, E; Mi, H; Thomas, P D; Chan, J; Kishore, R; Sternberg, P; Van Auken, K; Howe, D; Westerfield, M

    2013-01-01

    The Gene Ontology (GO) Consortium (GOC, http://www.geneontology.org) is a community-based bioinformatics resource that classifies gene product function through the use of structured, controlled vocabularies. Over the past year, the GOC has implemented several processes to increase the quantity, quality and specificity of GO annotations. First, the number of manual, literature-based annotations has grown at an increasing rate. Second, as a result of a new 'phylogenetic annotation' process, manually reviewed, homology-based annotations are becoming available for a broad range of species. Third, the quality of GO annotations has been improved through a streamlined process for, and automated quality checks of, GO annotations deposited by different annotation groups. Fourth, the consistency and correctness of the ontology itself has increased by using automated reasoning tools. Finally, the GO has been expanded not only to cover new areas of biology through focused interaction with experts, but also to capture greater specificity in all areas of the ontology using tools for adding new combinatorial terms. The GOC works closely with other ontology developers to support integrated use of terminologies. The GOC supports its user community through the use of e-mail lists, social media and web-based resources. PMID:23161678

  20. The SWAN biomedical discourse ontology.

    PubMed

    Ciccarese, Paolo; Wu, Elizabeth; Wong, Gwen; Ocana, Marco; Kinoshita, June; Ruttenberg, Alan; Clark, Tim

    2008-10-01

    Developing cures for highly complex diseases, such as neurodegenerative disorders, requires extensive interdisciplinary collaboration and exchange of biomedical information in context. Our ability to exchange such information across sub-specialties today is limited by the current scientific knowledge ecosystem's inability to properly contextualize and integrate data and discourse in machine-interpretable form. This inherently limits the productivity of research and the progress toward cures for devastating diseases such as Alzheimer's and Parkinson's. SWAN (Semantic Web Applications in Neuromedicine) is an interdisciplinary project to develop a practical, common, semantically structured, framework for biomedical discourse initially applied, but not limited, to significant problems in Alzheimer Disease (AD) research. The SWAN ontology has been developed in the context of building a series of applications for biomedical researchers, as well as in extensive discussions and collaborations with the larger bio-ontologies community. In this paper, we present and discuss the SWAN ontology of biomedical discourse. We ground its development theoretically, present its design approach, explain its main classes and their application, and show its relationship to other ongoing activities in biomedicine and bio-ontologies. PMID:18583197

  1. Gene Ontology Annotations and Resources

    PubMed Central

    2013-01-01

    The Gene Ontology (GO) Consortium (GOC, http://www.geneontology.org) is a community-based bioinformatics resource that classifies gene product function through the use of structured, controlled vocabularies. Over the past year, the GOC has implemented several processes to increase the quantity, quality and specificity of GO annotations. First, the number of manual, literature-based annotations has grown at an increasing rate. Second, as a result of a new ‘phylogenetic annotation’ process, manually reviewed, homology-based annotations are becoming available for a broad range of species. Third, the quality of GO annotations has been improved through a streamlined process for, and automated quality checks of, GO annotations deposited by different annotation groups. Fourth, the consistency and correctness of the ontology itself has increased by using automated reasoning tools. Finally, the GO has been expanded not only to cover new areas of biology through focused interaction with experts, but also to capture greater specificity in all areas of the ontology using tools for adding new combinatorial terms. The GOC works closely with other ontology developers to support integrated use of terminologies. The GOC supports its user community through the use of e-mail lists, social media and web-based resources. PMID:23161678

  2. Ontology driven image search engine

    NASA Astrophysics Data System (ADS)

    Bei, Yun; Dmitrieva, Julia; Belmamoune, Mounia; Verbeek, Fons J.

    2007-01-01

    Image collections are most often domain specific. We have developed a system for image retrieval of multimodal microscopy images, that is, the same object of study visualized with a range of microscope techniques and with a range of different resolutions. In microscopy, image content depends on the preparation method of the object under study as well as the microscope technique. Both are taken into account in the submission phase as metadata, whilst at the same time (domain-specific) ontologies are employed as controlled vocabularies to annotate the image. From that point onward, image data are interrelated through the relationships derived from annotated concepts in the ontology. By using concepts and relationships of an ontology, complex queries can be built with true semantic content. Image metadata can be used as powerful criteria to query image data which are directly or indirectly related to original data. The results of image retrieval can be represented using a structural graph by exploiting relationships from the ontology rather than a flat table listing. Applying this to retrieve images from the same subject at different levels of resolution opens a new field for the analysis of image content.
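
    The retrieval idea described above, reaching images beyond those directly annotated with the query concept by following ontology relationships, can be sketched with a toy "is_a" hierarchy. All concept and image names below are invented, not taken from the paper's system:

```python
# Sketch: ontology-driven image retrieval via query expansion over an
# invented is_a hierarchy (not the paper's actual schema).

# Toy "is_a" ontology: child concept -> parent concept
IS_A = {
    "confocal image": "microscopy image",
    "TEM image": "microscopy image",
    "microscopy image": "image",
}

# Images annotated with ontology concepts
ANNOTATIONS = {
    "img_001": {"confocal image"},
    "img_002": {"TEM image"},
    "img_003": {"photograph"},
}

def descendants(concept):
    """All concepts that are (transitively) a kind of `concept`, plus itself."""
    result = {concept}
    changed = True
    while changed:
        changed = False
        for child, parent in IS_A.items():
            if parent in result and child not in result:
                result.add(child)
                changed = True
    return result

def retrieve(concept):
    """Return images annotated with `concept` or any of its subtypes."""
    wanted = descendants(concept)
    return sorted(img for img, tags in ANNOTATIONS.items() if tags & wanted)

print(retrieve("microscopy image"))  # finds both microscopy images, not the photograph
```

    Querying for "microscopy image" also retrieves images annotated only with the more specific "confocal image" and "TEM image" concepts, which a plain keyword match would miss.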

  3. Emotion Education without Ontological Commitment?

    ERIC Educational Resources Information Center

    Kristjansson, Kristjan

    2010-01-01

    Emotion education is enjoying new-found popularity. This paper explores the "cosy consensus" that seems to have developed in education circles, according to which approaches to emotion education are immune from metaethical considerations such as contrasting rationalist and sentimentalist views about the moral ontology of emotions. I spell out five…

  4. Ontology for Vector Surveillance and Management

    PubMed Central

    LOZANO-FUENTES, SAUL; BANDYOPADHYAY, ARITRA; COWELL, LINDSAY G.; GOLDFAIN, ALBERT; EISEN, LARS

    2013-01-01

    Ontologies, which are made up of standardized and defined controlled vocabulary terms and their interrelationships, are comprehensive and readily searchable repositories for knowledge in a given domain. The Open Biomedical Ontologies (OBO) Foundry was initiated in 2001 with the aims of becoming an “umbrella” for life-science ontologies and promoting the use of ontology development best practices. A software application (OBO-Edit; *.obo file format) was developed to facilitate ontology development and editing. The OBO Foundry now comprises over 100 ontologies and candidate ontologies, including the NCBI organismal classification ontology (NCBITaxon), the Mosquito Insecticide Resistance Ontology (MIRO), the Infectious Disease Ontology (IDO), the IDOMAL malaria ontology, and ontologies for mosquito gross anatomy and tick gross anatomy. We previously developed a disease data management system for dengue and malaria control programs, which incorporated a set of information trees built upon ontological principles, including a “term tree” to promote the use of standardized terms. In the course of doing so, we realized that there were substantial gaps in existing ontologies with regard to concepts, processes, and, especially, physical entities (e.g., vector species, pathogen species, and vector surveillance and management equipment) in the domain of surveillance and management of vectors and vector-borne pathogens. We therefore produced an ontology for vector surveillance and management, focusing on arthropod vectors and vector-borne pathogens with relevance to humans or domestic animals, and with special emphasis on content to support operational activities through inclusion in databases, data management systems, or decision support systems. The Vector Surveillance and Management Ontology (VSMO) includes >2,200 unique terms, of which the vast majority (>80%) were newly generated during the development of this ontology. One core feature of the VSMO is the linkage

  5. Ontology for vector surveillance and management.

    PubMed

    Lozano-Fuentes, Saul; Bandyopadhyay, Aritra; Cowell, Lindsay G; Goldfain, Albert; Eisen, Lars

    2013-01-01

    Ontologies, which are made up of standardized and defined controlled vocabulary terms and their interrelationships, are comprehensive and readily searchable repositories for knowledge in a given domain. The Open Biomedical Ontologies (OBO) Foundry was initiated in 2001 with the aims of becoming an "umbrella" for life-science ontologies and promoting the use of ontology development best practices. A software application (OBO-Edit; *.obo file format) was developed to facilitate ontology development and editing. The OBO Foundry now comprises over 100 ontologies and candidate ontologies, including the NCBI organismal classification ontology (NCBITaxon), the Mosquito Insecticide Resistance Ontology (MIRO), the Infectious Disease Ontology (IDO), the IDOMAL malaria ontology, and ontologies for mosquito gross anatomy and tick gross anatomy. We previously developed a disease data management system for dengue and malaria control programs, which incorporated a set of information trees built upon ontological principles, including a "term tree" to promote the use of standardized terms. In the course of doing so, we realized that there were substantial gaps in existing ontologies with regard to concepts, processes, and, especially, physical entities (e.g., vector species, pathogen species, and vector surveillance and management equipment) in the domain of surveillance and management of vectors and vector-borne pathogens. We therefore produced an ontology for vector surveillance and management, focusing on arthropod vectors and vector-borne pathogens with relevance to humans or domestic animals, and with special emphasis on content to support operational activities through inclusion in databases, data management systems, or decision support systems. The Vector Surveillance and Management Ontology (VSMO) includes >2,200 unique terms, of which the vast majority (>80%) were newly generated during the development of this ontology. One core feature of the VSMO is the linkage, through

  6. Gradient Learning Algorithms for Ontology Computing

    PubMed Central

    Gao, Wei; Zhu, Linli

    2014-01-01

    The gradient learning model has attracted great attention in view of its promising prospects for applications in statistics, data dimensionality reduction, and other specific fields. In this paper, we propose a new gradient learning model for ontology similarity measuring and ontology mapping in the multidividing setting. The sample error in this setting is given by virtue of the hypothesis space and the trick of the ontology dividing operator. Finally, two experiments, on the plant and humanoid robotics fields, verify the efficiency of the new computation model for ontology similarity measurement and ontology mapping applications in the multidividing setting. PMID:25530752

  7. Semantic Similarity in Biomedical Ontologies

    PubMed Central

    Pesquita, Catia; Faria, Daniel; Falcão, André O.; Lord, Phillip; Couto, Francisco M.

    2009-01-01

    In recent years, ontologies have become a mainstream topic in biomedical research. When biological entities are described using a common schema, such as an ontology, they can be compared by means of their annotations. This type of comparison is called semantic similarity, since it assesses the degree of relatedness between two entities by the similarity in meaning of their annotations. The application of semantic similarity to biomedical ontologies is recent; nevertheless, several studies have been published in the last few years describing and evaluating diverse approaches. Semantic similarity has become a valuable tool for validating the results drawn from biomedical studies such as gene clustering, gene expression data analysis, prediction and validation of molecular interactions, and disease gene prioritization. We review semantic similarity measures applied to biomedical ontologies and propose their classification according to the strategies they employ: node-based versus edge-based and pairwise versus groupwise. We also present comparative assessment studies and discuss the implications of their results. We survey the existing implementations of semantic similarity measures, and we describe examples of applications to biomedical research. This will clarify how biomedical researchers can benefit from semantic similarity measures and help them choose the approach most suitable for their studies. Biomedical ontologies are evolving toward increased coverage, formality, and integration, and their use for annotation is increasingly becoming a focus of both effort by biomedical experts and application of automated annotation procedures to create corpora of higher quality and completeness than are currently available. Given that semantic similarity measures are directly dependent on these evolutions, we can expect to see them gaining more relevance and even becoming as essential as sequence similarity is today in biomedical research. PMID:19649320
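
    As a concrete illustration of the node-based, pairwise family of measures surveyed above, the following sketch scores two terms by the Jaccard overlap of their ancestor sets in a toy is_a DAG. The terms are invented, and real measures (e.g. Resnik's) typically weight ancestors by information content rather than counting them:

```python
# Minimal node-based pairwise semantic similarity: Jaccard index of the
# two terms' ancestor sets (self included). The toy DAG is invented.

TOY_IS_A = {
    "T_binding": {"T_molecular_function"},
    "T_protein_binding": {"T_binding"},
    "T_dna_binding": {"T_binding"},
    "T_catalysis": {"T_molecular_function"},
    "T_molecular_function": set(),
}

def ancestors(term):
    """Term plus all its transitive is_a ancestors."""
    seen, stack = set(), [term]
    while stack:
        t = stack.pop()
        if t not in seen:
            seen.add(t)
            stack.extend(TOY_IS_A.get(t, ()))
    return seen

def sim_jaccard(a, b):
    """Ancestor-overlap similarity in [0, 1]."""
    anc_a, anc_b = ancestors(a), ancestors(b)
    return len(anc_a & anc_b) / len(anc_a | anc_b)

# Sibling terms share more ancestry than distantly related ones:
print(sim_jaccard("T_protein_binding", "T_dna_binding"))  # 0.5
print(sim_jaccard("T_protein_binding", "T_catalysis"))    # 0.25
```

    Groupwise variants apply the same idea to whole annotation sets rather than single term pairs.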

  8. Scalable representations of diseases in biomedical ontologies

    PubMed Central

    2011-01-01

    Background The realm of pathological entities can be subdivided into pathological dispositions, pathological processes, and pathological structures. The latter are the bearer of dispositions, which can then be realized by their manifestations — pathologic processes. Despite its ontological soundness, implementing this model via purpose-oriented domain ontologies will likely require considerable effort, both in ontology construction and maintenance, which constitutes a considerable problem for SNOMED CT, presently the largest biomedical ontology. Results We describe an ontology design pattern which allows ontologists to make assertions that blur the distinctions between dispositions, processes, and structures until necessary. Based on the domain upper-level ontology BioTop, it permits ascriptions of location and participation in the definition of pathological phenomena even without an ontological commitment to a distinction between these three categories. An analysis of SNOMED CT revealed that numerous classes in the findings/disease hierarchy are ambiguous with respect to process vs. disposition. Here our proposed approach can easily be applied to create unambiguous classes. No ambiguities could be defined regarding the distinction of structure and non-structure classes, but here we have found problematic duplications. Conclusions We defend a judicious use of disjunctive, and therefore ambiguous, classes in biomedical ontologies during the process of ontology construction and in the practice of ontology application. The use of these classes is permitted to span across several top-level categories, provided it contributes to ontology simplification and supports the intended reasoning scenarios. PMID:21624161

  9. Ontologies as integrative tools for plant science

    PubMed Central

    Walls, Ramona L.; Athreya, Balaji; Cooper, Laurel; Elser, Justin; Gandolfo, Maria A.; Jaiswal, Pankaj; Mungall, Christopher J.; Preece, Justin; Rensing, Stefan; Smith, Barry; Stevenson, Dennis W.

    2012-01-01

    Premise of the study Bio-ontologies are essential tools for accessing and analyzing the rapidly growing pool of plant genomic and phenomic data. Ontologies provide structured vocabularies to support consistent aggregation of data and a semantic framework for automated analyses and reasoning. They are a key component of the semantic web. Methods This paper provides background on what bio-ontologies are, why they are relevant to botany, and the principles of ontology development. It includes an overview of ontologies and related resources that are relevant to plant science, with a detailed description of the Plant Ontology (PO). We discuss the challenges of building an ontology that covers all green plants (Viridiplantae). Key results Ontologies can advance plant science in four keys areas: (1) comparative genetics, genomics, phenomics, and development; (2) taxonomy and systematics; (3) semantic applications; and (4) education. Conclusions Bio-ontologies offer a flexible framework for comparative plant biology, based on common botanical understanding. As genomic and phenomic data become available for more species, we anticipate that the annotation of data with ontology terms will become less centralized, while at the same time, the need for cross-species queries will become more common, causing more researchers in plant science to turn to ontologies. PMID:22847540

  10. Lightweight Community-Driven Ontology Evolution

    NASA Astrophysics Data System (ADS)

    Siorpaes, Katharina

    Only a few well-maintained domain ontologies can be found on the Web. The likely reasons for the lack of useful domain ontologies include that (1) informal means to convey intended meaning more efficiently are used for ontology specification only to a very limited extent, (2) many relevant domains of discourse show a substantial degree of conceptual dynamics, (3) ontology representation languages are hard to understand for the majority of (potential) ontology users and domain experts, and (4) the community does not have control over the ontology evolution. In this thesis, we propose to (1) ground a methodology for community-driven ontology building on the culture and philosophy of wikis, by giving users who have no or little expertise in ontology engineering the opportunity to contribute in all stages of the ontology lifecycle, and (2) exploit the combination of human and computational intelligence to discover and resolve inconsistencies and align lightweight domain ontologies. The contribution of this thesis is a methodology and prototype for community-driven building and evolution of lightweight domain ontologies.

  11. CLASSIFYING PROCESSES: AN ESSAY IN APPLIED ONTOLOGY

    PubMed Central

    Smith, Barry

    2013-01-01

    We begin by describing recent developments in the burgeoning discipline of applied ontology, focusing especially on the ways ontologies are providing a means for the consistent representation of scientific data. We then introduce Basic Formal Ontology (BFO), a top-level ontology that is serving as domain-neutral framework for the development of lower level ontologies in many specialist disciplines, above all in biology and medicine. BFO is a bicategorial ontology, embracing both three-dimensionalist (continuant) and four-dimensionalist (occurrent) perspectives within a single framework. We examine how BFO-conformant domain ontologies can deal with the consistent representation of scientific data deriving from the measurement of processes of different types, and we outline on this basis the first steps of an approach to the classification of such processes within the BFO framework. PMID:23888086

  12. How orthogonal are the OBO Foundry ontologies?

    PubMed Central

    2011-01-01

    Background Ontologies in biomedicine facilitate information integration, data exchange, search and query of biomedical data, and other critical knowledge-intensive tasks. The OBO Foundry is a collaborative effort to establish a set of principles for ontology development with the eventual goal of creating a set of interoperable reference ontologies in the domain of biomedicine. One of the key requirements to achieve this goal is to ensure that ontology developers reuse term definitions that others have already created rather than create their own definitions, thereby making the ontologies orthogonal. Methods We used a simple lexical algorithm to analyze the extent to which the set of OBO Foundry candidate ontologies identified from September 2009 to September 2010 conforms to this vision. Specifically, we analyzed (1) the level of explicit term reuse in this set of ontologies, (2) the level of overlap, where two ontologies define similar terms independently, and (3) how the levels of reuse and overlap changed during the course of this year. Results We found that 30% of the ontologies reuse terms from other Foundry candidates and 96% of the candidate ontologies contain terms that overlap with terms from the other ontologies. We found that while term reuse increased among the ontologies between September 2009 and September 2010, the level of overlap among the ontologies remained relatively constant. Additionally, we analyzed the six ontologies announced as OBO Foundry members on March 5, 2010, and identified that the level of overlap was extremely low, but, notably, so was the level of term reuse. Conclusions We have created a prototype web application that allows OBO Foundry ontology developers to see which classes from their ontologies overlap with classes from other ontologies in the OBO Foundry (http://obomap.bioontology.org). From our analysis, we conclude that while the OBO Foundry has made significant progress toward orthogonality during the period of this
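
    The kind of simple lexical analysis the study describes can be sketched as follows, with invented toy term lists: "reuse" is detected when the same term identifier appears in both ontologies, and "overlap" when different identifiers carry the same normalized label:

```python
# Sketch of a simple lexical reuse/overlap check between two ontologies.
# Term IDs and labels below are invented for illustration.

def normalize(label):
    return " ".join(label.lower().replace("_", " ").split())

def reuse_and_overlap(onto_a, onto_b):
    """Each ontology is a dict of term_id -> label."""
    reused = set(onto_a) & set(onto_b)           # explicit term reuse
    labels_b = {normalize(l): tid for tid, l in onto_b.items()}
    overlap = {                                  # same label, different IDs
        (tid, labels_b[normalize(l)])
        for tid, l in onto_a.items()
        if normalize(l) in labels_b and labels_b[normalize(l)] != tid
    }
    return reused, overlap

a = {"GO:0005515": "protein binding", "X:1": "Cell_Membrane"}
b = {"GO:0005515": "protein binding", "Y:9": "cell membrane"}

reused, overlap = reuse_and_overlap(a, b)
print(reused)   # the shared GO identifier: explicit reuse
print(overlap)  # ('X:1', 'Y:9'): two independent definitions of one concept
```

    In the Foundry vision, pairs found in `overlap` are candidates for replacement by a single reused term.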

  13. Ontological realism: A methodology for coordinated evolution of scientific ontologies

    PubMed Central

    Smith, Barry; Ceusters, Werner

    2011-01-01

    Since 2002 we have been testing and refining a methodology for ontology development that is now being used by multiple groups of researchers in different life science domains. Gary Merrill, in a recent paper in this journal, describes some of the reasons why this methodology has been found attractive by researchers in the biological and biomedical sciences. At the same time he assails the methodology on philosophical grounds, focusing specifically on our recommendation that ontologies developed for scientific purposes should be constructed in such a way that their terms are seen as referring to what we call universals or types in reality. As we show, Merrill’s critique is of little relevance to the success of our realist project, since it not only reveals no actual errors in our work but also criticizes views on universals that we do not in fact hold. However, it nonetheless provides us with a valuable opportunity to clarify the realist methodology, and to show how some of its principles are being applied, especially within the framework of the OBO (Open Biomedical Ontologies) Foundry initiative. PMID:21637730

  14. Track-Level-Compensation Look-Up Table Improves Antenna Pointing Precision

    NASA Technical Reports Server (NTRS)

    Gawronski, W.; Baher, F.; Gama, E.

    2006-01-01

    This article presents the improvement of the beam-waveguide antenna pointing accuracy due to the implementation of the track-level-compensation look-up table. It presents the development of the table, from the measurements of the inclinometer tilts to the processing of the measurement data and the determination of the three-axis alidade rotations. The table consists of three axis rotations of the alidade as a function of the azimuth position. The article also presents the equations to determine the elevation and cross-elevation errors of the antenna as a function of the alidade rotations and the antenna azimuth and elevation positions. The table performance was verified using radio beam pointing data. The pointing error decreased from 4.5 mdeg to 1.4 mdeg in elevation and from 14.5 mdeg to 3.1 mdeg in cross-elevation.

    I. Introduction. The Deep Space Station 25 (DSS 25) antenna shown in Fig. 1 is one of NASA's Deep Space Network beam-waveguide (BWG) antennas. At 34 GHz (Ka-band) operation, it is necessary to be able to track with a pointing accuracy of 2-mdeg root-mean-square (rms). Repeatable pointing errors of several millidegrees of magnitude have been observed during the BWG antenna calibration measurements. The systematic errors of order 4 and lower are eliminated using the antenna pointing model. However, repeatable pointing errors of higher order are out of reach of the model. The most prominent high-order systematic errors are the ones caused by the uneven azimuth track. The track is shown in Fig. 2. Manufacturing and installation tolerances, as well as gaps between the segments of the track, are the sources of the pointing errors that reach over 14-mdeg peak-to-peak magnitude, as reported in [1,2]. This article presents a continuation of the investigations and measurements of the pointing errors caused by the azimuth-track-level unevenness that were presented in [1] and [2], and it presents the implementation results. Track-level-compensation (TLC) look-up
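
    A look-up table of this kind, mapping azimuth position to three alidade rotations with interpolation between tabulated azimuths, can be sketched as follows. The node spacing and rotation values are invented, not the DSS 25 calibration data:

```python
# Sketch: periodic look-up table mapping antenna azimuth to three alidade
# rotation corrections, with linear interpolation between tabulated nodes.
# Node positions and rotation values are invented for illustration.
import bisect

AZ_NODES = [0.0, 90.0, 180.0, 270.0]  # azimuth nodes in deg; table wraps at 360
ROTATIONS = [                          # (x, y, z) alidade rotations, mdeg
    (1.0, -2.0, 0.5),
    (3.0,  0.0, 1.0),
    (2.0,  1.5, 0.0),
    (0.0, -1.0, 0.5),
]

def tlc_lookup(az_deg):
    """Linearly interpolate the three rotations at azimuth `az_deg`."""
    az = az_deg % 360.0
    i = bisect.bisect_right(AZ_NODES, az) - 1   # node at or below az
    j = (i + 1) % len(AZ_NODES)                 # next node, wrapping past 360
    span = (AZ_NODES[j] - AZ_NODES[i]) % 360.0 or 360.0
    t = ((az - AZ_NODES[i]) % 360.0) / span
    return tuple(a + t * (b - a) for a, b in zip(ROTATIONS[i], ROTATIONS[j]))

print(tlc_lookup(45.0))  # halfway between the 0-deg and 90-deg entries
```

    The interpolated rotations would then feed the (separate) equations converting alidade tilt into elevation and cross-elevation pointing corrections.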

  15. Memory-efficient table look-up optimized algorithm for context-based adaptive variable length decoding in H.264/advanced video coding

    NASA Astrophysics Data System (ADS)

    Wang, Jianhua; Cheng, Lianglun; Wang, Tao; Peng, Xiaodong

    2016-03-01

    Table look-up operation plays a very important role during the decoding processing of context-based adaptive variable length decoding (CAVLD) in H.264/advanced video coding (AVC). However, frequent table look-up operation can result in big table memory access, and then lead to high table power consumption. Aiming to solve the problem of big table memory access of current methods, and thereby reduce high power consumption, a memory-efficient table look-up optimized algorithm is presented for CAVLD. The contribution of this paper is that index search technology is introduced to reduce big memory access for table look-up, and thus reduce high table power consumption. Specifically, in our scheme, we use index search technology to reduce memory access by reducing the searching and matching operations for code_word, on the basis of taking advantage of the internal relationship among the length of zeros in code_prefix, the value of code_suffix, and code_length, thus saving the power consumption of table look-up. The experimental results show that our proposed table look-up algorithm based on index search can lower memory access consumption by about 60% compared with the sequential-search table look-up scheme, and thus save much power consumption for CAVLD in H.264/AVC.
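
    The contrast between sequential search over a code table and a keyed index look-up can be illustrated with a tiny, invented code table (real H.264 CAVLC tables are far larger, and hardware implementations index into memory rather than a hash map):

```python
# Illustration: sequential search vs. keyed index look-up for decoding
# variable-length codes. The tiny code table is invented.

# Rows: (leading_zeros_in_prefix, suffix_bits, decoded_symbol)
CODE_TABLE = [
    (0, "1",  "A"),
    (1, "1",  "B"),
    (2, "00", "C"),
    (2, "01", "D"),
]

def decode_sequential(zeros, suffix):
    """Scan every row until one matches: many table memory accesses."""
    for z, s, sym in CODE_TABLE:
        if z == zeros and s == suffix:
            return sym
    raise ValueError("invalid code")

# Build the index once; each decode is then a single keyed access.
INDEX = {(z, s): sym for z, s, sym in CODE_TABLE}

def decode_indexed(zeros, suffix):
    """Exploit the (prefix zeros, suffix) relationship as a direct key."""
    return INDEX[(zeros, suffix)]

assert decode_sequential(2, "01") == decode_indexed(2, "01") == "D"
```

    The indexed variant touches one entry per decode instead of scanning rows, which is the source of the memory-access (and hence power) saving the abstract describes.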

  16. The ChEBI reference database and ontology for biologically relevant chemistry: enhancements for 2013.

    PubMed

    Hastings, Janna; de Matos, Paula; Dekker, Adriano; Ennis, Marcus; Harsha, Bhavana; Kale, Namrata; Muthukrishnan, Venkatesh; Owen, Gareth; Turner, Steve; Williams, Mark; Steinbeck, Christoph

    2013-01-01

    ChEBI (http://www.ebi.ac.uk/chebi) is a database and ontology of chemical entities of biological interest. Over the past few years, ChEBI has continued to grow steadily in content, and has added several new features. In addition to incorporating all user-requested compounds, our annotation efforts have emphasized immunology, natural products and metabolites in many species. All database entries are now 'is_a' classified within the ontology, meaning that all of the chemicals are available to semantic reasoning tools that harness the classification hierarchy. We have completely aligned the ontology with the Open Biomedical Ontologies (OBO) Foundry-recommended upper level Basic Formal Ontology. Furthermore, we have aligned our chemical classification with the classification of chemical-involving processes in the Gene Ontology (GO), and as a result of this effort, the majority of chemical-involving processes in GO are now defined in terms of the ChEBI entities that participate in them. This effort necessitated incorporating many additional biologically relevant compounds. We have incorporated additional data types including reference citations, and the species and component for metabolites. Finally, our website and web services have had several enhancements, most notably the provision of a dynamic new interactive graph-based ontology visualization. PMID:23180789

  17. Definition of an Ontology Matching Algorithm for Context Integration in Smart Cities.

    PubMed

    Otero-Cerdeira, Lorena; Rodríguez-Martínez, Francisco J; Gómez-Rodríguez, Alma

    2014-01-01

    In this paper we describe a novel proposal in the field of smart cities: using an ontology matching algorithm to guarantee automatic information exchange between the agents and the smart city. A smart city is composed of different types of agents that behave as producers and/or consumers of the information in the smart city. In our proposal, the data from the context is obtained by sensor and device agents, while users interact with the smart city by means of user or system agents. The knowledge of each agent, as well as the smart city's knowledge, is semantically represented using different ontologies. To have an open city that is fully accessible to any agent, and therefore to provide enhanced services to the users, there is a need to ensure seamless communication between agents and the city, regardless of their inner knowledge representations, i.e., ontologies. To meet this goal we use ontology matching techniques; specifically, we have defined a new ontology matching algorithm called OntoPhil to be deployed within a smart city, which has never been done before. OntoPhil was tested on the benchmarks provided by the well-known Ontology Alignment Evaluation Initiative, and also compared to other matching algorithms, although these algorithms were not specifically designed for smart cities. Additionally, specific tests involving a smart city's ontology and different types of agents were conducted to validate the usefulness of OntoPhil in the smart city environment. PMID:25494353
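
    OntoPhil itself is considerably more sophisticated, but the general idea of label-based ontology matching can be sketched as follows, with invented class names standing in for a city ontology and an agent ontology:

```python
# Sketch of label-based ontology matching (not the OntoPhil algorithm):
# align classes whose normalized labels are sufficiently similar.
# All class names are invented.
from difflib import SequenceMatcher

def label_sim(a, b):
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def match(classes_a, classes_b, threshold=0.8):
    """Return (class_a, class_b, score) pairs above the threshold."""
    alignment = []
    for ca in classes_a:
        best = max(classes_b, key=lambda cb: label_sim(ca, cb))
        score = label_sim(ca, best)
        if score >= threshold:
            alignment.append((ca, best, round(score, 2)))
    return alignment

city = ["TemperatureSensor", "TrafficLight", "BusStop"]
agent = ["temperature_sensor", "traffic signal", "AirQuality"]

for pair in match(city, agent):
    print(pair)  # only the temperature-sensor pair clears the threshold
```

    Real matchers combine such lexical scores with structural and semantic evidence; the threshold here is an arbitrary illustration.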

  18. Definition of an Ontology Matching Algorithm for Context Integration in Smart Cities

    PubMed Central

    Otero-Cerdeira, Lorena; Rodríguez-Martínez, Francisco J.; Gómez-Rodríguez, Alma

    2014-01-01

    In this paper we describe a novel proposal in the field of smart cities: using an ontology matching algorithm to guarantee automatic information exchange between the agents and the smart city. A smart city is composed of different types of agents that behave as producers and/or consumers of the information in the smart city. In our proposal, the data from the context is obtained by sensor and device agents, while users interact with the smart city by means of user or system agents. The knowledge of each agent, as well as the smart city's knowledge, is semantically represented using different ontologies. To have an open city that is fully accessible to any agent, and therefore to provide enhanced services to the users, there is a need to ensure seamless communication between agents and the city, regardless of their inner knowledge representations, i.e., ontologies. To meet this goal we use ontology matching techniques; specifically, we have defined a new ontology matching algorithm called OntoPhil to be deployed within a smart city, which has never been done before. OntoPhil was tested on the benchmarks provided by the well-known Ontology Alignment Evaluation Initiative, and also compared to other matching algorithms, although these algorithms were not specifically designed for smart cities. Additionally, specific tests involving a smart city's ontology and different types of agents were conducted to validate the usefulness of OntoPhil in the smart city environment. PMID:25494353

  19. COBE: A Conjunctive Ontology Browser and Explorer for Visualizing SNOMED CT Fragments.

    PubMed

    Sun, Mengmeng; Zhu, Wei; Tao, Shiqiang; Cui, Licong; Zhang, Guo-Qiang

    2015-01-01

    Ontology search interfaces can benefit from the latest information retrieval advances. This paper introduces a Conjunctive Ontology Browser and Explorer (COBE) for searching and exploring SNOMED CT concepts and visualizing SNOMED CT fragments. COBE combines navigational exploration (NE) with direct lookup (DL) as two complementary modes for finding specific SNOMED CT concepts. The NE mode allows a user to interactively and incrementally narrow down (hence conjunctive) the search space by adding word stems, one at a time. Such word stems serve as attribute constraints, or "attributes" in Formal Concept Analysis, which allows the user to navigate to specific SNOMED CT concept clusters. The DL mode represents the common search mechanism by using a collection of keywords, as well as concept identifiers. With respect to the DL mode, evaluation against a manually created reference standard showed that COBE attains an example-based precision of 0.958, recall of 0.917, and F1 measure of 0.875. With respect to the NE mode, COBE leverages 28,371 concepts in non-lattice fragments to construct the stem cloud. With merely 9.37% of the total SNOMED CT stem cloud, our navigational exploration mode covers 98.97% of the entire concept collection. PMID:26958309
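
    The conjunctive narrowing that the NE mode describes can be sketched as set intersection over stem constraints: each selected stem filters the candidates to concepts whose labels contain it. The stems() helper and the concept labels below are invented for illustration; this is not COBE's implementation or SNOMED CT data.

```python
# Conjunctive narrowing: each selected word stem acts as an attribute
# constraint, and the candidate set shrinks to the concepts that
# satisfy all of them. Concepts and the crude stemmer are toys.

def stems(label):
    # Toy stemmer: lowercase words, strip a trailing plural "s".
    return {w.rstrip("s") for w in label.lower().split()}

concepts = {
    "C1": "Fracture of femur",
    "C2": "Fracture of tibia",
    "C3": "Femur structure",
}

def narrow(selected_stems, concepts):
    """Return ids of concepts whose labels contain every selected stem."""
    return {cid for cid, label in concepts.items()
            if selected_stems <= stems(label)}

print(narrow({"fracture"}, concepts))           # C1 and C2 remain
print(narrow({"fracture", "femur"}, concepts))  # narrowed to C1 only
```

    Adding one stem at a time never enlarges the result set, which is what makes the incremental exploration predictable for the user.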

  20. COBE: A Conjunctive Ontology Browser and Explorer for Visualizing SNOMED CT Fragments

    PubMed Central

    Sun, Mengmeng; Zhu, Wei; Tao, Shiqiang; Cui, Licong; Zhang, Guo-Qiang

    2015-01-01

    Ontology search interfaces can benefit from the latest information retrieval advances. This paper introduces a Conjunctive Ontology Browser and Explorer (COBE) for searching and exploring SNOMED CT concepts and visualizing SNOMED CT fragments. COBE combines navigational exploration (NE) with direct lookup (DL) as two complementary modes for finding specific SNOMED CT concepts. The NE mode allows a user to interactively and incrementally narrow down (hence conjunctive) the search space by adding word stems, one at a time. Such word stems serve as attribute constraints, or “attributes” in Formal Concept Analysis, which allows the user to navigate to specific SNOMED CT concept clusters. The DL mode represents the common search mechanism by using a collection of keywords, as well as concept identifiers. With respect to the DL mode, evaluation against a manually created reference standard showed that COBE attains an example-based precision of 0.958, recall of 0.917, and F1 measure of 0.875. With respect to the NE mode, COBE leverages 28,371 concepts in non-lattice fragments to construct the stem cloud. With merely 9.37% of the total SNOMED CT stem cloud, our navigational exploration mode covers 98.97% of the entire concept collection. PMID:26958309

  1. Ontology Reuse in Geoscience Semantic Applications

    NASA Astrophysics Data System (ADS)

    Mayernik, M. S.; Gross, M. B.; Daniels, M. D.; Rowan, L. R.; Stott, D.; Maull, K. E.; Khan, H.; Corson-Rikert, J.

    2015-12-01

    The tension between local ontology development and wider ontology connections is fundamental to the Semantic web. It is often unclear, however, what the key decision points should be for new semantic web applications in deciding when to reuse existing ontologies and when to develop original ontologies. In addition, with the growth of semantic web ontologies and applications, new semantic web applications can struggle to efficiently and effectively identify and select ontologies to reuse. This presentation will describe the ontology comparison, selection, and consolidation effort within the EarthCollab project. UCAR, Cornell University, and UNAVCO are collaborating on the EarthCollab project to use semantic web technologies to enable the discovery of the research output from a diverse array of projects. The EarthCollab project is using the VIVO Semantic web software suite to increase discoverability of research information and data related to the following two geoscience-based communities: (1) the Bering Sea Project, an interdisciplinary field program whose data archive is hosted by NCAR's Earth Observing Laboratory (EOL), and (2) diverse research projects informed by geodesy through the UNAVCO geodetic facility and consortium. This presentation will outline EarthCollab use cases and provide an overview of key ontologies being used, including the VIVO-Integrated Semantic Framework (VIVO-ISF), Global Change Information System (GCIS), and Data Catalog (DCAT) ontologies. We will discuss issues related to bringing these ontologies together to provide a robust ontological structure to support the EarthCollab use cases. It is rare that a single pre-existing ontology meets all of a new application's needs. New projects need to stitch ontologies together in ways that fit into the broader semantic web ecosystem.

  2. An Evolutionary Ontology Approach for Community-Based Competency Management

    NASA Astrophysics Data System (ADS)

    de Baer, Peter; Meersman, Robert; Zhao, Gang

    In this article we describe an evolutionary ontology approach that distinguishes between major ontology changes and minor ontology changes. We divide the community into three (possibly overlapping) groups, i.e. facilitators, contributors, and users. Facilitators are a selected group of domain experts who represent the intended community. These facilitators define the intended goals of the ontology and will be responsible for major ontology and ontology platform changes. A larger group of contributors consists of all participating domain experts. The contributors will carry out minor ontology changes, like instantiation of concepts and description of concept instances. Users of the ontology may explore the ontology content via the ontology platform and/or make use of the published ontology content in XML or HTML format. The approach makes use of goal- and group-specific user interfaces to guide the ontology evolution process. For the minor ontology changes, the approach relies on the wisdom of crowds.

  3. Revealing ontological commitments by magic.

    PubMed

    Griffiths, Thomas L

    2015-03-01

    Considering the appeal of different magical transformations exposes some systematic asymmetries. For example, it is more interesting to transform a vase into a rose than a rose into a vase. An experiment in which people judged how interesting they found different magic tricks showed that these asymmetries reflect the direction a transformation moves in an ontological hierarchy: transformations in the direction of animacy and intelligence are favored over the opposite. A second and third experiment demonstrated that judgments of the plausibility of machines that perform the same transformations do not show the same asymmetries, but judgments of the interestingness of such machines do. A formal argument relates this sense of interestingness to evidence for an alternative to our current physical theory, with magic tricks being a particularly pure source of such evidence. These results suggest that people's intuitions about magic tricks can reveal the ontological commitments that underlie human cognition. PMID:25490128

  4. Ontological Model for EHR Interoperability.

    PubMed

    Bouanani-Oukhaled, Zahra; Verdier, Christine; Dupuy-Chessa, Sophie; Fouladi, Karan; Breda, Laurent

    2016-01-01

    The main purpose of this paper is to design a data model for Electronic Health Records whose main goal is to enable cooperation of various heterogeneous health information systems. We investigate the interest of the meta-ontologies proposed in [1] by instantiating them with real data. We tested the feasibility of our model on real anonymous medical data provided by the Médibase Systèmes company. PMID:27350489

  5. Track Level Compensation Look-up Table Improves Antenna Pointing Precision

    NASA Technical Reports Server (NTRS)

    Gawronski, Wodek; Baher, Farrokh; Gama, Eric

    2006-01-01

    The pointing accuracy of the NASA Deep Space Network antennas is significantly impacted by the unevenness of the antenna azimuth track. The track unevenness causes repeatable antenna rotations, and repeatable pointing errors. The paper presents the improvement of the pointing accuracy of the antennas by implementing the track-level-compensation look-up table. The table consists of three-axis rotations of the alidade as a function of the azimuth position. The paper presents the development of the table, based on the measurements of the inclinometer tilts, processing the measurement data, and determination of the three-axis alidade rotations from the tilt data. It also presents the determination of the elevation and cross-elevation errors of the antenna as a function of the alidade rotations. The pointing accuracy of the antenna with and without the table was measured using various radio beam pointing techniques. The pointing error decreased when the table was used, from 1.5 mdeg to 1.2 mdeg in elevation, and from 20.4 mdeg to 2.2 mdeg in cross-elevation.
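
    A table of this kind can be sketched as a periodic one-dimensional lookup: three alidade rotations tabulated against azimuth and linearly interpolated, wrapping at 360 degrees. The grid and rotation values below are invented for illustration; they are not DSN measurements.

```python
# Periodic lookup with linear interpolation: three alidade rotations
# (values in mdeg) tabulated against azimuth (deg). All numbers are
# invented for illustration.
import bisect

azimuths = [0.0, 90.0, 180.0, 270.0]
rotations = [
    (0.0, 1.0, 0.5),
    (2.0, 0.0, 1.0),
    (1.0, -1.0, 0.0),
    (-1.0, 0.5, 0.2),
]

def lookup(az):
    """Interpolate the three-axis alidade rotation at azimuth az (deg)."""
    az %= 360.0
    i = bisect.bisect_right(azimuths, az) - 1
    j = (i + 1) % len(azimuths)                  # wrap past the last entry
    span = (azimuths[j] - azimuths[i]) % 360.0 or 360.0
    t = ((az - azimuths[i]) % 360.0) / span
    return tuple(a + t * (b - a) for a, b in zip(rotations[i], rotations[j]))

print(lookup(45.0))  # halfway between the 0-degree and 90-degree entries
```

    The interpolated rotations would then feed the elevation and cross-elevation corrections; the wrap-around keeps the table continuous across the 360°/0° boundary.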

  6. Modeling high-energy cosmic ray induced terrestrial muon flux: A lookup table

    NASA Astrophysics Data System (ADS)

    Atri, Dimitra; Melott, Adrian L.

    2011-06-01

    On geological timescales, the Earth is likely to be exposed to an increased flux of high-energy cosmic rays (HECRs) from astrophysical sources such as nearby supernovae, gamma-ray bursts or by galactic shocks. Typical cosmic ray energies may be much higher than the ≤1 GeV flux which normally dominates. These high-energy particles strike the Earth's atmosphere initiating an extensive air shower. As the air shower propagates deeper, it ionizes the atmosphere by producing charged secondary particles. Secondary particles such as muons and thermal neutrons produced as a result of nuclear interactions are able to reach the ground, enhancing the radiation dose. Muons contribute 85% to the radiation dose from cosmic rays. This enhanced dose could be potentially harmful to the biosphere. This mechanism has been discussed extensively in the literature but has never been quantified. Here, we have developed a lookup table that can be used to quantify this effect by modeling terrestrial muon flux from any arbitrary cosmic ray spectrum with 10 GeV to 1 PeV primaries. This will enable us to compute the radiation dose on terrestrial planetary surfaces from a number of astrophysical sources.
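
    Using such a lookup table amounts to folding an arbitrary primary spectrum with a tabulated per-energy muon yield: total ground-level flux is approximated by summing spectrum times yield times bin width over the tabulated energies. The yield values and the power-law test spectrum below are invented numbers, not the paper's table.

```python
# Folding an arbitrary primary spectrum with a per-energy muon yield
# table. Tabulated energies are log-spaced; yields and the test
# spectrum are invented for illustration.
import math

energies = [10.0 ** k for k in range(1, 6)]   # 10 GeV .. 1e5 GeV primaries
muon_yield = [0.1, 0.9, 7.0, 50.0, 320.0]     # muons per primary (made up)

def muon_flux(spectrum):
    """Approximate ground-level muon flux for a spectrum (primaries/GeV)."""
    total = 0.0
    for e, y in zip(energies, muon_yield):
        lo, hi = e / math.sqrt(10.0), e * math.sqrt(10.0)  # log-spaced bin
        total += spectrum(e) * y * (hi - lo)
    return total

print(muon_flux(lambda e: e ** -2.7))  # e.g. a power-law primary spectrum
```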

  7. Lookup Tables for Predicting CHF and Film-Boiling Heat Transfer: Past, Present, and Future

    SciTech Connect

    Groeneveld, D.C.; Leung, L.K. H.; Guo, Y.; Vasic, A.; El Nakla, M.; Peng, S.W.; Yang, J.; Cheng, S.C.

    2005-10-15

    Lookup tables (LUTs) have been used widely for the prediction of critical heat flux (CHF) and film-boiling heat transfer for water-cooled tubes. LUTs are basically normalized data banks. They eliminate the need to choose between the many different CHF and film-boiling heat transfer prediction methods available. The LUTs have many advantages; e.g., (a) they are simple to use, (b) there is no iteration required, (c) they have a wide range of applications, (d) they may be applied to nonaqueous fluids using fluid-to-fluid modeling relationships, and (e) they are based on a very large database. Concerns associated with the use of LUTs include (a) there are fluctuations in the value of the CHF or film-boiling heat transfer coefficient (HTC) with pressure, mass flux, and quality, (b) there are large variations in the CHF or the film-boiling HTC between adjacent table entries, and (c) there is a lack or scarcity of data at certain flow conditions. Work on the LUTs is continuing. This will resolve the aforementioned concerns and improve the LUT prediction capability. This work concentrates on better smoothing of the LUT entries, increasing the database, and improving models at conditions where data are sparse or absent.
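
    The "better smoothing of the LUT entries" mentioned above can be illustrated minimally with a 3-point moving average along one axis of a table, which damps the entry-to-entry fluctuations the abstract lists as a concern. The CHF values below are invented for illustration.

```python
# Damping entry-to-entry fluctuations in a lookup-table row with a
# 3-point moving average; endpoints are left unchanged. Values are
# invented for illustration.

def smooth(row):
    """Return a copy of row smoothed with a 3-point moving average."""
    out = list(row)
    for i in range(1, len(row) - 1):
        out[i] = (row[i - 1] + row[i] + row[i + 1]) / 3.0
    return out

chf_row = [4.0, 7.0, 4.0, 7.0, 4.0]  # fluctuating table entries
print(smooth(chf_row))  # -> [4.0, 5.0, 6.0, 5.0, 4.0]
```

    Production tables would smooth along each of pressure, mass flux, and quality while preserving physical trends, but the principle is the same.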

  8. Legal Ontologies and Loopholes in the Law

    NASA Astrophysics Data System (ADS)

    Lovrenčić, Sandra; Tomac, Ivorka Jurenec; Mavrek, Blaženka

    The use of ontologies is today widespread across many different domains. With the development of the Semantic Web, the main effort today is to make them available across the Internet community for the purpose of reuse. The legal domain has also been explored with respect to ontologies, both at the general and at the sub-domain level. This paper explores problems of formal ontology development regarding areas in specific legislation acts that are understated or unequally described across the act, popularly called loopholes in the law. An example of such a problematic act is shown. For ontology implementation, the well-known tool Protégé is used. The ontology is built in a formal way, using PAL (Protégé Axiom Language) to express constraints where needed. The ontology is evaluated using known evaluation methods.

  9. Anatomy Ontology Matching Using Markov Logic Networks

    PubMed Central

    Li, Chunhua; Zhao, Pengpeng; Wu, Jian; Cui, Zhiming

    2016-01-01

    The anatomy of model species is described in ontologies, which are used to standardize the annotations of experimental data, such as gene expression patterns. To compare such data between species, we need to establish relationships between ontologies describing different species. Ontology matching is one kind of solution for finding semantic correspondences between entities of different ontologies. Markov logic networks, which unify probabilistic graphical models and first-order logic, provide an excellent framework for ontology matching. We combine several different matching strategies through first-order logic formulas according to the structure of anatomy ontologies. Experiments on the adult mouse anatomy and the human anatomy have demonstrated the effectiveness of the proposed approach in terms of the quality of the resulting alignment. PMID:27382498
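
    Combining matching strategies through weighted formulas can be sketched, in a Markov-logic spirit, as soft rules that each contribute a weighted score, with a pair aligned when the total clears a threshold. The weights, rules and anatomy labels below are illustrative only, not the paper's formulas.

```python
# Weighted soft rules for ontology matching: a lexical rule compares
# entity labels, a structural rule compares their parents' labels.
# Weights, rules and labels are illustrative only.
from difflib import SequenceMatcher

def string_sim(a, b):
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def match_score(e1, e2, parents1, parents2, w_label=2.0, w_parent=1.0):
    """Weighted combination of a lexical rule and a structural rule."""
    score = w_label * string_sim(e1, e2)
    # Structural rule: reward pairs whose parent labels also look alike.
    if parents1.get(e1) and parents2.get(e2):
        score += w_parent * string_sim(parents1[e1], parents2[e2])
    return score

mouse_parents = {"hind limb": "limb"}
human_parents = {"lower limb": "limb"}
print(match_score("hind limb", "lower limb", mouse_parents, human_parents))
```

    A real Markov logic network would additionally learn the weights and perform joint inference over all candidate correspondences rather than scoring pairs independently.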

  10. A Monte Carlo based lookup table for spectrum analysis of turbid media in the reflectance probe regime

    SciTech Connect

    Xiang Wen; Xiewei Zhong; Tingting Yu; Dan Zhu

    2014-07-31

    Fibre-optic diffuse reflectance spectroscopy offers a method for characterising phantoms of biotissue with specified optical properties. For a commercial reflectance probe (six source fibres surrounding a central collection fibre with an inter-fibre spacing of 480 μm; R400-7, Ocean Optics, USA) we have constructed a Monte Carlo based lookup table to create a function called getR(μa, μ′s), where μa is the absorption coefficient and μ′s is the reduced scattering coefficient. Experimental measurements of reflectance from homogeneous calibrated phantoms with given optical properties are compared with the predicted reflectance from the lookup table. The deviation between experiment and prediction is on average 12.1%. (laser biophotonics)
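
    The getR(μa, μ′s) idea can be sketched as bilinear interpolation in a table that Monte Carlo simulation fills offline on a grid of absorption and reduced scattering coefficients. All grid values below are invented, not the paper's Monte Carlo results.

```python
# Bilinear interpolation in a precomputed reflectance table: Monte
# Carlo fills the grid offline, queries interpolate at run time.
# All values are invented for illustration.

mu_a_grid = [0.0, 1.0, 2.0]      # absorption coefficient (1/cm)
mu_s_grid = [5.0, 10.0, 15.0]    # reduced scattering coefficient (1/cm)
# reflectance[i][j] corresponds to (mu_a_grid[i], mu_s_grid[j])
reflectance = [
    [0.40, 0.50, 0.55],
    [0.30, 0.38, 0.42],
    [0.22, 0.28, 0.31],
]

def get_r(mu_a, mu_s):
    """Bilinearly interpolate reflectance at (mu_a, mu_s)."""
    def locate(grid, x):
        # Index of the cell containing x, clamped to the grid.
        i = max(0, min(len(grid) - 2, sum(1 for g in grid if g <= x) - 1))
        return i, (x - grid[i]) / (grid[i + 1] - grid[i])
    i, u = locate(mu_a_grid, mu_a)
    j, v = locate(mu_s_grid, mu_s)
    r00, r01 = reflectance[i][j], reflectance[i][j + 1]
    r10, r11 = reflectance[i + 1][j], reflectance[i + 1][j + 1]
    return (1-u)*(1-v)*r00 + (1-u)*v*r01 + u*(1-v)*r10 + u*v*r11

print(get_r(0.5, 7.5))  # query in the interior of the lower-left cell
```

    Fitting measured spectra then reduces to inverting this function, e.g. by least-squares search over (μa, μ′s).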

  11. Toward cognitivist ontologies: on the role of selective attention for upper ontologies.

    PubMed

    Carstensen, Kai-Uwe

    2011-11-01

    Ontologies play a key role in modern information society although there are still many fundamental questions regarding their structure to be answered. In this paper, some of these are presented, and it is argued that they require a shift from realist to cognitivist ontologies, with ontology design crucially depending on taking both cognitive and linguistic aspects into consideration. A detailed discussion of central parts of a proposed cognitivist upper ontology based on qualitative representations of selective attention is presented. PMID:21523446

  12. A Marketplace for Ontologies and Ontology-Based Tools and Applications in the Life Sciences

    SciTech Connect

    McEntire, R; Goble, C; Stevens, R; Neumann, E; Matuszek, P; Critchlow, T; Tarczy-Hornoch, P

    2005-06-30

    This paper describes a strategy for the development of ontologies in the life sciences, tools to support the creation and use of those ontologies, and a framework whereby these ontologies can support the development of commercial applications within the field. At the core of these efforts is the need for an organization that will provide a focus for ontology work that will engage researchers as well as drive forward the commercial aspects of this effort.

  13. Efficient table lookup without inverse square roots for calculation of pair wise atomic interactions in classical simulations.

    PubMed

    Nilsson, Lennart

    2009-07-15

    A major bottleneck in classical atomistic simulations of biomolecular systems is the calculation of the pairwise nonbonded (Coulomb, van der Waals) interactions. This remains an issue even when methods are used (e.g., lattice summation or spherical cutoffs) in which the number of interactions is reduced from O(N^2) to O(N log N) or O(N). The interaction forces and energies can either be calculated directly each time they are needed or retrieved using precomputed values in a lookup table; the choice between direct calculation and table lookup methods depends on the characteristics of the system studied (total number of particles and the number of particle kinds) as well as the hardware used (CPU speed, size and speed of cache, and main memory). A recently developed lookup table code, implemented in portable and easily maintained FORTRAN 95 in the CHARMM program (www.charmm.org), achieves a 1.5- to 2-fold speedup compared with standard calculations using highly optimized FORTRAN code in real molecular dynamics simulations for a wide range of molecular system sizes. No approximations other than the finite resolution of the tables are introduced, and linear interpolation in a table with the relatively modest density of 100 points/Å² yields the same accuracy as the standard double precision calculations. For proteins in explicit water a less dense table (10 points/Å²) is 10-20% faster than using the larger table, and only slightly less accurate. The lookup table is even faster than hand-coded assembler routines in most cases, mainly due to a significantly smaller operation count inside the inner loop. PMID:19072764
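
    The trick of avoiding inverse square roots follows from indexing the table by the squared interparticle distance r², which pair lists already provide, so no square root is ever taken. A minimal sketch, using a Lennard-Jones 12-6 potential purely as an illustration (the table range and density are arbitrary choices, not the CHARMM implementation):

```python
# r^2-indexed table lookup: tabulating the potential against the
# squared distance avoids ever taking a square root. Lennard-Jones
# 12-6 is used purely as an illustration; table size is arbitrary.

R2_MIN, R2_MAX, N = 0.5, 9.0, 2048
DR2 = (R2_MAX - R2_MIN) / (N - 1)

def lj(r2, eps=1.0, sigma=1.0):
    """Direct Lennard-Jones evaluation from the squared distance."""
    s6 = (sigma * sigma / r2) ** 3
    return 4.0 * eps * (s6 * s6 - s6)

table = [lj(R2_MIN + k * DR2) for k in range(N)]

def energy(r2):
    """Linear interpolation in the r^2-indexed table (no sqrt needed)."""
    x = (r2 - R2_MIN) / DR2
    k = min(int(x), N - 2)
    t = x - k
    return (1.0 - t) * table[k] + t * table[k + 1]

r2 = 1.2 ** 2
print(energy(r2), lj(r2))  # tabulated vs. direct evaluation
```

    At this table density the interpolation error is far below typical force-field accuracy, mirroring the paper's observation that modest table resolution already matches double precision results.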

  14. Vaccine and Drug Ontology Studies (VDOS 2014).

    PubMed

    Tao, Cui; He, Yongqun; Arabandi, Sivaram

    2016-01-01

    The "Vaccine and Drug Ontology Studies" (VDOS) international workshop series focuses on vaccine- and drug-related ontology modeling and applications. Drugs and vaccines have been critical to prevent and treat human and animal diseases. Work in both (drugs and vaccines) areas is closely related - from preclinical research and development to manufacturing, clinical trials, government approval and regulation, and post-licensure usage surveillance and monitoring. Over the last decade, tremendous efforts have been made in the biomedical ontology community to ontologically represent various areas associated with vaccines and drugs - extending existing clinical terminology systems such as SNOMED, RxNorm, NDF-RT, and MedDRA, developing new models such as the Vaccine Ontology (VO) and Ontology of Adverse Events (OAE), vernacular medical terminologies such as the Consumer Health Vocabulary (CHV). The VDOS workshop series provides a platform for discussing innovative solutions as well as the challenges in the development and applications of biomedical ontologies for representing and analyzing drugs and vaccines, their administration, host immune responses, adverse events, and other related topics. The five full-length papers included in this 2014 thematic issue focus on two main themes: (i) General vaccine/drug-related ontology development and exploration, and (ii) Interaction and network-related ontology studies. PMID:26918107

  15. FYPO: the fission yeast phenotype ontology

    PubMed Central

    Harris, Midori A.; Lock, Antonia; Bähler, Jürg; Oliver, Stephen G.; Wood, Valerie

    2013-01-01

    Motivation: To provide consistent computable descriptions of phenotype data, PomBase is developing a formal ontology of phenotypes observed in fission yeast. Results: The fission yeast phenotype ontology (FYPO) is a modular ontology that uses several existing ontologies from the open biological and biomedical ontologies (OBO) collection as building blocks, including the phenotypic quality ontology PATO, the Gene Ontology and Chemical Entities of Biological Interest. Modular ontology development facilitates partially automated effective organization of detailed phenotype descriptions with complex relationships to each other and to underlying biological phenomena. As a result, FYPO supports sophisticated querying, computational analysis and comparison between different experiments and even between species. Availability: FYPO releases are available from the Subversion repository at the PomBase SourceForge project page (https://sourceforge.net/p/pombase/code/HEAD/tree/phenotype_ontology/). The current version of FYPO is also available on the OBO Foundry Web site (http://obofoundry.org/). Contact: mah79@cam.ac.uk or vw253@cam.ac.uk PMID:23658422

  16. Predicting the Extension of Biomedical Ontologies

    PubMed Central

    Pesquita, Catia; Couto, Francisco M.

    2012-01-01

    Developing and extending a biomedical ontology is a very demanding task that can never be considered complete given our ever-evolving understanding of the life sciences. Extension in particular can benefit from the automation of some of its steps, thus releasing experts to focus on harder tasks. Here we present a strategy to support the automation of change capturing within ontology extension where the need for new concepts or relations is identified. Our strategy is based on predicting areas of an ontology that will undergo extension in a future version by applying supervised learning over features of previous ontology versions. We used the Gene Ontology as our test bed and obtained encouraging results with average f-measure reaching 0.79 for a subset of biological process terms. Our strategy was also able to outperform state of the art change capturing methods. In addition we have identified several issues concerning prediction of ontology evolution, and have delineated a general framework for ontology extension prediction. Our strategy can be applied to any biomedical ontology with versioning, to help focus either manual or semi-automated extension methods on areas of the ontology that need extension. PMID:23028267

  17. Scientific Digital Libraries, Interoperability, and Ontologies

    NASA Technical Reports Server (NTRS)

    Hughes, J. Steven; Crichton, Daniel J.; Mattmann, Chris A.

    2009-01-01

    Scientific digital libraries serve complex and evolving research communities. Justifications for the development of scientific digital libraries include the desire to preserve science data and the promises of information interconnectedness, correlative science, and system interoperability. Shared ontologies are fundamental to fulfilling these promises. We present a tool framework, some informal principles, and several case studies where shared ontologies are used to guide the implementation of scientific digital libraries. The tool framework, based on an ontology modeling tool, was configured to develop, manage, and keep shared ontologies relevant within changing domains and to promote the interoperability, interconnectedness, and correlation desired by scientists.

  18. How Ontologies are Made: Studying the Hidden Social Dynamics Behind Collaborative Ontology Engineering Projects.

    PubMed

    Strohmaier, Markus; Walk, Simon; Pöschko, Jan; Lamprecht, Daniel; Tudorache, Tania; Nyulas, Csongor; Musen, Mark A; Noy, Natalya F

    2013-05-01

    Traditionally, evaluation methods in the field of semantic technologies have focused on the end result of ontology engineering efforts, mainly, on evaluating ontologies and their corresponding qualities and characteristics. This focus has led to the development of a whole arsenal of ontology-evaluation techniques that investigate the quality of ontologies as a product. In this paper, we aim to shed light on the process of ontology engineering construction by introducing and applying a set of measures to analyze hidden social dynamics. We argue that especially for ontologies which are constructed collaboratively, understanding the social processes that have led to its construction is critical not only in understanding but consequently also in evaluating the ontology. With the work presented in this paper, we aim to expose the texture of collaborative ontology engineering processes that is otherwise left invisible. Using historical change-log data, we unveil qualitative differences and commonalities between different collaborative ontology engineering projects. Explaining and understanding these differences will help us to better comprehend the role and importance of social factors in collaborative ontology engineering projects. We hope that our analysis will spur a new line of evaluation techniques that view ontologies not as the static result of deliberations among domain experts, but as a dynamic, collaborative and iterative process that needs to be understood, evaluated and managed in itself. We believe that advances in this direction would help our community to expand the existing arsenal of ontology evaluation techniques towards more holistic approaches. PMID:24311994

  19. How Ontologies are Made: Studying the Hidden Social Dynamics Behind Collaborative Ontology Engineering Projects

    PubMed Central

    Strohmaier, Markus; Walk, Simon; Pöschko, Jan; Lamprecht, Daniel; Tudorache, Tania; Nyulas, Csongor; Musen, Mark A.; Noy, Natalya F.

    2013-01-01

    Traditionally, evaluation methods in the field of semantic technologies have focused on the end result of ontology engineering efforts, mainly, on evaluating ontologies and their corresponding qualities and characteristics. This focus has led to the development of a whole arsenal of ontology-evaluation techniques that investigate the quality of ontologies as a product. In this paper, we aim to shed light on the process of ontology engineering construction by introducing and applying a set of measures to analyze hidden social dynamics. We argue that especially for ontologies which are constructed collaboratively, understanding the social processes that have led to its construction is critical not only in understanding but consequently also in evaluating the ontology. With the work presented in this paper, we aim to expose the texture of collaborative ontology engineering processes that is otherwise left invisible. Using historical change-log data, we unveil qualitative differences and commonalities between different collaborative ontology engineering projects. Explaining and understanding these differences will help us to better comprehend the role and importance of social factors in collaborative ontology engineering projects. We hope that our analysis will spur a new line of evaluation techniques that view ontologies not as the static result of deliberations among domain experts, but as a dynamic, collaborative and iterative process that needs to be understood, evaluated and managed in itself. We believe that advances in this direction would help our community to expand the existing arsenal of ontology evaluation techniques towards more holistic approaches. PMID:24311994

  20. Where to Publish and Find Ontologies? A Survey of Ontology Libraries

    PubMed Central

    d'Aquin, Mathieu; Noy, Natalya F.

    2011-01-01

    One of the key promises of the Semantic Web is its potential to enable and facilitate data interoperability. The ability of data providers and application developers to share and reuse ontologies is a critical component of this data interoperability: if different applications and data sources use the same set of well defined terms for describing their domain and data, it will be much easier for them to “talk” to one another. Ontology libraries are the systems that collect ontologies from different sources and facilitate the tasks of finding, exploring, and using these ontologies. Thus ontology libraries can serve as a link in enabling diverse users and applications to discover, evaluate, use, and publish ontologies. In this paper, we provide a survey of the growing—and surprisingly diverse—landscape of ontology libraries. We highlight how the varying scope and intended use of the libraries affects their features, content, and potential exploitation in applications. From reviewing eleven ontology libraries, we identify a core set of questions that ontology practitioners and users should consider in choosing an ontology library for finding ontologies or publishing their own. We also discuss the research challenges that emerge from this survey, for the developers of ontology libraries to address. PMID:22408576

  1. Towards Ontology-Driven Information Systems: Guidelines to the Creation of New Methodologies to Build Ontologies

    ERIC Educational Resources Information Center

    Soares, Andrey

    2009-01-01

    This research targeted the area of Ontology-Driven Information Systems, where ontology plays a central role both at development time and at run time of Information Systems (IS). In particular, the research focused on the process of building domain ontologies for IS modeling. The motivation behind the research was the fact that researchers have…

  2. Surreptitious, Evolving and Participative Ontology Development: An End-User Oriented Ontology Development Methodology

    ERIC Educational Resources Information Center

    Bachore, Zelalem

    2012-01-01

    Ontology not only is considered to be the backbone of the semantic web but also plays a significant role in distributed and heterogeneous information systems. However, ontology still faces limited application and adoption to date. One of the major problems is that prevailing engineering-oriented methodologies for building ontologies do not…

  3. Research on geo-ontology construction based on spatial affairs

    NASA Astrophysics Data System (ADS)

    Li, Bin; Liu, Jiping; Shi, Lihong

    2008-12-01

    Geo-ontology, a kind of domain ontology, abstracts the knowledge, information and data of geographical science into a series of single objects or entities with a commonly shared cognition. These objects or entities can compose a specific system in a certain way, can be treated at the conceptual level, and can be given specific definitions; ultimately, the results can be expressed in some formal manner. The main aim of constructing a geo-ontology is to capture the knowledge of the geographical domain, to provide the commonly accepted vocabularies of the domain, and to give definite, formal definitions of these geographical vocabularies and of the mutual relations between them at different levels of the hierarchy. Consequently, a modeling tool for the conceptual model describing geographic information systems at the levels of semantics and knowledge can be provided, helping to solve the semantic problems of information exchange in geographical space and giving it comparatively better accuracy, maturity, universality, and other such characteristics. Experiments have been made to validate the geo-ontology. In this study, a flood-oriented geo-ontology was described and constructed using a method based on geo-spatial affairs, to serve government departments at all levels in dealing with floods. Intelligent retrieval and services based on the geo-ontology of disasters are its main functions, as distinct from the traditional manner of using keywords. For instance, when a flood happens in a certain city, the function of handling disaster information based on the geo-ontology can be provided: the relevant officers can input words carrying semantic labels, such as "city name, flood", to get the information they need when browsing different websites.
The information, including basic geographical information and flood distributing

  4. Ontodog: a web-based ontology community view generation tool.

    PubMed

    Zheng, Jie; Xiang, Zuoshuang; Stoeckert, Christian J; He, Yongqun

    2014-05-01

    Biomedical ontologies are often very large and complex. Only a subset of the ontology may be needed for a specified application or community. For ontology end users, it is desirable to have community-based labels rather than the labels generated by ontology developers. Ontodog is a web-based system that can generate an ontology subset based on Excel input, and support generation of an ontology community view, which is defined as the whole or a subset of the source ontology with user-specified annotations including user-preferred labels. Ontodog allows users to easily generate community views with minimal ontology knowledge and no programming skills or installation required. Currently >100 ontologies including all OBO Foundry ontologies are available to generate the views based on user needs. We demonstrate the application of Ontodog for the generation of community views using the Ontology for Biomedical Investigations as the source ontology. PMID:24413522

  5. Federated ontology-based queries over cancer data

    PubMed Central

    2012-01-01

    Background Personalised medicine provides patients with treatments that are specific to their genetic profiles. It requires efficient data sharing of disparate data types across a variety of scientific disciplines, such as molecular biology, pathology, radiology and clinical practice. Personalised medicine aims to offer the safest and most effective therapeutic strategy based on the gene variations of each subject. In particular, this is valid in oncology, where knowledge about genetic mutations has already led to new therapies. Current molecular biology techniques (microarrays, proteomics, epigenetic technology and improved DNA sequencing technology) enable better characterisation of cancer tumours. The vast amounts of data, however, coupled with the use of different terms - or semantic heterogeneity - in each discipline makes the retrieval and integration of information difficult. Results Existing software infrastructures for data-sharing in the cancer domain, such as caGrid, support access to distributed information. caGrid follows a service-oriented model-driven architecture. Each data source in caGrid is associated with metadata at increasing levels of abstraction, including syntactic, structural, reference and domain metadata. The domain metadata consists of ontology-based annotations associated with the structural information of each data source. However, caGrid's current querying functionality is given at the structural metadata level, without capitalising on the ontology-based annotations. This paper presents the design of and theoretical foundations for distributed ontology-based queries over cancer research data. Concept-based queries are reformulated to the target query language, where join conditions between multiple data sources are found by exploiting the semantic annotations. The system has been implemented, as a proof of concept, over the caGrid infrastructure. The approach is applicable to other model-driven architectures. A graphical user

  6. A method of extracting ontology module using concept relations for sharing knowledge in mobile cloud computing environment.

    PubMed

    Lee, Keonsoo; Rho, Seungmin; Lee, Seok-Won

    2014-01-01

In a mobile cloud computing environment, the cooperation of distributed computing objects is one of the most important requirements for providing successful cloud services. To satisfy this requirement, all the members employed in the cooperation group need to share knowledge for mutual understanding. Even though ontology can be the right tool for this goal, there are several issues in building the right ontology. As the cost and complexity of managing knowledge increase with the scale of the knowledge, reducing the size of the ontology is one of the critical issues. In this paper, we propose a method of extracting an ontology module to increase the utility of knowledge. For a given signature, this method extracts the ontology module that is semantically self-contained to fulfill the needs of the service, by considering the syntactic structure and semantic relations of concepts. By employing this module instead of the original ontology, the cooperation of computing objects can be performed with less computing load and complexity. In particular, when multiple external ontologies need to be combined for more complex services, this method can be used to optimize the size of the shared knowledge. PMID:25250374
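The signature-driven extraction described in this abstract can be illustrated with a minimal sketch (this is not the authors' algorithm; the toy ontology, concept names, and function name are hypothetical): starting from the signature concepts, collect every concept they transitively depend on, and prune the rest.

```python
from collections import deque

def extract_module(relations, signature):
    """Extract the sub-ontology reachable from the signature concepts.

    relations: dict mapping each concept to the concepts its definition
    depends on (subclass-of, part-of, etc., flattened into one edge set).
    signature: iterable of concept names the service actually needs.

    Returns the set of concepts in the module: the signature plus every
    concept transitively required to define it.
    """
    module = set()
    queue = deque(signature)
    while queue:
        concept = queue.popleft()
        if concept in module:
            continue
        module.add(concept)
        queue.extend(relations.get(concept, ()))
    return module

# Toy ontology: each concept lists the concepts its definition depends on.
ontology = {
    "Smartphone": ["Device", "Battery"],
    "Device": ["Artifact"],
    "Battery": ["Artifact"],
    "Artifact": [],
    "Cloud": ["Service"],   # unrelated branch, should be excluded
    "Service": [],
}

module = extract_module(ontology, {"Smartphone"})
print(sorted(module))  # 'Cloud' and 'Service' are pruned
```

A real module extractor would also have to preserve logical entailments over the signature (as in locality-based module extraction), not just graph reachability; the sketch shows only the size-reduction idea.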

  7. XOA: Web-Enabled Cross-Ontological Analytics

    SciTech Connect

    Riensche, Roderick M.; Baddeley, Bob; Sanfilippo, Antonio P.; Posse, Christian; Gopalan, Banu

    2007-07-09

    The paper being submitted (as an "extended abstract" prior to conference acceptance) provides a technical description of our proof-of-concept prototype for the XOA method. Abstract: To address meaningful questions, scientists need to relate information across diverse classification schemes such as ontologies, terminologies and thesauri. These resources typically address a single knowledge domain at a time and are not cross-indexed. Information that is germane to the same object may therefore remain unlinked with consequent loss of knowledge discovery across disciplines and even sub-domains of the same discipline. We propose to address these problems by fostering semantic interoperability through the development of ontology alignment web services capable of enabling cross-scale knowledge discovery, and demonstrate a specific application of such an approach to the biomedical domain.

  8. Ontology driven health information systems architectures enable pHealth for empowered patients.

    PubMed

    Blobel, Bernd

    2011-02-01

    The paradigm shift from organization-centered to managed care and on to personal health settings increases specialization and distribution of actors and services related to the health of patients or even citizens before becoming patients. As a consequence, extended communication and cooperation is required between all principals involved in health services such as persons, organizations, devices, systems, applications, and components. Personal health (pHealth) environments range over many disciplines, where domain experts present their knowledge by using domain-specific terminologies and ontologies. Therefore, the mapping of domain ontologies is inevitable for ensuring interoperability. The paper introduces the care paradigms and the related requirements as well as an architectural approach for meeting the business objectives. Furthermore, it discusses some theoretical challenges and practical examples of ontologies, concept and knowledge representations, starting general and then focusing on security and privacy related services. The requirements and solutions for empowering the patient or the citizen before becoming a patient are especially emphasized. PMID:21036660

  9. BioPortal: An Open-Source Community-Based Ontology Repository

    NASA Astrophysics Data System (ADS)

    Noy, N.; NCBO Team

    2011-12-01

Advances in computing power and new computational techniques have changed the way researchers approach science. In many fields, one of the most fruitful approaches has been to use semantically aware software to break down the barriers among disparate domains, systems, data sources, and technologies. Such software facilitates data aggregation, improves search, and ultimately allows the detection of new associations that were previously not detectable. Achieving these analyses requires software systems that take advantage of the semantics and that can intelligently negotiate domains and knowledge sources, identifying commonality across systems that use different and conflicting vocabularies, while understanding apparent differences that may be concealed by the use of superficially similar terms. An ontology, a semantically rich vocabulary for a domain of interest, is the cornerstone of software for bridging systems, domains, and resources. However, as ontologies become the foundation of all semantic technologies in e-science, we must develop an infrastructure for sharing ontologies, finding and evaluating them, integrating and mapping among them, and using ontologies in applications that help scientists process their data. BioPortal [1] is an open-source, on-line, community-based ontology repository that has been used as a critical component of semantic infrastructure in several domains, including biomedicine and bio-geochemical data. BioPortal uses social approaches in the Web 2.0 style to bring structure and order to the collection of biomedical ontologies. It enables users to provide and discuss a wide array of knowledge components, from submitting the ontologies themselves, to commenting on and discussing classes in the ontologies, to reviewing ontologies in the context of their own ontology-based projects, to creating mappings between overlapping ontologies and discussing and critiquing the mappings. Critically, it provides web-service access to all its

  10. A Gross Anatomy Ontology for Hymenoptera

    PubMed Central

    Yoder, Matthew J.; Mikó, István; Seltmann, Katja C.; Bertone, Matthew A.; Deans, Andrew R.

    2010-01-01

    Hymenoptera is an extraordinarily diverse lineage, both in terms of species numbers and morphotypes, that includes sawflies, bees, wasps, and ants. These organisms serve critical roles as herbivores, predators, parasitoids, and pollinators, with several species functioning as models for agricultural, behavioral, and genomic research. The collective anatomical knowledge of these insects, however, has been described or referred to by labels derived from numerous, partially overlapping lexicons. The resulting corpus of information—millions of statements about hymenopteran phenotypes—remains inaccessible due to language discrepancies. The Hymenoptera Anatomy Ontology (HAO) was developed to surmount this challenge and to aid future communication related to hymenopteran anatomy. The HAO was built using newly developed interfaces within mx, a Web-based, open source software package, that enables collaborators to simultaneously contribute to an ontology. Over twenty people contributed to the development of this ontology by adding terms, genus differentia, references, images, relationships, and annotations. The database interface returns an Open Biomedical Ontology (OBO) formatted version of the ontology and includes mechanisms for extracting candidate data and for publishing a searchable ontology to the Web. The application tools are subject-agnostic and may be used by others initiating and developing ontologies. The present core HAO data constitute 2,111 concepts, 6,977 terms (labels for concepts), 3,152 relations, 4,361 sensus (links between terms, concepts, and references) and over 6,000 text and graphical annotations. The HAO is rooted with the Common Anatomy Reference Ontology (CARO), in order to facilitate interoperability with and future alignment to other anatomy ontologies, and is available through the OBO Foundry ontology repository and BioPortal. The HAO provides a foundation through which connections between genomic, evolutionary developmental biology

  11. Issues in learning an ontology from text

    PubMed Central

    Brewster, Christopher; Jupp, Simon; Luciano, Joanne; Shotton, David; Stevens, Robert D; Zhang, Ziqi

    2009-01-01

Ontology construction for any domain is a labour-intensive and complex process. Any methodology that can reduce the cost and increase efficiency has the potential to make a major impact in the life sciences. This paper describes an experiment in ontology construction from text for the animal behaviour domain. Our objective was to see how much could be done in a simple and relatively rapid manner using a corpus of journal papers. We used a sequence of pre-existing text processing steps, and here describe the different choices made to clean the input, to derive a set of terms and to structure those terms in a number of hierarchies. We describe some of the challenges, especially that of focusing the ontology appropriately given a starting point of a heterogeneous corpus. Using mainly automated techniques, we were able to construct an 18,055-term ontology-like structure with 73% recall of animal behaviour terms, but a precision of only 26%. We were able to clean unwanted terms from the nascent ontology using lexico-syntactic patterns that tested the validity of term inclusion within the ontology. We used the same technique to test for subsumption relationships between the remaining terms to add structure to the initially broad and shallow structure we generated. All outputs are available at . We present a systematic method for the initial steps of ontology or structured vocabulary construction for scientific domains that requires limited human effort and can make a contribution both to ontology learning and maintenance. The method is useful both for the exploration of a scientific domain and as a stepping stone towards formally rigorous ontologies. The filtering of recognised terms from a heterogeneous corpus to focus upon those that are the topic of the ontology is identified to be one of the main challenges for research in ontology learning. PMID:19426458
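The lexico-syntactic subsumption testing mentioned in this abstract is in the spirit of Hearst patterns; a minimal sketch follows, assuming a single "X such as Y, Z" pattern (the regex and the corpus sentence are illustrative, not the paper's actual patterns or data):

```python
import re

# Hearst-style pattern: "<hypernym> such as <hyponym>(, <hyponym>)*"
PATTERN = re.compile(
    r"(?P<hypernym>\w+) such as (?P<hyponyms>\w+(?:, \w+)*)"
)

def find_subsumptions(text):
    """Return (hyponym, hypernym) candidate pairs found in the text.

    A match "behaviours such as grooming" suggests the subsumption
    grooming is-a behaviour, which can validate a term's place in the
    nascent hierarchy.
    """
    pairs = []
    for m in PATTERN.finditer(text):
        hypernym = m.group("hypernym")
        for hyponym in m.group("hyponyms").split(", "):
            pairs.append((hyponym, hypernym))
    return pairs

corpus = "The study covered behaviours such as grooming, foraging"
print(find_subsumptions(corpus))
```

Production systems use many such patterns ("X and other Y", "Y including X", ...) plus noun-phrase chunking rather than bare `\w+` tokens; this sketch shows only the core extraction step.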

  12. Global Aerosol Optical Models and Lookup Tables for the New MODIS Aerosol Retrieval over Land

    NASA Technical Reports Server (NTRS)

    Levy, Robert C.; Remer, Loraine A.; Dubovik, Oleg

    2007-01-01

Since 2000, MODIS has been deriving aerosol properties over land from MODIS-observed spectral reflectance, by matching the observed reflectance with that simulated for selected aerosol optical models, aerosol loadings, wavelengths and geometrical conditions (contained in a lookup table or 'LUT'). Validation exercises have shown that MODIS tends to under-predict aerosol optical depth (tau) in cases of large tau (tau greater than 1.0), signaling errors in the assumed aerosol optical properties. Using the climatology of almucantar retrievals from the hundreds of global AERONET sunphotometer sites, we found that three spherical-derived models (describing fine-sized dominated aerosol) and one spheroid-derived model (describing coarse-sized dominated aerosol, presumably dust) generally described the range of observed global aerosol properties. The fine-dominated models were separated mainly by their single scattering albedo (omega(sub 0)), ranging from non-absorbing aerosol (omega(sub 0) approx. 0.95) in developed urban/industrial regions, to neutrally absorbing aerosol (omega(sub 0) approx. 0.90) in forest fire burning and developing industrial regions, to absorbing aerosol (omega(sub 0) approx. 0.85) in regions of savanna/grassland burning. We determined the dominant model type in each region and season, to create a 1 deg. x 1 deg. grid of assumed aerosol type. We used a vector radiative transfer code to create a new LUT, simulating the four aerosol models in four MODIS channels. Independent AERONET observations of spectral tau agree with the new models, indicating that the new models are suitable for use by the MODIS aerosol retrieval.
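The LUT-matching step this abstract describes can be sketched as follows (a toy single-channel table with made-up numbers, not the operational MODIS tables or algorithm): interpolate reflectance between the precomputed tau nodes and pick the tau that best reproduces the observation.

```python
def retrieve_tau(lut_tau, lut_reflectance, observed):
    """Return the tau whose interpolated LUT reflectance best matches
    the observed reflectance (dense linear interpolation between nodes)."""
    best_tau, best_err = None, float("inf")
    steps = 100
    for i in range(len(lut_tau) - 1):
        for s in range(steps + 1):
            f = s / steps
            tau = lut_tau[i] + f * (lut_tau[i + 1] - lut_tau[i])
            rho = lut_reflectance[i] + f * (lut_reflectance[i + 1] - lut_reflectance[i])
            err = abs(rho - observed)
            if err < best_err:
                best_tau, best_err = tau, err
    return best_tau

# Hypothetical single-channel LUT: top-of-atmosphere reflectance
# simulated at discrete optical depths for one aerosol model/geometry.
lut_tau = [0.0, 0.5, 1.0, 2.0]
lut_rho = [0.05, 0.12, 0.18, 0.28]
tau = retrieve_tau(lut_tau, lut_rho, observed=0.15)
print(round(tau, 2))
```

The real retrieval inverts several channels simultaneously, mixes fine and coarse models, and interpolates over geometry as well; the sketch shows only the one-dimensional table inversion.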

  13. Nosology, ontology and promiscuous realism.

    PubMed

    Binney, Nicholas

    2015-06-01

    Medics may consider worrying about their metaphysics and ontology to be a waste of time. I will argue here that this is not the case. Promiscuous realism is a metaphysical position which holds that multiple, equally valid, classification schemes should be applied to objects (such as patients) to capture different aspects of their complex and heterogeneous nature. As medics at the bedside may need to capture different aspects of their patients' problems, they may need to use multiple classification schemes (multiple nosologies), and thus consider adopting a different metaphysics to the one commonly in use. PMID:25389077

  14. Ontology-Driven Information Integration

    NASA Technical Reports Server (NTRS)

    Tissot, Florence; Menzel, Chris

    2005-01-01

    Ontology-driven information integration (ODII) is a method of computerized, automated sharing of information among specialists who have expertise in different domains and who are members of subdivisions of a large, complex enterprise (e.g., an engineering project, a government agency, or a business). In ODII, one uses rigorous mathematical techniques to develop computational models of engineering and/or business information and processes. These models are then used to develop software tools that support the reliable processing and exchange of information among the subdivisions of this enterprise or between this enterprise and other enterprises.

  15. Inflammation ontology design pattern: an exercise in building a core biomedical ontology with descriptions and situations.

    PubMed

    Gangemi, Aldo; Catenacci, Carola; Battaglia, Massimo

    2004-01-01

    Formal ontology has proved to be an extremely useful tool for negotiating intended meaning, for building explicit, formal data sheets, and for the discovery of novel views on existing data structures. This paper describes an example of application of formal ontological methods to the creation of biomedical ontologies. Addressed here is the ambiguous notion of inflammation, which spans across multiple linguistic meanings, multiple layers of reality, and multiple details of granularity. We use UML class diagrams, description logics, and the DOLCE foundational ontology, augmented with the Description and Situation theory, in order to provide the representational and ontological primitives that are necessary for the development of detailed, flexible, and functional biomedical ontologies. An ontology design pattern is proposed as a modelling template for inflammations. PMID:15853264

  16. Ontology Design Patterns as Interfaces (invited)

    NASA Astrophysics Data System (ADS)

    Janowicz, K.

    2015-12-01

In recent years ontology design patterns (ODP) have gained popularity among knowledge engineers. ODPs are modular but self-contained building blocks that are reusable and extendible. They minimize the amount of ontological commitments and thereby are easier to integrate than large monolithic ontologies. Typically, patterns are not directly used to annotate data or to model certain domain problems but are combined and extended to form data- and purpose-driven local ontologies that serve the needs of specific applications or communities. By relying on a common set of patterns these local ontologies can be aligned to improve interoperability and enable federated queries without enforcing a top-down model of the domain. In previous work, we introduced ontological views as a layer on top of ontology design patterns to ease the reuse, combination, and integration of patterns. While the literature distinguishes multiple types of patterns, e.g., content patterns or logical patterns, we propose here to use them as interfaces to guide the development of ontology-driven systems.

  17. Developing Domain Ontologies for Course Content

    ERIC Educational Resources Information Center

    Boyce, Sinead; Pahl, Claus

    2007-01-01

    Ontologies have the potential to play an important role in instructional design and the development of course content. They can be used to represent knowledge about content, supporting instructors in creating content or learners in accessing content in a knowledge-guided way. While ontologies exist for many subject domains, their quality and…

  18. Statistical mechanics of ontology based annotations

    NASA Astrophysics Data System (ADS)

    Hoyle, David C.; Brass, Andrew

    2016-01-01

    We present a statistical mechanical theory of the process of annotating an object with terms selected from an ontology. The term selection process is formulated as an ideal lattice gas model, but in a highly structured inhomogeneous field. The model enables us to explain patterns recently observed in real-world annotation data sets, in terms of the underlying graph structure of the ontology. By relating the external field strengths to the information content of each node in the ontology graph, the statistical mechanical model also allows us to propose a number of practical metrics for assessing the quality of both the ontology, and the annotations that arise from its use. Using the statistical mechanical formalism we also study an ensemble of ontologies of differing size and complexity; an analysis not readily performed using real data alone. Focusing on regular tree ontology graphs we uncover a rich set of scaling laws describing the growth in the optimal ontology size as the number of objects being annotated increases. In doing so we provide a further possible measure for assessment of ontologies.
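The coupling of external field strengths to information content mentioned in this abstract relies on the standard corpus-based definition, IC(term) = -log2 p(term), where p is the fraction of annotations falling on the term or its descendants. A minimal sketch with a made-up two-leaf ontology (term names and annotation counts are illustrative):

```python
import math

def information_content(counts, descendants):
    """Compute IC(term) = -log2 p(term) for every term.

    counts: annotations assigned directly to each term.
    descendants: term -> set of all its descendant terms.
    p(term) is the fraction of all annotations on the term's subtree,
    so general terms near the root carry low information content.
    """
    total = sum(counts.values())
    ic = {}
    for term in counts:
        subtree = {term} | descendants.get(term, set())
        p = sum(counts[t] for t in subtree) / total
        ic[term] = -math.log2(p)
    return ic

# Tiny tree: root -> {a, b}, with hypothetical annotation counts.
counts = {"root": 0, "a": 3, "b": 1}
descendants = {"root": {"a", "b"}}
ic = information_content(counts, descendants)
print(ic["root"], ic["a"], ic["b"])
```

In the lattice-gas picture, rarely used (high-IC) terms correspond to nodes sitting in a strong external field; the sketch computes only the IC values themselves.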

  19. Automating Ontological Annotation with WordNet

    SciTech Connect

    Sanfilippo, Antonio P.; Tratz, Stephen C.; Gregory, Michelle L.; Chappell, Alan R.; Whitney, Paul D.; Posse, Christian; Paulson, Patrick R.; Baddeley, Bob L.; Hohimer, Ryan E.; White, Amanda M.

    2006-01-22

    Semantic Web applications require robust and accurate annotation tools that are capable of automating the assignment of ontological classes to words in naturally occurring text (ontological annotation). Most current ontologies do not include rich lexical databases and are therefore not easily integrated with word sense disambiguation algorithms that are needed to automate ontological annotation. WordNet provides a potentially ideal solution to this problem as it offers a highly structured lexical conceptual representation that has been extensively used to develop word sense disambiguation algorithms. However, WordNet has not been designed as an ontology, and while it can be easily turned into one, the result of doing this would present users with serious practical limitations due to the great number of concepts (synonym sets) it contains. Moreover, mapping WordNet to an existing ontology may be difficult and requires substantial labor. We propose to overcome these limitations by developing an analytical platform that (1) provides a WordNet-based ontology offering a manageable and yet comprehensive set of concept classes, (2) leverages the lexical richness of WordNet to give an extensive characterization of concept class in terms of lexical instances, and (3) integrates a class recognition algorithm that automates the assignment of concept classes to words in naturally occurring text. The ensuing framework makes available an ontological annotation platform that can be effectively integrated with intelligence analysis systems to facilitate evidence marshaling and sustain the creation and validation of inference models.

  20. Ontological Annotation with WordNet

    SciTech Connect

    Sanfilippo, Antonio P.; Tratz, Stephen C.; Gregory, Michelle L.; Chappell, Alan R.; Whitney, Paul D.; Posse, Christian; Paulson, Patrick R.; Baddeley, Bob; Hohimer, Ryan E.; White, Amanda M.

    2006-06-06

    Semantic Web applications require robust and accurate annotation tools that are capable of automating the assignment of ontological classes to words in naturally occurring text (ontological annotation). Most current ontologies do not include rich lexical databases and are therefore not easily integrated with word sense disambiguation algorithms that are needed to automate ontological annotation. WordNet provides a potentially ideal solution to this problem as it offers a highly structured lexical conceptual representation that has been extensively used to develop word sense disambiguation algorithms. However, WordNet has not been designed as an ontology, and while it can be easily turned into one, the result of doing this would present users with serious practical limitations due to the great number of concepts (synonym sets) it contains. Moreover, mapping WordNet to an existing ontology may be difficult and requires substantial labor. We propose to overcome these limitations by developing an analytical platform that (1) provides a WordNet-based ontology offering a manageable and yet comprehensive set of concept classes, (2) leverages the lexical richness of WordNet to give an extensive characterization of concept class in terms of lexical instances, and (3) integrates a class recognition algorithm that automates the assignment of concept classes to words in naturally occurring text. The ensuing framework makes available an ontological annotation platform that can be effectively integrated with intelligence analysis systems to facilitate evidence marshaling and sustain the creation and validation of inference models.

  1. Representing default knowledge in biomedical ontologies: application to the integration of anatomy and phenotype ontologies

    PubMed Central

    Hoehndorf, Robert; Loebe, Frank; Kelso, Janet; Herre, Heinrich

    2007-01-01

    Background Current efforts within the biomedical ontology community focus on achieving interoperability between various biomedical ontologies that cover a range of diverse domains. Achieving this interoperability will contribute to the creation of a rich knowledge base that can be used for querying, as well as generating and testing novel hypotheses. The OBO Foundry principles, as applied to a number of biomedical ontologies, are designed to facilitate this interoperability. However, semantic extensions are required to meet the OBO Foundry interoperability goals. Inconsistencies may arise when ontologies of properties – mostly phenotype ontologies – are combined with ontologies taking a canonical view of a domain – such as many anatomical ontologies. Currently, there is no support for a correct and consistent integration of such ontologies. Results We have developed a methodology for accurately representing canonical domain ontologies within the OBO Foundry. This is achieved by adding an extension to the semantics for relationships in the biomedical ontologies that allows for treating canonical information as default. Conclusions drawn from default knowledge may be revoked when additional information becomes available. We show how this extension can be used to achieve interoperability between ontologies, and further allows for the inclusion of more knowledge within them. We apply the formalism to ontologies of mouse anatomy and mammalian phenotypes in order to demonstrate the approach. Conclusion Biomedical ontologies require a new class of relations that can be used in conjunction with default knowledge, thereby extending those currently in use. The inclusion of default knowledge is necessary in order to ensure interoperability between ontologies. PMID:17925014
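The default-with-override semantics this abstract argues for can be illustrated with a toy sketch (the class, instances, and attribute names are invented; real OBO relations are far richer): canonical class-level facts hold by default and are revoked when instance-level information contradicts them.

```python
def lookup(entity, facts, defaults):
    """Default reasoning sketch: entity-specific facts override the
    canonical (default) relations of the entity's class."""
    merged = dict(defaults.get(facts[entity]["is_a"], {}))
    merged.update({k: v for k, v in facts[entity].items() if k != "is_a"})
    return merged

# Canonical anatomy: a mouse has a tail by default.
defaults = {"Mouse": {"has_tail": True}}

# Phenotype data: mutant m1 lacks a tail, so the default is revoked;
# m2 carries no overriding information and inherits the canonical fact.
facts = {
    "m1": {"is_a": "Mouse", "has_tail": False},
    "m2": {"is_a": "Mouse"},
}

print(lookup("m1", facts, defaults), lookup("m2", facts, defaults))
```

Combining a canonical anatomy ontology with a phenotype ontology monotonically, by contrast, would make m1 inconsistent; allowing the default to be withdrawn is what keeps the integration consistent.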

  2. Towards an Ontology for Reef Islands

    NASA Astrophysics Data System (ADS)

    Duce, Stephanie

Reef islands are complex, dynamic and vulnerable environments with a diverse range of stakeholders. Communication and data sharing between these different groups of stakeholders are often difficult. An ontology for the reef island domain would improve the understanding of reef island geomorphology and improve communication between stakeholders, as well as forming a platform from which to move towards interoperability and the application of information technology to forecast and monitor these environments. This paper develops a small, prototypical reef island domain ontology, based on informal, natural language relations, aligned to the DOLCE upper-level ontology, for 20 fundamental terms within the domain. A subset of these terms and their relations are discussed in detail. This approach reveals and discusses challenges which must be overcome in the creation of a reef island domain ontology and which could be relevant to other ontologies in dynamic geospatial domains.

  3. An Ontology-Based Collaborative Design System

    NASA Astrophysics Data System (ADS)

    Su, Tieming; Qiu, Xinpeng; Yu, Yunlong

A collaborative design system architecture based on ontology is proposed. In this architecture, OWL is used to construct a global shared ontology and local ontologies, both of which are machine-interpretable. The former provides a semantic basis for communication among designers, so that they share a common understanding of the knowledge. The latter, which describes each designer's own knowledge, is the basis of design by reasoning. An SWRL rule base, comprising rules defined over the local ontology, is constructed to enhance the reasoning capability of the local knowledge base. Designers can then carry out collaborative design at a higher level, based on the local knowledge base and the global shared ontology, which enhances the intelligence of the design process. Finally, a collaborative design case is presented and analyzed.

  4. An Ontology Based Approach to Information Security

    NASA Astrophysics Data System (ADS)

    Pereira, Teresa; Santos, Henrique

The semantic structuring of knowledge based on ontology approaches has been increasingly adopted by experts from diverse domains. Recently, ontologies have moved from the philosophical and metaphysical disciplines into the construction of models that describe a specific theory of a domain. The development and use of ontologies promote the creation of a unique standard to represent concepts within a specific knowledge domain. In the scope of information security systems, the use of an ontology to formalize and represent the concepts of security information challenges the mechanisms and techniques currently used. This paper presents a conceptual implementation model of an ontology defined in the security domain. The model contains semantic concepts based on the information security standard ISO/IEC_JTC1, and their relationships to other concepts, defined in a subset of the information security domain.

  5. FROG - Fingerprinting Genomic Variation Ontology.

    PubMed

    Abinaya, E; Narang, Pankaj; Bhardwaj, Anshu

    2015-01-01

Genetic variations play a crucial role in differential phenotypic outcomes. Given the complexity in establishing this correlation and the enormous data available today, it is imperative to design machine-readable, efficient methods to store, label, search and analyze this data. A semantic approach, FROG: "FingeRprinting Ontology of Genomic variations", is implemented to label variation data based on its location, function and interactions. FROG has six levels to describe the variation annotation, namely chromosome, DNA, RNA, protein, variations and interactions. Each level is a conceptual aggregation of logically connected attributes, each of which comprises various properties of the variant. For example, at the chromosome level, one of the attributes is the location of the variation, which has two properties, allosomes or autosomes. Another attribute is the variation kind, which has four properties, namely indel, deletion, insertion and substitution. In all, there are 48 attributes and 278 properties to capture the variation annotation across the six levels. Each property is then assigned a bit score, which in turn leads to the generation of a binary fingerprint based on the combination of these properties (mostly taken from existing variation ontologies). FROG is a novel and unique method designed for the purpose of labeling the entire variation data generated to date for efficient storage, search and analysis. A web-based platform is designed as a test case for users to navigate sample datasets and generate fingerprints. The platform is available at http://ab-openlab.csir.res.in/frog. PMID:26244889
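The bit-score-to-fingerprint step can be sketched as follows (the attribute/property subset and bit ordering below are invented for illustration; FROG itself defines 48 attributes and 278 properties across six levels):

```python
# Hypothetical subset of FROG-style (attribute, property) pairs, each
# occupying one fixed bit position in the fingerprint.
PROPERTY_BITS = [
    ("location", "autosome"),
    ("location", "allosome"),
    ("kind", "substitution"),
    ("kind", "insertion"),
    ("kind", "deletion"),
    ("kind", "indel"),
]

def fingerprint(variant):
    """Encode a variant's annotations as a binary string, one bit per
    known (attribute, property) pair. Identical fingerprints can then
    be stored, indexed and compared as plain bit strings."""
    bits = ""
    for attribute, prop in PROPERTY_BITS:
        bits += "1" if variant.get(attribute) == prop else "0"
    return bits

v = {"location": "autosome", "kind": "substitution"}
print(fingerprint(v))
```

Fixed-width binary fingerprints make similarity search cheap: two variants can be compared with a Hamming distance over their bit strings instead of matching free-text annotations.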

  6. [Towards a structuring fibrillar ontology].

    PubMed

    Guimberteau, J-C

    2012-10-01

    Over previous decades and centuries, the difficulty encountered in understanding how the tissue of our bodies is organised and structured is clearly explained by the impossibility of exploring it in detail. Since the invention of the microscope, the perception of the basic unit, the cell, has been essential in understanding the functioning of reproduction and of transmission, but has not been able to explain the notion of form, since the cells are not everywhere and are not distributed in an apparently balanced manner. The problems that remain are those of form and volume, and also of connection. The concept of multifibrillar architecture, shaping the interfibrillar microvolumes in space, represents a solution to all these questions. The architectural structures revealed, made up of fibres, fibrils and microfibrils, from the mesoscopic to the microscopic level, provide the concept of a living form with a structural rationalism that permits the association of physicochemical molecular biodynamics and quantum physics: the form can thus be described and interpreted, and a true structural ontology is elaborated from a basic functional unit, the microvacuole, the intra- and interfibrillar volume of the fractal organisation, and the chaotic distribution. Naturally, this ontology will imply new concepts that are less linear, less conclusive and less specific, leading one to believe that the emergence of life takes place in submission to forces that the original form will have imposed, orienting the adaptive finality. PMID:22921289

  7. Geo-Ontologies Are Scale Dependent

    NASA Astrophysics Data System (ADS)

    Frank, A. U.

    2009-04-01

    Philosophers aim at a single ontology that describes "how the world is"; for information systems we aim only at ontologies that describe a conceptualization of reality (Guarino 1995; Gruber 2005). A conceptualization of the world implies a spatial and temporal scale: what are the phenomena, the objects and the speed of their change? Few articles (Reitsma et al. 2003) seem to address the fact that an ontology is scale specific (though many articles describe ontologies as scale-free in another sense, namely in the link densities between concepts). The scale in the conceptualization can be linked to the observation process. The extent of the support of the physical observation instrument and the sampling theorem indicate what level of detail we find in a dataset. These rules apply to remote sensing and sensor networks alike. An ontology of observations must include scale or level of detail, and concepts derived from observations should carry this relation forward. A simple example: in a high-resolution remote sensing image, agricultural plots and the roads between them are shown; at lower resolution, only the plots, and not the roads, are visible. This gives two ontologies, one with plots and roads, the other with plots only. Note that a neighborhood relation in the two different ontologies also yields different results. References Gruber, T. (2005). "TagOntology - a way to agree on the semantics of tagging data." Retrieved October 29, 2005, from http://tomgruber.org/writing/tagontology-tagcapm-talk.pdf. Guarino, N. (1995). "Formal Ontology, Conceptual Analysis and Knowledge Representation." International Journal of Human and Computer Studies. Special Issue on Formal Ontology, Conceptual Analysis and Knowledge Representation, edited by N. Guarino and R. Poli 43(5/6). Reitsma, F. and T. Bittner (2003). Process, Hierarchy, and Scale. Spatial Information Theory. Cognitive and Computational Foundations of Geographic Information Science. International Conference
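
    The plots-and-roads example can be made concrete with a toy sketch (the feature data is invented): the same region yields two scale-specific ontologies, and the same neighborhood query answers differently in each.

```python
# Two conceptualizations of one region at different scales: at high resolution
# the ontology contains plots and roads; at low resolution the road is not
# resolved and the plots appear directly adjacent.

HIGH_RES = [("plot1", "road"), ("road", "plot2")]   # adjacency among features
LOW_RES = [("plot1", "plot2")]                       # road not resolved

def neighbours(adjacency, feature):
    """Return the set of features adjacent to `feature`."""
    out = set()
    for a, b in adjacency:
        if a == feature:
            out.add(b)
        if b == feature:
            out.add(a)
    return out
```

    Querying `neighbours(..., "plot1")` returns the road in the high-resolution ontology but the other plot in the low-resolution one, which is exactly the scale dependence the abstract points out.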

  8. Temporal Ontologies for Geoscience: Alignment Challenges

    NASA Astrophysics Data System (ADS)

    Cox, S. J. D.

    2014-12-01

    Time is a central concept in geoscience. Geologic histories are composed of sequences of geologic processes and events. Calibration of their timing ties a local history into a broader context, and enables correlation of events between locations. The geologic timescale is standardized in the International Chronostratigraphic Chart, which specifies interval names and calibrations for the ages of the interval boundaries. Time is also a key concept in the world at large. A number of general-purpose temporal ontologies have been developed, both stand-alone and as parts of general-purpose or upper ontologies. A temporal ontology for geoscience should apply or extend a suitable general-purpose temporal ontology. However, geologic time presents two challenges: (1) geology involves far greater spans of time than other temporal ontologies address, inconsistent with the year-month-day/hour-minute-second formalization that is a basic assumption of most general-purpose temporal schemes; and (2) the geologic timescale is a temporal topology, whose calibration in terms of an absolute (numeric) scale is a scientific issue in its own right, supporting a significant community. In contrast, the general-purpose temporal ontologies are premised on exact numeric values for temporal position, and do not allow for temporal topology as a primary structure. We have developed an ontology for the geologic timescale to account for these concerns. It uses the ISO 19108 distinctions between different types of temporal reference system, also linking to an explicit temporal topology model. Stratotypes used in the calibration process are modelled as sampling features following the ISO 19156 Observations and Measurements model. A joint OGC-W3C harmonization project is underway, with standardization of the W3C OWL-Time ontology as one of its tasks. The insights gained from the geologic timescale ontology will assist in development of a general ontology capable of modelling a richer set of use cases from geoscience.
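
    The key separation the abstract argues for, topology first, numeric calibration second, can be sketched in a few lines. The class names are invented, and the boundary ages are approximate published ICS values used only for illustration; note that ordering works even where a boundary has no age yet.

```python
# Minimal sketch: interval order (temporal topology) is primary; ages in Ma are
# an optional, revisable calibration attached to boundaries.

class Boundary:
    def __init__(self, name, age_ma=None):
        self.name = name
        self.age_ma = age_ma  # calibration may be absent or later revised

class Interval:
    def __init__(self, name, begin, end):
        self.name, self.begin, self.end = name, begin, end

def before(a, b):
    """Topological 'before': a ends at b's begin, or (if both boundaries are
    calibrated) a's end is at least as old as b's begin. Ages run backwards
    from the present, so larger Ma means earlier."""
    return a.end is b.begin or (
        a.end.age_ma is not None and b.begin.age_ma is not None
        and a.end.age_ma >= b.begin.age_ma
    )

base_triassic = Boundary("base Triassic", age_ma=251.9)     # approximate
base_jurassic = Boundary("base Jurassic", age_ma=201.4)     # approximate
base_cretaceous = Boundary("base Cretaceous")               # calibration pending

triassic = Interval("Triassic", base_triassic, base_jurassic)
jurassic = Interval("Jurassic", base_jurassic, base_cretaceous)
```

    The Jurassic here participates in the ordering through its shared boundary even though its upper boundary carries no numeric age, which is precisely what exact-position schemes cannot express.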

  9. The MMI Device Ontology: Enabling Sensor Integration

    NASA Astrophysics Data System (ADS)

    Rueda, C.; Galbraith, N.; Morris, R. A.; Bermudez, L. E.; Graybeal, J.; Arko, R. A.; Mmi Device Ontology Working Group

    2010-12-01

    The Marine Metadata Interoperability (MMI) project has developed an ontology for devices to describe sensors and sensor networks. This ontology is implemented in the W3C Web Ontology Language (OWL) and provides an extensible conceptual model and controlled vocabularies for describing heterogeneous instrument types, with different data characteristics, and their attributes. It can help users populate metadata records for sensors; associate devices with their platforms, deployments, measurement capabilities and restrictions; aid in discovery of sensor data, both historic and real-time; and improve the interoperability of observational oceanographic data sets. We developed the MMI Device Ontology following a community-based approach. By building on and integrating other models and ontologies from related disciplines, we sought to facilitate semantic interoperability while avoiding duplication. Key concepts and insights from various communities, including the Open Geospatial Consortium (e.g., the SensorML and Observations and Measurements specifications), Semantic Web for Earth and Environmental Terminology (SWEET), and the W3C Semantic Sensor Network Incubator Group, have significantly enriched the development of the ontology. Individuals ranging from instrument designers, science data producers and consumers to ontology specialists and other technologists contributed to the work. Applications of the MMI Device Ontology are underway for several community use cases. These include vessel-mounted multibeam mapping sonars for the Rolling Deck to Repository (R2R) program and description of diverse instruments on deepwater Ocean Reference Stations for the OceanSITES program. These trials involve creation of records completely describing instruments, either by individual instances or by manufacturer and model. Individual terms in the MMI Device Ontology can be referenced with their corresponding Uniform Resource Identifiers (URIs) in sensor-related metadata specifications (e

  10. Multiangle Implementation of Atmospheric Correction (MAIAC):. 1; Radiative Transfer Basis and Look-up Tables

    NASA Technical Reports Server (NTRS)

    Lyapustin, Alexei; Martonchik, John; Wang, Yujie; Laszlo, Istvan; Korkin, Sergey

    2011-01-01

    This paper describes a radiative transfer basis of the algorithm MAIAC which performs simultaneous retrievals of atmospheric aerosol and bidirectional surface reflectance from the Moderate Resolution Imaging Spectroradiometer (MODIS). The retrievals are based on an accurate semianalytical solution for the top-of-atmosphere reflectance expressed as an explicit function of three parameters of the Ross-Thick Li-Sparse model of surface bidirectional reflectance. This solution depends on certain functions of atmospheric properties and geometry which are precomputed in the look-up table (LUT). This paper further considers correction of the LUT functions for variations of surface pressure/height and of atmospheric water vapor, which is a common task in operational remote sensing. It introduces a new analytical method for the water vapor correction of the multiple-scattering path radiance. It also summarizes the few basic principles that provide a high efficiency and accuracy of the LUT-based radiative transfer for the aerosol/surface retrievals and optimize the size of the LUT. For example, the single-scattering path radiance is calculated analytically for a given surface pressure and atmospheric water vapor. The same is true for the direct surface-reflected radiance, which along with the single-scattering path radiance largely defines the angular dependence of measurements. For these calculations, the aerosol phase functions and kernels of the surface bidirectional reflectance model are precalculated at a high angular resolution. The other radiative transfer functions depend rather smoothly on angles because of multiple scattering and can be calculated at coarser angular resolution to reduce the LUT size. At the same time, this resolution should be high enough to use the nearest-neighbor geometry angles to avoid costly three-dimensional interpolation. The pressure correction is implemented via linear interpolation between two LUTs computed for the standard and reduced
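
    The pressure interpolation idea can be sketched as follows. The pressure endpoints and LUT values are invented placeholders, not numbers from the paper; only the linear-interpolation structure is taken from the text.

```python
# Sketch of pressure correction by linear interpolation between two
# precomputed LUTs (standard and reduced surface pressure). Values invented.

P_STANDARD, P_REDUCED = 1013.0, 700.0  # hPa; assumed grid endpoints

# Toy LUT entries (e.g., a multiple-scattering term) at the two pressures.
lut_standard = {"F_ms": 0.080}
lut_reduced = {"F_ms": 0.052}

def lut_at_pressure(key, p_hpa):
    """Linearly interpolate a LUT entry to an intermediate surface pressure."""
    w = (P_STANDARD - p_hpa) / (P_STANDARD - P_REDUCED)
    return (1.0 - w) * lut_standard[key] + w * lut_reduced[key]
```

    Storing only two pressure levels and interpolating keeps the LUT small, in line with the paper's emphasis on minimizing LUT size.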

  11. SPONGY (SPam ONtoloGY): Email Classification Using Two-Level Dynamic Ontology

    PubMed Central

    2014-01-01

    Email is one of the most common communication methods between people on the Internet. However, the increase of email misuse/abuse has resulted in an increasing volume of spam emails over recent years. In this paper, two levels of ontology spam filters were implemented: a first-level global ontology filter and a second-level user-customized ontology filter. An experimental system was designed and implemented with the hypothesis that this method would outperform existing techniques, and the experimental results showed that the proposed ontology-based approach indeed improves spam filtering accuracy significantly. The use of the global ontology filter showed about 91% of spam filtered, which is comparable with other methods. The user-customized ontology filter was created based on the specific user's background as well as the filtering mechanism used in the global ontology filter creation. The main contributions of the paper are (1) to introduce an ontology-based multilevel filtering technique that uses both a global ontology and an individual filter for each user to increase spam filtering accuracy and (2) to create a spam filter in the form of an ontology, which is user-customized, scalable, and modularized, so that it can be embedded in many other systems for better performance. PMID:25254240

  12. SPONGY (SPam ONtoloGY): email classification using two-level dynamic ontology.

    PubMed

    Youn, Seongwook

    2014-01-01

    Email is one of the most common communication methods between people on the Internet. However, the increase of email misuse/abuse has resulted in an increasing volume of spam emails over recent years. In this paper, two levels of ontology spam filters were implemented: a first-level global ontology filter and a second-level user-customized ontology filter. An experimental system was designed and implemented with the hypothesis that this method would outperform existing techniques, and the experimental results showed that the proposed ontology-based approach indeed improves spam filtering accuracy significantly. The use of the global ontology filter showed about 91% of spam filtered, which is comparable with other methods. The user-customized ontology filter was created based on the specific user's background as well as the filtering mechanism used in the global ontology filter creation. The main contributions of the paper are (1) to introduce an ontology-based multilevel filtering technique that uses both a global ontology and an individual filter for each user to increase spam filtering accuracy and (2) to create a spam filter in the form of an ontology, which is user-customized, scalable, and modularized, so that it can be embedded in many other systems for better performance. PMID:25254240

  13. Reasoning Based Quality Assurance of Medical Ontologies: A Case Study

    PubMed Central

    Horridge, Matthew; Parsia, Bijan; Noy, Natalya F.; Musen, Mark A.

    2014-01-01

    The World Health Organisation is using OWL as a key technology to develop ICD-11 – the next version of the well-known International Classification of Diseases. Besides providing better opportunities for data integration and linkages to other well-known ontologies such as SNOMED-CT, one of the main promises of using OWL is that it will enable various forms of automated error checking. In this paper we investigate how automated OWL reasoning, along with a Justification Finding Service can be used as a Quality Assurance technique for the development of large and complex ontologies such as ICD-11. Using the International Classification of Traditional Medicine (ICTM) – Chapter 24 of ICD-11 – as a case study, and an expert panel of knowledge engineers, we reveal the kinds of problems that can occur, how they can be detected, and how they can be fixed. Specifically, we found that a logically inconsistent version of the ICTM ontology could be repaired using justifications (minimal entailing subsets of an ontology). Although over 600 justifications for the inconsistency were initially computed, we found that there were three main manageable patterns or categories of justifications involving TBox and ABox axioms. These categories represented meaningful domain errors to an expert panel of ICTM project knowledge engineers, who were able to use them to successfully determine the axioms that needed to be revised in order to fix the problem. All members of the expert panel agreed that the approach was useful for debugging and ensuring the quality of ICTM. PMID:25954373
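
    The notion of a justification, a minimal subset of axioms that still entails the unwanted conclusion, can be sketched with a toy Horn-rule "reasoner" and a deletion-based search. This is not the OWL machinery used for ICTM, just the underlying idea; the knowledge base is invented.

```python
# Deletion-based justification finding over a toy knowledge base: facts are
# strings, rules are (premise, conclusion) pairs, and an atom "X" together
# with "not_X" counts as an inconsistency.

def inconsistent(axioms):
    """Forward-chain the rules; report whether some atom and its negation derive."""
    facts = {a for a in axioms if isinstance(a, str)}
    rules = [a for a in axioms if isinstance(a, tuple)]
    changed = True
    while changed:
        changed = False
        for premise, conclusion in rules:
            if premise in facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return any(f.startswith("not_") and f[4:] in facts for f in facts)

def justification(axioms):
    """Drop axioms one at a time, keeping only those needed for inconsistency."""
    core = list(axioms)
    for ax in list(core):
        trial = [a for a in core if a is not ax]
        if inconsistent(trial):
            core = trial
    return core

kb = ["Bird", ("Bird", "Flies"), ("Penguin", "not_Flies"), "Penguin", "Mammal"]
j = justification(kb)  # "Mammal" is irrelevant and gets pruned away
```

    Real justification finders over OWL are far more sophisticated, but the output has the same character: a small, human-reviewable subset of axioms that an expert panel can inspect to decide which axiom to revise.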

  14. Ontologies and tag-statistics

    NASA Astrophysics Data System (ADS)

    Tibély, Gergely; Pollner, Péter; Vicsek, Tamás; Palla, Gergely

    2012-05-01

    Due to the increasing popularity of collaborative tagging systems, the research on tagged networks, hypergraphs, ontologies, folksonomies and other related concepts is becoming an important interdisciplinary area with great potential and relevance for practical applications. In most collaborative tagging systems the tagging by the users is completely ‘flat’, while in some cases they are allowed to define a shallow hierarchy for their own tags. However, usually no overall hierarchical organization of the tags is given, and one of the interesting challenges of this area is to provide an algorithm generating the ontology of the tags from the available data. In contrast, there are also other types of tagged networks available for research, where the tags are already organized into a directed acyclic graph (DAG), encapsulating the ‘is a sub-category of’ type of hierarchy between each other. In this paper, we study how this DAG affects the statistical distribution of tags on the nodes marked by the tags in various real networks. The motivation for this research was the fact that understanding the tagging based on a known hierarchy can help in revealing the hidden hierarchy of tags in collaborative tagging systems. We analyse the relation between the tag-frequency and the position of the tag in the DAG in two large sub-networks of the English Wikipedia and a protein-protein interaction network. We also study the tag co-occurrence statistics by introducing a two-dimensional (2D) tag-distance distribution preserving both the difference in the levels and the absolute distance in the DAG for the co-occurring pairs of tags. Our most interesting finding is that the local relevance of tags in the DAG (i.e. their rank or significance as characterized by, e.g., the length of the branches starting from them) is much more important than their global distance from the root. Furthermore, we also introduce a simple tagging model based on random walks on the DAG, capable of
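
    The flavor of a random-walk tagging model on a DAG can be sketched as follows. This is in the spirit of, but not identical to, the model the paper introduces; the tag hierarchy and the stopping rule are invented.

```python
import random

# An item receives a leaf tag and, with probability p at each step, also a
# randomly chosen parent tag, continuing up the 'is a sub-category of' DAG.

PARENTS = {                     # child -> parents
    "tabby": ["cat"],
    "cat": ["mammal"],
    "mammal": ["animal"],
    "animal": [],               # root
}

def tag_item(leaf, p=0.7, rng=None):
    """Tag an item with `leaf` plus ancestors reached by a random walk upward."""
    rng = rng or random.Random()
    tags, current = [leaf], leaf
    while PARENTS[current] and rng.random() < p:
        current = rng.choice(PARENTS[current])
        tags.append(current)
    return tags
```

    Under such a model, tags near the leaves of the DAG accumulate frequency from their own items plus walk-throughs from below, which is one way to relate tag frequency to DAG position.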

  15. CiTO, the Citation Typing Ontology.

    PubMed

    Shotton, David

    2010-01-01

    CiTO, the Citation Typing Ontology, is an ontology for describing the nature of reference citations in scientific research articles and other scholarly works, both to other such publications and also to Web information resources, and for publishing these descriptions on the Semantic Web. Citations are described in terms of the factual and rhetorical relationships between citing publication and cited publication, the in-text and global citation frequencies of each cited work, and the nature of the cited work itself, including its publication and peer review status. This paper describes CiTO and illustrates its usefulness both for the annotation of bibliographic reference lists and for the visualization of citation networks. The latest version of CiTO, which this paper describes, is CiTO Version 1.6, published on 19 March 2010. CiTO is written in the Web Ontology Language OWL, uses the namespace http://purl.org/net/cito/, and is available from http://purl.org/net/cito/. This site uses content negotiation to deliver to the user an OWLDoc Web version of the ontology if accessed via a Web browser, or the OWL ontology itself if accessed from an ontology management tool such as Protégé 4 (http://protege.stanford.edu/). Collaborative work is currently under way to harmonize CiTO with other ontologies describing bibliographies and the rhetorical structure of scientific discourse. PMID:20626926

  16. Types of Concepts in Geoscience Ontologies

    NASA Astrophysics Data System (ADS)

    Brodaric, B.

    2006-05-01

    Ontologies are increasingly viewed as a key enabler of scientific research in cyber-infrastructures. They provide a way of digitally representing the meaning of concepts embedded in the theories and models of geoscience, enabling such representations to be compared and contrasted computationally. This facilitates the discovery, integration and communication of digitally accessible geoscience resources, and potentially helps geoscientists attain new knowledge. As ontologies are typically built to closely reflect some aspect or viewpoint of a domain, recognizing significant ontological patterns within the domain should thus lead to more useful and robust ontologies. A key idea then motivating this work is the notion that geoscience concepts possess an ontological pattern that helps not only structure them, but also aids ontology development in disciplines where concepts are similarly abstracted from geospatial regions, such as in ecology, soil science, etc. Proposed is an ontology structure in which six basic concept types are identified, defined, and organized in increasing levels of abstraction, including a level for general concepts (e.g. 'granite') and a level for concepts specific to a geospace-time region (e.g. 'granites of Ireland'). Discussed will be the six concept types, the proposed structure that organizes them, and several examples from geoscience. Also mentioned will be the significant implementation challenges faced but not addressed by the proposed structure. In general, the proposal prioritizes conceptual granularity over its engineering deficits, but this prioritization remains to be tested in serious applications.

  17. Ontology-Based Multiple Choice Question Generation

    PubMed Central

    Al-Yahya, Maha

    2014-01-01

    With recent advancements in Semantic Web technologies, a new trend in MCQ item generation has emerged through the use of ontologies. Ontologies are knowledge representation structures that formally describe entities in a domain and their relationships, thus enabling automated inference and reasoning. Ontology-based MCQ item generation is still in its infancy, but substantial research efforts are being made in the field. However, the applicability of these models for use in an educational setting has not been thoroughly evaluated. In this paper, we present an experimental evaluation of an ontology-based MCQ item generation system known as OntoQue. The evaluation was conducted using two different domain ontologies. The findings of this study show that ontology-based MCQ generation systems produce satisfactory MCQ items to a certain extent. However, the evaluation also revealed a number of shortcomings with current ontology-based MCQ item generation systems with regard to the educational significance of an automatically constructed MCQ item, the knowledge level it addresses, and its language structure. Furthermore, for the task to be successful in producing high-quality MCQ items for learning assessments, this study suggests a novel, holistic view that incorporates learning content, learning objectives, lexical knowledge, and scenarios into a single cohesive framework. PMID:24982937

  18. Ontology-based multiple choice question generation.

    PubMed

    Al-Yahya, Maha

    2014-01-01

    With recent advancements in Semantic Web technologies, a new trend in MCQ item generation has emerged through the use of ontologies. Ontologies are knowledge representation structures that formally describe entities in a domain and their relationships, thus enabling automated inference and reasoning. Ontology-based MCQ item generation is still in its infancy, but substantial research efforts are being made in the field. However, the applicability of these models for use in an educational setting has not been thoroughly evaluated. In this paper, we present an experimental evaluation of an ontology-based MCQ item generation system known as OntoQue. The evaluation was conducted using two different domain ontologies. The findings of this study show that ontology-based MCQ generation systems produce satisfactory MCQ items to a certain extent. However, the evaluation also revealed a number of shortcomings with current ontology-based MCQ item generation systems with regard to the educational significance of an automatically constructed MCQ item, the knowledge level it addresses, and its language structure. Furthermore, for the task to be successful in producing high-quality MCQ items for learning assessments, this study suggests a novel, holistic view that incorporates learning content, learning objectives, lexical knowledge, and scenarios into a single cohesive framework. PMID:24982937
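
    One common ontology-based MCQ strategy, not necessarily OntoQue's own algorithm, derives the key from a class's asserted parent and the distractors from that parent's siblings. The tiny taxonomy below is invented for illustration.

```python
# Sibling-distractor MCQ generation over a toy 'is-a' taxonomy.

TAXONOMY = {                 # class -> parent
    "Lion": "Mammal", "Eagle": "Bird", "Shark": "Fish",
    "Mammal": "Animal", "Bird": "Animal", "Fish": "Animal",
}

def generate_mcq(cls):
    key = TAXONOMY[cls]                       # correct answer: the parent class
    siblings = sorted(c for c, p in TAXONOMY.items()
                      if p == TAXONOMY[key] and c != key)
    return {
        "stem": f"{cls} is a kind of which of the following?",
        "key": key,
        "distractors": siblings,              # plausible because structurally close
    }

q = generate_mcq("Lion")
```

    The study's critique applies directly to such output: the items are well-formed but their educational significance, targeted knowledge level, and language quality still need the learning-objective and lexical layers the paper proposes.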

  19. An improved version of the table look-up algorithm for pattern recognition. [for MSS data processing

    NASA Technical Reports Server (NTRS)

    Eppler, W. G.

    1974-01-01

    The table look-up approach to pattern recognition has been used for 3 years at several research centers in a variety of applications. A new version has been developed which is faster, requires significantly less core memory, and retains full precision of the input data. The new version can be used on low-cost minicomputers having 32K words (16 bits each) of core memory and fixed-point arithmetic; no special-purpose hardware is required. An initial FORTRAN version of this system can classify an ERTS computer-compatible tape into 24 classes in less than 15 minutes.
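
    The core of the table look-up approach can be sketched as follows. The bin counts, band layout, and class table are assumed for illustration, not taken from the paper: band values are quantized into coarse bins, and the bin tuple indexes a table of class labels precomputed offline, so per-pixel classification is a single memory access instead of a per-class computation.

```python
# Table look-up classification for two-band pixels (all parameters invented).

BINS_PER_BAND = 4
BAND_MAX = 255

def bin_index(value):
    """Quantize a 0..255 band value into one of BINS_PER_BAND coarse bins."""
    return min(value * BINS_PER_BAND // (BAND_MAX + 1), BINS_PER_BAND - 1)

# Precomputed table: TABLE[b1_bin][b2_bin] -> class label. In practice each
# cell would be filled once by running the full classifier on the cell centre.
TABLE = [["water", "water", "soil", "soil"],
         ["water", "veg", "veg", "soil"],
         ["veg", "veg", "veg", "urban"],
         ["soil", "soil", "urban", "urban"]]

def classify_pixel(band1, band2):
    return TABLE[bin_index(band1)][bin_index(band2)]
```

    Shifting all the expensive computation into table construction is what makes the method fit the fixed-point minicomputers the abstract mentions.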

  20. A Knowledge Engineering Approach to Develop Domain Ontology

    ERIC Educational Resources Information Center

    Yun, Hongyan; Xu, Jianliang; Xiong, Jing; Wei, Moji

    2011-01-01

    Ontologies are one of the most popular and widespread means of knowledge representation and reuse. A few research groups have proposed a series of methodologies for developing their own standard ontologies. However, because this ontological construction concerns special fields, there is no standard method to build domain ontology. In this paper,…

  1. Nuclear Nonproliferation Ontology Assessment Team Final Report

    SciTech Connect

    Strasburg, Jana D.; Hohimer, Ryan E.

    2012-01-01

    Final Report for the NA22 Simulations, Algorithm and Modeling (SAM) Ontology Assessment Team's efforts from FY09-FY11. The Ontology Assessment Team began in May 2009 and concluded in September 2011. During this two-year time frame, the Ontology Assessment team had two objectives: (1) Assessing the utility of knowledge representation and semantic technologies for addressing nuclear nonproliferation challenges; and (2) Developing ontological support tools that would provide a framework for integrating across the Simulation, Algorithm and Modeling (SAM) program. The SAM Program was going through a large assessment and strategic planning effort during this time and as a result, the relative importance of these two objectives changed, altering the focus of the Ontology Assessment Team. In the end, the team conducted an assessment of the state of art, created an annotated bibliography, and developed a series of ontological support tools, demonstrations and presentations. A total of more than 35 individuals from 12 different research institutions participated in the Ontology Assessment Team. These included subject matter experts in several nuclear nonproliferation-related domains as well as experts in semantic technologies. Despite the diverse backgrounds and perspectives, the Ontology Assessment team functioned very well together and aspects could serve as a model for future inter-laboratory collaborations and working groups. While the team encountered several challenges and learned many lessons along the way, the Ontology Assessment effort was ultimately a success that led to several multi-lab research projects and opened up a new area of scientific exploration within the Office of Nuclear Nonproliferation and Verification.

  2. Toward a patient safety upper level ontology.

    PubMed

    Souvignet, Julien; Rodrigues, Jean-Marie

    2015-01-01

    Patient Safety (PS) standardization is the key to improving interoperability and expanding international sharing of incident reporting system knowledge. By aligning the Patient Safety Categorial Structure (PS-CAST) to the Basic Formal Ontology version 2 (BFO2) upper level ontology, we aim, on the one hand, to bring more rigor to the underlying organization and, on the other, to share instances of the concepts of the categorial structure. This alignment is a major step in the top-down approach to building a complete and standardized domain ontology, providing the basis for a new WHO-accepted information model for Patient Safety. PMID:25991122

  3. Hierarchical Analysis of the Omega Ontology

    SciTech Connect

    Joslyn, Cliff A.; Paulson, Patrick R.

    2009-12-01

    Initial delivery for mathematical analysis of the Omega Ontology. We provide an analysis of the hierarchical structure of a version of the Omega Ontology currently in use within the US Government. After providing an initial statistical analysis of the distribution of all link types in the ontology, we then provide a detailed order-theoretic analysis of each of the four main hierarchical links present. This order-theoretic analysis includes the distribution of components and their properties, their parent/child and multiple inheritance structure, and the distribution of their vertical ranks.

  4. Software Engineering Approaches to Ontology Development

    NASA Astrophysics Data System (ADS)

    Gaševic, Dragan; Djuric, Dragan; Devedžic, Vladan

    Ontologies, as formal representations of domain knowledge, enable knowledge sharing between different knowledge-based applications. Diverse techniques originating from the field of artificial intelligence are aimed at facilitating ontology development. However, these techniques, although well known to AI experts, are typically unknown to a large population of software engineers. In order to overcome the gap between the knowledge of software engineering practitioners and AI techniques, a few proposals have been made suggesting the use of well-known software engineering techniques, such as UML, for ontology development (Cranefield 2001a).

  5. Utilizing a structural meta-ontology for family-based quality assurance of the BioPortal ontologies.

    PubMed

    Ochs, Christopher; He, Zhe; Zheng, Ling; Geller, James; Perl, Yehoshua; Hripcsak, George; Musen, Mark A

    2016-06-01

    An Abstraction Network is a compact summary of an ontology's structure and content. In previous research, we showed that Abstraction Networks support quality assurance (QA) of biomedical ontologies. The development of an Abstraction Network and its associated QA methodologies, however, is a labor-intensive process that previously was applicable only to one ontology at a time. To improve the efficiency of the Abstraction-Network-based QA methodology, we introduced a QA framework that uses uniform Abstraction Network derivation techniques and QA methodologies that are applicable to whole families of structurally similar ontologies. For the family-based framework to be successful, it is necessary to develop a method for classifying ontologies into structurally similar families. We now describe a structural meta-ontology that classifies ontologies according to certain structural features that are commonly used in the modeling of ontologies (e.g., object properties) and that are important for Abstraction Network derivation. Each class of the structural meta-ontology represents a family of ontologies with identical structural features, indicating which types of Abstraction Networks and QA methodologies are potentially applicable to all of the ontologies in the family. We derive a collection of 81 families, corresponding to classes of the structural meta-ontology, that enable a flexible, streamlined family-based QA methodology, offering multiple choices for classifying an ontology. The structure of 373 ontologies from the NCBO BioPortal is analyzed and each ontology is classified into multiple families modeled by the structural meta-ontology. PMID:26988001
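
    The family idea reduces to grouping ontologies by their exact set of structural features. The feature names and ontology names below are invented; the grouping logic is the point.

```python
from collections import defaultdict

# Ontologies with identical structural feature sets fall into the same family,
# so Abstraction-Network-based QA tooling keyed by a feature set applies to
# every member of that family.

FEATURES = {
    "OntoA": {"object_properties", "lateral_relationships"},
    "OntoB": {"object_properties", "lateral_relationships"},
    "OntoC": {"attribute_relationships"},
}

def families(feature_map):
    """Group ontologies into families keyed by their frozen feature set."""
    groups = defaultdict(list)
    for onto, feats in feature_map.items():
        groups[frozenset(feats)].append(onto)
    return {k: sorted(v) for k, v in groups.items()}

fam = families(FEATURES)
```

    An ontology with several feature subsets can match several family keys, which is the "multiple choices for classifying an ontology" flexibility the abstract describes.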

  6. Ontology-Based Annotation of Brain MRI Images

    PubMed Central

    Mechouche, Ammar; Golbreich, Christine; Morandi, Xavier; Gibaud, Bernard

    2008-01-01

    This paper describes a hybrid system for annotating anatomical structures in brain Magnetic Resonance Images. The system involves both numerical knowledge from an atlas and symbolic knowledge represented in a rule-extended ontology, written in standard web languages, and symbolic constraints. The system combines this knowledge with graphical data automatically extracted from the images. The annotations of the parts of sulci and of gyri located in a region of interest selected by the user are obtained with a reasoning based on a Constraint Satisfaction Problem solving combined with Description Logics inference services. The first results obtained with both normal and pathological data are promising. PMID:18998967

  7. Rapid spatial frequency domain inverse problem solutions using look-up tables for real-time processing (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Angelo, Joseph P.; Bigio, Irving J.; Gioux, Sylvain

    2016-03-01

    Imaging technologies working in the spatial frequency domain are becoming increasingly popular for generating wide-field optical property maps, enabling further analysis of tissue parameters such as absorption or scattering. While acquisition methods have advanced rapidly and now operate in real time, processing methods remain slow, preventing information from being acquired and displayed in real time. In this work, we present solutions for rapid inversion of optical properties using advanced look-up tables. In particular, we present methods and results from a dense, linearized look-up table and an analytical representation that currently run 100 times faster than the standard method while remaining within 10% in both absorption and scattering. With the resulting computation time in the tens-of-milliseconds range, the proposed techniques enable video-rate feedback for real-time techniques such as snapshot of optical properties (SSOP) imaging, making full video-rate guidance in the clinic possible.
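
    The look-up-table idea can be sketched as: precompute a forward model of diffuse reflectance at a few spatial frequencies over a grid of (absorption, reduced scattering) values, then invert a measurement by searching the table instead of solving the inverse problem numerically. The forward model below is a toy placeholder, not the actual SFDI diffusion model, and the grid ranges are illustrative.

```python
import math

def forward(mua, musp, fx):
    """Toy stand-in for a diffusion-based forward model: maps optical
    properties (mm^-1) and spatial frequency fx (mm^-1) to a reflectance
    value. The real SFDI model is more involved; this only shows the LUT idea."""
    mu_eff = math.sqrt(3.0 * mua * (mua + musp))
    return musp / (musp + mua) / (1.0 + (2 * math.pi * fx / mu_eff) ** 2)

FREQS = (0.0, 0.2)  # two spatial frequencies, as commonly used in SFDI

# Precompute the table once; at run time inversion is only a search.
TABLE = [
    (mua, musp, tuple(forward(mua, musp, fx) for fx in FREQS))
    for mua in [0.001 * i for i in range(1, 101)]   # absorption grid
    for musp in [0.1 * j for j in range(5, 31)]     # reduced scattering grid
]

def invert(measured):
    """Nearest-neighbour LUT inversion of a measured reflectance pair."""
    best = min(TABLE, key=lambda row: sum((m - r) ** 2
                                          for m, r in zip(measured, row[2])))
    return best[:2]
```

    The dense/linearized tables in the abstract refine this basic scheme (e.g., interpolating between grid points), trading a one-time precomputation cost for millisecond-scale per-pixel inversion.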

  8. Ontology-Driven Disability-Aware E-Learning Personalisation with ONTODAPS

    ERIC Educational Resources Information Center

    Nganji, Julius T.; Brayshaw, Mike; Tompsett, Brian

    2013-01-01

    Purpose: The purpose of this paper is to show how personalisation of learning resources and services can be achieved for students with and without disabilities, particularly responding to the needs of those with multiple disabilities in e-learning systems. The paper aims to introduce ONTODAPS, the Ontology-Driven Disability-Aware Personalised…

  9. Metadata and Ontologies in Learning Resources Design

    NASA Astrophysics Data System (ADS)

    Vidal C., Christian; Segura Navarrete, Alejandra; Menéndez D., Víctor; Zapata Gonzalez, Alfredo; Prieto M., Manuel

    Resource design and development requires knowledge about educational goals, instructional context and learners' characteristics, among other things. Metadata are an important source of this knowledge. However, metadata by themselves do not capture all of the information needed for resource design. Here we argue the need to use different data and knowledge models to improve understanding of the complex processes related to e-learning resources and their management. This paper presents the use of semantic web technologies, such as ontologies, to support the search and selection of resources used in design. Classification is based on instructional criteria derived from a knowledge acquisition process, using information provided by the IEEE-LOM metadata standard. The knowledge obtained is represented in an ontology using OWL and SWRL. In this work we report the implementation of an ontology-based Learning Object Classifier. We demonstrate that the use of ontologies can support design activities in e-learning.
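
    The classifier described above can be caricatured as rules that map IEEE-LOM metadata values to instructional categories. In the paper this logic lives in OWL/SWRL and is evaluated by a reasoner; the plain-Python rules below are only a sketch, and the field names and category labels are illustrative, not the actual ontology.

```python
# Toy rule base standing in for OWL/SWRL rules: each rule maps
# IEEE-LOM metadata values to a hypothetical instructional category.
RULES = [
    (lambda lom: lom.get("interactivityType") == "active"
                 and lom.get("learningResourceType") == "exercise",
     "PracticeResource"),
    (lambda lom: lom.get("learningResourceType") in {"lecture", "narrative text"},
     "ExpositoryResource"),
]

def classify(lom):
    """Return every category whose rule fires on the LOM record."""
    return [category for rule, category in RULES if rule(lom)]

record = {"interactivityType": "active", "learningResourceType": "exercise"}
# classify(record) -> ["PracticeResource"]
```

    Encoding such rules in SWRL over an OWL ontology, as the paper does, additionally lets the reasoner exploit the class hierarchy (e.g., a subclass of "exercise" would fire the same rule) rather than matching literal strings.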

  10. The Gene Ontology: enhancements for 2011.

    PubMed

    2012-01-01

    The Gene Ontology (GO) (http://www.geneontology.org) is a community bioinformatics resource that represents gene product function through the use of structured, controlled vocabularies. The number of GO annotations of gene products has increased due to curation efforts among GO Consortium (GOC) groups, including focused literature-based annotation and ortholog-based functional inference. The GO ontologies continue to expand and improve as a result of targeted ontology development, including the introduction of computable logical definitions and development of new tools for the streamlined addition of terms to the ontology. The GOC continues to support its user community through the use of e-mail lists, social media and web-based resources. PMID:22102568