Sample records for automatic ontology builder

  1. Standardized terminology for clinical trial protocols based on top-level ontological categories.

    PubMed

    Heller, B; Herre, H; Lippoldt, K; Loeffler, M

    2004-01-01

    This paper describes a new method for the ontologically based standardization of concepts with regard to the quality assurance of clinical trial protocols. We developed a data dictionary for medical and trial-specific terms in which concepts and relations are defined context-dependently. The data dictionary is provided to different medical research networks by means of the software tool Onto-Builder via the internet. The data dictionary is based on domain-specific ontologies and the top-level ontology of GOL. The concepts and relations described in the data dictionary are represented in natural language, semi-formally or formally according to their use.

  2. Semi Automatic Ontology Instantiation in the domain of Risk Management

    NASA Astrophysics Data System (ADS)

    Makki, Jawad; Alquier, Anne-Marie; Prince, Violaine

    One of the challenging tasks in Ontological Engineering is to automatically or semi-automatically support the process of Ontology Learning and Ontology Population from semi-structured documents (texts). In this paper we describe a Semi-Automatic Ontology Instantiation method from natural language text in the domain of Risk Management. The method consists of three steps: 1) annotation with part-of-speech tags, 2) semantic relation instance extraction, and 3) ontology instantiation. It is based on combined NLP techniques, with human intervention between steps 2 and 3 for control and validation. Since it relies heavily on linguistic knowledge, it is not domain-dependent, which is a good feature for portability across the different fields of risk management application. The proposed methodology uses the ontology of the PRIMA project (supported by the European Community) as a generic domain ontology and populates it from an available corpus. A first validation of the approach is done through an experiment with Chemical Fact Sheets from the Environmental Protection Agency.
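    The three-step pipeline described above can be sketched in stdlib-only Python (the extraction pattern, relation name, and validation hook are illustrative assumptions, not the authors' implementation):

```python
import re

# Illustrative sketch of the three-step pipeline (all names hypothetical):
# step 2 extracts relation instances via a lexico-syntactic pattern,
# step 3 instantiates the ontology, with a human-validation hook in between.
PATTERN = re.compile(r"(\w[\w ]*?) is caused by (\w[\w ]*)", re.IGNORECASE)

def extract_relations(text):
    """Step 2: pull (subject, relation, object) candidates from text."""
    return [(m.group(2).strip(), "causes", m.group(1).strip())
            for m in PATTERN.finditer(text)]

def instantiate(ontology, triples, validate=lambda t: True):
    """Step 3: add validated triples as instances of ontology relations."""
    for triple in triples:
        if validate(triple):  # control/validation hook between steps 2 and 3
            ontology.setdefault(triple[1], []).append(triple)
    return ontology

text = "Plant shutdown is caused by chemical leakage."
onto = instantiate({}, extract_relations(text))
```

    In the real method, `validate` would be a human expert's decision rather than an automatic predicate.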

  3. FOCIH: Form-Based Ontology Creation and Information Harvesting

    NASA Astrophysics Data System (ADS)

    Tao, Cui; Embley, David W.; Liddle, Stephen W.

    Creating an ontology and populating it with data are both labor-intensive tasks requiring a high degree of expertise. Thus, scaling ontology creation and population to the size of the web in an effort to create a web of data—which some see as Web 3.0—is prohibitive. Can we find ways to streamline these tasks and lower the barrier enough to enable Web 3.0? Toward this end we offer a form-based approach to ontology creation that provides a way to create Web 3.0 ontologies without the need for specialized training. And we offer a way to semi-automatically harvest data from the current web of pages for a Web 3.0 ontology. In addition to harvesting information with respect to an ontology, the approach also annotates web pages and links facts in web pages to ontological concepts, resulting in a web of data superimposed over the web of pages. Experience with our prototype system shows that mappings between conceptual-model-based ontologies and forms are sufficient for creating the kind of ontologies needed for Web 3.0, and experiments with our prototype system show that automatic harvesting, automatic annotation, and automatic superimposition of a web of data over a web of pages work well.

  4. Terminologies for text-mining; an experiment in the lipoprotein metabolism domain

    PubMed Central

    Alexopoulou, Dimitra; Wächter, Thomas; Pickersgill, Laura; Eyre, Cecilia; Schroeder, Michael

    2008-01-01

    Background: The engineering of ontologies, especially with a view to text-mining use, is still a new research field, and a well-defined theory and technology for ontology construction does not yet exist. Many ontology design steps remain manual and are based on personal experience and intuition. However, a few efforts exist on the automatic construction of ontologies in the form of extracted lists of terms and the relations between them.
    Results: We share experience acquired during the manual development of a lipoprotein metabolism ontology (LMO) to be used for text mining. We compare the manually created ontology terms with the terminology automatically derived by four different automatic term recognition (ATR) methods. The top 50 predicted terms contain up to 89% relevant terms; for the top 1,000 terms the best method still generates 51% relevant terms. A corpus of 3,066 documents contains 53% of the LMO terms, and 38% of them can be generated with one of the methods.
    Conclusions: Given their high precision, automatic methods can help decrease development time and provide significant support for the identification of domain-specific vocabulary. The coverage of the domain vocabulary depends strongly on the underlying documents. Ontology development for text mining should be performed semi-automatically, taking ATR results as input and following the guidelines we describe.
    Availability: The TF-IDF term recognition is available as a Web Service, described at PMID:18460175.
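    As a rough illustration of the TF-IDF flavor of automatic term recognition compared in this study, the following stdlib-only sketch ranks candidate terms by their best tf-idf score across a toy corpus (the tokenization and scoring details are assumptions, not the paper's method):

```python
import math
from collections import Counter

def tfidf_terms(docs, top_n=5):
    """Rank candidate terms by the maximum tf-idf they reach in any
    document: a crude stand-in for an ATR method."""
    tokenized = [d.lower().split() for d in docs]
    df = Counter(t for doc in tokenized for t in set(doc))  # document frequency
    n = len(docs)
    scores = Counter()
    for doc in tokenized:
        tf = Counter(doc)
        for term, count in tf.items():
            scores[term] = max(scores[term],
                               count / len(doc) * math.log(n / df[term]))
    return [t for t, _ in scores.most_common(top_n)]

docs = ["ldl receptor binds ldl particles",
        "hdl particles transport cholesterol",
        "the receptor pathway degrades cholesterol"]
top = tfidf_terms(docs)
```

    Terms occurring in every document score zero (idf = 0), which is why corpus choice strongly affects the recovered vocabulary.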

  5. Quantitative Microbial Risk Assessment Tutorial - SDMProjectBuilder: Import Local Data Files to Identify and Modify Contamination Sources and Input ParametersUpdated 2017

    EPA Science Inventory

    Twelve example local data support files are automatically downloaded when the SDMProjectBuilder is installed on a computer. They allow the user to modify values of parameters that impact the release, migration, fate, and transport of microbes within a watershed, and control delin...

  6. Quantitative Microbial Risk Assessment Tutorial – SDMProjectBuilder: Import Local Data Files to Identify and Modify Contamination Sources and Input Parameters

    EPA Science Inventory

    Twelve example local data support files are automatically downloaded when the SDMProjectBuilder is installed on a computer. They allow the user to modify values of parameters that impact the release, migration, fate, and transport of microbes within a watershed, and control delin...

  7. An Ontology of Therapies

    NASA Astrophysics Data System (ADS)

    Eccher, Claudio; Ferro, Antonella; Pisanelli, Domenico M.

    Ontologies are the essential glue for building interoperable systems and the talk of the day in the medical community. In this paper we present the ontology of medical therapies developed in the course of the Oncocure project, which aims at building guideline-based decision support integrated with a legacy Electronic Patient Record (EPR). The therapy ontology is based upon the DOLCE top-level ontology. In our opinion, besides constituting a model that captures the precise meaning of therapy-related concepts, our ontology can serve several practical purposes: interfacing automatic support systems with a legacy EPR, allowing automatic data analysis, and controlling possible medical errors made during EPR data input.

  8. Towards natural language question generation for the validation of ontologies and mappings.

    PubMed

    Ben Abacha, Asma; Dos Reis, Julio Cesar; Mrabet, Yassine; Pruski, Cédric; Da Silveira, Marcos

    2016-08-08

    The increasing number of open-access ontologies and their key role in several applications, such as decision-support systems, highlight the importance of their validation. Human expertise is crucial for the validation of ontologies from a domain point of view. However, the growing number of ontologies and their fast evolution over time make manual validation challenging. We propose a novel semi-automatic approach based on the generation of natural language (NL) questions to support the validation of ontologies and their evolution. The proposed approach includes the automatic generation, factorization and ordering of NL questions from medical ontologies. The final validation and correction is performed by submitting these questions to domain experts and automatically analyzing their feedback. We also propose a second approach for the validation of mappings impacted by ontology changes. The method exploits the context of the changes to propose correction alternatives presented as Multiple Choice Questions. This research provides a question optimization strategy to maximize the validation of ontology entities with a reduced number of questions. We evaluate our approach for the validation of three medical ontologies. We also evaluate the feasibility and efficiency of our mapping validation approach in the context of ontology evolution. These experiments are performed with different versions of SNOMED-CT and ICD9. The obtained experimental results suggest the feasibility and adequacy of our approach to support the validation of interconnected and evolving ontologies. Results also suggest that taking RDFS and OWL entailment into account helps reduce the number of questions and the validation time. The application of our approach to validate mapping evolution also shows the difficulty of adapting mappings over time and highlights the importance of semi-automatic validation.
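    The core idea of turning ontology axioms into natural-language validation questions can be sketched with a template-based stdlib example (the templates, relation names, and axioms below are invented for illustration, not the paper's generation method):

```python
def generate_questions(axioms):
    """Turn subclass/part-of axioms into yes/no questions that a domain
    expert can answer to validate the ontology (template-based sketch)."""
    templates = {
        "subClassOf": "Is every {s} a kind of {o}?",
        "partOf": "Is a {s} part of a {o}?",
    }
    return [templates[rel].format(s=s, o=o)
            for (s, rel, o) in axioms if rel in templates]

axioms = [("myocardial infarction", "subClassOf", "heart disease"),
          ("mitral valve", "partOf", "heart")]
questions = generate_questions(axioms)
```

    A "no" answer flags the corresponding axiom for correction; factorization and ordering of such questions is where the paper's optimization strategy comes in.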

  9. Space Fabrication Demonstration System

    NASA Technical Reports Server (NTRS)

    1978-01-01

    The completion of assembly of the beam builder and its first automatic production of truss are discussed. A four-bay truss of hand-assembled, roll-formed members was built and tested to ultimate load. Detailed design of the fabrication facility (beam builder) was completed, and designs for subsystem debugging are discussed. Many one-bay truss specimens were produced to demonstrate subsystem operation and to detect problem areas.

  10. Information Pre-Processing using Domain Meta-Ontology and Rule Learning System

    NASA Astrophysics Data System (ADS)

    Ranganathan, Girish R.; Biletskiy, Yevgen

    Around the globe, extraordinary amounts of documents are being created by enterprises and by users outside them; the documents created within enterprises constitute the main focus of the present chapter. These documents are used for many kinds of machine processing. When they are used this way, a lack of semantics in the information they contain may cause misinterpretation, thereby inhibiting the productiveness of computer-assisted analytical work. Hence, it would be profitable for enterprises to use well-defined domain ontologies that serve as rich sources of semantics for the information in the documents. These domain ontologies can be created manually, semi-automatically or fully automatically. The process of extracting and capturing domain ontologies from such voluminous documents requires extensive involvement of domain experts and ontology-learning methods that are substantially labor-intensive; intermediate solutions that assist in capturing domain ontologies must therefore be developed. This chapter proposes a solution in this direction: building a meta-ontology that serves as an intermediate information source for the main domain ontology and as a rapid approach to conceptualizing a domain of interest from a huge amount of source documents. This meta-ontology can be populated with ontological concepts, attributes and relations from documents, and then refined into a better domain ontology either through automatic ontology-learning methods or through some other relevant ontology-building approach.

  11. Analysis of Technique to Extract Data from the Web for Improved Performance

    NASA Astrophysics Data System (ADS)

    Gupta, Neena; Singh, Manish

    2010-11-01

    The World Wide Web is rapidly guiding the world into an amazing new electronic world, where everyone can publish anything in electronic form and extract almost any information. Extraction of information from semi-structured or unstructured documents, such as web pages, is a useful yet complex task. Data extraction, which is important for many applications, extracts records from HTML files automatically. Ontologies can achieve a high degree of accuracy in data extraction. We analyze a method for data extraction, OBDE (Ontology-Based Data Extraction), which automatically extracts the query result records from the web with the help of agents. OBDE first constructs an ontology for a domain according to information matching between the query interfaces and query result pages from different web sites within the same domain. Then, the constructed domain ontology is used during data extraction to identify the query result section in a query result page and to align and label the data values in the extracted records. The ontology-assisted data extraction method is fully automatic and overcomes many of the deficiencies of current automatic data extraction methods.

  12. Interoperability between biomedical ontologies through relation expansion, upper-level ontologies and automatic reasoning.

    PubMed

    Hoehndorf, Robert; Dumontier, Michel; Oellrich, Anika; Rebholz-Schuhmann, Dietrich; Schofield, Paul N; Gkoutos, Georgios V

    2011-01-01

    Researchers design ontologies as a means to accurately annotate and integrate experimental data across heterogeneous and disparate data and knowledge bases. Formal ontologies make the semantics of terms and relations explicit such that automated reasoning can be used to verify the consistency of knowledge. However, many biomedical ontologies do not sufficiently formalize the semantics of their relations and are therefore limited with respect to automated reasoning for large-scale data integration and knowledge discovery. We describe a method to improve automated reasoning over biomedical ontologies and identify several thousand contradictory class definitions. Our approach aligns terms in biomedical ontologies with foundational classes in a top-level ontology and formalizes composite relations as class expressions. We describe the semi-automated repair of contradictions and demonstrate expressive queries over interoperable ontologies. Our work forms an important cornerstone for data integration, automatic inference and knowledge discovery based on formal representations of knowledge. Our results and analysis software are available at http://bioonto.de/pmwiki.php/Main/ReasonableOntologies.

  13. Ontorat: automatic generation of new ontology terms, annotations, and axioms based on ontology design patterns.

    PubMed

    Xiang, Zuoshuang; Zheng, Jie; Lin, Yu; He, Yongqun

    2015-01-01

    Building an ontology with many terms and axioms is time-consuming, so automating the process of ontology development is desirable. Ontology Design Patterns (ODPs) provide reusable solutions to recurrent modeling problems in ontology engineering. Because ontology terms often follow specific ODPs, the Ontology for Biomedical Investigations (OBI) developers proposed a Quick Term Templates (QTTs) process for generating new ontology classes that follow the same pattern, using term templates in a spreadsheet format. Inspired by ODPs and QTTs, the Ontorat web application was developed to automatically generate new ontology terms, term annotations, and logical axioms based on a specific ODP. The inputs of an Ontorat execution include axiom expression settings, an input data file, ID generation settings, and an optional target ontology. The axiom expression settings can be saved as a predesigned Ontorat settings text file for reuse. The input data file is generated from a template file created for a specific ODP (text or Excel format). Ontorat is an efficient tool for ontology expansion, and different use cases are described. For example, Ontorat was applied to automatically generate over 1,000 Japan RIKEN cell line cell terms, with both logical axioms and rich annotation axioms, in the Cell Line Ontology (CLO). Approximately 800 licensed animal vaccines were represented and annotated in the Vaccine Ontology (VO) by Ontorat. The OBI team used Ontorat to add assay and device terms required by the ENCODE project, and Ontorat was also used to add missing annotations to all existing Biobank-specific terms in the Biobank Ontology. A collection of ODPs and templates with examples is provided on the Ontorat website and can be reused to facilitate ontology development. With ever-increasing ontology development and applications, Ontorat provides a timely platform for generating and annotating large numbers of ontology terms by following design patterns. http://ontorat.hegroup.org/.
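    The template-expansion idea behind such tools can be illustrated with a small stdlib-only sketch: each spreadsheet row is expanded into axiom text via a design-pattern template (the pattern, column names, and Manchester-like output below are assumptions, not Ontorat's actual formats):

```python
import csv
import io

# Hypothetical design pattern: every row yields a new class with a label
# annotation and one logical axiom, rendered as Manchester-like text.
PATTERN = ('Class: {id}\n'
           '  Annotations: rdfs:label "{label}"\n'
           '  SubClassOf: {parent} and (derives_from some {source})')

def expand(table_text):
    """Expand each CSV row into one axiom block using the pattern."""
    rows = csv.DictReader(io.StringIO(table_text))
    return [PATTERN.format(**row) for row in rows]

table = ("id,label,parent,source\n"
         "CLO_1,HeLa cell line cell,cell_line_cell,cervix\n")
axioms = expand(table)
```

    Scaling the same loop over a thousand-row spreadsheet is what makes pattern-driven term generation so much faster than hand-editing.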

  14. Designing integrated computational biology pipelines visually.

    PubMed

    Jamil, Hasan M

    2013-01-01

    The long-term cost of developing and maintaining a computational pipeline that depends upon data integration and sophisticated workflow logic is too high to even contemplate "what if" or ad hoc queries. In this paper, we introduce a novel application-building interface for computational biology research, called VizBuilder, that leverages BioFlow, a recent query language for life sciences databases. Using VizBuilder, it is now possible to develop ad hoc, complex computational biology applications at throwaway cost. The underlying query language supports data integration and workflow construction almost transparently and fully automatically, using a best-effort approach. Users express an application by drawing it with VizBuilder icons and connecting them in a meaningful way. Completed applications are compiled and translated into BioFlow queries for execution by the data management system LifeDB, for which VizBuilder serves as a front end. We discuss VizBuilder's features and functionality in the context of a real-life application after briefly introducing BioFlow. The architecture and design principles of VizBuilder are also discussed. Finally, we outline future extensions of VizBuilder. To our knowledge, VizBuilder is a unique system that allows visually designing computational biology pipelines involving distributed and heterogeneous resources in an ad hoc manner.
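    The kind of node-and-edge workflow a visual builder compiles can be sketched minimally as follows (an illustrative dataflow model with invented node names, not VizBuilder's or BioFlow's actual design):

```python
# Minimal sketch of a linear node-and-edge pipeline of the kind a visual
# builder could compile into a workflow query (all names hypothetical).
class Node:
    def __init__(self, name, fn):
        self.name, self.fn, self.next = name, fn, None

    def then(self, other):
        """Connect this node's output to another node's input."""
        self.next = other
        return other

def run(start, value):
    """Execute the pipeline from `start`, threading the value through."""
    node = start
    while node:
        value = node.fn(value)
        node = node.next
    return value

fetch = Node("fetch", lambda q: [q, q.upper()])       # simulate a lookup
dedup = Node("dedup", lambda xs: sorted(set(xs)))     # simulate cleanup
fetch.then(dedup)
result = run(fetch, "brca1")
```

    A real system would add branching, distributed execution, and compilation to a query language; the point here is only that drawn icons map naturally to such nodes.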

  15. GeoBuilder: a geometric algorithm visualization and debugging system for 2D and 3D geometric computing.

    PubMed

    Wei, Jyh-Da; Tsai, Ming-Hung; Lee, Gen-Cher; Huang, Jeng-Hung; Lee, Der-Tsai

    2009-01-01

    Algorithm visualization is a unique research topic that integrates engineering skills such as computer graphics, system programming, database management, and computer networking to help algorithm researchers test their ideas, demonstrate new findings, and teach algorithm design in the classroom. Within the broad applications of algorithm visualization, performance issues remain that deserve further research, e.g., system portability, collaboration capability, and animation effects in 3D environments. Using modern Java programming technologies, we developed an algorithm visualization and debugging system, dubbed GeoBuilder, for geometric computing. The GeoBuilder system features Java's portability, support for collaboration in algorithm development, and automatic camera positioning for tracking 3D geometric objects. In this paper, we describe the design of the GeoBuilder system and demonstrate its applications.

  16. Intelligent E-Learning Systems: Automatic Construction of Ontologies

    NASA Astrophysics Data System (ADS)

    Peso, Jesús del; de Arriaga, Fernando

    2008-05-01

    In recent years a new generation of Intelligent E-Learning Systems (ILS) has emerged with enhanced functionality, due mainly to influences from Distributed Artificial Intelligence, the use of cognitive modelling, the extensive use of the Internet, and new educational ideas such as student-centered education and Knowledge Management. The automatic construction of ontologies provides a means of automatically updating the knowledge bases of the respective ILS and of increasing their interoperability and communication when they share the same ontology. The paper presents a new approach, able to produce ontologies from a small number of documents such as those obtained from the Internet, without the assistance of large corpora, using simple syntactic rules and some semantic information. The method is independent of the natural language used. The use of a multi-agent system increases the flexibility and capability of the method. Although the method can easily be improved, the results obtained so far are promising.

  17. Unsupervised Ontology Generation from Unstructured Text. CRESST Report 827

    ERIC Educational Resources Information Center

    Mousavi, Hamid; Kerr, Deirdre; Iseli, Markus R.

    2013-01-01

    Ontologies are a vital component of most knowledge acquisition systems, and recently there has been a huge demand for generating ontologies automatically since manual or supervised techniques are not scalable. In this paper, we introduce "OntoMiner", a rule-based, iterative method to extract and populate ontologies from unstructured or…

  18. Time-related patient data retrieval for the case studies from the pharmacogenomics research network

    PubMed Central

    Zhu, Qian; Tao, Cui; Ding, Ying; Chute, Christopher G.

    2012-01-01

    There are many question-based data elements in the pharmacogenomics research network (PGRN) studies, and many of them contain temporal information. Representing these elements semantically so that they are machine-processable is challenging for the following reasons: (1) the designers of these studies usually have no knowledge of computer modeling or query languages, so the original data elements are usually represented in spreadsheets in human languages; and (2) the time aspects in these data elements can be too complex to be represented faithfully in a machine-understandable way. In this paper, we introduce our efforts to represent these data elements using semantic web technologies. We have developed an ontology, CNTRO, for representing clinical events and their temporal relations in the Web Ontology Language (OWL). Here we use CNTRO to represent the time aspects of the data elements. We have evaluated 720 time-related data elements from PGRN studies. We adapted and extended the knowledge representation requirements of EliXR-TIME to categorize our data elements. A CNTRO-based SPARQL query builder has been developed to customize users' own SPARQL queries for each knowledge representation requirement. The SPARQL query builder has been evaluated with a simulated EHR triple store to ensure its functionality. PMID:23076712
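    The idea of a query builder that composes SPARQL from a time-related data element can be sketched as follows (the cntro: namespace URI and predicate names here are placeholders for illustration, not the real CNTRO vocabulary):

```python
def build_temporal_query(event, relation, anchor):
    """Compose a SPARQL query for a time-related data element such as
    'drug exposure before surgery' (illustrative template only)."""
    return (
        "PREFIX cntro: <http://example.org/cntro#>\n"
        "SELECT ?patient WHERE {\n"
        f'  ?e a cntro:Event ; cntro:label "{event}" ;\n'
        f"     cntro:{relation} ?t0 .\n"
        f'  ?a cntro:label "{anchor}" ; cntro:hasTime ?t0 .\n'
        "  ?e cntro:forPatient ?patient .\n"
        "}"
    )

q = build_temporal_query("drug exposure", "before", "surgery")
```

    In the actual system, each EliXR-TIME knowledge representation requirement would map to its own template of this kind, which users fill in rather than writing SPARQL by hand.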

  19. Time-related patient data retrieval for the case studies from the pharmacogenomics research network.

    PubMed

    Zhu, Qian; Tao, Cui; Ding, Ying; Chute, Christopher G

    2012-11-01

    There are many question-based data elements in the pharmacogenomics research network (PGRN) studies, and many of them contain temporal information. Representing these elements semantically so that they are machine-processable is challenging for the following reasons: (1) the designers of these studies usually have no knowledge of computer modeling or query languages, so the original data elements are usually represented in spreadsheets in human languages; and (2) the time aspects in these data elements can be too complex to be represented faithfully in a machine-understandable way. In this paper, we introduce our efforts to represent these data elements using semantic web technologies. We have developed an ontology, CNTRO, for representing clinical events and their temporal relations in the Web Ontology Language (OWL). Here we use CNTRO to represent the time aspects of the data elements. We have evaluated 720 time-related data elements from PGRN studies. We adapted and extended the knowledge representation requirements of EliXR-TIME to categorize our data elements. A CNTRO-based SPARQL query builder has been developed to customize users' own SPARQL queries for each knowledge representation requirement. The SPARQL query builder has been evaluated with a simulated EHR triple store to ensure its functionality.

  20. Automatic Generation of Tests from Domain and Multimedia Ontologies

    ERIC Educational Resources Information Center

    Papasalouros, Andreas; Kotis, Konstantinos; Kanaris, Konstantinos

    2011-01-01

    The aim of this article is to present an approach for generating tests in an automatic way. Although other methods have been already reported in the literature, the proposed approach is based on ontologies, representing both domain and multimedia knowledge. The article also reports on a prototype implementation of this approach, which…

  1. Owlready: Ontology-oriented programming in Python with automatic classification and high level constructs for biomedical ontologies.

    PubMed

    Lamy, Jean-Baptiste

    2017-07-01

    Ontologies are widely used in the biomedical domain. While many tools exist for the edition, alignment or evaluation of ontologies, few solutions have been proposed as an ontology programming interface, i.e. for accessing and modifying an ontology within a programming language. Existing query languages (such as SPARQL) and APIs (such as OWLAPI) are not as easy to use as object-oriented programming languages. Moreover, they provide few solutions to the difficulties encountered with biomedical ontologies. Our objective was to design a tool for easily accessing the entities of an OWL ontology, with high-level constructs that help with biomedical ontologies. From our experience with medical ontologies, we identified two difficulties: (1) many entities are represented by classes (rather than individuals), but existing tools do not permit manipulating classes as easily as individuals; and (2) ontologies rely on the open-world assumption, whereas medical reasoning must consider only evidence-based medical knowledge as true. We designed a Python module for ontology-oriented programming. It allows access to the entities of an OWL ontology as if they were objects in the programming language. We propose a simple high-level syntax for managing classes and the associated "role-filler" constraints. We also propose an algorithm for performing local closed-world reasoning in simple situations. We developed Owlready, a Python module for high-level access to OWL ontologies. The paper describes the architecture and syntax of version 2 of the module, and details how we integrated the OWL ontology model with the Python object model. The paper provides examples based on the Gene Ontology (GO). We also demonstrate the interest of Owlready in a use case focused on the automatic comparison of the contraindications of several drugs. This use case illustrates the specific syntax proposed for manipulating classes and for performing local closed-world reasoning.
    Owlready has been successfully used in a medical research project. It has been published as open-source software and has since been used by many other researchers. Future developments will focus on support for vagueness, additional non-monotonic reasoning features, and automatic dialog box generation.
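    The local closed-world reasoning mentioned above can be illustrated with a dependency-free sketch: within a property declared "closed", anything not asserted is taken as false, while other properties stay unknown (the facts and property names are invented, and this is a caricature of the concept, not Owlready's algorithm):

```python
# Toy fact base and a three-valued lookup: True / False / None (unknown).
FACTS = {("aspirin", "contraindicated_with", "gastric_ulcer")}
CLOSED = {"contraindicated_with"}  # properties reasoned under closed world

def holds(s, p, o):
    """Return True if asserted, False if p is locally closed and the
    fact is absent, None (unknown) under the open-world default."""
    if (s, p, o) in FACTS:
        return True
    return False if p in CLOSED else None

r1 = holds("aspirin", "contraindicated_with", "gastric_ulcer")  # asserted
r2 = holds("aspirin", "contraindicated_with", "asthma")         # closed, absent
r3 = holds("aspirin", "treats", "headache")                     # open, absent
```

    This distinction is exactly what lets evidence-based medical reasoning conclude "no known contraindication" without abandoning OWL's open-world semantics everywhere else.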

  2. MiDas: Automatic Extraction of a Common Domain of Discourse in Sleep Medicine for Multi-center Data Integration

    PubMed Central

    Sahoo, Satya S.; Ogbuji, Chimezie; Luo, Lingyun; Dong, Xiao; Cui, Licong; Redline, Susan S.; Zhang, Guo-Qiang

    2011-01-01

    Clinical studies often use data dictionaries with controlled sets of terms to facilitate data collection, limited interoperability and sharing at a local site. Multi-center retrospective clinical studies require that these data dictionaries, originating from individual participating centers, be harmonized in preparation for the integration of the corresponding clinical research data. Domain ontologies are often used to facilitate multi-center data integration by modeling terms from data dictionaries in a logic-based language, but interoperability among domain ontologies (using automated techniques) is an unresolved issue. Although many upper-level reference ontologies have been proposed to address this challenge, our experience in integrating multi-center sleep medicine data highlights the need for an upper-level ontology that models a common set of terms at multiple levels of abstraction, which is not covered by the existing upper-level ontologies. We introduce a methodology underpinned by a Minimal Domain of Discourse (MiDas) algorithm to automatically extract a minimal common domain of discourse (upper-domain ontology) from an existing domain ontology. Using the Multi-Modality, Multi-Resource Environment for Physiological and Clinical Research (Physio-MIMI) multi-center project in sleep medicine as a use case, we demonstrate the use of MiDas in extracting a minimal domain of discourse for sleep medicine from Physio-MIMI's Sleep Domain Ontology (SDO). We then extend the resulting domain of discourse with terms from the data dictionary of the Sleep Heart and Health Study (SHHS) to validate MiDas. To illustrate the wider applicability of MiDas, we automatically extract the respective domains of discourse from 6 sample domain ontologies from the National Center for Biomedical Ontologies (NCBO) and the OBO Foundry. PMID:22195180

  3. MiDas: automatic extraction of a common domain of discourse in sleep medicine for multi-center data integration.

    PubMed

    Sahoo, Satya S; Ogbuji, Chimezie; Luo, Lingyun; Dong, Xiao; Cui, Licong; Redline, Susan S; Zhang, Guo-Qiang

    2011-01-01

    Clinical studies often use data dictionaries with controlled sets of terms to facilitate data collection, limited interoperability and sharing at a local site. Multi-center retrospective clinical studies require that these data dictionaries, originating from individual participating centers, be harmonized in preparation for the integration of the corresponding clinical research data. Domain ontologies are often used to facilitate multi-center data integration by modeling terms from data dictionaries in a logic-based language, but interoperability among domain ontologies (using automated techniques) is an unresolved issue. Although many upper-level reference ontologies have been proposed to address this challenge, our experience in integrating multi-center sleep medicine data highlights the need for an upper-level ontology that models a common set of terms at multiple levels of abstraction, which is not covered by the existing upper-level ontologies. We introduce a methodology underpinned by a Minimal Domain of Discourse (MiDas) algorithm to automatically extract a minimal common domain of discourse (upper-domain ontology) from an existing domain ontology. Using the Multi-Modality, Multi-Resource Environment for Physiological and Clinical Research (Physio-MIMI) multi-center project in sleep medicine as a use case, we demonstrate the use of MiDas in extracting a minimal domain of discourse for sleep medicine from Physio-MIMI's Sleep Domain Ontology (SDO). We then extend the resulting domain of discourse with terms from the data dictionary of the Sleep Heart and Health Study (SHHS) to validate MiDas. To illustrate the wider applicability of MiDas, we automatically extract the respective domains of discourse from 6 sample domain ontologies from the National Center for Biomedical Ontologies (NCBO) and the OBO Foundry.
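    The notion of extracting an upper domain of discourse can be caricatured with a small stdlib sketch that keeps only terms within a fixed is-a depth of the ontology's roots (a drastic simplification for illustration, not the actual MiDas algorithm):

```python
from collections import defaultdict

def upper_levels(edges, depth=2):
    """Given (child, parent) is-a edges, keep the roots plus every term
    reachable within `depth` steps below them: a crude 'upper-domain'
    slice of the ontology."""
    children = defaultdict(list)
    for child, parent in edges:
        children[parent].append(child)
    parents = {p for _, p in edges}
    roots = parents - {c for c, _ in edges}   # parents that are never children
    keep, frontier = set(roots), set(roots)
    for _ in range(depth):
        frontier = {c for p in frontier for c in children[p]}
        keep |= frontier
    return keep

edges = [("sleep apnea", "sleep disorder"),
         ("obstructive sleep apnea", "sleep apnea"),
         ("sleep disorder", "disorder")]
upper = upper_levels(edges, depth=1)
```

    The real algorithm must decide which abstraction levels are "common" across centers rather than cutting at a fixed depth, but the slicing intuition is the same.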

  4. Development of a beam builder for automatic fabrication of large composite space structures

    NASA Technical Reports Server (NTRS)

    Bodle, J. G.

    1979-01-01

    The composite material beam builder, which will produce triangular beams from pre-consolidated graphite/glass/thermoplastic composite material through automated mechanical processes, is presented. The processes of side member storage, feed and positioning, ultrasonic welding, and beam cutoff are performed. Each process lends itself to modular subsystem development. Initial development is concentrated on the key processes of roll forming and ultrasonic welding of composite thermoplastic materials. The construction and test of an experimental roll forming machine and ultrasonic welding process control techniques are described.

  5. Building a semi-automatic ontology learning and construction system for geosciences

    NASA Astrophysics Data System (ADS)

    Babaie, H. A.; Sunderraman, R.; Zhu, Y.

    2013-12-01

    We are developing an ontology learning and construction framework that allows continuous, semi-automatic knowledge extraction, verification, validation, and maintenance by a potentially very large group of collaborating domain experts in any geosciences field. The system brings geoscientists from the sidelines to the center stage of ontology building, allowing them to collaboratively construct and enrich new ontologies, and to merge, align, and integrate existing ontologies and tools. These constantly evolving ontologies can more effectively address the community's interests, purposes, tools, and change. The goal is to minimize the cost and time of building ontologies, and to maximize the quality, usability, and adoption of ontologies by the community. Our system will be a domain-independent ontology learning framework that applies natural language processing, allowing users to enter their ontology in a semi-structured form, and a combined Semantic Web and Social Web approach that enables the direct participation of geoscientists who have no skill in the design and development of their domain ontologies. A controlled natural language (CNL) interface and an integrated authoring and editing tool automatically convert syntactically correct CNL text into formal OWL constructs. The WebProtege-based system will allow a potentially large group of geoscientists, from multiple domains, to crowd-source and participate in the structuring of their knowledge model by sharing their knowledge through critiquing, testing, verifying, adopting, and updating of the concept models (ontologies). We will use cloud storage for all data and knowledge base components of the system, such as users, domain ontologies, discussion forums, and semantic wikis that can be accessed and queried by geoscientists in each domain. We will use NoSQL databases such as MongoDB as a service in the cloud environment.
MongoDB uses the lightweight JSON format, which makes it convenient and easy to build Web applications using just HTML5 and JavaScript, thereby avoiding the cumbersome server-side coding present in traditional approaches. The JSON format used in MongoDB is also suitable for storing and querying RDF data. We will store the domain ontologies and associated linked data in JSON/RDF formats. Our Web interface will be built upon the open source and configurable WebProtege ontology editor. We will develop a simplified mobile version of our user interface that will automatically detect the hosting device and adjust the user interface layout to accommodate different screen sizes. We will also use Semantic MediaWiki, which allows the user to store and query the data within the wiki pages. By using HTML5, JavaScript, and WebGL, we aim to create an interactive, dynamic, and multi-dimensional user interface that presents various geosciences data sets in a natural and intuitive way.
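
    The JSON/RDF storage idea above can be sketched in a few lines: each RDF triple is held as a small JSON document (as a document store such as MongoDB would keep it), and a query is a partial document matched against the stored triples. The field names and geoscience triples below are illustrative, not the authors' schema.

```python
import json

# Triples as JSON documents, queried by partial-document match.
store = []

def insert_triple(subject, predicate, obj):
    # Round-trip through JSON to mimic document-store serialization.
    store.append(json.loads(json.dumps({"s": subject, "p": predicate, "o": obj})))

def find(**pattern):
    """Return every stored triple matching all given fields."""
    return [doc for doc in store
            if all(doc.get(k) == v for k, v in pattern.items())]

insert_triple("ex:Basalt", "rdf:type", "ex:IgneousRock")
insert_triple("ex:Basalt", "ex:texture", "ex:Aphanitic")
insert_triple("ex:Granite", "rdf:type", "ex:IgneousRock")

igneous = find(p="rdf:type", o="ex:IgneousRock")
```

    The same pattern-matching query shape is what a MongoDB `find({"p": "rdf:type", ...})` call would express over such documents.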

  6. Automatic definition of the oncologic EHR data elements from NCIT in OWL.

    PubMed

    Cuggia, Marc; Bourdé, Annabel; Turlin, Bruno; Vincendeau, Sebastien; Bertaud, Valerie; Bohec, Catherine; Duvauferrier, Régis

    2011-01-01

    Semantic interoperability based on ontologies allows systems to combine their information and process it automatically. The ability to extract meaningful fragments from an ontology is key to ontology re-use, and the construction of a subset helps to structure clinical data entries. The aim of this work is to provide a method for extracting a set of concepts for a specific domain, in order to help define the data elements of an oncologic EHR. A generic extraction algorithm was developed to extract, from the NCIT and for a specific disease (i.e. prostate neoplasm), all the concepts of interest into a sub-ontology. We compared all the extracted concepts to the manually encoded concepts contained in the multi-disciplinary meeting report form (MDMRF). We extracted two sub-ontologies: sub-ontology 1 by using a single key concept and sub-ontology 2 by using 5 additional keywords. The coverage of the MDMRF concepts by sub-ontology 2 was 51%. The low coverage rate is due to the lack of definition or misclassification of the NCIT concepts. By providing a subset of concepts focused on a particular domain, this extraction method helps to optimize the binding process of data elements and to maintain and enrich a domain ontology.
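
    The extraction step can be sketched as a graph traversal: starting from key concepts, walk the ontology's relation graph breadth-first and collect every reachable concept into the sub-ontology. The toy concept graph below is hypothetical and is not an actual NCIT fragment.

```python
from collections import deque

# Illustrative relation graph: concept -> related concepts of interest.
RELATIONS = {
    "ProstateNeoplasm": ["ProstateCarcinoma", "GleasonScore"],
    "ProstateCarcinoma": ["ProstateCarcinomaStage"],
    "BreastNeoplasm": ["BreastCarcinoma"],
}

def extract_subontology(seeds):
    """Collect every concept reachable from the seed key concepts."""
    seen, queue = set(seeds), deque(seeds)
    while queue:
        for related in RELATIONS.get(queue.popleft(), []):
            if related not in seen:
                seen.add(related)
                queue.append(related)
    return seen

sub = extract_subontology(["ProstateNeoplasm"])
```

    Adding further seed keywords (the paper's sub-ontology 2) simply widens the starting set of the same traversal.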

  7. A Semi-Automatic Approach to Construct Vietnamese Ontology from Online Text

    ERIC Educational Resources Information Center

    Nguyen, Bao-An; Yang, Don-Lin

    2012-01-01

    An ontology is an effective formal representation of knowledge used commonly in artificial intelligence, semantic web, software engineering, and information retrieval. In open and distance learning, ontologies are used as knowledge bases for e-learning supplements, educational recommenders, and question answering systems that support students with…

  8. Recognizing lexical and semantic change patterns in evolving life science ontologies to inform mapping adaptation.

    PubMed

    Dos Reis, Julio Cesar; Dinh, Duy; Da Silveira, Marcos; Pruski, Cédric; Reynaud-Delaître, Chantal

    2015-03-01

    Mappings established between life science ontologies require significant effort to keep up to date, due to the size and frequent evolution of these ontologies. Consequently, automatic methods for applying modifications to mappings are in high demand. The accuracy of such methods relies on the available description of the evolution of the ontologies, especially regarding the concepts involved in mappings. However, from one ontology version to another, a deeper understanding of the ontology changes relevant to supporting mapping adaptation is typically lacking. This research work defines a set of change patterns at the level of concept attributes, and proposes original methods to automatically recognize instances of these patterns based on the similarity between the attributes denoting the evolving concepts. This investigation evaluates the benefits of the proposed methods and the influence of the recognized change patterns on the selection of strategies for mapping adaptation. The summary of the findings is as follows: (1) the Precision (>60%) and Recall (>35%) achieved by comparing manually identified change patterns with the automatic ones; (2) a set of potential impacts of recognized change patterns on the way mappings are adapted. We found that the detected correlations cover ∼66% of the mapping adaptation actions with a positive impact; and (3) the influence of the similarity coefficient calculated between concept attributes on the performance of the recognition algorithms. The experimental evaluations conducted with real life science ontologies showed the effectiveness of our approach in accurately characterizing ontology evolution at the level of concept attributes. This investigation confirmed the relevance of the proposed change patterns for supporting decisions on mapping adaptation. Copyright © 2014 Elsevier B.V. All rights reserved.
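
    The attribute-similarity idea can be illustrated with a small, hedged sketch: compare a concept attribute across two ontology versions and classify the change by a similarity coefficient. The thresholds and pattern names below are invented for illustration; they are not the authors' definitions.

```python
from difflib import SequenceMatcher

def classify_change(old_attr, new_attr, high=0.8, low=0.3):
    """Classify an attribute change by lexical similarity (illustrative)."""
    sim = SequenceMatcher(None, old_attr, new_attr).ratio()
    if sim == 1.0:
        return "unchanged"
    if sim >= high:
        return "revision"      # small lexical edit to the attribute
    if sim >= low:
        return "extension"     # partial overlap with the old attribute
    return "replacement"       # essentially a new attribute

kind = classify_change("myocardial infarct", "myocardial infarction")
```

    A mapping-adaptation strategy could then branch on the recognized pattern, e.g. keeping a mapping for a "revision" but reviewing it for a "replacement".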

  9. Space fabrication demonstration system, technical volume

    NASA Technical Reports Server (NTRS)

    1979-01-01

    The automatic beam builder (ABB) was developed, fabricated, and demonstrated within the established contract cost and schedule constraints. The ABB demonstrated the feasibility of producing lightweight beams automatically at the required rate of 1 to 5 ft of completed beam per minute, and of producing structurally sound beams with an axial design load of 5538 N, based on the Grumman photovoltaic satellite solar power system design reference structure.

  10. An Ontology to Support the Classification of Learning Material in an Organizational Learning Environment: An Evaluation

    ERIC Educational Resources Information Center

    Valaski, Joselaine; Reinehr, Sheila; Malucelli, Andreia

    2017-01-01

    Purpose: The purpose of this research was to evaluate whether ontology integrated in an organizational learning environment may support the automatic learning material classification in a specific knowledge area. Design/methodology/approach: An ontology for recommending learning material was integrated in the organizational learning environment…

  11. Adaptive Semantic and Social Web-based learning and assessment environment for the STEM

    NASA Astrophysics Data System (ADS)

    Babaie, Hassan; Atchison, Chris; Sunderraman, Rajshekhar

    2014-05-01

    We are building a cloud- and Semantic Web-based personalized, adaptive learning environment for the STEM fields that integrates and leverages Social Web technologies to allow instructors and authors of learning material to collaborate in the semi-automatic development and update of their common domain and task ontologies and in building their learning resources. The semi-automatic ontology learning and development minimize issues related to the design and maintenance of domain ontologies by knowledge engineers who do not have any knowledge of the domain. The social web component of the personal adaptive system will allow individual and group learners to interact with each other, discuss their own learning experience and understanding of course material, and resolve issues related to their class assignments. The adaptive system will be capable of representing key knowledge concepts in different ways and at different difficulty levels based on learners' differences, leading to different understandings of the same STEM content by different learners. It will adapt specific pedagogical strategies to individual learners based on their characteristics, cognition, and preferences; allow authors to assemble remotely accessed learning material into courses; and provide facilities for instructors to assess (in real time) students' perception of course material, monitor their progress in the learning process, and generate timely feedback based on their understanding or misconceptions. The system applies a set of ontologies that structure the learning process, with multiple user-friendly Web interfaces.
These include the learning ontology (models learning objects, educational resources, and learning goal); context ontology (supports adaptive strategy by detecting student situation), domain ontology (structures concepts and context), learner ontology (models student profile, preferences, and behavior), task ontologies, technological ontology (defines devices and places that surround the student), pedagogy ontology, and learner ontology (defines time constraint, comment, profile).

  12. An ontology-driven tool for structured data acquisition using Web forms.

    PubMed

    Gonçalves, Rafael S; Tu, Samson W; Nyulas, Csongor I; Tierney, Michael J; Musen, Mark A

    2017-08-01

    Structured data acquisition is a common task that is widely performed in biomedicine. However, current solutions for this task are far from providing a means to structure data in such a way that it can be automatically employed in decision making (e.g., in our example application domain of clinical functional assessment, for determining eligibility for disability benefits) based on conclusions derived from acquired data (e.g., assessment of impaired motor function). To use data in these settings, we need it structured in a way that can be exploited by automated reasoning systems, for instance, in the Web Ontology Language (OWL), the de facto ontology language for the Web. We tackle the problem of generating Web-based assessment forms from OWL ontologies, and aggregating input gathered through these forms as an ontology of "semantically-enriched" form data that can be queried using an RDF query language, such as SPARQL. We developed an ontology-based structured data acquisition system, which we present through its specific application to the clinical functional assessment domain. We found that data gathered through our system is highly amenable to automatic analysis using queries. We demonstrated how ontologies can be used to help structure Web-based forms and to semantically enrich the data elements of the acquired structured data. The ontologies associated with the enriched data elements enable automated inferences and provide a rich vocabulary for performing queries.
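
    The form-generation step can be sketched as a mapping from class properties to form fields: each property becomes a field whose widget is chosen from the property's declared range. The clinical property names and widget mapping below are hypothetical, not the authors' actual ontology or system.

```python
# Illustrative range-to-widget mapping (hypothetical).
RANGE_TO_WIDGET = {"string": "text", "integer": "number", "enum": "select"}

def build_form(class_def):
    """Turn an ontology-like class description into form field specs."""
    fields = []
    for prop in class_def["properties"]:
        field = {"name": prop["name"],
                 "widget": RANGE_TO_WIDGET[prop["range"]]}
        if prop["range"] == "enum":
            field["options"] = prop["values"]   # enumerated choices
        fields.append(field)
    return fields

motor_assessment = {
    "class": "MotorFunctionAssessment",
    "properties": [
        {"name": "subjectId", "range": "string"},
        {"name": "gripStrengthKg", "range": "integer"},
        {"name": "impairmentLevel", "range": "enum",
         "values": ["none", "mild", "moderate", "severe"]},
    ],
}

form = build_form(motor_assessment)
```

    Submitted values can then be stored alongside the property names they instantiate, which is what makes the resulting data queryable against the ontology's vocabulary.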

  13. Ontology-based automatic identification of public health-related Turkish tweets.

    PubMed

    Küçük, Emine Ela; Yapar, Kürşad; Küçük, Dilek; Küçük, Doğan

    2017-04-01

    Social media analysis, such as the analysis of tweets, is a promising research topic for tracking public health concerns, including epidemics. In this paper, we present an ontology-based approach to automatically identify public health-related Turkish tweets. The system is based on a public health ontology that we have constructed through a semi-automated procedure. The ontology concepts are expanded through a linguistically motivated relaxation scheme as the last stage of ontology development, before being integrated into our system to increase its coverage. The resulting lexical resource, which includes the terms corresponding to the ontology concepts, is used to filter the Twitter stream so that a plausible tweet subset, consisting mostly of public health-related tweets, can be obtained. Experiments are carried out on two million genuine tweets and promising precision rates are obtained. A Web-based interface for tracking the results of this identification system, intended for use by the related public health staff, was also implemented in the course of the current study. Hence, the current social media analysis study makes both technical and practical contributions to the significant domain of public health. Copyright © 2017 Elsevier Ltd. All rights reserved.
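
    The filtering step reduces to a lexicon membership test: a tweet is kept when it contains any term from the lexical resource derived from the ontology concepts. The Turkish terms and sample tweets below are illustrative; they are not the study's actual lexicon or data.

```python
# Illustrative lexicon derived from ontology concepts (hypothetical).
LEXICON = {"grip", "salgın", "aşı", "hastane"}  # flu, epidemic, vaccine, hospital

def is_health_related(tweet):
    """Keep a tweet if any lexicon term appears as a token."""
    return any(term in tweet.lower().split() for term in LEXICON)

tweets = [
    "bu yıl grip salgını erken başladı",   # health-related
    "maç harikaydı, skor 3-1",             # sports, should be filtered out
]
kept = [t for t in tweets if is_health_related(t)]
```

    A real system for an agglutinative language like Turkish would match morphological variants (e.g. "salgını") rather than exact tokens, which is what the paper's relaxation scheme addresses.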

  14. Integration of low level and ontology derived features for automatic weapon recognition and identification

    NASA Astrophysics Data System (ADS)

    Sirakov, Nikolay M.; Suh, Sang; Attardo, Salvatore

    2011-06-01

    This paper presents a further step in research toward the development of a quick and accurate weapon identification methodology and system. A basic stage of this methodology is the automatic acquisition and updating of a weapons ontology as a source for deriving high-level weapons information. The present paper outlines the main ideas used to approach this goal. In the next stage, a clustering approach is suggested on the basis of a hierarchy of concepts. An inherent slot of every node of the proposed ontology is a low-level feature vector (LLFV), which facilitates the search through the ontology. Part of the LLFV is information about the object's parts. To partition an object, a new approach is presented that is capable of defining the object's concavities, used to mark the end points of weapon parts, which are considered convexities. Further, an existing matching approach is optimized to determine whether an ontological object matches the objects from an input image. Objects from derived ontological clusters are considered for the matching process. Image resizing is studied and applied to decrease the runtime of the matching approach and to investigate its rotational and scaling invariance. A set of experiments is performed to validate the theoretical concepts.

  15. An ontology for Autism Spectrum Disorder (ASD) to infer ASD phenotypes from Autism Diagnostic Interview-Revised data.

    PubMed

    Mugzach, Omri; Peleg, Mor; Bagley, Steven C; Guter, Stephen J; Cook, Edwin H; Altman, Russ B

    2015-08-01

    Our goal is to create an ontology that will allow data integration and reasoning with subject data to classify subjects and, based on this classification, to infer new knowledge on Autism Spectrum Disorder (ASD) and related neurodevelopmental disorders (NDD). We take a first step toward this goal by extending an existing autism ontology to allow automatic inference of ASD phenotypes and Diagnostic and Statistical Manual of Mental Disorders (DSM) criteria based on subjects' Autism Diagnostic Interview-Revised (ADI-R) assessment data. Knowledge regarding diagnostic instruments, ASD phenotypes and risk factors was added to augment an existing autism ontology via Web Ontology Language (OWL) class definitions and semantic web rules. We developed a custom Protégé plugin for enumerating combinatorial OWL axioms to support the many-to-many relations of ADI-R items to diagnostic categories in the DSM. We utilized a reasoner to infer whether 2642 subjects, whose data was obtained from the Simons Foundation Autism Research Initiative, meet DSM-IV-TR (DSM-IV) and DSM-5 diagnostic criteria based on their ADI-R data. We extended the ontology by adding 443 classes and 632 rules that represent phenotypes, along with their synonyms, environmental risk factors, and frequency of comorbidities. Applying the rules to the data set showed that the method produced accurate results: the true positive and true negative rates for inferring autistic disorder diagnosis according to DSM-IV criteria were 1 and 0.065, respectively; the true positive rate for inferring ASD based on DSM-5 criteria was 0.94. The ontology allows automatic inference of subjects' disease phenotypes and diagnoses with high accuracy. The ontology may benefit future studies by serving as a knowledge base for ASD. In addition, by adding knowledge of related NDDs, commonalities and differences in manifestations and risk factors could be automatically inferred, contributing to the understanding of ASD pathophysiology.
Copyright © 2015 Elsevier Inc. All rights reserved.
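
    The many-to-many rule idea above can be sketched procedurally: each DSM-style criterion is met when enough of its mapped ADI-R-style items score above a cutoff, and a diagnosis requires all criteria. The item codes, cutoffs, and criteria below are invented for illustration only; the paper encodes the real mappings as OWL axioms and rules.

```python
# Hypothetical criterion definitions: items mapped many-to-many to criteria.
CRITERIA = {
    "social_interaction": {"items": ["A1", "A2", "A3"], "min_met": 2},
    "communication":      {"items": ["B1", "B2"],       "min_met": 1},
}
CUTOFF = 2  # item scores of 2 or 3 count as "met" (illustrative)

def criterion_met(scores, spec):
    met = sum(1 for item in spec["items"] if scores.get(item, 0) >= CUTOFF)
    return met >= spec["min_met"]

def meets_diagnosis(scores):
    """All criteria must be satisfied for the diagnosis to be inferred."""
    return all(criterion_met(scores, spec) for spec in CRITERIA.values())

subject = {"A1": 3, "A2": 2, "A3": 0, "B1": 2, "B2": 1}
diagnosed = meets_diagnosis(subject)
```

    An OWL reasoner performs the equivalent check declaratively: a subject individual is classified into the diagnosis class when its asserted item scores satisfy the class's defined conditions.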

  16. Ontology driven modeling for the knowledge of genetic susceptibility to disease.

    PubMed

    Lin, Yu; Sakamoto, Norihiro

    2009-05-12

    To enable machine-assisted exploration of the relationships between genetic factors and complex diseases, a well-structured conceptual framework of the background knowledge is needed. However, because of the complexity of determining a genetic susceptibility factor, there is no formalization of the knowledge of genetic susceptibility to disease, which makes interoperability between systems impossible. Thus, the ontology modeling language OWL was used for formalization in this paper. After introducing the Semantic Web and the OWL language promoted by the W3C, we applied text mining technology combined with competency questions to specify the classes of the ontology. Then, an N-ary pattern was adopted to describe the relationships among these defined classes. Based on the former work on OGSF-DM (Ontology of Genetic Susceptibility Factors to Diabetes Mellitus), we formalized the definitions of "Genetic Susceptibility", "Genetic Susceptibility Factor" and other classes by using the OWL-DL modeling language, and a reasoner automatically performed the classification of the class "Genetic Susceptibility Factor". Ontology-driven modeling is used to formalize the knowledge of genetic susceptibility to complex diseases. More importantly, when a class has been completely formalized in an ontology, OWL reasoning can automatically compute the classification of the class, in our case, the class of "Genetic Susceptibility Factor". As more types of genetic susceptibility factors are obtained from laboratory research, our ontologies will always need to be refined, and many new classes must be taken into account to harmonize with the ontologies. Applying these ontologies to Semantic Web development remains future work.
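
    The defined-class idea behind the OWL reasoning can be sketched as a necessary-and-sufficient condition check: a candidate is classified as a genetic susceptibility factor when it satisfies the class's defining conditions. The conditions and example entity below are invented for illustration; they are not the OGSF-DM axioms.

```python
# Hypothetical necessary-and-sufficient conditions for the defined class.
def is_susceptibility_factor(entity):
    return (entity.get("kind") == "genetic_variant"
            and entity.get("associated_disease") is not None
            and entity.get("increases_risk", False))

# Illustrative candidate (attribute values are examples, not claims).
candidate = {"kind": "genetic_variant",
             "associated_disease": "type 2 diabetes",
             "increases_risk": True}
classified = is_susceptibility_factor(candidate)
```

    A DL reasoner does the same classification declaratively and globally: every individual (or subclass) satisfying the class definition is inferred to belong to it, without per-entity procedural code.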

  17. Ion Channel ElectroPhysiology Ontology (ICEPO) - a case study of text mining assisted ontology development.

    PubMed

    Elayavilli, Ravikumar Komandur; Liu, Hongfang

    2016-01-01

    Computational modeling of biological cascades is of great interest to quantitative biologists. Biomedical text has been a rich source of quantitative information. Gathering quantitative parameters and values from biomedical text is a significant challenge in the early steps of computational modeling, as it involves huge manual effort. While automatically extracting such quantitative information from biomedical text may offer some relief, the lack of an ontological representation for a subdomain is an impediment to normalizing textual extractions to a standard representation. This may render textual extractions less meaningful to domain experts. In this work, we propose a rule-based approach to automatically extract relations involving quantitative data from biomedical text describing ion channel electrophysiology. We further translated the quantitative assertions extracted through text mining into a formal representation that may help in constructing an ontology for ion channel events, using a rule-based approach. We have developed the Ion Channel ElectroPhysiology Ontology (ICEPO) by integrating the information represented in closely related ontologies, such as the Cell Physiology Ontology (CPO) and the Cardiac Electro Physiology Ontology (CPEO), and the knowledge provided by domain experts. The rule-based system achieved an overall F-measure of 68.93% in extracting quantitative data assertions on an independently annotated blind data set. We further made an initial attempt at formalizing the quantitative data assertions extracted from the biomedical text into a formal representation that offers the potential to facilitate the integration of text mining into an ontological workflow, a novel aspect of this study. This work is a case study in which we created a platform that provides formal interaction between ontology development and text mining.
We have achieved partial success in extracting quantitative assertions from the biomedical text and formalizing them in ontological framework. The ICEPO ontology is available for download at http://openbionlp.org/mutd/supplementarydata/ICEPO/ICEPO.owl.
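
    The rule-based extraction step can be illustrated with a single pattern that pulls quantitative assertions (parameter, value, unit) from electrophysiology-style sentences. The sentence and the regular expression below are simplified examples, not the authors' actual rule set.

```python
import re

# Simplified rule: parameter name, linking verb/preposition, value, unit.
PATTERN = re.compile(
    r"(?P<param>half-activation voltage|time constant)"
    r"\s+(?:of|was|is)\s+"
    r"(?P<value>-?\d+(?:\.\d+)?)\s*(?P<unit>mV|ms)")

def extract_assertions(text):
    """Return each quantitative assertion as a param/value/unit dict."""
    return [m.groupdict() for m in PATTERN.finditer(text)]

sentence = ("The channel showed a half-activation voltage of -35.5 mV "
            "and a time constant of 12 ms at room temperature.")
assertions = extract_assertions(sentence)
```

    Normalizing each extracted `param` and `unit` against ontology concepts is the formalization step the paper describes, turning free-text matches into ontology-anchored assertions.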

  18. Automated software system for checking the structure and format of ACM SIG documents

    NASA Astrophysics Data System (ADS)

    Mirza, Arsalan Rahman; Sah, Melike

    2017-04-01

    Microsoft (MS) Office Word is one of the most commonly used software tools for creating documents. MS Word 2007 and above uses XML to represent the structure of MS Word documents. Metadata about the documents is automatically created using Office Open XML (OOXML) syntax. We develop a new framework, called ADFCS (Automated Document Format Checking System), that takes advantage of the OOXML metadata in order to extract semantic information from MS Office Word documents. In particular, we develop a new ontology for Association for Computing Machinery (ACM) Special Interest Group (SIG) documents, representing the structure and format of these documents by using OWL (Web Ontology Language). Then, the metadata is extracted automatically in RDF (Resource Description Framework) according to this ontology using the developed software. Finally, we generate extensive rules in order to infer whether the documents are formatted according to ACM SIG standards. This paper introduces the ACM SIG ontology, the metadata extraction process, the inference engine, the ADFCS online user interface, the system evaluation, and user study evaluations.
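
    The format-checking idea rests on reading the OOXML body and comparing each paragraph's style against the expected template style. The XML fragment and the expected style name ("ACMTitle") below are illustrative, not the actual ACM SIG template.

```python
import xml.etree.ElementTree as ET

# WordprocessingML namespace used by OOXML documents.
W = "http://schemas.openxmlformats.org/wordprocessingml/2006/main"

# Tiny illustrative document body (a real .docx stores this in word/document.xml).
DOC = f"""<w:document xmlns:w="{W}"><w:body>
  <w:p><w:pPr><w:pStyle w:val="ACMTitle"/></w:pPr></w:p>
  <w:p><w:pPr><w:pStyle w:val="Normal"/></w:pPr></w:p>
</w:body></w:document>"""

def paragraph_styles(xml_text):
    """List the paragraph style names in document order."""
    root = ET.fromstring(xml_text)
    return [s.get(f"{{{W}}}val") for s in root.iter(f"{{{W}}}pStyle")]

styles = paragraph_styles(DOC)
title_ok = styles[0] == "ACMTitle"
```

    In the full pipeline, such extracted style facts would be emitted as RDF and checked by inference rules rather than by direct comparisons.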

  19. MedSynDiKATe--design considerations for an ontology-based medical text understanding system.

    PubMed Central

    Hahn, U.; Romacker, M.; Schulz, S.

    2000-01-01

    MedSynDiKATe is a natural language processor for automatically acquiring knowledge from medical finding reports. The content of these documents is transferred to formal representation structures which constitute a corresponding text knowledge base. The general system architecture we present integrates requirements from the analysis of single sentences, as well as those of referentially linked sentences forming cohesive texts. The strong demands MedSynDiKATe poses to the availability of expressive knowledge sources are accounted for by two alternative approaches to (semi)automatic ontology engineering. PMID:11079899

  20. Feature-opinion pair identification of product reviews in Chinese: a domain ontology modeling method

    NASA Astrophysics Data System (ADS)

    Yin, Pei; Wang, Hongwei; Guo, Kaiqiang

    2013-03-01

    With the emergence of the new economy based on social media, a great amount of consumer feedback on particular products is conveyed through wide-spreading online product reviews, making opinion mining a growing interest for both academia and industry. Based on the characteristic modes of expression in Chinese, this research proposes an ontology-based linguistic model to identify the basic appraisal expression in Chinese product reviews: the "feature-opinion pair (FOP)." The product-oriented domain ontology is first constructed automatically; then algorithms to identify FOPs are designed by mapping product features and opinions to the conceptual space of the domain ontology; and finally comparative experiments are conducted to evaluate the model. Experimental results indicate that the proposed approach efficiently obtains more accurate results than state-of-the-art algorithms. Furthermore, through identifying and analyzing FOPs, unstructured product reviews are converted into a structured and machine-sensible expression, which provides valuable information for business applications. This paper contributes to related research in opinion mining by developing a solid foundation for further sentiment analysis at a fine-grained level and proposing a general way to perform automatic ontology construction.
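
    The pairing step can be sketched simply: words are mapped to the domain ontology's feature and opinion concepts, and a pair is formed when an opinion word follows a feature word. The lexicons and the review (rendered in English for readability) are invented samples, not the paper's Chinese resources.

```python
# Hypothetical mappings from words to ontology concepts / polarities.
FEATURES = {"screen": "Display", "battery": "Battery"}
OPINIONS = {"sharp": "+", "dim": "-", "lasting": "+", "weak": "-"}

def extract_fops(tokens):
    """Pair each feature concept with the next opinion word seen."""
    pairs, current = [], None
    for tok in tokens:
        if tok in FEATURES:
            current = FEATURES[tok]
        elif tok in OPINIONS and current is not None:
            pairs.append((current, tok, OPINIONS[tok]))
            current = None
    return pairs

review = "the screen is sharp but the battery is weak".split()
fops = extract_fops(review)
```

    Each resulting tuple (feature concept, opinion word, polarity) is the structured, machine-sensible expression the paper feeds into fine-grained sentiment analysis.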

  1. Multi-label literature classification based on the Gene Ontology graph.

    PubMed

    Jin, Bo; Muller, Brian; Zhai, Chengxiang; Lu, Xinghua

    2008-12-08

    The Gene Ontology is a controlled vocabulary for representing knowledge related to genes and proteins in a computable form. The current effort of manually annotating proteins with the Gene Ontology is outpaced by the rate of accumulation of biomedical knowledge in literature, which urges the development of text mining approaches to facilitate the process by automatically extracting the Gene Ontology annotation from literature. The task is usually cast as a text classification problem, and contemporary methods are confronted with unbalanced training data and the difficulties associated with multi-label classification. In this research, we investigated the methods of enhancing automatic multi-label classification of biomedical literature by utilizing the structure of the Gene Ontology graph. We have studied three graph-based multi-label classification algorithms, including a novel stochastic algorithm and two top-down hierarchical classification methods for multi-label literature classification. We systematically evaluated and compared these graph-based classification algorithms to a conventional flat multi-label algorithm. The results indicate that, through utilizing the information from the structure of the Gene Ontology graph, the graph-based multi-label classification methods can significantly improve predictions of the Gene Ontology terms implied by the analyzed text. Furthermore, the graph-based multi-label classifiers are capable of suggesting Gene Ontology annotations (to curators) that are closely related to the true annotations even if they fail to predict the true ones directly. A software package implementing the studied algorithms is available for the research community. Through utilizing the information from the structure of the Gene Ontology graph, the graph-based multi-label classification methods have better potential than the conventional flat multi-label classification approach to facilitate protein annotation based on the literature.
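
    One concrete way the Gene Ontology graph structure helps multi-label classification is ancestor propagation (the "true path" rule): predicting a specific term implicitly predicts its more general parents. The tiny DAG below is illustrative and does not use real GO identifiers; it is a sketch of the principle, not the paper's algorithms.

```python
# Illustrative term -> parents map (a fragment of a GO-like DAG).
PARENTS = {
    "kinase_activity": ["catalytic_activity"],
    "catalytic_activity": ["molecular_function"],
    "molecular_function": [],
}

def propagate(terms):
    """Close a predicted term set upward over the ontology graph."""
    closed = set(terms)
    stack = list(terms)
    while stack:
        for parent in PARENTS.get(stack.pop(), []):
            if parent not in closed:
                closed.add(parent)
                stack.append(parent)
    return closed

predicted = propagate({"kinase_activity"})
```

    This closure is also why graph-based classifiers can suggest annotations "closely related to the true ones": a miss at a specific node may still be a hit at its parent.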

  2. Ontology construction and application in practice case study of health tourism in Thailand.

    PubMed

    Chantrapornchai, Chantana; Choksuchat, Chidchanok

    2016-01-01

    Ontology is one of the key components of semantic webs. It contains the core knowledge needed for an effective search. However, building an ontology requires carefully collected knowledge that is very domain-sensitive. In this work, we present the practice of ontology construction for a case study of health tourism in Thailand. The whole process follows the METHONTOLOGY approach, which consists of the following phases: information gathering, corpus study, ontology engineering, evaluation, publishing, and application construction. Different sources of data, such as structured web documents (HTML) and other documents, are acquired in the information gathering process. Tourism corpora from various tourism texts and standards are explored. The ontology is evaluated in two respects: automatic reasoning, using Pellet and RacerPro, and questionnaires, used for evaluation by experts in the relevant domains: tourism domain experts and ontology experts. The ontology's usability is demonstrated via a semantic web application and via example axioms. The developed ontology is the first health tourism ontology in Thailand with a published application.

  3. Business Solutions Case Study: Marketing Zero Energy Homes: Tommy Williams Homes, Gainesville, Florida

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    None

    Building America research has shown that high-performance homes can potentially give builders an edge in the marketplace and can boost sales, but it doesn't happen automatically. It requires a tailored, easy-to-understand marketing campaign, and sometimes a little flair. This case study highlights the successful marketing approach of Tommy Williams Homes, which devotes resources to advertising, targeted social media outlets and blogs, realtor education seminars, and groundbreaking and open house celebrations. As a result, in one community, 2013 property sales records show that TWH outsells the only other builder in the development at a higher price, with fewer days on the market.

  4. Managing changes in distributed biomedical ontologies using hierarchical distributed graph transformation.

    PubMed

    Shaban-Nejad, Arash; Haarslev, Volker

    2015-01-01

    The issue of ontology evolution and change management is inadequately addressed by available tools and algorithms, mostly due to the lack of suitable knowledge representation formalisms to deal with temporal abstract notations and the overreliance on human factors. Also most of the current approaches have been focused on changes within the internal structure of ontologies and interactions with other existing ontologies have been widely neglected. In our research, after revealing and classifying some of the common alterations in a number of popular biomedical ontologies, we present a novel agent-based framework, Represent, Legitimate and Reproduce (RLR), to semi-automatically manage the evolution of bio-ontologies, with emphasis on the FungalWeb Ontology, with minimal human intervention. RLR assists and guides ontology engineers through the change management process in general and aids in tracking and representing the changes, particularly through the use of category theory and hierarchical graph transformation.

  5. OpenTox predictive toxicology framework: toxicological ontology and semantic media wiki-based OpenToxipedia.

    PubMed

    Tcheremenskaia, Olga; Benigni, Romualdo; Nikolova, Ivelina; Jeliazkova, Nina; Escher, Sylvia E; Batke, Monika; Baier, Thomas; Poroikov, Vladimir; Lagunin, Alexey; Rautenberg, Micha; Hardy, Barry

    2012-04-24

    The OpenTox Framework, developed by the partners in the OpenTox project (http://www.opentox.org), aims at providing unified access to toxicity data, predictive models and validation procedures. Interoperability of resources is achieved using a common information model, based on the OpenTox ontologies, describing predictive algorithms, models and toxicity data. As toxicological data may come from different, heterogeneous sources, a deployed ontology, unifying the terminology and the resources, is critical for the rational and reliable organization of the data and its automatic processing. The following related ontologies have been developed for OpenTox: a) Toxicological ontology - listing the toxicological endpoints; b) Organs system and Effects ontology - addressing organs, targets/examinations and effects observed in in vivo studies; c) ToxML ontology - representing a semi-automatic conversion of the ToxML schema; d) OpenTox ontology - representing OpenTox framework components: chemical compounds, datasets, types of algorithms, models and validation web services; e) ToxLink-ToxCast assays ontology; and f) OpenToxipedia, a community knowledge resource on toxicology terminology. OpenTox components are made available through standardized REST web services, where every compound, data set, and predictive method has a unique resolvable address (URI), used to retrieve its Resource Description Framework (RDF) representation or to initiate the associated calculations and generate new RDF-based resources. The services support the integration of toxicity and chemical data from various sources, the generation and validation of computer models for toxic effects, and seamless integration of new algorithms and scientifically sound validation routines, and they provide a flexible framework that allows building an arbitrary number of applications tailored to solving different problems for end users (e.g. toxicologists).
The OpenTox toxicological ontology projects may be accessed via the OpenTox ontology development page http://www.opentox.org/dev/ontology; the OpenTox ontology is available as OWL at http://opentox.org/api/1 1/opentox.owl; the ToxML - OWL conversion utility is an open source resource available at http://ambit.svn.sourceforge.net/viewvc/ambit/branches/toxml-utils/
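    The resolvable-URI access pattern described above can be sketched as follows; the base URL, resource kind, and identifier below are placeholders for illustration, not actual OpenTox endpoints:

```python
# Every resource gets a resolvable address, and its RDF representation is
# requested via HTTP content negotiation. Base URL and IDs are illustrative.
from urllib.parse import urljoin

RDF_MIME = "application/rdf+xml"  # MIME type for an RDF/XML representation

def resource_uri(base, kind, identifier):
    """Compose a resolvable address for a resource of a given kind."""
    return urljoin(base, f"{kind}/{identifier}")

def rdf_request(uri):
    """Describe the HTTP request that would fetch the RDF view of a resource."""
    return {"method": "GET", "url": uri, "headers": {"Accept": RDF_MIME}}

req = rdf_request(resource_uri("http://example.org/", "compound", "42"))
print(req["url"])  # http://example.org/compound/42
```

Dereferencing such a URI with the `Accept` header set would return the RDF description of the compound, which is the mechanism that lets the services chain data retrieval, model building and validation together.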

  6. Ontology-based automatic generation of computerized cognitive exercises.

    PubMed

    Leonardi, Giorgio; Panzarasa, Silvia; Quaglini, Silvana

    2011-01-01

    Computer-based approaches can add great value to traditional paper-based approaches to cognitive rehabilitation. Managing a large pool of stimuli and exploiting multimedia features increases the patient's involvement and allows stimuli to be reused and recombined to create new exercises, whose difficulty level should be adapted to the patient's performance. This work proposes an ontological organization of the stimuli to support the automatic generation of new exercises, tailored to the patient's preferences and skills, and its integration into a commercial cognitive rehabilitation tool. The possibilities offered by this approach are illustrated with real examples.

  7. OpenTox predictive toxicology framework: toxicological ontology and semantic media wiki-based OpenToxipedia

    PubMed Central

    2012-01-01

    Background The OpenTox Framework, developed by the partners in the OpenTox project (http://www.opentox.org), aims at providing unified access to toxicity data, predictive models and validation procedures. Interoperability of resources is achieved using a common information model, based on the OpenTox ontologies, describing predictive algorithms, models and toxicity data. As toxicological data may come from different, heterogeneous sources, a deployed ontology, unifying the terminology and the resources, is critical for the rational and reliable organization of the data and its automatic processing. Results The following related ontologies have been developed for OpenTox: a) Toxicological ontology – listing the toxicological endpoints; b) Organs system and Effects ontology – addressing organs, targets/examinations and effects observed in in vivo studies; c) ToxML ontology – representing a semi-automatic conversion of the ToxML schema; d) OpenTox ontology – representing OpenTox framework components: chemical compounds, datasets, types of algorithms, models and validation web services; e) ToxLink–ToxCast assays ontology; and f) OpenToxipedia, a community knowledge resource on toxicology terminology. OpenTox components are made available through standardized REST web services, where every compound, data set, and predictive method has a unique resolvable address (URI), used to retrieve its Resource Description Framework (RDF) representation or to initiate the associated calculations and generate new RDF-based resources. The services support the integration of toxicity and chemical data from various sources, the generation and validation of computer models for toxic effects, and seamless integration of new algorithms and scientifically sound validation routines, and they provide a flexible framework that allows building an arbitrary number of applications tailored to solving different problems for end users (e.g. toxicologists).
Availability The OpenTox toxicological ontology projects may be accessed via the OpenTox ontology development page http://www.opentox.org/dev/ontology; the OpenTox ontology is available as OWL at http://opentox.org/api/1 1/opentox.owl; the ToxML - OWL conversion utility is an open source resource available at http://ambit.svn.sourceforge.net/viewvc/ambit/branches/toxml-utils/ PMID:22541598

  8. Medication Reconciliation: Work Domain Ontology, prototype development, and a predictive model.

    PubMed

    Markowitz, Eliz; Bernstam, Elmer V; Herskovic, Jorge; Zhang, Jiajie; Shneiderman, Ben; Plaisant, Catherine; Johnson, Todd R

    2011-01-01

    Medication errors can result from administration inaccuracies at any point of care and are a major cause for concern. To develop a successful Medication Reconciliation (MR) tool, we believe it necessary to build a Work Domain Ontology (WDO) for the MR process. A WDO defines an explicit, abstract, implementation-independent description of the task by separating the task from work context, application technology, and cognitive architecture. We developed a prototype based upon the WDO and designed to adhere to standard principles of interface design. The prototype was compared to Legacy Health System's and Pre-Admission Medication List Builder MR tools via a Keystroke-Level Model analysis for three MR tasks. The analysis found that the prototype requires the fewest mental operations, completes tasks in the fewest steps, and completes tasks in the least amount of time. Accordingly, we believe that developing an MR tool based upon the WDO and user interface guidelines improves user efficiency and reduces cognitive load.
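    The kind of Keystroke-Level Model comparison described above can be sketched with the standard operator-duration estimates of Card, Moran, and Newell; the operator sequences for the two tools below are invented for illustration and are not taken from the study:

```python
# Sketch of a Keystroke-Level Model (KLM) task-time comparison.
# Operator durations are the standard Card/Moran/Newell estimates;
# the operator sequences per tool are hypothetical examples.
KLM_SECONDS = {
    "K": 0.2,   # keystroke (average skilled typist)
    "P": 1.1,   # point at a target with the mouse
    "H": 0.4,   # home hands between keyboard and mouse
    "M": 1.35,  # mental preparation
    "B": 0.1,   # press or release a mouse button
}

def klm_time(operators):
    """Total predicted task time (seconds) for a KLM operator sequence."""
    return sum(KLM_SECONDS[op] for op in operators)

def mental_ops(operators):
    """Count of mental-preparation operators, one proxy for cognitive load."""
    return operators.count("M")

# Hypothetical operator sequences for one reconciliation step:
prototype = "MPBB"        # think, point, click
legacy = "MPBBHMKKKK"     # think, click, re-home, think again, type
print(f"{klm_time(prototype):.2f}s vs {klm_time(legacy):.2f}s")  # 2.65s vs 5.20s
```

Summing operator durations and counting M operators is exactly how KLM yields the "fewest mental operations, fewest steps, least time" comparison reported above.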

  9. Medication Reconciliation: Work Domain Ontology, Prototype Development, and a Predictive Model

    PubMed Central

    Markowitz, Eliz; Bernstam, Elmer V.; Herskovic, Jorge; Zhang, Jiajie; Shneiderman, Ben; Plaisant, Catherine; Johnson, Todd R.

    2011-01-01

    Medication errors can result from administration inaccuracies at any point of care and are a major cause for concern. To develop a successful Medication Reconciliation (MR) tool, we believe it necessary to build a Work Domain Ontology (WDO) for the MR process. A WDO defines an explicit, abstract, implementation-independent description of the task by separating the task from work context, application technology, and cognitive architecture. We developed a prototype based upon the WDO and designed to adhere to standard principles of interface design. The prototype was compared to Legacy Health System's and Pre-Admission Medication List Builder MR tools via a Keystroke-Level Model analysis for three MR tasks. The analysis found that the prototype requires the fewest mental operations, completes tasks in the fewest steps, and completes tasks in the least amount of time. Accordingly, we believe that developing an MR tool based upon the WDO and user interface guidelines improves user efficiency and reduces cognitive load. PMID:22195146

  10. An ontology-based personalization of health-care knowledge to support clinical decisions for chronically ill patients.

    PubMed

    Riaño, David; Real, Francis; López-Vallverdú, Joan Albert; Campana, Fabio; Ercolani, Sara; Mecocci, Patrizia; Annicchiarico, Roberta; Caltagirone, Carlo

    2012-06-01

    Chronically ill patients are complex health care cases that require the coordinated interaction of multiple professionals. Correct intervention for these patients entails accurate analysis of the conditions of each concrete patient and the adaptation of evidence-based standard intervention plans to those conditions. Other clinical circumstances, such as wrong diagnoses, unobserved comorbidities, missing information, unobserved related diseases, or missed prevention, can only be detected through the deductive capacities of the professionals involved. In this paper, we introduce an ontology for the care of chronically ill patients and implement two personalization processes and a decision support tool. The first personalization process adapts the contents of the ontology to the particularities observed in the health-care record of a given concrete patient, automatically providing a personalized ontology containing only the clinical information that is relevant for health-care professionals to manage that patient. The second personalization process uses the personalized ontology of a patient to automatically transform intervention plans describing general health-care treatments into individual intervention plans. For comorbid patients, this process concludes with the semi-automatic integration of several individual plans into a single personalized plan. Finally, the ontology is also used as the knowledge base of a decision support tool that helps health-care professionals to detect anomalous circumstances such as wrong diagnoses, unobserved comorbidities, missing information, unobserved related diseases, or missed preventive actions. Seven health-care centers participating in the K4CARE project, together with the group SAGESA and the Local Health System in the town of Pollenza, have served as the validation platform for these two processes and the tool.
Health-care professionals participating in the evaluation rated the tools' average quality at 84% (5.9/7.0) and their utility at 90% (6.3/7.0), and confirmed the correct reasoning of the decision support tool according to clinical standards. Copyright © 2012 Elsevier Inc. All rights reserved.

  11. ASON: An OWL-S based ontology for astrophysical services

    NASA Astrophysics Data System (ADS)

    Louge, T.; Karray, M. H.; Archimède, B.; Knödlseder, J.

    2018-07-01

    Modern astrophysics heavily relies on Web services to expose most of the data coming from many different instruments and research efforts worldwide. The virtual observatory (VO) has been designed to allow scientists to locate, retrieve and analyze useful information among these heterogeneous data. The use of ontologies has been studied in the VO context for astrophysical concerns such as object types or the subjects of astrophysical services. From an operational point of view, however, the ontological description of astrophysical services for interoperability and querying still has to be considered. In this paper, we design a global ontology (Astrophysical Services ONtology, ASON) based on the Web Ontology Language for Services (OWL-S) to enhance the description of existing astrophysical services. By expressing VO-specific and non-VO-specific service designs together, it improves the automation of service queries and allows automatic composition of heterogeneous astrophysical services.

  12. Text Mining to Support Gene Ontology Curation and Vice Versa.

    PubMed

    Ruch, Patrick

    2017-01-01

    In this chapter, we explain how text mining can support the curation of molecular biology databases dealing with protein functions. We also show how curated data can play a disruptive role in the development of text mining methods. We review a decade of efforts to improve the automatic assignment of Gene Ontology (GO) descriptors, the reference ontology for the characterization of genes and gene products. To illustrate the high potential of this approach, we compare the performances of an automatic text categorizer and show a large improvement of +225% in both precision and recall on benchmarked data. We argue that automatic text categorization functions can ultimately be embedded into a Question-Answering (QA) system to answer questions related to protein functions. Because GO descriptors can be relatively long and specific, traditional QA systems cannot answer such questions. A new type of QA system, so-called Deep QA, which uses machine learning methods trained with curated contents, is thus emerging. Finally, future advances in text mining instruments are directly dependent on the availability of high-quality annotated contents at every curation step. Database workflows must start recording explicitly all the data they curate and ideally also some of the data they do not curate.

  13. Systems definition study for shuttle demonstration flights of large space structures. Volume 1: Executive summary

    NASA Technical Reports Server (NTRS)

    1979-01-01

    The development of large space structure technology is discussed, with emphasis on space fabricated structures which are automatically manufactured in space from sheet-strip materials and assembled on-orbit. Definition of a flight demonstration involving an Automated Beam Builder and the building and assembling of large structures is presented.

  14. Automatic Match between Delimitation Line and Real Terrain Based on Least-Cost Path Analysis

    NASA Astrophysics Data System (ADS)

    Feng, C. Q.; Jiang, N.; Zhang, X. N.; Ma, J.

    2013-11-01

    Nowadays, during international negotiations on separating disputed areas, only manual adjustment is applied to match the delimitation line to the real terrain, which not only consumes considerable time and labor but also cannot ensure high precision. Accordingly, this paper explores the automatic match between them and studies a general solution based on Least-Cost Path Analysis. First, under the guidelines of delimitation laws, a cost layer is derived through special processing of the delimitation line and terrain feature lines. Second, a new delimitation line is constructed with the help of Least-Cost Path Analysis. Third, the whole automatic match model is built via Module Builder so that it can be shared and reused. Finally, the result of the automatic match is analyzed from several aspects, including delimitation laws and two-sided benefits. Consequently, we conclude that the automatic match method is feasible and effective.
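    The core of Least-Cost Path Analysis can be sketched as Dijkstra's algorithm over a raster cost surface; the toy grid and the 4-connected accumulation rule below are illustrative choices, not the paper's exact GIS workflow:

```python
# Dijkstra over a raster cost surface (4-connected cells): the accumulated
# cost of a path is the sum of the costs of the cells it enters, including
# the start cell. The cost values here are an invented toy surface.
import heapq

def least_cost_path(cost, start, goal):
    """Return the minimum accumulated cost from start to goal, or None."""
    rows, cols = len(cost), len(cost[0])
    dist = {start: cost[start[0]][start[1]]}
    heap = [(dist[start], start)]
    while heap:
        d, (r, c) = heapq.heappop(heap)
        if (r, c) == goal:
            return d
        if d > dist.get((r, c), float("inf")):
            continue  # stale queue entry
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + cost[nr][nc]
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    heapq.heappush(heap, (nd, (nr, nc)))
    return None

# Toy cost surface: cheap cells trace the terrain feature to follow.
surface = [
    [1, 9, 9],
    [1, 1, 9],
    [9, 1, 1],
]
print(least_cost_path(surface, (0, 0), (2, 2)))  # 5
```

In the GIS setting, the cost layer derived from the delimitation line and terrain feature lines plays the role of `surface`, and the least-cost path becomes the candidate delimitation line.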

  15. Creating an ontology driven rules base for an expert system for medical diagnosis.

    PubMed

    Bertaud Gounot, Valérie; Donfack, Valéry; Lasbleiz, Jérémy; Bourde, Annabel; Duvauferrier, Régis

    2011-01-01

    Expert systems of the 1980s failed because of the difficulty of maintaining large rule bases. The current work proposes a method to build and maintain rule bases grounded in ontologies (such as the NCIT). The process described here, for an expert system on plasma cell disorders, encompasses the extraction of a sub-ontology and the automatic, comprehensive generation of production rules. The creation of rules is based not directly on classes but on individuals (instances). Instances can be considered prototypes of diseases, formally defined by restrictions in the ontology. Thus, it is possible to use this process to make diagnoses of diseases. As a perspective, the process described with an ontology formalized in OWL 1 can be extended by using an ontology in OWL 2, allowing reasoning about numerical data in addition to symbolic data.

  16. Logic-based assessment of the compatibility of UMLS ontology sources

    PubMed Central

    2011-01-01

    Background The UMLS Metathesaurus (UMLS-Meta) is currently the most comprehensive effort for integrating independently-developed medical thesauri and ontologies. UMLS-Meta is being used in many applications, including PubMed and ClinicalTrials.gov. The integration of new sources combines automatic techniques, expert assessment, and auditing protocols. The automatic techniques currently in use, however, are mostly based on lexical algorithms and often disregard the semantics of the sources being integrated. Results In this paper, we argue that UMLS-Meta’s current design and auditing methodologies could be significantly enhanced by taking into account the logic-based semantics of the ontology sources. We provide empirical evidence suggesting that UMLS-Meta in its 2009AA version contains a significant number of errors; these errors become immediately apparent if the rich semantics of the ontology sources is taken into account, manifesting themselves as unintended logical consequences that follow from the ontology sources together with the information in UMLS-Meta. We then propose general principles and specific logic-based techniques to effectively detect and repair such errors. Conclusions Our results suggest that the methodologies employed in the design of UMLS-Meta are not only very costly in terms of human effort, but also error-prone. The techniques presented here can be useful for both reducing human effort in the design and maintenance of UMLS-Meta and improving the quality of its contents. PMID:21388571

  17. Design of Automatic Extraction Algorithm of Knowledge Points for MOOCs

    PubMed Central

    Chen, Haijian; Han, Dongmei; Zhao, Lina

    2015-01-01

    In recent years, Massive Open Online Courses (MOOCs) have become very popular among college students and have a powerful impact on academic institutions. In the MOOCs environment, knowledge discovery and knowledge sharing are very important, and currently they are often achieved with ontology techniques. In building an ontology, automatic extraction technology is crucial. Because general text mining algorithms are not clearly effective on online course material, we designed the automatic extraction of course knowledge points (AECKP) algorithm for online courses. It includes document classification, Chinese word segmentation, and POS tagging for each document. The Vector Space Model (VSM) is used to calculate similarity, and a designed weight optimizes the TF-IDF output values; the highest-scoring terms are selected as knowledge points. Course documents for “C programming language” were selected for the experiment in this study. The results show that the proposed approach achieves satisfactory accuracy and recall rates. PMID:26448738
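    The baseline TF-IDF scoring that AECKP builds upon can be sketched as follows; the tokenized course fragments are invented, and the paper's similarity-based weight optimization is deliberately omitted:

```python
# Plain TF-IDF over tokenized documents: a term scores highly in a document
# when it is frequent there but rare across the collection. AECKP adds a
# designed weight on top of these values; this shows only the baseline.
import math
from collections import Counter

def tf_idf(docs):
    """Return one {term: score} dict per document."""
    n = len(docs)
    df = Counter()                      # document frequency per term
    for doc in docs:
        df.update(set(doc))
    scores = []
    for doc in docs:
        tf = Counter(doc)
        total = len(doc)
        scores.append({t: (tf[t] / total) * math.log(n / df[t]) for t in tf})
    return scores

# Invented tokenized fragments of "C programming language" course notes:
docs = [
    ["pointer", "array", "pointer", "loop"],
    ["loop", "function", "array"],
    ["function", "recursion", "function"],
]
s = tf_idf(docs)
top = max(s[0], key=s[0].get)
print(top)  # pointer
```

"pointer" wins in the first document because it is frequent there and absent elsewhere; terms like "array" and "loop" that appear in several documents are down-weighted by the IDF factor.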

  18. Semantic SenseLab: implementing the vision of the Semantic Web in neuroscience

    PubMed Central

    Samwald, Matthias; Chen, Huajun; Ruttenberg, Alan; Lim, Ernest; Marenco, Luis; Miller, Perry; Shepherd, Gordon; Cheung, Kei-Hoi

    2011-01-01

    Summary Objective Integrative neuroscience research needs a scalable informatics framework that enables semantic integration of diverse types of neuroscience data. This paper describes the use of the Web Ontology Language (OWL) and other Semantic Web technologies for the representation and integration of molecular-level data provided by several of the SenseLab suite of neuroscience databases. Methods Based on the original database structure, we semi-automatically translated the databases into OWL ontologies with manual addition of semantic enrichment. The SenseLab ontologies are extensively linked to other biomedical Semantic Web resources, including the Subcellular Anatomy Ontology, the Brain Architecture Management System, the Gene Ontology, BIRNLex and UniProt. The SenseLab ontologies have also been mapped to the Basic Formal Ontology and the Relation Ontology, which helps ease interoperability with many other existing and future biomedical ontologies for the Semantic Web. In addition, approaches to representing contradictory research statements are described. The SenseLab ontologies are designed for use on the Semantic Web, which enables their integration into a growing collection of biomedical information resources. Conclusion We demonstrate that our approach can yield significant potential benefits and that the Semantic Web is rapidly becoming mature enough to realize its anticipated promises. The ontologies are available online at http://neuroweb.med.yale.edu/senselab/ PMID:20006477

  19. The Proteasix Ontology.

    PubMed

    Arguello Casteleiro, Mercedes; Klein, Julie; Stevens, Robert

    2016-06-04

    The Proteasix Ontology (PxO) is an ontology that supports Proteasix, an open-source peptide-centric tool that can be used to automatically predict, in silico and on a large scale, the proteases involved in the generation of proteolytic cleavage fragments (peptides). The PxO re-uses parts of the Protein Ontology, the three Gene Ontology sub-ontologies, the Chemical Entities of Biological Interest Ontology, the Sequence Ontology and bespoke extensions in support of a series of roles: 1. to describe the known proteases and their target cleavage sites; 2. to enable the description of proteolytic cleavage fragments as the outputs of observed and predicted proteolysis; 3. to use knowledge about the function, species and cellular location of a protease and protein substrate to support the prioritisation of proteases in observed and predicted proteolysis. The PxO is designed to describe the biological underpinnings of the generation of peptides. The peptide-centric PxO seeks to support the Proteasix tool by separating domain knowledge from the operational knowledge used in protease prediction by Proteasix, and to support the confirmation of its analyses and results. The Proteasix Ontology may be found at http://bioportal.bioontology.org/ontologies/PXO . This ontology is free and open for use by everyone.

  20. Semantic SenseLab: Implementing the vision of the Semantic Web in neuroscience.

    PubMed

    Samwald, Matthias; Chen, Huajun; Ruttenberg, Alan; Lim, Ernest; Marenco, Luis; Miller, Perry; Shepherd, Gordon; Cheung, Kei-Hoi

    2010-01-01

    Integrative neuroscience research needs a scalable informatics framework that enables semantic integration of diverse types of neuroscience data. This paper describes the use of the Web Ontology Language (OWL) and other Semantic Web technologies for the representation and integration of molecular-level data provided by several of the SenseLab suite of neuroscience databases. Based on the original database structure, we semi-automatically translated the databases into OWL ontologies with manual addition of semantic enrichment. The SenseLab ontologies are extensively linked to other biomedical Semantic Web resources, including the Subcellular Anatomy Ontology, the Brain Architecture Management System, the Gene Ontology, BIRNLex and UniProt. The SenseLab ontologies have also been mapped to the Basic Formal Ontology and the Relation Ontology, which helps ease interoperability with many other existing and future biomedical ontologies for the Semantic Web. In addition, approaches to representing contradictory research statements are described. The SenseLab ontologies are designed for use on the Semantic Web, which enables their integration into a growing collection of biomedical information resources. We demonstrate that our approach can yield significant potential benefits and that the Semantic Web is rapidly becoming mature enough to realize its anticipated promises. The ontologies are available online at http://neuroweb.med.yale.edu/senselab/. 2009 Elsevier B.V. All rights reserved.

  1. Summarizing an Ontology: A "Big Knowledge" Coverage Approach.

    PubMed

    Zheng, Ling; Perl, Yehoshua; Elhanan, Gai; Ochs, Christopher; Geller, James; Halper, Michael

    2017-01-01

    Maintenance and use of a large ontology, consisting of thousands of knowledge assertions, are hampered by its scope and complexity. It is important to provide tools for summarization of ontology content in order to facilitate user "big picture" comprehension. We present a parameterized methodology for the semi-automatic summarization of major topics in an ontology, based on a compact summary of the ontology, called an "aggregate partial-area taxonomy", followed by manual enhancement. An experiment is presented to test the effectiveness of such summarization measured by coverage of a given list of major topics of the corresponding application domain. SNOMED CT's Specimen hierarchy is the test-bed. A domain-expert provided a list of topics that serves as a gold standard. The enhanced results show that the aggregate taxonomy covers most of the domain's main topics.
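    The grouping step underlying a partial-area taxonomy can be sketched as follows; the Specimen-style class names and their relationship-type sets are invented for illustration, and the actual methodology adds partial-areas, aggregation, and the manual enhancement described above on top of this grouping:

```python
# Classes are grouped into "areas" by the exact set of relationship types
# they exhibit; each area becomes one summary node, which is what compresses
# thousands of classes into a "big picture" view. Toy data throughout.
from collections import defaultdict

def areas(classes):
    """Group class names by their frozenset of relationship types."""
    grouped = defaultdict(list)
    for name, rels in classes.items():
        grouped[frozenset(rels)].append(name)
    return grouped

# Invented Specimen-like classes and relationship types:
specimen = {
    "Blood specimen":  {"substance", "procedure"},
    "Urine specimen":  {"substance", "procedure"},
    "Tissue specimen": {"substance", "procedure", "morphology"},
}
g = areas(specimen)
for rels, names in sorted(g.items(), key=lambda kv: len(kv[0])):
    print(sorted(rels), "->", sorted(names))
```

Here three classes collapse into two summary nodes; at SNOMED CT scale the same idea reduces thousands of assertions to a reviewable set of major topics.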

  2. Extracting rate changes in transcriptional regulation from MEDLINE abstracts.

    PubMed

    Liu, Wenting; Miao, Kui; Li, Guangxia; Chang, Kuiyu; Zheng, Jie; Rajapakse, Jagath C

    2014-01-01

    Time delays are important factors that are often neglected in gene regulatory network (GRN) inference models. Validating time delays from knowledge bases is a challenge since the vast majority of biological databases do not record temporal information of gene regulations. Biological knowledge and facts on gene regulations are typically extracted from bio-literature with specialized methods that depend on the regulation task. In this paper, we mine evidences for time delays related to the transcriptional regulation of yeast from the PubMed abstracts. Since the vast majority of abstracts lack quantitative time information, we can only collect qualitative evidences of time delays. Specifically, the speed-up or delay in transcriptional regulation rate can provide evidences for time delays (shorter or longer) in GRN. Thus, we focus on deriving events related to rate changes in transcriptional regulation. A corpus of yeast regulation related abstracts was manually labeled with such events. In order to capture these events automatically, we create an ontology of sub-processes that are likely to result in transcription rate changes by combining textual patterns and biological knowledge. We also propose effective feature extraction methods based on the created ontology to identify the direct evidences with specific details of these events. Our ontologies outperform existing state-of-the-art gene regulation ontologies in the automatic rule learning method applied to our corpus. The proposed deterministic ontology rule-based method can achieve comparable performance to the automatic rule learning method based on decision trees. This demonstrates the effectiveness of our ontology in identifying rate-changing events. We also tested the effectiveness of the proposed feature mining methods on detecting direct evidence of events. Experimental results show that the machine learning method on these features achieves an F1-score of 71.43%. 
The manually labeled corpus of events relating to rate changes in transcriptional regulation for yeast is available in https://sites.google.com/site/wentingntu/data. The created ontologies summarized both biological causes of rate changes in transcriptional regulation and corresponding positive and negative textual patterns from the corpus. They are demonstrated to be effective in identifying rate-changing events, which shows the benefits of combining textual patterns and biological knowledge on extracting complex biological events.

  3. Expert2OWL: A Methodology for Pattern-Based Ontology Development.

    PubMed

    Tahar, Kais; Xu, Jie; Herre, Heinrich

    2017-01-01

    The formalization of expert knowledge enables a broad spectrum of applications employing ontologies as underlying technology. These include eLearning, Semantic Web and expert systems. However, the manual construction of such ontologies is time-consuming and thus expensive. Moreover, experts are often unfamiliar with the syntax and semantics of formal ontology languages such as OWL and usually have no experience in developing formal ontologies. To overcome these barriers, we developed a new method and tool, called Expert2OWL that provides efficient features to support the construction of OWL ontologies using GFO (General Formal Ontology) as a top-level ontology. This method allows a close and effective collaboration between ontologists and domain experts. Essentially, this tool integrates Excel spreadsheets as part of a pattern-based ontology development and refinement process. Expert2OWL enables us to expedite the development process and modularize the resulting ontologies. We applied this method in the field of Chinese Herbal Medicine (CHM) and used Expert2OWL to automatically generate an accurate Chinese Herbology ontology (CHO). The expressivity of CHO was tested and evaluated using ontology query languages SPARQL and DL. CHO shows promising results and can generate answers to important scientific questions such as which Chinese herbal formulas contain which substances, which substances treat which diseases, and which ones are the most frequently used in CHM.

  4. Prioritising lexical patterns to increase axiomatisation in biomedical ontologies. The role of localisation and modularity.

    PubMed

    Quesada-Martínez, M; Fernández-Breis, J T; Stevens, R; Mikroyannidi, E

    2015-01-01

    This article is part of the Focus Theme of METHODS of Information in Medicine on "Managing Interoperability and Complexity in Health Systems". In previous work, we defined methods for the extraction of lexical patterns from labels as an initial step towards semi-automatic ontology enrichment methods. Our previous findings revealed that many biomedical ontologies could benefit from enrichment methods that use lexical patterns as a starting point. Here, we aim to identify which lexical patterns are appropriate for ontology enrichment, driving the analysis with metrics that prioritise the patterns. We propose metrics for suggesting which lexical regularities should be the starting point for enriching complex ontologies. Our method determines the relevance of a lexical pattern by measuring its locality in the ontology, that is, the distance between the classes associated with the pattern, and the distribution of the pattern within a given module of the ontology. The methods have been applied to four significant biomedical ontologies, including the Gene Ontology and SNOMED CT. The metrics provide information about the engineering of the ontologies and the relevance of the patterns. Our method enables the suggestion of links between classes that are not made explicit in the ontology. We propose a prioritisation of the lexical patterns found in the analysed ontologies. The locality and distribution of lexical patterns offer insights into the further engineering of the ontology, and developers can use this information to improve the axiomatisation of their ontologies.
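    The locality idea can be sketched as a graph-distance computation; the is-a edges, class names, and the choice of mean pairwise distance as the metric are illustrative assumptions, not the paper's exact formulation:

```python
# Locality of a lexical pattern, approximated as the mean pairwise distance
# in the undirected is-a graph between the classes whose labels share the
# pattern: a lower value means the pattern is concentrated in one region.
from collections import deque
from itertools import combinations

def distance(edges, a, b):
    """Shortest path length between two classes in the is-a graph."""
    adj = {}
    for parent, child in edges:
        adj.setdefault(parent, set()).add(child)
        adj.setdefault(child, set()).add(parent)
    seen, queue = {a}, deque([(a, 0)])
    while queue:
        node, d = queue.popleft()
        if node == b:
            return d
        for nxt in adj.get(node, ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, d + 1))
    return float("inf")

def locality(edges, pattern_classes):
    """Mean pairwise distance between the classes sharing a pattern."""
    pairs = list(combinations(pattern_classes, 2))
    return sum(distance(edges, a, b) for a, b in pairs) / len(pairs)

# Toy is-a hierarchy (parent, child):
isa = [("cell", "blood cell"), ("cell", "nerve cell"),
       ("blood cell", "red blood cell"), ("blood cell", "white blood cell")]
print(locality(isa, ["red blood cell", "white blood cell"]))  # 2.0
print(locality(isa, ["red blood cell", "nerve cell"]))        # 3.0
```

A pattern such as "blood cell" scores 2.0 (siblings under one parent) and would be prioritised over a pattern whose classes are scattered across the hierarchy.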

  5. An ontology-based framework for bioinformatics workflows.

    PubMed

    Digiampietri, Luciano A; Perez-Alcazar, Jose de J; Medeiros, Claudia Bauzer

    2007-01-01

    The proliferation of bioinformatics activities brings new challenges: how to understand and organise these resources, how to exchange and reuse successful experimental procedures, and how to provide interoperability among data and tools. This paper describes an effort in these directions. It is based on combining research on ontology management, AI and scientific workflows to design, reuse and annotate bioinformatics experiments. The resulting framework supports automatic or interactive composition of tasks based on AI planning techniques and takes advantage of ontologies to support the specification and annotation of bioinformatics workflows. We validate our proposal with a prototype running on real data.

  6. Semantics-enabled service discovery framework in the SIMDAT pharma grid.

    PubMed

    Qu, Cangtao; Zimmermann, Falk; Kumpf, Kai; Kamuzinzi, Richard; Ledent, Valérie; Herzog, Robert

    2008-03-01

    We present the design and implementation of a semantics-enabled service discovery framework in the SIMDAT (data Grids for process and product development using numerical simulation and knowledge discovery) Pharma Grid, an industry-oriented Grid environment for integrating thousands of Grid-enabled biological data services and analysis services. The framework consists of three major components: a Web Ontology Language (OWL)-description logic (DL)-based biological domain ontology, OWL Web service ontology (OWL-S)-based service annotation, and a semantic matchmaker based on ontology reasoning. Built upon this framework, workflow technologies are extensively exploited in SIMDAT to assist biologists in (semi-)automatically performing in silico experiments. We present a typical usage scenario through the case study of a biological workflow: IXodus.

  7. Graphite composite truss welding and cap section forming subsystems. Volume 2: Program results

    NASA Technical Reports Server (NTRS)

    1980-01-01

    The technology required to develop a beam builder which automatically fabricates long, continuous, lightweight, triangular truss members in space from graphite/thermoplastics composite materials is described. Objectives are: (1) continue the development of forming and welding methods for graphite/thermoplastic (GR/TP) composite material; (2) continue GR/TP materials technology development; and (3) fabricate and structurally test a lightweight truss segment.

  8. Incremental Ontology-Based Extraction and Alignment in Semi-structured Documents

    NASA Astrophysics Data System (ADS)

    Thiam, Mouhamadou; Bennacer, Nacéra; Pernelle, Nathalie; Lô, Moussa

    SHIRI is an ontology-based system for the integration of semi-structured documents related to a specific domain. The system's purpose is to allow users to access relevant parts of documents as answers to their queries. SHIRI uses RDF/OWL to represent resources and SPARQL to query them. It relies on an automatic, unsupervised and ontology-driven approach for the extraction, alignment and semantic annotation of tagged elements of documents. In this paper, we focus on the Extract-Align algorithm, which exploits a set of named-entity and term patterns to extract term candidates to be aligned with the ontology. It proceeds incrementally in order to populate the ontology with terms describing instances of the domain and to reduce access to external resources such as the Web. We experiment with it on an HTML corpus of calls for papers in computer science, and the results we obtain are very promising: they show how the incremental behaviour of the Extract-Align algorithm enriches the ontology and increases the number of terms (or named entities) aligned directly with it.
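
    The incremental extract-and-align idea can be sketched as follows: a candidate term either matches the ontology directly, or is fuzzily aligned and then added, so that later occurrences match without external lookups. This is a minimal sketch under assumed data; the labels and class names are invented, and the real system works over RDF/OWL rather than a plain dictionary.

```python
from difflib import SequenceMatcher

# Toy ontology: label -> class name. The entries are invented for
# illustration; the abstract does not list SHIRI's actual classes.
ontology = {"conference": "Event", "deadline": "Date", "workshop": "Event"}

def align(term, threshold=0.8):
    """Return the ontology class for a term, aligning fuzzily and
    enriching the ontology so later occurrences match directly."""
    if term in ontology:                                   # direct match
        return ontology[term]
    best = max(ontology, key=lambda l: SequenceMatcher(None, term, l).ratio())
    if SequenceMatcher(None, term, best).ratio() >= threshold:
        ontology[term] = ontology[best]                    # incremental enrichment
        return ontology[term]
    return None                    # would fall back to external resources

print(align("workshops"))          # fuzzy match -> "Event"
print("workshops" in ontology)     # the ontology now knows the variant
```

    The enrichment step is what makes the behaviour incremental: each alignment reduces future reliance on fuzzy matching.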

  9. Intelligent search in Big Data

    NASA Astrophysics Data System (ADS)

    Birialtsev, E.; Bukharaev, N.; Gusenkov, A.

    2017-10-01

    An approach to data integration, aimed at ontology-based intelligent search in Big Data, is considered for the case when information objects are represented as relational databases (RDBs) structurally marked by their schemas. The source of information for constructing an ontology and, later on, organizing the search are natural-language texts treated as semi-structured data; for the RDBs, these are comments on the names of tables and their attributes. A formal definition of an RDB integration model in terms of ontologies is given. Within the framework of the model, a universal RDB representation ontology, an oil-production subject-domain ontology, and a linguistic thesaurus of the subject-domain language are built. A technique for automatically generating SQL queries for subject-domain specialists is proposed. On this basis, an information system for the RDBs of the TATNEFT oil-producing company was implemented. Exploitation of the system showed good relevance for the majority of queries.
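
    The automatic SQL-generation technique can be illustrated with a toy thesaurus mapping subject-domain phrases to table/attribute pairs; all names below are invented, since the paper's actual TATNEFT schema and thesaurus are not given in the abstract.

```python
# Hypothetical thesaurus linking domain phrases to (table, column) pairs,
# as would be derived from comments on table and attribute names.
thesaurus = {
    "daily oil output": ("production", "volume"),
    "well identifier":  ("production", "well_id"),
}

def generate_sql(question_terms):
    """Translate recognised domain phrases into a SELECT statement."""
    cols, tables = [], set()
    for term in question_terms:
        table, column = thesaurus[term]
        tables.add(table)
        cols.append(f"{table}.{column}")
    return f"SELECT {', '.join(cols)} FROM {', '.join(sorted(tables))};"

sql = generate_sql(["well identifier", "daily oil output"])
print(sql)  # SELECT production.well_id, production.volume FROM production;
```

    A real system would also resolve joins between tables and filter conditions, which this sketch omits.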

  10. Faceted Visualization of Three Dimensional Neuroanatomy By Combining Ontology with Faceted Search

    PubMed Central

    Veeraraghavan, Harini; Miller, James V.

    2013-01-01

    In this work, we present a faceted-search based approach for the visualization of anatomy by combining a three-dimensional digital atlas with an anatomy ontology. Specifically, our approach provides a drill-down search interface that exposes the relevant pieces of information (obtained by searching the ontology) for a user query. Hence, the user can produce visualizations starting with minimally specified queries. Furthermore, by automatically translating user queries into the controlled terminology, our approach eliminates the need for the user to know that terminology. We demonstrate the scalability of our approach using an abdominal atlas and the same ontology. We implemented our visualization tool on the open-source 3D Slicer software. We present results of our visualization approach by combining a modified Foundational Model of Anatomy (FMA) ontology with the Surgical Planning Laboratory (SPL) Brain 3D digital atlas, and geometric models specific to patients computed using the SPL brain tumor dataset. PMID:24006207

  11. Faceted visualization of three dimensional neuroanatomy by combining ontology with faceted search.

    PubMed

    Veeraraghavan, Harini; Miller, James V

    2014-04-01

    In this work, we present a faceted-search based approach for the visualization of anatomy by combining a three-dimensional digital atlas with an anatomy ontology. Specifically, our approach provides a drill-down search interface that exposes the relevant pieces of information (obtained by searching the ontology) for a user query. Hence, the user can produce visualizations starting with minimally specified queries. Furthermore, by automatically translating user queries into the controlled terminology, our approach eliminates the need for the user to know that terminology. We demonstrate the scalability of our approach using an abdominal atlas and the same ontology. We implemented our visualization tool on the open-source 3D Slicer software. We present results of our visualization approach by combining a modified Foundational Model of Anatomy (FMA) ontology with the Surgical Planning Laboratory (SPL) Brain 3D digital atlas, and geometric models specific to patients computed using the SPL brain tumor dataset.

  12. Semi-automated ontology generation within OBO-Edit.

    PubMed

    Wächter, Thomas; Schroeder, Michael

    2010-06-15

    Ontologies and taxonomies have proven highly beneficial for biocuration. The Open Biomedical Ontology (OBO) Foundry alone lists over 90 ontologies mainly built with OBO-Edit. Creating and maintaining such ontologies is a labour-intensive, difficult, manual process. Automating parts of it is of great importance for the further development of ontologies and for biocuration. We have developed the Dresden Ontology Generator for Directed Acyclic Graphs (DOG4DAG), a system which supports the creation and extension of OBO ontologies by semi-automatically generating terms, definitions and parent-child relations from text in PubMed, the web and PDF repositories. DOG4DAG is seamlessly integrated into OBO-Edit. It generates terms by identifying statistically significant noun phrases in text. For definitions and parent-child relations it employs pattern-based web searches. We systematically evaluate each generation step using manually validated benchmarks. The term generation leads to high-quality terms also found in manually created ontologies. Up to 78% of definitions are valid and up to 54% of child-ancestor relations can be retrieved. There is no other validated system that achieves comparable results. By combining the prediction of high-quality terms, definitions and parent-child relations with the ontology editor OBO-Edit we contribute a thoroughly validated tool for all OBO ontology engineers. DOG4DAG is available within OBO-Edit 2.1 at http://www.oboedit.org. Supplementary data are available at Bioinformatics online.
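
    The term-generation step can be approximated by contrasting phrase frequencies in a domain corpus against a background corpus: phrases unusually frequent in the domain become term candidates. This is a minimal sketch over invented toy corpora; DOG4DAG's actual method uses proper noun-phrase chunking over PubMed-scale text rather than raw word bigrams.

```python
from collections import Counter
import math

# Toy corpora (invented for illustration).
domain_docs = ["the cell membrane surrounds the cell",
               "transport across the cell membrane is regulated"]
background_docs = ["the report surrounds the main topic",
                   "traffic across the bridge is regulated"]

def bigrams(docs):
    c = Counter()
    for d in docs:
        w = d.split()
        c.update(zip(w, w[1:]))
    return c

dom, bg = bigrams(domain_docs), bigrams(background_docs)
total_dom, total_bg = sum(dom.values()), sum(bg.values())

def score(phrase):
    # Smoothed log-ratio of relative frequencies: high scores mark
    # phrases that are domain-specific term candidates.
    p = (dom[phrase] + 1) / (total_dom + 1)
    q = (bg[phrase] + 1) / (total_bg + 1)
    return math.log(p / q)

stop = {"the", "is", "across"}    # crude stand-in for noun-phrase chunking
candidates = sorted((b for b in dom if not stop & set(b)),
                    key=score, reverse=True)
print(candidates[0])  # ('cell', 'membrane') ranks first
```

    Definition and parent-child generation, which DOG4DAG handles with pattern-based web searches, are not sketched here.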

  13. Integrating Semantic Information in Metadata Descriptions for a Geoscience-wide Resource Inventory.

    NASA Astrophysics Data System (ADS)

    Zaslavsky, I.; Richard, S. M.; Gupta, A.; Valentine, D.; Whitenack, T.; Ozyurt, I. B.; Grethe, J. S.; Schachne, A.

    2016-12-01

    Integrating semantic information into legacy metadata catalogs is a challenging issue and so far has been mostly done on a limited scale. We present experience of CINERGI (Community Inventory of Earthcube Resources for Geoscience Interoperability), an NSF Earthcube Building Block project, in creating a large cross-disciplinary catalog of geoscience information resources to enable cross-domain discovery. The project developed a pipeline for automatically augmenting resource metadata, in particular generating keywords that describe metadata documents harvested from multiple geoscience information repositories or contributed by geoscientists through various channels including surveys and domain resource inventories. The pipeline examines available metadata descriptions using text parsing, vocabulary management and semantic annotation and graph navigation services of GeoSciGraph. GeoSciGraph, in turn, relies on a large cross-domain ontology of geoscience terms, which bridges several independently developed ontologies or taxonomies including SWEET, ENVO, YAGO, GeoSciML, GCMD, SWO, and CHEBI. The ontology content enables automatic extraction of keywords reflecting science domains, equipment used, geospatial features, measured properties, methods, processes, etc. We specifically focus on issues of cross-domain geoscience ontology creation, resolving several types of semantic conflicts among component ontologies or vocabularies, and constructing and managing facets for improved data discovery and navigation. The ontology and keyword generation rules are iteratively improved as pipeline results are presented to data managers for selective manual curation via a CINERGI Annotator user interface. 
We present lessons learned from applying the CINERGI metadata augmentation pipeline to a number of federal agency and academic data registries, in the context of several use cases that require data discovery and integration across multiple earth science data catalogs of varying quality and completeness. The inventory is accessible at http://cinergi.sdsc.edu, and the CINERGI project web page is http://earthcube.org/group/cinergi.

  14. Ontology-Based Multiple Choice Question Generation

    PubMed Central

    Al-Yahya, Maha

    2014-01-01

    With recent advancements in Semantic Web technologies, a new trend in MCQ item generation has emerged through the use of ontologies. Ontologies are knowledge representation structures that formally describe entities in a domain and their relationships, thus enabling automated inference and reasoning. Ontology-based MCQ item generation is still in its infancy, but substantial research efforts are being made in the field. However, the applicability of these models for use in an educational setting has not been thoroughly evaluated. In this paper, we present an experimental evaluation of an ontology-based MCQ item generation system known as OntoQue. The evaluation was conducted using two different domain ontologies. The findings of this study show that ontology-based MCQ generation systems produce satisfactory MCQ items to a certain extent. However, the evaluation also revealed a number of shortcomings with current ontology-based MCQ item generation systems with regard to the educational significance of an automatically constructed MCQ item, the knowledge level it addresses, and its language structure. Furthermore, for the task to be successful in producing high-quality MCQ items for learning assessments, this study suggests a novel, holistic view that incorporates learning content, learning objectives, lexical knowledge, and scenarios into a single cohesive framework. PMID:24982937

  15. CASAS: A tool for composing automatically and semantically astrophysical services

    NASA Astrophysics Data System (ADS)

    Louge, T.; Karray, M. H.; Archimède, B.; Knödlseder, J.

    2017-07-01

    Multiple astronomical datasets are available through the internet and the astrophysical Distributed Computing Infrastructure (DCI) known as the Virtual Observatory (VO). Scientific workflow technologies exist for retrieving and combining data from these sources; however, the selection of relevant services, the automation of workflow composition, and the lack of user-friendly platforms remain concerns. This paper presents CASAS, a tool for semantic web service composition in astrophysics. The tool proposes automatic composition of astrophysical web services, bringing semantics-based, automatic composition of workflows; it widens the choice of services and eases the use of heterogeneous ones. Semantic web service composition relies on ontologies to elaborate the composition; this work is based on the Astrophysical Services ONtology (ASON), whose structure is mostly inherited from the capacities of VO services. Nevertheless, our approach is not limited to the VO and brings VO and non-VO services together without the need for premade recipes. CASAS is available through a simple web interface.
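
    Planning-style service composition of the kind described above can be illustrated by simple forward chaining over typed inputs and outputs: keep adding any service whose inputs are already available until the goal type is produced. The service names and types below are invented, not taken from ASON.

```python
# Hypothetical semantic service registry (names and types are assumptions).
services = [
    {"name": "ConeSearch",  "inputs": {"SkyPosition"},    "outputs": {"SourceList"}},
    {"name": "CrossMatch",  "inputs": {"SourceList"},     "outputs": {"MatchedCatalog"}},
    {"name": "PlotCatalog", "inputs": {"MatchedCatalog"}, "outputs": {"Image"}},
]

def compose(available, goal):
    """Greedy forward chaining: add any service whose inputs are
    satisfied until the goal type is produced, or give up."""
    plan, types = [], set(available)
    progress = True
    while goal not in types and progress:
        progress = False
        for s in services:
            if s["name"] not in plan and s["inputs"] <= types:
                plan.append(s["name"])
                types |= s["outputs"]
                progress = True
    return plan if goal in types else None

print(compose({"SkyPosition"}, "Image"))
# ['ConeSearch', 'CrossMatch', 'PlotCatalog']
```

    A real composer would rank alternative chains semantically; this sketch only shows why typed service descriptions make automatic chaining possible.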

  16. Decomposing phenotype descriptions for the human skeletal phenome.

    PubMed

    Groza, Tudor; Hunter, Jane; Zankl, Andreas

    2013-01-01

    Over the course of the last few years there has been a significant amount of research performed on ontology-based formalization of phenotype descriptions. The intrinsic value and knowledge captured within such descriptions can only be expressed by taking advantage of their inner structure that implicitly combines qualities and anatomical entities. We present a meta-model (the Phenotype Fragment Ontology) and a processing pipeline that enable together the automatic decomposition and conceptualization of phenotype descriptions for the human skeletal phenome. We use this approach to showcase the usefulness of the generic concept of phenotype decomposition by performing an experimental study on all skeletal phenotype concepts defined in the Human Phenotype Ontology.
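
    The decomposition idea, splitting a phenotype label into quality and anatomical-entity parts, can be sketched with toy lexicons. The word lists below are invented; the actual pipeline draws on controlled vocabularies such as PATO and FMA-derived anatomy terms rather than flat sets.

```python
# Toy lexicons standing in for quality and anatomy vocabularies (assumptions).
qualities = {"short", "broad", "absent", "fused"}
entities = {"femur", "rib", "phalanges", "vertebrae"}

def decompose(label):
    """Split a phenotype description into its quality and entity parts."""
    words = label.lower().split()
    return {
        "quality": [w for w in words if w in qualities],
        "entity":  [w for w in words if w in entities],
    }

print(decompose("Short broad femur"))
# {'quality': ['short', 'broad'], 'entity': ['femur']}
```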

  17. The GlycanBuilder: a fast, intuitive and flexible software tool for building and displaying glycan structures.

    PubMed

    Ceroni, Alessio; Dell, Anne; Haslam, Stuart M

    2007-08-07

    Carbohydrates play a critical role in human diseases and their potential utility as biomarkers for pathological conditions is a major driver for characterization of the glycome. However, the additional complexity of glycans compared to proteins and nucleic acids has slowed the advancement of glycomics in comparison to genomics and proteomics. The branched nature of carbohydrates, the great diversity of their constituents and the numerous alternative symbolic notations, make the input and display of glycans not as straightforward as for example the amino-acid sequence of a protein. Every glycoinformatic tool providing a user interface would benefit from a fast, intuitive, appealing mechanism for input and output of glycan structures in a computer readable format. A software tool for building and displaying glycan structures using a chosen symbolic notation is described here. The "GlycanBuilder" uses an automatic rendering algorithm to draw the saccharide symbols and to place them on the drawing board. The information about the symbolic notation is derived from a configurable graphical model as a set of rules governing the aspect and placement of residues and linkages. The algorithm is able to represent a structure using only few traversals of the tree and is inherently fast. The tool uses an XML format for import and export of encoded structures. The rendering algorithm described here is able to produce high-quality representations of glycan structures in a chosen symbolic notation. The automated rendering process enables the "GlycanBuilder" to be used both as a user-independent component for displaying glycans and as an easy-to-use drawing tool. The "GlycanBuilder" can be integrated in web pages as a Java applet for the visual editing of glycans. The same component is available as a web service to render an encoded structure into a graphical format. 
Finally, the "GlycanBuilder" can be integrated into other applications to create intuitive and appealing user interfaces: an example is the "GlycoWorkbench", a software tool for the assisted annotation of glycan mass spectra. The "GlycanBuilder" represents a flexible, reliable and efficient solution to the problem of input and output of glycan structures in any glycomic tool or database.

  18. Development of a composite geodetic structure for space construction, phase 1A

    NASA Technical Reports Server (NTRS)

    1980-01-01

    The development of a geodetic beam and beam builder for the on-orbit construction of large truss-type space structures is discussed. The geodetic beam is a lightweight, open-lattice structure composed of an equilateral gridwork of crisscrossing rods. The beam provides a high degree of stiffness and minimizes structural distortion due to temperature gradients through the incorporation of a new graphite- and glass-reinforced thermoplastic composite material with a low coefficient of thermal expansion. A low-power, high-production-rate beam builder automatically fabricates the geodetic beams in space using rods preprocessed on Earth. Three areas of the development are focused upon: (1) geodetic beam designs for local attachment of equipment or beam-to-beam joining in parallel or crossing configurations; (2) evaluation of long-life pultruded rods capable of service temperatures higher than possible with the HMS/P1700 rod material; and (3) evaluation of high-temperature joint encapsulant materials.

  19. TermGenie – a web-application for pattern-based ontology class generation

    DOE PAGES

    Dietze, Heiko; Berardini, Tanya Z.; Foulger, Rebecca E.; ...

    2014-01-01

    Biological ontologies are continually growing and improving in response to requests for new classes (terms) from biocurators. These ontology requests can frequently create bottlenecks in the biocuration process, as ontology developers struggle to keep up while manually processing them and creating classes. TermGenie allows biocurators to generate new classes based on formally specified design patterns or templates. The system is web-based and can be accessed by any authorized curator through a web browser. Automated rules and reasoning engines are used to ensure validity, uniqueness and relationships to pre-existing classes. In the last 4 years the Gene Ontology TermGenie generated 4715 new classes, about 51.4% of all new classes created. The immediate generation of permanent identifiers proved not to be an issue, with only 70 (1.4%) obsoleted classes. Lastly, TermGenie is a web-based class-generation system that complements traditional ontology development tools. All classes added through pre-defined templates are guaranteed to have OWL equivalence axioms that are used for automatic classification and, in some cases, inter-ontology linkage. At the same time, the system is simple and intuitive and can be used by most biocurators without extensive training.

  20. TermGenie – a web-application for pattern-based ontology class generation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dietze, Heiko; Berardini, Tanya Z.; Foulger, Rebecca E.

    Biological ontologies are continually growing and improving in response to requests for new classes (terms) from biocurators. These ontology requests can frequently create bottlenecks in the biocuration process, as ontology developers struggle to keep up while manually processing them and creating classes. TermGenie allows biocurators to generate new classes based on formally specified design patterns or templates. The system is web-based and can be accessed by any authorized curator through a web browser. Automated rules and reasoning engines are used to ensure validity, uniqueness and relationships to pre-existing classes. In the last 4 years the Gene Ontology TermGenie generated 4715 new classes, about 51.4% of all new classes created. The immediate generation of permanent identifiers proved not to be an issue, with only 70 (1.4%) obsoleted classes. Lastly, TermGenie is a web-based class-generation system that complements traditional ontology development tools. All classes added through pre-defined templates are guaranteed to have OWL equivalence axioms that are used for automatic classification and, in some cases, inter-ontology linkage. At the same time, the system is simple and intuitive and can be used by most biocurators without extensive training.

  1. TermGenie - a web-application for pattern-based ontology class generation.

    PubMed

    Dietze, Heiko; Berardini, Tanya Z; Foulger, Rebecca E; Hill, David P; Lomax, Jane; Osumi-Sutherland, David; Roncaglia, Paola; Mungall, Christopher J

    2014-01-01

    Biological ontologies are continually growing and improving in response to requests for new classes (terms) from biocurators. These ontology requests can frequently create bottlenecks in the biocuration process, as ontology developers struggle to keep up while manually processing them and creating classes. TermGenie allows biocurators to generate new classes based on formally specified design patterns or templates. The system is web-based and can be accessed by any authorized curator through a web browser. Automated rules and reasoning engines are used to ensure validity, uniqueness and relationships to pre-existing classes. In the last 4 years the Gene Ontology TermGenie generated 4715 new classes, about 51.4% of all new classes created. The immediate generation of permanent identifiers proved not to be an issue, with only 70 (1.4%) obsoleted classes. TermGenie is a web-based class-generation system that complements traditional ontology development tools. All classes added through pre-defined templates are guaranteed to have OWL equivalence axioms that are used for automatic classification and, in some cases, inter-ontology linkage. At the same time, the system is simple and intuitive and can be used by most biocurators without extensive training.
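
    Template-based class generation of this kind can be sketched as filling slots in a design pattern that yields both a human-readable label and an equivalence axiom. The template below is a simplified, hypothetical rendering (the "regulation of" pattern is one of TermGenie's templates, but its actual configuration format is not reproduced here).

```python
# Hypothetical design-pattern template; slot syntax and axiom wording
# are assumptions, not TermGenie's real configuration.
template = {
    "name": "regulation_of",
    "label": "regulation of {process}",
    "axiom": "'regulation of biological process' and ('regulates' some '{process}')",
}

def instantiate(tpl, **slots):
    """Fill the template slots to produce a new class definition."""
    return {
        "label": tpl["label"].format(**slots),
        "equivalence_axiom": tpl["axiom"].format(**slots),
    }

cls = instantiate(template, process="cell migration")
print(cls["label"])  # regulation of cell migration
```

    Because every generated class carries an equivalence axiom, a reasoner can classify it automatically, which is what makes the template approach safe at scale.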

  2. STOP using just GO: a multi-ontology hypothesis generation tool for high throughput experimentation

    PubMed Central

    2013-01-01

    Background Gene Ontology (GO) enrichment analysis remains one of the most common methods for hypothesis generation from high-throughput datasets. However, we believe that researchers strive to test other hypotheses that fall outside of GO. Here, we developed and evaluated a tool for hypothesis generation from gene or protein lists using ontological concepts present in manually curated text that describes those genes and proteins. Results As a consequence we have developed the method Statistical Tracking of Ontological Phrases (STOP), which expands the realm of testable hypotheses in gene set enrichment analyses by integrating automated annotations of genes to terms from over 200 biomedical ontologies. While not as precise as manually curated terms, we find that the additional enriched concepts have value when coupled with traditional enrichment analyses using curated terms. Conclusion Multiple ontologies have been developed for gene and protein annotation; by using a dataset of both manually curated GO terms and automatically recognized concepts from curated text, we can expand the realm of hypotheses that can be discovered. The web application STOP is available at http://mooneygroup.org/stop/. PMID:23409969
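
    Enrichment analyses of the kind STOP extends typically rest on the hypergeometric test: given a hit list of n genes drawn from a universe of N genes of which K carry a term, the p-value is the probability of seeing k or more annotated genes by chance. A minimal sketch with assumed toy numbers (not taken from the paper):

```python
from math import comb

def hypergeom_pvalue(k, n, K, N):
    """P(X >= k) when drawing n genes from a universe of N in which K
    are annotated with the term: the standard enrichment test."""
    return sum(comb(K, i) * comb(N - K, n - i)
               for i in range(k, min(n, K) + 1)) / comb(N, n)

# Assumed numbers: 20000-gene universe, 100 genes carry the term,
# and a 50-gene hit list contains 8 of them.
p = hypergeom_pvalue(8, 50, 100, 20000)
print(f"{p:.2e}")  # far below typical significance thresholds
```

    Running this test once per ontology term (with multiple-testing correction) is the core loop of any term-enrichment tool, whether the terms come from GO or from the 200+ ontologies STOP integrates.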

  3. A Study of the Use of Ontologies for Building Computer-Aided Control Engineering Self-Learning Educational Software

    NASA Astrophysics Data System (ADS)

    García, Isaías; Benavides, Carmen; Alaiz, Héctor; Alonso, Angel

    2013-08-01

    This paper describes research on the use of knowledge models (ontologies) for building computer-aided educational software in the field of control engineering. Ontologies can represent in the computer a very rich conceptual model of a given domain, and this model can later be used for a number of purposes in different software applications. In this study, a domain ontology for lead-lag compensator design was built and used for automatic exercise generation, graphical-user-interface population, and interaction with the user at any level of detail, including explanations of why things occur. An application called Onto-CELE (ontology-based control engineering learning environment) uses the ontology to implement a learning environment for self-directed and lifelong learning. The experience has shown that using knowledge models as the basis for educational software can show students the whole complexity of the analysis and design processes at any level of detail. A practical experience with postgraduate students confirmed these benefits and the possibilities of the approach.

  4. Standardized description of scientific evidence using the Evidence Ontology (ECO)

    PubMed Central

    Chibucos, Marcus C.; Mungall, Christopher J.; Balakrishnan, Rama; Christie, Karen R.; Huntley, Rachael P.; White, Owen; Blake, Judith A.; Lewis, Suzanna E.; Giglio, Michelle

    2014-01-01

    The Evidence Ontology (ECO) is a structured, controlled vocabulary for capturing evidence in biological research. ECO includes diverse terms for categorizing evidence that supports annotation assertions including experimental types, computational methods, author statements and curator inferences. Using ECO, annotation assertions can be distinguished according to the evidence they are based on such as those made by curators versus those automatically computed or those made via high-throughput data review versus single test experiments. Originally created for capturing evidence associated with Gene Ontology annotations, ECO is now used in other capacities by many additional annotation resources including UniProt, Mouse Genome Informatics, Saccharomyces Genome Database, PomBase, the Protein Information Resource and others. Information on the development and use of ECO can be found at http://evidenceontology.org. The ontology is freely available under Creative Commons license (CC BY-SA 3.0), and can be downloaded in both Open Biological Ontologies and Web Ontology Language formats at http://code.google.com/p/evidenceontology. Also at this site is a tracker for user submission of term requests and questions. ECO remains under active development in response to user-requested terms and in collaborations with other ontologies and database resources. Database URL: Evidence Ontology Web site: http://evidenceontology.org PMID:25052702

  5. Self-Supervised Chinese Ontology Learning from Online Encyclopedias

    PubMed Central

    Shao, Zhiqing; Ruan, Tong

    2014-01-01

    Constructing an ontology manually is a time-consuming, error-prone, and tedious task. We present SSCO, a self-supervised-learning based Chinese ontology, which contains about 255 thousand concepts, 5 million entities, and 40 million facts. We explore the three largest online Chinese encyclopedias for ontology learning and describe how to transfer the structured knowledge in encyclopedias, including article titles, category labels, redirection pages, taxonomy systems, and InfoBox modules, into ontological form. In order to avoid the errors in encyclopedias and enrich the learnt ontology, we also apply some machine-learning based methods. First, we prove statistically and experimentally that self-supervised machine learning is practicable for Chinese relation extraction (at least for synonymy and hyponymy) and train self-supervised models (SVMs and CRFs) for synonymy extraction, concept-subconcept relation extraction, and concept-instance relation extraction; the advantage of our methods is that all training examples are automatically generated from the structural information of the encyclopedias and a few general heuristic rules. Finally, we evaluate SSCO in two respects, scale and precision; manual evaluation shows that the ontology has excellent precision, and its high coverage is established by comparing SSCO with other well-known ontologies and knowledge bases; the experimental results also indicate that the self-supervised models considerably enrich SSCO. PMID:24715819

  6. Self-supervised Chinese ontology learning from online encyclopedias.

    PubMed

    Hu, Fanghuai; Shao, Zhiqing; Ruan, Tong

    2014-01-01

    Constructing an ontology manually is a time-consuming, error-prone, and tedious task. We present SSCO, a self-supervised-learning based Chinese ontology, which contains about 255 thousand concepts, 5 million entities, and 40 million facts. We explore the three largest online Chinese encyclopedias for ontology learning and describe how to transfer the structured knowledge in encyclopedias, including article titles, category labels, redirection pages, taxonomy systems, and InfoBox modules, into ontological form. In order to avoid the errors in encyclopedias and enrich the learnt ontology, we also apply some machine-learning based methods. First, we prove statistically and experimentally that self-supervised machine learning is practicable for Chinese relation extraction (at least for synonymy and hyponymy) and train self-supervised models (SVMs and CRFs) for synonymy extraction, concept-subconcept relation extraction, and concept-instance relation extraction; the advantage of our methods is that all training examples are automatically generated from the structural information of the encyclopedias and a few general heuristic rules. Finally, we evaluate SSCO in two respects, scale and precision; manual evaluation shows that the ontology has excellent precision, and its high coverage is established by comparing SSCO with other well-known ontologies and knowledge bases; the experimental results also indicate that the self-supervised models considerably enrich SSCO.

  7. Modularizing Spatial Ontologies for Assisted Living Systems

    NASA Astrophysics Data System (ADS)

    Hois, Joana

    Assisted living systems are intended to support daily-life activities in users' homes by automating and monitoring the behavior of the environment while interacting with the user in a non-intrusive way. The knowledge base of such systems therefore has to define thematically different aspects of the environment, mostly related to space: basic spatial floor-plan information, pieces of technical equipment in the environment together with their functions and spatial ranges, activities users can perform, entities that occur in the environment, and so on. In this paper, we present thematically different ontologies, each describing environmental aspects from a particular perspective. The resulting modular structure allows the selection of application-specific ontologies as necessary. This hides information and reduces complexity in terms of the represented spatial knowledge and the practicability of reasoning. We motivate and present the different spatial ontologies applied to an ambient assisted living application.

  8. Ontology Alignment Architecture for Semantic Sensor Web Integration

    PubMed Central

    Fernandez, Susel; Marsa-Maestre, Ivan; Velasco, Juan R.; Alarcos, Bernardo

    2013-01-01

    Sensor networks have become very popular for data acquisition and processing in fields such as industry, medicine, home automation, and environmental monitoring. Today, with the proliferation of small communication devices with sensors that collect environmental data, Semantic Web technologies are becoming closely related to sensor networks. The linking of elements from Semantic Web technologies with sensor networks has been called the Semantic Sensor Web, and one of its main features is the use of ontologies. One of the key challenges of using ontologies in sensor networks is to provide mechanisms to integrate and exchange knowledge from heterogeneous sources (that is, to deal with semantic heterogeneity). Ontology alignment is the process of bringing ontologies into mutual agreement by the automatic discovery of mappings between related concepts. This paper presents a system for ontology alignment in the Semantic Sensor Web which uses fuzzy logic techniques to combine similarity measures between entities of different ontologies. The proposed approach focuses on two key elements: terminological similarity, which takes into account the linguistic and semantic information in the context of entity names, and structural similarity, based on both the internal and the relational structure of the concepts. This work has been validated using sensor network ontologies and the Ontology Alignment Evaluation Initiative (OAEI) tests. The results show that the proposed techniques outperform previous approaches in terms of precision and recall. PMID:24051523

  10. Ontology-based classification of remote sensing images using spectral rules

    NASA Astrophysics Data System (ADS)

    Andrés, Samuel; Arvor, Damien; Mougenot, Isabelle; Libourel, Thérèse; Durieux, Laurent

    2017-05-01

    Earth Observation data is of great interest for a wide spectrum of scientific domain applications. Enhanced access to remote sensing images for "domain" experts thus represents a great advance, since it allows users to interpret remote sensing images based on their domain expert knowledge. However, such an advantage can also turn into a major limitation if this knowledge is not formalized and is thus difficult to share with, and be understood by, other users. In this context, knowledge representation techniques such as ontologies should play a major role in the future of remote sensing applications. We implemented an ontology-based prototype to automatically classify Landsat images based on explicit spectral rules. The ontology is designed in a very modular way in order to achieve a generic and versatile representation of the concepts we deem of utmost importance in remote sensing. The prototype was tested on four subsets of Landsat images, and the results confirmed the potential of ontologies to formalize expert knowledge and classify remote sensing images.
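    As a rough illustration of what classification by explicit spectral rules looks like (the band thresholds and class names below are invented for the sketch and are not the prototype's actual ontology rules):

```python
def classify_pixel(red: float, nir: float, swir: float) -> str:
    """Apply explicit spectral rules to one pixel; inputs are per-band
    surface reflectance values in [0, 1]. Thresholds are illustrative."""
    ndvi = (nir - red) / (nir + red) if (nir + red) else 0.0
    if ndvi > 0.4:                                  # strong vegetation signal
        return "vegetation"
    if red < 0.1 and nir < 0.1 and swir < 0.1:      # dark in all bands
        return "water"
    if swir > red > 0.2:                            # bright, SWIR-dominant
        return "bare_soil"
    return "unclassified"
```

    In an ontology-based prototype, each rule would instead be encoded as a class axiom, so that a reasoner (rather than an if-chain) assigns pixels or segments to classes.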

  11. Study on distributed generation algorithm of variable precision concept lattice based on ontology heterogeneous database

    NASA Astrophysics Data System (ADS)

    WANG, Qingrong; ZHU, Changfeng

    2017-06-01

    The integration of distributed heterogeneous data sources is a key issue in big data applications. In this paper, the strategy of variable precision is introduced into the concept lattice, and a one-to-one mapping between the variable precision concept lattice and the ontology concept lattice is constructed to produce a local ontology by building the variable precision concept lattice for each subsystem. A distributed generation algorithm for the variable precision concept lattice based on an ontology heterogeneous database is then proposed, drawing support from the special relationship between concept lattices and ontology construction. Finally, taking the main concept lattice generated from the existing heterogeneous database as a standard, a case study is carried out to test the feasibility and validity of the algorithm, and the differences between the main concept lattice and the standard concept lattice are compared. The results show that the algorithm can automatically carry out the construction of a distributed concept lattice over heterogeneous data sources.

  12. SemVisM: semantic visualizer for medical image

    NASA Astrophysics Data System (ADS)

    Landaeta, Luis; La Cruz, Alexandra; Baranya, Alexander; Vidal, María-Esther

    2015-01-01

    SemVisM is a toolbox that combines medical informatics and computer graphics tools to reduce the semantic gap between low-level features and high-level semantic concepts/terms in images. This paper presents a novel strategy for visualizing semantically annotated medical data, combining rendering techniques and segmentation algorithms. SemVisM comprises two main components: i) AMORE (A Modest vOlume REgister) to handle input data (RAW, DAT or DICOM) and to initially annotate images using terms defined in medical ontologies (e.g., MeSH, FMA or RadLex), and ii) VOLPROB (VOlume PRObability Builder) to generate annotated volumetric data containing the classified voxels that belong to a particular tissue. SemVisM is built on top of the semantic visualizer ANISE.

  13. Light-Weighted Automatic Import of Standardized Ontologies into the Content Management System Drupal.

    PubMed

    Beger, Christoph; Uciteli, Alexandr; Herre, Heinrich

    2017-01-01

    The number of ontologies covering widespread domains is growing steadily. BioPortal alone hosts over 500 published ontologies with nearly 8 million classes. In contrast, the vast informative content of these ontologies is directly intelligible only to experts. To overcome this deficiency, ontologies could be represented as web portals, which require no knowledge about ontologies and their semantics but still carry as much information as possible to the end user. Furthermore, the conception of a complex web portal is a sophisticated process: many entities must be analyzed and linked to existing terminologies, and ontologies are a decent solution for gathering and storing such complex data and dependencies. Hence, automated imports of ontologies into web portals could support both of these scenarios. The Content Management System (CMS) Drupal 8 is one of many solutions for developing web presentations with little required programming knowledge, and it is suitable for representing ontological entities. We developed the Drupal Upper Ontology (DUO), which models concepts of Drupal's architecture such as nodes, vocabularies and links. DUO can be imported into ontologies to map their entities to Drupal's concepts. Because Drupal lacks import capabilities, we implemented the Simple Ontology Loader in Drupal (SOLID), a Drupal 8 module that allows Drupal administrators to import ontologies based on DUO. Our module generates content in Drupal from existing ontologies and makes it accessible to the general public. Moreover, Drupal offers a tagging system which may be amplified with multiple standardized and established terminologies by importing them with SOLID. Our Drupal module shows that ontologies can be used to model the content of a CMS and, vice versa, that a CMS is suitable for representing ontologies in a user-friendly way. Ontological entities are presented to the user as discrete pages with all appropriate properties, links and tags.

  14. Automatically Expanding the Synonym Set of SNOMED CT using Wikipedia.

    PubMed

    Schlegel, Daniel R; Crowner, Chris; Elkin, Peter L

    2015-01-01

    Clinical terminologies and ontologies are often used in natural language processing/understanding tasks as a method for semantically tagging text. One ontology commonly used for this task is SNOMED CT. Natural language is rich and varied: many different combinations of words may be used to express the same idea. It is therefore essential that ontologies and terminologies have a rich set of synonyms. One source of synonyms is Wikipedia. We examine methods for aligning concepts in SNOMED CT with articles in Wikipedia so that newly-found synonyms may be added to SNOMED CT. Our experiments show promising results and provide guidance to researchers who wish to use Wikipedia for similar tasks.
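    A minimal sketch of one alignment strategy, exact matching of normalized surface forms (the concept id, labels, and normalization steps below are illustrative; the paper's alignment methods are more involved):

```python
import re

def normalize(term: str) -> str:
    """Lowercase, drop parenthetical qualifiers like '(disorder)',
    and strip punctuation."""
    term = re.sub(r"\([^)]*\)", "", term)
    return re.sub(r"[^a-z0-9 ]", "", term.lower()).strip()

def align(snomed_descriptions: dict, wiki_titles: list) -> dict:
    """Map SNOMED concept ids to Wikipedia article titles whose
    normalized title equals a normalized SNOMED description."""
    index = {normalize(t): t for t in wiki_titles}
    mapping = {}
    for concept_id, descriptions in snomed_descriptions.items():
        for desc in descriptions:
            title = index.get(normalize(desc))
            if title:
                mapping[concept_id] = title
                break
    return mapping
```

    Once a concept is aligned to an article, that article's redirect titles become candidate synonyms to propose for addition to the terminology.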

  15. A bibliometric and visual analysis of global geo-ontology research

    NASA Astrophysics Data System (ADS)

    Li, Lin; Liu, Yu; Zhu, Haihong; Ying, Shen; Luo, Qinyao; Luo, Heng; Kuai, Xi; Xia, Hui; Shen, Hang

    2017-02-01

    In this paper, the results of a bibliometric and visual analysis of geo-ontology research articles collected from the Web of Science (WOS) database between 1999 and 2014 are presented. The numbers of national institutions and published papers are visualized and a global research heat map is drawn, illustrating an overview of global geo-ontology research. In addition, we present a chord diagram of countries and perform a visual cluster analysis of a knowledge co-citation network of references, disclosing potential academic communities and identifying key points, main research areas, and future research trends. The International Journal of Geographical Information Science, Progress in Human Geography, and Computers & Geosciences are the most active journals. The USA makes the largest contribution to geo-ontology research by virtue of its highest numbers of independent and collaborative papers, and its dominance was also confirmed in the country chord diagram. The majority of institutions are in the USA, Western Europe, and Eastern Asia. Wuhan University, the University of Munster, and the Chinese Academy of Sciences are notable geo-ontology institutions. Keywords such as "Semantic Web," "GIS," and "space" have attracted a great deal of attention. "Semantic granularity in ontology-driven geographic information systems," "Ontologies in support of activities in geographical space," and "A translation approach to portable ontology specifications" have the highest citation centrality. Geographical space, computer-human interaction, and ontology cognition are the three main research areas of geo-ontology. The semantic mismatch between the producers and users of ontology data, as well as error propagation in interdisciplinary and cross-linguistic data reuse, needs to be solved. In addition, the development of geo-ontology modeling primitives based on OWL (Web Ontology Language) and methods to automatically rework data in the Semantic Web are needed. Furthermore, the topological relations between geographical entities still require further study.

  16. Knowledge retrieval from PubMed abstracts and electronic medical records with the Multiple Sclerosis Ontology.

    PubMed

    Malhotra, Ashutosh; Gündel, Michaela; Rajput, Abdul Mateen; Mevissen, Heinz-Theodor; Saiz, Albert; Pastor, Xavier; Lozano-Rubi, Raimundo; Martinez-Lapiscina, Elena H; Zubizarreta, Irati; Mueller, Bernd; Kotelnikova, Ekaterina; Toldo, Luca; Hofmann-Apitius, Martin; Villoslada, Pablo

    2015-01-01

    In order to retrieve useful information from the scientific literature and electronic medical records (EMR), we developed an ontology specific to Multiple Sclerosis (MS). The MS Ontology was created using scientific literature and expert review under the Protégé OWL environment. We developed a dictionary with semantic synonyms and translations to different languages for mining EMR. The MS Ontology was integrated with other ontologies and dictionaries (diseases/comorbidities, gene/protein, pathways, drugs) into the text-mining tool SCAIView. We analyzed the EMRs of 624 patients with MS using the MS Ontology dictionary in order to identify drug usage and comorbidities in MS. Testing competency questions and functional evaluation using F statistics further validated the usefulness of the MS Ontology. Validation of the lexicalized ontology by means of named entity recognition-based methods showed adequate performance (F score = 0.73). The MS Ontology retrieved 80% of the genes associated with MS from scientific abstracts and identified additional pathways targeted by approved disease-modifying drugs (e.g. apoptosis pathways associated with mitoxantrone, rituximab and fingolimod). The analysis of the EMRs of patients with MS identified current usage of disease-modifying drugs and symptomatic therapy as well as comorbidities, in agreement with recent reports. The MS Ontology provides a semantic framework able to automatically extract information from both the scientific literature and the EMRs of patients with MS, revealing new pathogenesis insights as well as new clinical information.
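    Dictionary-based mining of EMR text can be sketched as a longest-first lookup of concept synonyms (the synonym list and the naive substring matching are assumptions for illustration; a real pipeline would tokenize and respect word boundaries):

```python
def build_lexicon(ontology_dictionary: dict) -> dict:
    """Invert {concept: [synonyms]} into a surface-form -> concept lookup."""
    lexicon = {}
    for concept, synonyms in ontology_dictionary.items():
        for form in [concept] + synonyms:
            lexicon[form.lower()] = concept
    return lexicon

def annotate(text: str, lexicon: dict) -> list:
    """Greedy longest-first dictionary matching over the note text.
    Naive substring search -- short forms like 'MS' would need
    word-boundary handling in practice."""
    found = []
    lowered = text.lower()
    for form in sorted(lexicon, key=len, reverse=True):
        if form in lowered:
            found.append((form, lexicon[form]))
    return found
```

    The same lexicon, extended with translations, is what lets a single ontology drive mining of EMRs written in different languages.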

  17. Computable visually observed phenotype ontological framework for plants

    PubMed Central

    2011-01-01

    Background The ability to search for and precisely compare similar phenotypic appearances within and across species has vast potential in plant science and genetic research. The difficulty lies in the fact that many visual phenotypic data, especially visually observed phenotypes that oftentimes cannot be directly measured quantitatively, are in the form of text annotations, and these descriptions are plagued by semantic ambiguity, heterogeneity, and low granularity. Though several bio-ontologies have been developed to standardize phenotypic (and genotypic) information and permit comparisons across species, these semantic issues persist and prevent precise analysis and retrieval of information. A framework suitable for the modeling and analysis of precise computable representations of such phenotypic appearances is needed. Results We have developed a new framework called the Computable Visually Observed Phenotype Ontological Framework for plants. This work provides a novel quantitative view of descriptions of plant phenotypes that leverages existing bio-ontologies and utilizes a computational approach to capture and represent domain knowledge in a machine-interpretable form. This is accomplished by means of a robust and accurate semantic mapping module that automatically maps high-level semantics to low-level measurements computed from phenotype imagery. The framework was applied to two different plant species, with semantic rules mined and an ontology constructed. Rule quality was evaluated and showed high-quality rules for most semantics. This framework also facilitates automatic annotation of phenotype images and can be adopted by different plant communities to aid in their research. Conclusions The Computable Visually Observed Phenotype Ontological Framework for plants has been developed for more efficient and accurate management of visually observed phenotypes, which play a significant role in plant genomics research. The uniqueness of this framework is its ability to bridge the knowledge of informaticians and plant science researchers by translating descriptions of visually observed phenotypes into standardized, machine-understandable representations, thus enabling the development of advanced information retrieval and phenotype annotation analysis tools for the plant science community. PMID:21702966

  18. Self-organizing ontology of biochemically relevant small molecules

    PubMed Central

    2012-01-01

    Background The advent of high-throughput experimentation in biochemistry has led to the generation of vast amounts of chemical data, necessitating the development of novel analysis, characterization, and cataloguing techniques and tools. Recently, a movement to publicly release such data has advanced biochemical structure-activity relationship research, while providing new challenges, the biggest being the curation, annotation, and classification of this information to facilitate useful biochemical pattern analysis. Unfortunately, the human resources currently employed by the organizations supporting these efforts (e.g. ChEBI) are expanding linearly, while new useful scientific information is being released in a seemingly exponential fashion. Compounding this, currently existing chemical classification and annotation systems are not amenable to automated classification, formal and transparent chemical class definition axiomatization, facile class redefinition, or novel class integration, thus further limiting chemical ontology growth by necessitating human involvement in curation. Clearly, there is a need for the automation of this process, especially for novel chemical entities of biological interest. Results To address this, we present a formal framework based on Semantic Web technologies for the automatic design of a chemical ontology which can be used for automated classification of novel entities. We demonstrate the automatic self-assembly of a structure-based chemical ontology based on 60 MeSH and 40 ChEBI chemical classes. This ontology is then used to classify 200 compounds with an accuracy of 92.7%. We extend these structure-based classes with molecular feature information and demonstrate the utility of our framework for classification of functionally relevant chemicals. Finally, we discuss an iterative approach that we envision for future biochemical ontology development. Conclusions We conclude that the proposed methodology can ease the burden of chemical data annotators and dramatically increase their productivity. We anticipate that the use of formal logic in our proposed framework will make chemical classification criteria more transparent to humans and machines alike and will thus facilitate predictive and integrative bioactivity model development. PMID:22221313

  19. Ontology based heterogeneous materials database integration and semantic query

    NASA Astrophysics Data System (ADS)

    Zhao, Shuai; Qian, Quan

    2017-10-01

    Materials digital data, high-throughput experiments and high-throughput computations are regarded as the three key pillars of materials genome initiatives. With the fast growth of materials data, data integration and sharing have become urgent needs and a hot topic in materials informatics. Due to the lack of semantic description, it is difficult to integrate data deeply at the semantic level with conventional heterogeneous database integration approaches such as federated databases or data warehouses. In this paper, a semantic integration method is proposed that creates a semantic ontology by extracting the database schema semi-automatically. Other heterogeneous databases are integrated into the ontology by means of relational algebra and the rooted graph. Based on the integrated ontology, semantic queries can be issued using SPARQL. In the experiments, two well-known first-principles computational databases, OQMD and the Materials Project, are used as integration targets, which shows the feasibility and effectiveness of our method.
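    A SPARQL query over the integrated ontology ultimately reduces to triple-pattern matching; a pure-Python sketch of that core operation (the predicate names, prefixes, and band-gap values are fabricated for illustration, not actual OQMD/Materials Project data):

```python
# A tiny integrated triple store; None in a pattern acts like a SPARQL variable.
TRIPLES = [
    ("oqmd:Fe2O3", "mat:hasBandGap", "2.2"),
    ("mp:Fe2O3",   "mat:hasBandGap", "2.0"),
    ("oqmd:Fe2O3", "owl:sameAs",     "mp:Fe2O3"),
]

def query(pattern, triples=TRIPLES):
    """Return all triples matching a (subject, predicate, object) pattern,
    where None matches anything -- the kernel of SPARQL evaluation."""
    s, p, o = pattern
    return [t for t in triples
            if (s is None or t[0] == s)
            and (p is None or t[1] == p)
            and (o is None or t[2] == o)]
```

    Real systems (e.g. rdflib or a triple store endpoint) add joins across patterns, but the `owl:sameAs` link above already shows how records from two source databases become queryable as one graph.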

  20. Evaluation of an ontological resource for pharmacovigilance.

    PubMed

    Jaulent, Marie-Christine; Alecu, Iulian

    2009-01-01

    In this work, we present a methodology for evaluating an ontology designed in a previous study to describe adverse drug reactions. We evaluate it in terms of its fitness for grouping cases in pharmacovigilance. We define as the gold standard the Standardized MedDRA Queries (SMQs), developed manually to group terms representing similar medical conditions. We perform an automatic search in the ontology in order to retrieve concepts related to the medical conditions, and an optimal query is built for each medical condition. The evaluation relies on the comparison between the terms in the SMQ and the terms subsumed by the query, quantified by sensitivity and specificity. We applied this methodology to 24 SMQs and obtained a mean sensitivity of 0.82. This work validates the semantic resource and provides, in perspective, tools to maintain the ontology as the knowledge evolves.
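    The sensitivity/specificity comparison between the terms subsumed by a query and the SMQ gold standard can be computed directly from term sets; a minimal sketch (the term universe here is just all terms of the terminology under evaluation):

```python
def evaluate_query(retrieved: set, gold: set, universe: set):
    """Compare terms subsumed by an ontology query against the SMQ
    gold standard, returning (sensitivity, specificity)."""
    tp = len(retrieved & gold)              # gold terms the query found
    fn = len(gold - retrieved)              # gold terms the query missed
    fp = len(retrieved - gold)              # extra terms the query pulled in
    tn = len(universe - retrieved - gold)   # terms correctly left out
    sensitivity = tp / (tp + fn) if (tp + fn) else 0.0
    specificity = tn / (tn + fp) if (tn + fp) else 0.0
    return sensitivity, specificity
```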

  1. Semantic annotation of Web data applied to risk in food.

    PubMed

    Hignette, Gaëlle; Buche, Patrice; Couvert, Olivier; Dibie-Barthélemy, Juliette; Doussot, David; Haemmerlé, Ollivier; Mettler, Eric; Soler, Lydie

    2008-11-30

    A preliminary step in food risk assessment is the gathering of experimental data. In the framework of the Sym'Previus project (http://www.symprevius.org), a complete data integration system has been designed, grouping data provided by industrial partners and data extracted from papers published in the main scientific journals of the domain. Those data have been classified by means of a predefined vocabulary, called an ontology. Our aim is to complement the database with data extracted from the Web. In the framework of the WebContent project (www.webcontent.fr), we have designed a semi-automatic acquisition tool, called @WEB, which retrieves scientific documents from the Web. During the @WEB process, data tables are extracted from the documents and then annotated with the ontology. We focus on the data tables as they generally contain a synthesis of the data published in the documents. In this paper, we explain how the columns of the data tables are automatically annotated with data types of the ontology and how the relations represented by a table are recognised. We also give the results of our experiments assessing the quality of such an annotation.
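    One simple way to annotate a table column with an ontology data type is to score each type by how many of the column's cell values appear in that type's lexicon (the lexicons and the 0.5 acceptance threshold below are illustrative assumptions, not the @WEB implementation):

```python
def annotate_column(values: list, type_lexicons: dict) -> str:
    """Assign an ontology data type to a table column by lexicon coverage:
    the type whose lexicon contains the largest fraction of cell values
    wins, provided it covers more than half of them."""
    def coverage(lexicon: set) -> float:
        lowered = [v.lower() for v in values]
        return sum(v in lexicon for v in lowered) / len(lowered)
    best = max(type_lexicons, key=lambda t: coverage(type_lexicons[t]))
    return best if coverage(type_lexicons[best]) > 0.5 else "unknown"
```

    Once each column has a type, the relation a table represents (e.g. growth of a microorganism in a food product) can be recognized from the combination of column types.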

  2. Ontology design patterns to disambiguate relations between genes and gene products in GENIA

    PubMed Central

    2011-01-01

    Motivation Annotated reference corpora play an important role in biomedical information extraction. A semantic annotation of the natural language texts in these reference corpora using formal ontologies is challenging due to the inherent ambiguity of natural language. The provision of formal definitions and axioms for semantic annotations offers the means for ensuring consistency as well as enables the development of verifiable annotation guidelines. Consistent semantic annotations facilitate the automatic discovery of new information through deductive inferences. Results We provide a formal characterization of the relations used in the recent GENIA corpus annotations. For this purpose, we both select existing axiom systems based on the desired properties of the relations within the domain and develop new axioms for several relations. To apply this ontology of relations to the semantic annotation of text corpora, we implement two ontology design patterns. In addition, we provide a software application to convert annotated GENIA abstracts into OWL ontologies by combining both the ontology of relations and the design patterns. As a result, the GENIA abstracts become available as OWL ontologies and are amenable for automated verification, deductive inferences and other knowledge-based applications. Availability Documentation, implementation and examples are available from http://www-tsujii.is.s.u-tokyo.ac.jp/GENIA/. PMID:22166341

  3. An ontology-based annotation of cardiac implantable electronic devices to detect therapy changes in a national registry.

    PubMed

    Rosier, Arnaud; Mabo, Philippe; Chauvin, Michel; Burgun, Anita

    2015-05-01

    The patient population benefiting from cardiac implantable electronic devices (CIEDs) is increasing. This study introduces a device annotation method that supports the consistent description of the functional attributes of cardiac devices and evaluates how this method can detect device changes from a CIED registry. We designed the Cardiac Device Ontology, an ontology of CIEDs and device functions. We annotated 146 cardiac devices with this ontology and used it to detect therapy changes with respect to atrioventricular pacing, cardiac resynchronization therapy, and defibrillation capability in a French national registry of patients with implants (STIDEFIX). We then analyzed a set of 6905 device replacements from the STIDEFIX registry. Ontology-based identification of therapy changes (upgraded, downgraded, or similar) was accurate across the 6905 cases and performed better than a straightforward analysis of the registry codes (F-measure 1.00 versus 0.75 to 0.97). This study demonstrates the feasibility and effectiveness of ontology-based functional annotation of devices in the cardiac domain. Such annotation allowed a better description and in-depth analysis of STIDEFIX. This method was useful for the automatic detection of therapy changes and may be reused for analyzing data from other device registries.
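    At its core, detecting a therapy change amounts to comparing the sets of annotated device functions before and after a replacement; a minimal sketch (the capability names are illustrative, and the study's actual logic classifies per therapy rather than with a single set comparison):

```python
def therapy_change(old_device: set, new_device: set) -> str:
    """Classify a device replacement by comparing the functional
    capabilities annotated on the explanted and implanted devices."""
    if new_device > old_device:      # proper superset: functions gained
        return "upgraded"
    if new_device < old_device:      # proper subset: functions lost
        return "downgraded"
    if new_device == old_device:
        return "similar"
    return "mixed"                   # some gained and some lost
```

usage: `therapy_change({"av_pacing"}, {"av_pacing", "defibrillation"})` classifies the replacement as an upgrade because defibrillation capability was added.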

  4. A web-based system architecture for ontology-based data integration in the domain of IT benchmarking

    NASA Astrophysics Data System (ADS)

    Pfaff, Matthias; Krcmar, Helmut

    2018-03-01

    In the domain of IT benchmarking (ITBM), a variety of data and information are collected. Although these data serve as the basis for business analyses, no unified semantic representation of such data yet exists. Consequently, data analysis across different distributed data sets and different benchmarks is almost impossible. This paper presents a system architecture and prototypical implementation for an integrated data management of distributed databases based on a domain-specific ontology. To preserve the semantic meaning of the data, the ITBM ontology is linked to data sources and functions as the central concept for database access. Thus, additional databases can be integrated by linking them to this domain-specific ontology and are directly available for further business analyses. Moreover, the web-based system supports the process of mapping ontology concepts to external databases by introducing a semi-automatic mapping recommender and by visualizing possible mapping candidates. The system also provides a natural language interface to easily query linked databases. The expected result of this ontology-based approach of knowledge representation and data access is an increase in knowledge and data sharing in this domain, which will enhance existing business analysis methods.
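    The semi-automatic mapping recommender described above can be approximated by name-similarity candidate ranking, leaving confirmation or rejection to the user; a sketch using difflib (the ontology concept and column names are invented for illustration):

```python
from difflib import get_close_matches

def recommend_mappings(ontology_concepts: list, db_columns: list,
                       cutoff: float = 0.6) -> dict:
    """For each external database column, suggest up to three ontology
    concepts ranked by name similarity. A human then confirms each
    candidate before the column is linked to the ontology."""
    return {
        column: get_close_matches(column, ontology_concepts, n=3, cutoff=cutoff)
        for column in db_columns
    }
```

    Ranking rather than auto-committing is the point: the system visualizes candidates, and only confirmed mappings make the column available for ontology-based queries.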

  5. From Lexical Regularities to Axiomatic Patterns for the Quality Assurance of Biomedical Terminologies and Ontologies.

    PubMed

    van Damme, Philip; Quesada-Martínez, Manuel; Cornet, Ronald; Fernández-Breis, Jesualdo Tomás

    2018-06-13

    Ontologies and terminologies have been identified as key resources for the achievement of semantic interoperability in biomedical domains. The development of ontologies is performed as joint work by domain experts and knowledge engineers. The maintenance and auditing of these resources is also the responsibility of such experts, and this is usually a time-consuming, mostly manual task. Manual auditing is impractical and ineffective for most biomedical ontologies, especially larger ones. An example is SNOMED CT, a key resource in many countries for codifying medical information; SNOMED CT contains more than 300,000 concepts, so its auditing requires the support of automatic methods. Many biomedical ontologies contain natural language content for humans and logical axioms for machines. The 'lexically suggest, logically define' principle means that there should be a relation between what is expressed in natural language and what is expressed as logical axioms, and that such a relation should be useful for auditing and quality assurance. This principle also implies that the natural language content for humans could be used to generate the logical axioms for machines. In this work, we propose a method that combines lexical analysis and clustering techniques to (1) identify regularities in the natural language content of ontologies; (2) cluster, by similarity, labels exhibiting a regularity; (3) extract relevant information from those clusters; and (4) propose logical axioms for each cluster with the support of axiom templates. These logical axioms can then be evaluated against the existing axioms in the ontology to check their correctness and completeness, which are two fundamental objectives in auditing and quality assurance. In this paper, we describe the application of the method to two SNOMED CT modules: a 'congenital' module, obtained using concepts exhibiting the attribute Occurrence - Congenital, and a 'chronic' module, using concepts exhibiting the attribute Clinical course - Chronic. We obtained a precision of 75% and a recall of 28% for the 'congenital' module, and 64% and 40% for the 'chronic' one. We consider these results promising, so our method can contribute to supporting content editors by using automatic methods for assuring the quality of biomedical ontologies and terminologies.
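    The pipeline of steps (1)-(4) can be caricatured in a few lines: find labels sharing a lexical regularity, extract the variable part, and fill an axiom template (the regex and the template string are illustrative, only loosely modeled on SNOMED CT's Clinical course attribute):

```python
import re
from collections import defaultdict

def cluster_by_regularity(labels: list, pattern: str) -> dict:
    """Group labels matching a lexical regularity such as 'chronic X',
    keeping the variable part X for use in an axiom template."""
    clusters = defaultdict(list)
    regex = re.compile(pattern)
    for label in labels:
        m = regex.match(label)
        if m:
            clusters[pattern].append(m.group(1))
    return clusters

def propose_axioms(clusters: dict, template: str) -> list:
    """Instantiate an axiom template for every member of every cluster.
    Proposed axioms would then be checked against the ontology's
    existing axioms for correctness and completeness."""
    return [template.format(x=x) for members in clusters.values() for x in members]
```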

  6. Ontology-guided organ detection to retrieve web images of disease manifestation: towards the construction of a consumer-based health image library.

    PubMed

    Chen, Yang; Ren, Xiaofeng; Zhang, Guo-Qiang; Xu, Rong

    2013-01-01

    Visual information is a crucial aspect of medical knowledge. Building a comprehensive medical image base, in the spirit of the Unified Medical Language System (UMLS), would greatly benefit patient education and self-care. However, the collection and annotation of such a large-scale image base is challenging. Our objective was to combine visual object detection techniques with a medical ontology to automatically mine web photos and retrieve a large number of disease manifestation images with minimal manual labeling effort. As a proof of concept, we first learned five organ detectors on three detection scales for eyes, ears, lips, hands, and feet. Given a disease, we used information from the UMLS to select affected body parts, ran the pretrained organ detectors on web images, and combined the detection outputs to retrieve disease images. Compared with a supervised image retrieval approach that requires training images for every disease, our ontology-guided approach exploits the visual information of body parts shared across diseases. In retrieving 2220 web images of 32 diseases, we reduced the manual labeling effort to 15.6% while improving the average precision by 3.9%, from 77.7% to 81.6%. For 40.6% of the diseases, we improved the precision by 10%. The results confirm that the web is a feasible source for automatic disease image retrieval for health image database construction. Our approach requires a small amount of manual effort to collect complex disease images and to annotate them with standard medical ontology terms.

  7. Organization of the concepts of the Platonic dialogue Parmenides into a software ontology

    NASA Astrophysics Data System (ADS)

    Dendrinos, Markos

    2015-02-01

    This paper concerns the creation of a knowledge management ontology (in the open-source OWL/Protégé environment) including all the concepts, and both the explicit and implicit relations between the concepts, found in the Platonic dialogue Parmenides. The selected part of Parmenides is the second half (137c4-166c6), where Parmenides takes on the challenge of expounding a Zenonian-type argumentation concerning the properties of one and being. The incorporation of the Platonic concepts and relations into a software ontology gives the opportunity to organize the philosophical ideas of a certain context in a hierarchical tree enriched with the various defined relations, to detect possible logical inconsistencies, and to automatically produce further concept relations.

  8. Automatic geospatial information Web service composition based on ontology interface matching

    NASA Astrophysics Data System (ADS)

    Xu, Xianbin; Wu, Qunyong; Wang, Qinmin

    2008-10-01

    With Web services technology, the functions of a WebGIS can be presented as geospatial information services, helping to overcome the isolation of information in the geospatial information sharing field. Geospatial information web service composition, which conglomerates outsourced services working in tandem to offer a value-added service, thus plays a key role in taking full advantage of geospatial information services. This paper proposes an automatic geospatial information web service composition algorithm that employs the lexical database WordNet to analyze semantic distances among interfaces. By matching input/output parameters against the semantic meaning of pairs of service interfaces, a geospatial information web service chain can be created from a number of candidate services. An implementation of the algorithm is also presented; its results show the feasibility of the algorithm and its great promise for the emerging demand for geospatial information web service composition.
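    Interface-based composition can be sketched as greedy chaining on input/output types; here exact string equality stands in for the paper's WordNet semantic distances, and the service names and types are invented for illustration:

```python
def compose(services: list, start_type: str, goal_type: str):
    """Greedy chain building: repeatedly append a service whose input type
    matches the current output type until the goal type is produced.
    Each service is a (name, input_type, output_type) tuple; a real system
    would rank candidates by WordNet-based semantic distance instead of
    requiring exact type equality."""
    chain, current = [], start_type
    remaining = list(services)
    while current != goal_type:
        nxt = next((s for s in remaining if s[1] == current), None)
        if nxt is None:
            return None          # no service can extend the chain
        chain.append(nxt[0])
        current = nxt[2]
        remaining.remove(nxt)
    return chain
```

usage: with candidate services `("Geocode", "Address", "Coordinates")` and `("BufferZone", "Coordinates", "Polygon")`, composing from `"Address"` to `"Polygon"` yields the chain `["Geocode", "BufferZone"]`.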

  9. Combining machine learning and ontological data handling for multi-source classification of nature conservation areas

    NASA Astrophysics Data System (ADS)

    Moran, Niklas; Nieland, Simon; Tintrup gen. Suntrup, Gregor; Kleinschmit, Birgit

    2017-02-01

    Manual field surveys for nature conservation management are expensive and time-consuming and could be supplemented and streamlined by using Remote Sensing (RS). RS is critical to meet requirements of existing laws such as the EU Habitats Directive (HabDir) and, more importantly, to meet future challenges. The full potential of RS has yet to be harnessed, as different nomenclatures and procedures hinder interoperability, comparison and provenance. Therefore, automated tools are needed to use RS data to produce comparable, empirical data outputs that lend themselves to data discovery and provenance. These issues are addressed by a novel, semi-automatic ontology-based classification method that uses machine learning algorithms and Web Ontology Language (OWL) ontologies and yields traceable, interoperable and observation-based classification outputs. The method was tested on European Union Nature Information System (EUNIS) grasslands in Rheinland-Palatinate, Germany. The developed methodology is a first step in developing observation-based ontologies in the field of nature conservation. The tests show promising results for the determination of the grassland indicators wetness and alkalinity, with an overall accuracy of 85% for alkalinity and 76% for wetness.

  10. n-D shape/texture optimal synthetic description and modeling by GEOGINE

    NASA Astrophysics Data System (ADS)

    Fiorini, Rodolfo A.; Dacquino, Gianfranco F.

    2004-12-01

    GEOGINE (GEOmetrical enGINE), a state-of-the-art OMG (Ontological Model Generator) based on n-D Tensor Invariants for multidimensional shape/texture optimal synthetic description and learning, is presented. Robust characterization of elementary geometric shapes subjected to geometric transformations, on a rigorous mathematical level, is a key problem in many computer applications in different areas of interest. The past four decades have seen solutions based almost exclusively on the use of n-Dimensional Moment and Fourier descriptor invariants. The present paper introduces a new approach for automatic model generation based on n-Dimensional Tensor Invariants as a formal dictionary. An ontological model is the kernel used for specifying ontologies, so how close an ontology can come to the real world depends on the possibilities offered by the ontological model. By this approach, even chromatic information content can be easily and reliably decoupled from target geometric information and computed into robust colour shape parameter attributes. The main GEOGINE operational advantages over previous approaches are: 1) Automated Model Generation, 2) an Invariant Minimal Complete Set for computational efficiency, 3) Arbitrary Model Precision for robust object description.
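    The moment-invariant approach the record cites as prior art can be demonstrated on a tiny point set: central moments are unchanged by translation. The shape below is invented for illustration.

```python
# The moment-descriptor idea behind the prior art GEOGINE improves upon:
# central moments of a shape are invariant to translation, so a translated
# copy of the same shape yields identical descriptors.

def central_moment(points, p, q):
    n = len(points)
    cx = sum(x for x, y in points) / n  # centroid x
    cy = sum(y for x, y in points) / n  # centroid y
    return sum((x - cx) ** p * (y - cy) ** q for x, y in points)

shape = [(1, 1), (2, 1), (1, 2), (3, 3)]          # hypothetical point set
shifted = [(x + 5, y + 7) for x, y in shape]      # same shape, translated

print(central_moment(shape, 2, 0) == central_moment(shifted, 2, 0))  # True
```

    Rotation and scale invariance require further normalization (e.g. Hu's combinations of central moments); the tensor-invariant approach in the record generalizes this to n dimensions.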

  11. Definition of an Ontology Matching Algorithm for Context Integration in Smart Cities

    PubMed Central

    Otero-Cerdeira, Lorena; Rodríguez-Martínez, Francisco J.; Gómez-Rodríguez, Alma

    2014-01-01

    In this paper we describe a novel proposal in the field of smart cities: using an ontology matching algorithm to guarantee the automatic information exchange between the agents and the smart city. A smart city is composed of different types of agents that behave as producers and/or consumers of the information in the smart city. In our proposal, the data from the context are obtained by sensor and device agents, while users interact with the smart city by means of user or system agents. The knowledge of each agent, as well as the smart city's knowledge, is semantically represented using different ontologies. To have an open city that is fully accessible to any agent, and therefore to provide enhanced services to the users, there is a need to ensure seamless communication between agents and the city, regardless of their inner knowledge representations, i.e., ontologies. To meet this goal we use ontology matching techniques; specifically, we have defined a new ontology matching algorithm, called OntoPhil, to be deployed within a smart city, which has never been done before. OntoPhil was tested on the benchmarks provided by the well-known Ontology Alignment Evaluation Initiative and also compared to other matching algorithms, although those algorithms were not specifically designed for smart cities. Additionally, specific tests involving a smart city's ontology and different types of agents were conducted to validate the usefulness of OntoPhil in the smart city environment. PMID:25494353
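    OntoPhil itself is not reproduced in this record, but the lexical step that most matchers start from can be sketched: align the classes of two ontologies by label similarity. The class names and threshold below are invented.

```python
# A minimal stand-in for the lexical step of ontology matching: align classes
# of two ontologies by Jaccard similarity over character trigrams of labels.
# (Illustrative only; OntoPhil's actual anchoring strategy is more involved.)

def trigrams(label):
    s = f"  {label.lower()}  "  # pad so short labels still yield trigrams
    return {s[i:i + 3] for i in range(len(s) - 2)}

def similarity(a, b):
    ta, tb = trigrams(a), trigrams(b)
    return len(ta & tb) / len(ta | tb)

def align(onto_a, onto_b, threshold=0.4):
    """Return the best above-threshold match in onto_b for each class of onto_a."""
    pairs = []
    for a in onto_a:
        best = max(onto_b, key=lambda b: similarity(a, b))
        if similarity(a, best) >= threshold:
            pairs.append((a, best))
    return pairs

# Hypothetical smart-city and agent ontologies.
city = ["TemperatureSensor", "TrafficLight", "Citizen"]
agent = ["temperature_sensor", "traffic_signal", "inhabitant"]
print(align(city, agent))
```

    Purely lexical matching misses synonym pairs such as Citizen/inhabitant, which is why full matchers add structural and semantic similarity measures on top.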

  12. Definition of an Ontology Matching Algorithm for Context Integration in Smart Cities.

    PubMed

    Otero-Cerdeira, Lorena; Rodríguez-Martínez, Francisco J; Gómez-Rodríguez, Alma

    2014-12-08

    In this paper we describe a novel proposal in the field of smart cities: using an ontology matching algorithm to guarantee the automatic information exchange between the agents and the smart city. A smart city is composed of different types of agents that behave as producers and/or consumers of the information in the smart city. In our proposal, the data from the context are obtained by sensor and device agents, while users interact with the smart city by means of user or system agents. The knowledge of each agent, as well as the smart city's knowledge, is semantically represented using different ontologies. To have an open city that is fully accessible to any agent, and therefore to provide enhanced services to the users, there is a need to ensure seamless communication between agents and the city, regardless of their inner knowledge representations, i.e., ontologies. To meet this goal we use ontology matching techniques; specifically, we have defined a new ontology matching algorithm, called OntoPhil, to be deployed within a smart city, which has never been done before. OntoPhil was tested on the benchmarks provided by the well-known Ontology Alignment Evaluation Initiative and also compared to other matching algorithms, although those algorithms were not specifically designed for smart cities. Additionally, specific tests involving a smart city's ontology and different types of agents were conducted to validate the usefulness of OntoPhil in the smart city environment.

  13. Annotation of phenotypic diversity: decoupling data curation and ontology curation using Phenex.

    PubMed

    Balhoff, James P; Dahdul, Wasila M; Dececchi, T Alexander; Lapp, Hilmar; Mabee, Paula M; Vision, Todd J

    2014-01-01

    Phenex (http://phenex.phenoscape.org/) is a desktop application for semantically annotating the phenotypic character matrix datasets common in evolutionary biology. Since its initial publication, we have added new features that address several major bottlenecks in the efficiency of the phenotype curation process: allowing curators during the data curation phase to provisionally request terms that are not yet available from a relevant ontology; supporting quality control against annotation guidelines to reduce later manual review and revision; and enabling the sharing of files for collaboration among curators. We decoupled data annotation from ontology development by creating an Ontology Request Broker (ORB) within Phenex. Curators can use the ORB to request a provisional term for use in data annotation; the provisional term can be automatically replaced with a permanent identifier once the term is added to an ontology. We added a set of annotation consistency checks to prevent common curation errors, reducing the need for later correction. We facilitated collaborative editing by improving the reliability of Phenex when used with online folder sharing services, via file change monitoring and continual autosave. With the addition of these new features, and in particular the Ontology Request Broker, Phenex users have been able to focus more effectively on data annotation. Phenoscape curators using Phenex have reported a smoother annotation workflow, with much reduced interruptions from ontology maintenance and file management issues.
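    The Ontology Request Broker workflow described above comes down to a simple substitution once the requested term lands in the ontology. The identifiers and record shapes below are illustrative, not Phenex's actual API or real UBERON mappings.

```python
# Sketch of the provisional-term idea behind Phenex's Ontology Request Broker:
# annotations may cite provisional IDs, which are swapped for permanent
# ontology identifiers once the requested term is published.
# (IDs and structures are invented for illustration.)

annotations = [
    {"character": "dorsal fin", "term": "PROV:0001"},
    {"character": "pelvic girdle", "term": "UBERON:0007832"},
]

# Mapping filled in once each requested term is added to the ontology.
resolved = {"PROV:0001": "UBERON:0003097"}

def resolve(annotations, resolved):
    """Replace provisional identifiers with permanent ontology IDs."""
    return [{**a, "term": resolved.get(a["term"], a["term"])} for a in annotations]

for a in resolve(annotations, resolved):
    print(a["character"], a["term"])
```

    Keeping the substitution automatic is what lets curators keep annotating without waiting on ontology releases.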

  14. Improving automation standards via semantic modelling: Application to ISA88.

    PubMed

    Dombayci, Canan; Farreres, Javier; Rodríguez, Horacio; Espuña, Antonio; Graells, Moisès

    2017-03-01

    Standardization is essential for automation. Extensibility, scalability, and reusability are important features for automation software and rely on efficient modelling of the addressed systems. The work presented here stems from the ongoing development of a methodology for semi-automatic ontology construction from technical documents. Its main aim is to systematically check the consistency of technical documents and support its improvement. The formalization of conceptual models and the subsequent writing of technical standards are analyzed together, and guidelines are proposed for application to future technical standards. Three paradigms are discussed for the development of domain ontologies from technical documents, starting from the current state of the art, continuing with the intermediate method presented and used in this paper, and ending with the paradigm suggested for the future. The ISA88 Standard is taken as a representative case study. Linguistic techniques from the semi-automatic ontology construction methodology are applied to the ISA88 Standard, and different modelling and standardization aspects worth sharing with the automation community are addressed. This study discusses different paradigms for developing and sharing conceptual models for the subsequent development of automation software, and presents the systematic consistency checking method. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
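    One concrete form of document consistency checking is flagging domain terms a standard uses without defining. The glossary, vocabulary, and text below are invented ISA88-flavoured examples, not the paper's actual method.

```python
# A toy version of technical-document consistency checking: flag terms that
# a standard's text uses but its glossary never defines.
# (Terms and text are hypothetical, loosely styled after ISA88.)

import re

glossary = {"batch", "recipe", "unit procedure"}  # defined terms

text = """A recipe defines the production of a batch.
Each unit procedure runs on one unit."""

def undefined_terms(text, glossary, vocabulary):
    """Domain-vocabulary terms used in the text but missing from the glossary."""
    lowered = text.lower()
    used = {t for t in vocabulary if re.search(rf"\b{re.escape(t)}\b", lowered)}
    return used - glossary

vocabulary = {"batch", "recipe", "unit procedure", "unit"}
print(sorted(undefined_terms(text, glossary, vocabulary)))
```

    An ontology-backed checker generalizes this by also validating that the relations asserted between terms match the conceptual model, not just that the terms exist.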

  15. OntologyWidget - a reusable, embeddable widget for easily locating ontology terms.

    PubMed

    Beauheim, Catherine C; Wymore, Farrell; Nitzberg, Michael; Zachariah, Zachariah K; Jin, Heng; Skene, J H Pate; Ball, Catherine A; Sherlock, Gavin

    2007-09-13

    Biomedical ontologies are being widely used to annotate biological data in a computer-accessible, consistent and well-defined manner. However, due to their size and complexity, annotating data with appropriate terms from an ontology is often challenging for experts and non-experts alike, because there exist few tools that allow one to quickly find relevant ontology terms to easily populate a web form. We have produced a tool, OntologyWidget, which allows users to rapidly search for and browse ontology terms. OntologyWidget can easily be embedded in other web-based applications. OntologyWidget is written using AJAX (Asynchronous JavaScript and XML) and has two related elements. The first is a dynamic auto-complete ontology search feature. As a user enters characters into the search box, the appropriate ontology is queried remotely for terms that match the typed-in text, and the query results populate a drop-down list with all potential matches. Upon selection of a term from the list, the user can locate this term within a generic and dynamic ontology browser, which comprises the second element of the tool. The ontology browser shows the paths from a selected term to the root as well as parent/child tree hierarchies. We have implemented web services at the Stanford Microarray Database (SMD), which provide the OntologyWidget with access to over 40 ontologies from the Open Biological Ontology (OBO) website [1]. Each ontology is updated weekly. Adopters of the OntologyWidget can either use SMD's web services, or elect to rely on their own. Deploying the OntologyWidget can be accomplished in three simple steps: (1) install Apache Tomcat [2] on one's web server, (2) download and install the OntologyWidget servlet stub that provides access to the SMD ontology web services, and (3) create an html (HyperText Markup Language) file that refers to the OntologyWidget using a simple, well-defined format. 
    We have developed OntologyWidget, an easy-to-use ontology search and display tool that can be used on any web page by creating a simple html description. OntologyWidget provides a rapid auto-complete search function paired with an interactive tree display. We have developed a web service layer that communicates between the web page interface and a database of ontology terms. We currently store 40 of the ontologies from the OBO website [1], as well as several others. These ontologies are automatically updated on a weekly basis. OntologyWidget can be used in any web-based application to take advantage of the ontologies we provide via web services or any other ontology that is provided elsewhere in the correct format. The full source code for the JavaScript and description of the OntologyWidget is available from http://smd.stanford.edu/ontologyWidget/.
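    The auto-complete behaviour the widget exposes can be sketched server-side as a prefix search over sorted term labels. The term IDs and labels below are illustrative GO-style examples, not the actual SMD service.

```python
# Sketch of the server side of an auto-complete ontology search (illustrative,
# not the actual SMD web service): return terms whose label starts with the
# prefix the user has typed so far.

from bisect import bisect_left

# Hypothetical (term_id, label) pairs, sorted by label.
TERMS = sorted(
    [("GO:0006915", "apoptotic process"),
     ("GO:0008219", "cell death"),
     ("GO:0016049", "cell growth")],
    key=lambda t: t[1],
)
LABELS = [label for _, label in TERMS]

def autocomplete(prefix, limit=10):
    """Binary-search the sorted labels for the contiguous prefix range."""
    lo = bisect_left(LABELS, prefix)
    hits = []
    for term_id, label in TERMS[lo:lo + limit]:
        if not label.startswith(prefix):
            break
        hits.append((term_id, label))
    return hits

print(autocomplete("cell"))
```

    The AJAX layer simply issues this query on each keystroke and renders the returned pairs in the drop-down list.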

  16. A Uniform Ontology for Software Interfaces

    NASA Technical Reports Server (NTRS)

    Feyock, Stefan

    2002-01-01

    It is universally the case that computer users who are not also computer specialists prefer to deal with computers in terms of a familiar ontology, namely that of their application domains. For example, the well-known Windows ontology assumes that the user is an office worker, and therefore should be presented with a "desktop environment" featuring entities such as (virtual) file folders, documents, appointment calendars, and the like, rather than a world of machine registers and machine language instructions, or even the DOS command level. The central theme of this research has been the proposition that the user interacting with a software system should have at his disposal both the ontology underlying the system and a model of the system. This information is necessary for understanding the system in use, as well as for the automatic generation of assistance for the user, both in solving the problem for which the application is designed and in providing guidance on the capabilities and use of the system.

  17. Ontology-Based High-Level Context Inference for Human Behavior Identification

    PubMed Central

    Villalonga, Claudia; Razzaq, Muhammad Asif; Khan, Wajahat Ali; Pomares, Hector; Rojas, Ignacio; Lee, Sungyoung; Banos, Oresti

    2016-01-01

    Recent years have witnessed huge progress in the automatic identification of individual primitives of human behavior, such as activities or locations. However, the complex nature of human behavior demands more abstract contextual information for its analysis. This work presents an ontology-based method that combines low-level primitives of behavior, namely activities, locations and emotions (a combination unprecedented to date), to intelligently derive more meaningful high-level context information. The paper contributes a new open ontology describing both low-level and high-level context information, as well as their relationships. Furthermore, a framework building on the developed ontology and reasoning models is presented and evaluated. The proposed method proves to be robust while identifying high-level contexts even in the event of erroneously-detected low-level contexts. Despite reasonable inference times being obtained for a relevant set of users and instances, additional work is required to scale to long-term scenarios with a large number of users. PMID:27690050
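    Stripped of the OWL machinery, the inference described above combines low-level primitives through rules into a high-level label. The rules and context names below are invented; the paper performs this with ontology reasoning rather than a rule table.

```python
# High-level context inference of the kind described, reduced to a rule table
# (the record's method uses OWL reasoning; rules and labels here are invented).

RULES = [
    ({"activity": "running", "location": "park"}, "outdoor exercise"),
    ({"activity": "lying", "location": "home", "emotion": "calm"}, "resting"),
]

def infer(low_level):
    """Return the first high-level context whose conditions all match."""
    for conditions, high_level in RULES:
        if all(low_level.get(k) == v for k, v in conditions.items()):
            return high_level
    return "unknown"

print(infer({"activity": "running", "location": "park", "emotion": "happy"}))
```

    An ontology-based reasoner gains robustness over this sketch by exploiting the class hierarchy, e.g. a misdetected "jogging" can still satisfy a rule written against a parent class of aerobic activities.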

  18. The ACGT Master Ontology and its applications – Towards an ontology-driven cancer research and management system

    PubMed Central

    Brochhausen, Mathias; Spear, Andrew D.; Cocos, Cristian; Weiler, Gabriele; Martín, Luis; Anguita, Alberto; Stenzhorn, Holger; Daskalaki, Evangelia; Schera, Fatima; Schwarz, Ulf; Sfakianakis, Stelios; Kiefer, Stephan; Dörr, Martin; Graf, Norbert; Tsiknakis, Manolis

    2017-01-01

    Objective This paper introduces the objectives, methods and results of ontology development in the EU co-funded project Advancing Clinico-genomic Trials on Cancer – Open Grid Services for Improving Medical Knowledge Discovery (ACGT). While the available data in the life sciences has recently grown both in amount and quality, its full exploitation is being hindered by the use of different underlying technologies, coding systems, category schemes and reporting methods on the part of different research groups. The goal of the ACGT project is to contribute to the resolution of these problems by developing an ontology-driven, semantic grid services infrastructure that will enable efficient execution of discovery-driven scientific workflows in the context of multi-centric, post-genomic clinical trials. The focus of the present paper is the ACGT Master Ontology (MO). Methods ACGT project researchers undertook a systematic review of existing domain and upper-level ontologies, as well as of existing ontology design software, implementation methods, and end-user interfaces. This included the careful study of best practices, design principles and evaluation methods for ontology design, maintenance, implementation, and versioning, as well as for use on the part of domain experts and clinicians. 
    Results To date, the results of the ACGT project include (i) the development of a master ontology (the ACGT-MO) based on clearly defined principles of ontology development and evaluation; (ii) the development of a technical infrastructure (the ACGT Platform) that implements the ACGT-MO utilizing independent tools, components and resources that have been developed based on open architectural standards, and which includes an application updating and evolving the ontology efficiently in response to end-user needs; and (iii) the development of an Ontology-based Trial Management Application (ObTiMA) that integrates the ACGT-MO into the design process of clinical trials in order to guarantee automatic semantic integration without the need to perform a separate mapping process. PMID:20438862

  19. Transformation of standardized clinical models based on OWL technologies: from CEM to OpenEHR archetypes

    PubMed Central

    Legaz-García, María del Carmen; Menárguez-Tortosa, Marcos; Fernández-Breis, Jesualdo Tomás; Chute, Christopher G; Tao, Cui

    2015-01-01

    Introduction The semantic interoperability of electronic healthcare record (EHR) systems is a major challenge in the medical informatics area. International initiatives pursue the use of semantically interoperable clinical models, and ontologies have frequently been used in semantic interoperability efforts. The objective of this paper is to propose a generic, ontology-based, flexible approach for supporting the automatic transformation of clinical models, which is illustrated for the transformation of Clinical Element Models (CEMs) into openEHR archetypes. Methods Our transformation method exploits the fact that the information models of the most relevant EHR specifications are available in the Web Ontology Language (OWL). The transformation approach is based on defining mappings between those ontological structures. We propose a way in which CEM entities can be transformed into openEHR by using transformation templates and OWL as common representation formalism. The transformation architecture exploits the reasoning and inferencing capabilities of OWL technologies. Results We have devised a generic, flexible approach for the transformation of clinical models, implemented for the unidirectional transformation from CEM to openEHR, a series of reusable transformation templates, a proof-of-concept implementation, and a set of openEHR archetypes that validate the methodological approach. Conclusions We have been able to transform CEM into archetypes in an automatic, flexible, reusable transformation approach that could be extended to other clinical model specifications. We exploit the potential of OWL technologies for supporting the transformation process. We believe that our approach could be useful for international efforts in the area of semantic interoperability of EHR systems. PMID:25670753
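    The template-based transformation idea can be sketched without any OWL tooling: each template rule states where a source-model field lands in the target model. The miniature CEM-like entity and field names below are invented for illustration.

```python
# The template-based model transformation idea, sketched without OWL tooling
# (the paper maps OWL representations; structures and names here are invented).

# A miniature CEM-like source entity.
cem = {"name": "SystolicBP", "data": {"type": "PQ", "unit": "mm[Hg]"}}

# Transformation template: where each source field lands in the target model.
TEMPLATE = {
    "archetype_id": lambda m: f"openEHR-EHR-OBSERVATION.{m['name'].lower()}.v1",
    "value_type":   lambda m: {"PQ": "DV_QUANTITY"}.get(m["data"]["type"]),
    "units":        lambda m: m["data"]["unit"],
}

def transform(model, template):
    """Apply every template rule to the source model."""
    return {field: rule(model) for field, rule in template.items()}

archetype = transform(cem, TEMPLATE)
print(archetype["archetype_id"])  # openEHR-EHR-OBSERVATION.systolicbp.v1
```

    Expressing both models in OWL, as the paper does, lets a reasoner check that the mapped structures stay consistent with each specification's constraints, which a plain dictionary rewrite cannot do.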

  20. PDON: Parkinson's disease ontology for representation and modeling of the Parkinson's disease knowledge domain.

    PubMed

    Younesi, Erfan; Malhotra, Ashutosh; Gündel, Michaela; Scordis, Phil; Kodamullil, Alpha Tom; Page, Matt; Müller, Bernd; Springstubbe, Stephan; Wüllner, Ullrich; Scheller, Dieter; Hofmann-Apitius, Martin

    2015-09-22

    Despite the unprecedented and increasing amount of data, relatively little progress has been made in molecular characterization of mechanisms underlying Parkinson's disease. In the area of Parkinson's research, there is a pressing need to integrate various pieces of information into a meaningful context of presumed disease mechanism(s). Disease ontologies provide a novel means for organizing, integrating, and standardizing the knowledge domains specific to disease in a compact, formalized and computer-readable form and serve as a reference for knowledge exchange or systems modeling of disease mechanism. The Parkinson's disease ontology was built according to the life cycle of ontology building. Structural, functional, and expert evaluation of the ontology was performed to ensure the quality and usability of the ontology. A novelty metric has been introduced to measure the gain of new knowledge using the ontology. Finally, a cause-and-effect model was built around PINK1 and two gene expression studies from the Gene Expression Omnibus database were re-annotated to demonstrate the usability of the ontology. The Parkinson's disease ontology with a subclass-based taxonomic hierarchy covers the broad spectrum of major biomedical concepts from molecular to clinical features of the disease, and also reflects different views on disease features held by molecular biologists, clinicians and drug developers. The current version of the ontology contains 632 concepts, which are organized under nine views. The structural evaluation showed the balanced dispersion of concept classes throughout the ontology. The functional evaluation demonstrated that the ontology-driven literature search could gain novel knowledge not present in the reference Parkinson's knowledge map. The ontology was able to answer specific questions related to Parkinson's when evaluated by experts. 
    Finally, the added value of the Parkinson's disease ontology is demonstrated by ontology-driven modeling of PINK1 and re-annotation of gene expression datasets relevant to Parkinson's disease. The Parkinson's disease ontology delivers the knowledge domain of Parkinson's disease in a compact, computer-readable form, which can be further edited and enriched by the scientific community and can also be used to construct, represent and automatically extend Parkinson's-related computable models. A practical version of the Parkinson's disease ontology for browsing and editing can be publicly accessed at http://bioportal.bioontology.org/ontologies/PDON.

  1. OntologyWidget – a reusable, embeddable widget for easily locating ontology terms

    PubMed Central

    Beauheim, Catherine C; Wymore, Farrell; Nitzberg, Michael; Zachariah, Zachariah K; Jin, Heng; Skene, JH Pate; Ball, Catherine A; Sherlock, Gavin

    2007-01-01

    Background Biomedical ontologies are being widely used to annotate biological data in a computer-accessible, consistent and well-defined manner. However, due to their size and complexity, annotating data with appropriate terms from an ontology is often challenging for experts and non-experts alike, because there exist few tools that allow one to quickly find relevant ontology terms to easily populate a web form. Results We have produced a tool, OntologyWidget, which allows users to rapidly search for and browse ontology terms. OntologyWidget can easily be embedded in other web-based applications. OntologyWidget is written using AJAX (Asynchronous JavaScript and XML) and has two related elements. The first is a dynamic auto-complete ontology search feature. As a user enters characters into the search box, the appropriate ontology is queried remotely for terms that match the typed-in text, and the query results populate a drop-down list with all potential matches. Upon selection of a term from the list, the user can locate this term within a generic and dynamic ontology browser, which comprises the second element of the tool. The ontology browser shows the paths from a selected term to the root as well as parent/child tree hierarchies. We have implemented web services at the Stanford Microarray Database (SMD), which provide the OntologyWidget with access to over 40 ontologies from the Open Biological Ontology (OBO) website [1]. Each ontology is updated weekly. Adopters of the OntologyWidget can either use SMD's web services, or elect to rely on their own. Deploying the OntologyWidget can be accomplished in three simple steps: (1) install Apache Tomcat [2] on one's web server, (2) download and install the OntologyWidget servlet stub that provides access to the SMD ontology web services, and (3) create an html (HyperText Markup Language) file that refers to the OntologyWidget using a simple, well-defined format. 
    Conclusion We have developed OntologyWidget, an easy-to-use ontology search and display tool that can be used on any web page by creating a simple html description. OntologyWidget provides a rapid auto-complete search function paired with an interactive tree display. We have developed a web service layer that communicates between the web page interface and a database of ontology terms. We currently store 40 of the ontologies from the OBO website [1], as well as several others. These ontologies are automatically updated on a weekly basis. OntologyWidget can be used in any web-based application to take advantage of the ontologies we provide via web services or any other ontology that is provided elsewhere in the correct format. The full source code for the JavaScript and description of the OntologyWidget is available from . PMID:17854506

  2. A Builder's Guide to Super Good Cents Construction and Sales.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    OSU Extension Energy Program; United States. Bonneville Power Administration.

    This Builder's Guide describes the Super Good Cents® program and the benefits available to participating builders. It explains the program standards and the typical building techniques used by Super Good Cents builders. Finally, the guide tells how you can participate and answers many of the questions builders ask about the Super Good Cents program.

  3. Conceptual Model Formalization in a Semantic Interoperability Service Framework: Transforming Relational Database Schemas to OWL.

    PubMed

    Bravo, Carlos; Suarez, Carlos; González, Carolina; López, Diego; Blobel, Bernd

    2014-01-01

    Healthcare information is distributed through multiple heterogeneous and autonomous systems. Access to, and sharing of, distributed information sources is a challenging task. To contribute to meeting this challenge, this paper presents a formal, complete and semi-automatic transformation service from relational databases to the Web Ontology Language. The proposed service makes use of an algorithm that transforms several data models from different domains by deploying mainly inheritance rules. The paper emphasizes the relevance of integrating the proposed approach into an ontology-based interoperability service to achieve semantic interoperability.
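    The core of any relational-to-OWL transformation is a small set of structural rules, e.g. table becomes class, foreign key becomes object property. The schema and naming conventions below are invented; the authors' service applies richer rules (notably inheritance).

```python
# A minimal sketch of the schema-to-OWL idea (not the authors' service):
# each table becomes an owl:Class, each foreign key an owl:ObjectProperty.
# Schema and naming conventions are hypothetical.

schema = {  # table -> {foreign key column: referenced table}
    "patient": {},
    "encounter": {"patient_id": "patient"},
}

def to_owl(schema, base="http://example.org/onto#"):
    """Emit OWL axioms in Turtle syntax from a relational schema."""
    lines = [f"@prefix : <{base}> .",
             "@prefix owl: <http://www.w3.org/2002/07/owl#> .",
             "@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> ."]
    for table, fks in schema.items():
        lines.append(f":{table} a owl:Class .")
        for column, target in fks.items():
            prop = column.removesuffix("_id")
            lines.append(f":has_{prop} a owl:ObjectProperty ; "
                         f"rdfs:domain :{table} ; rdfs:range :{target} .")
    return "\n".join(lines)

print(to_owl(schema))
```

    Rules like these give a lossless structural skeleton; the semantic lift (e.g. recognizing that two tables form a subclass hierarchy) is where the inheritance rules mentioned in the record come in.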

  4. An efficient, large-scale, non-lattice-detection algorithm for exhaustive structural auditing of biomedical ontologies.

    PubMed

    Zhang, Guo-Qiang; Xing, Guangming; Cui, Licong

    2018-04-01

    One of the basic challenges in developing structural methods for systematic auditing of the quality of biomedical ontologies is the computational cost usually involved in exhaustive sub-graph analysis. We introduce ANT-LCA, a new algorithm for computing all non-trivial lowest common ancestors (LCA) of each pair of concepts in the hierarchical order induced by an ontology. The computation of LCAs is a fundamental step in the non-lattice approach to ontology quality assurance. Distinct from existing approaches, ANT-LCA only computes LCAs for non-trivial pairs, those having at least one common ancestor. To skip all trivial pairs that may be of no practical interest, ANT-LCA employs a simple but innovative algorithmic strategy combining topological order and dynamic programming to keep track of non-trivial pairs. We provide correctness proofs and demonstrate a substantial reduction in computational time for the two largest biomedical ontologies: SNOMED CT and the Gene Ontology (GO). ANT-LCA achieved an average computation time of 30 and 3 seconds per version for SNOMED CT and GO, respectively, about 2 orders of magnitude faster than the best known approaches. Our algorithm overcomes a fundamental computational barrier in sub-graph based structural analysis of large ontological systems. It enables the implementation of a new breed of structural auditing methods that not only identify potential problematic areas, but also automatically suggest changes to fix the issues. Such structural auditing methods can lead to more effective tools supporting ontology quality assurance work. Copyright © 2018 Elsevier Inc. All rights reserved.
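    The LCA computation at the heart of this work can be shown on a toy DAG: a pair's LCAs are its common ancestors that are not themselves ancestors of another common ancestor (a pair is non-lattice when it has more than one). The graph below is invented, and the brute-force traversal stands in for ANT-LCA's topological-order dynamic programming.

```python
# Computing lowest common ancestors in a concept DAG, the step ANT-LCA
# accelerates (this brute-force sketch replaces its DP bookkeeping; the
# graph is invented). A pair with >1 LCA signals a non-lattice fragment.

# DAG as child -> list of parents.
parents = {"a": [], "b": ["a"], "c": ["a"], "d": ["b", "c"], "e": ["b"]}

def ancestors(node):
    result, stack = set(), list(parents[node])
    while stack:
        p = stack.pop()
        if p not in result:
            result.add(p)
            stack.extend(parents[p])
    return result

def lowest_common_ancestors(x, y):
    """Common ancestors that are not ancestors of another common ancestor."""
    common = (ancestors(x) | {x}) & (ancestors(y) | {y})
    return {n for n in common if not any(n in ancestors(m) for m in common)}

print(lowest_common_ancestors("d", "e"))  # {'b'}
```

    Running this over every pair is quadratic in concept count with expensive per-pair work, which is exactly the cost ANT-LCA's restriction to non-trivial pairs and its topological-order DP avoid at SNOMED CT scale.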

  5. Space Construction Automated Fabrication Experiment Definition Study (SCAFEDS). Volume 1: Executive summary

    NASA Technical Reports Server (NTRS)

    1978-01-01

    The techniques, processes, and equipment required for automatic fabrication and assembly of structural elements in space using the space shuttle as a launch vehicle and construction base were investigated. Additional construction/systems/operational techniques, processes, and equipment which can be developed/demonstrated in the same program to provide further risk reduction benefits to future large space systems were included. Results in the areas of structure/materials, fabrication systems (beam builder, assembly jig, and avionics/controls), mission integration, and programmatics are summarized. Conclusions and recommendations are given.

  6. Information extraction from Italian medical reports: An ontology-driven approach.

    PubMed

    Viani, Natalia; Larizza, Cristiana; Tibollo, Valentina; Napolitano, Carlo; Priori, Silvia G; Bellazzi, Riccardo; Sacchi, Lucia

    2018-03-01

    In this work, we propose an ontology-driven approach to identify events and their attributes from episodes of care included in medical reports written in Italian. For this language, shared resources for clinical information extraction are not easily accessible. The corpus considered in this work includes 5432 non-annotated medical reports belonging to patients with rare arrhythmias. To guide the information extraction process, we built a domain-specific ontology that includes the events and the attributes to be extracted, with related regular expressions. The ontology and the annotation system were constructed on a development set, while the performance was evaluated on an independent test set. As a gold standard, we considered a manually curated hospital database named TRIAD, which stores most of the information written in reports. The proposed approach performs well on the considered Italian medical corpus, with a percentage of correct annotations above 90% for most considered clinical events. We also assessed the possibility to adapt the system to the analysis of another language (i.e., English), with promising results. Our annotation system relies on a domain ontology to extract and link information in clinical text. We developed an ontology that can be easily enriched and translated, and the system performs well on the considered task. In the future, it could be successfully used to automatically populate the TRIAD database. Copyright © 2017 Elsevier B.V. All rights reserved.
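    The ontology-driven extraction described above pairs each clinical event with the regular expressions that signal it. The events, patterns, and sample report below are invented miniatures, not the paper's ontology for Italian arrhythmia reports.

```python
# Ontology-driven information extraction in miniature (events, attributes and
# patterns are invented; the record's ontology covers Italian cardiology reports).

import re

# Each ontology event carries the regular expressions that signal it.
ONTOLOGY = {
    "syncope":      [r"\bsincope\b", r"\bperdita di coscienza\b"],
    "palpitations": [r"\bpalpitazioni\b", r"\bcardiopalmo\b"],
}

def extract_events(text):
    """Return the ontology events whose patterns match the report."""
    lowered = text.lower()
    return sorted(event for event, patterns in ONTOLOGY.items()
                  if any(re.search(p, lowered) for p in patterns))

report = "Il paziente riferisce cardiopalmo e un episodio di sincope."
print(extract_events(report))  # ['palpitations', 'syncope']
```

    Because the patterns live in the ontology rather than the code, porting to another language (as the authors did for English) mostly means translating the pattern lists.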

  7. Transformation of standardized clinical models based on OWL technologies: from CEM to OpenEHR archetypes.

    PubMed

    Legaz-García, María del Carmen; Menárguez-Tortosa, Marcos; Fernández-Breis, Jesualdo Tomás; Chute, Christopher G; Tao, Cui

    2015-05-01

    The semantic interoperability of electronic healthcare records (EHRs) systems is a major challenge in the medical informatics area. International initiatives pursue the use of semantically interoperable clinical models, and ontologies have frequently been used in semantic interoperability efforts. The objective of this paper is to propose a generic, ontology-based, flexible approach for supporting the automatic transformation of clinical models, which is illustrated for the transformation of Clinical Element Models (CEMs) into openEHR archetypes. Our transformation method exploits the fact that the information models of the most relevant EHR specifications are available in the Web Ontology Language (OWL). The transformation approach is based on defining mappings between those ontological structures. We propose a way in which CEM entities can be transformed into openEHR by using transformation templates and OWL as common representation formalism. The transformation architecture exploits the reasoning and inferencing capabilities of OWL technologies. We have devised a generic, flexible approach for the transformation of clinical models, implemented for the unidirectional transformation from CEM to openEHR, a series of reusable transformation templates, a proof-of-concept implementation, and a set of openEHR archetypes that validate the methodological approach. We have been able to transform CEM into archetypes in an automatic, flexible, reusable transformation approach that could be extended to other clinical model specifications. We exploit the potential of OWL technologies for supporting the transformation process. We believe that our approach could be useful for international efforts in the area of semantic interoperability of EHR systems.

  8. REVEAL: An Extensible Reduced Order Model Builder for Simulation and Modeling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Agarwal, Khushbu; Sharma, Poorva; Ma, Jinliang

    2013-04-30

    Many science domains need to build computationally efficient and accurate representations of high fidelity, computationally expensive simulations. These computationally efficient versions are known as reduced-order models. This paper presents the design and implementation of a novel reduced-order model (ROM) builder, the REVEAL toolset. This toolset generates ROMs based on science- and engineering-domain specific simulations executed on high performance computing (HPC) platforms. The toolset encompasses a range of sampling and regression methods that can be used to generate a ROM, automatically quantifies the ROM accuracy, and provides support for an iterative approach to improve ROM accuracy. REVEAL is designed to be extensible in order to utilize the core functionality with any simulator that has published input and output formats. It also defines programmatic interfaces to include new sampling and regression techniques so that users can ‘mix and match’ mathematical techniques to best suit the characteristics of their model. In this paper, we describe the architecture of REVEAL and demonstrate its usage with a computational fluid dynamics model used in carbon capture.
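    The sample-fit-evaluate-iterate loop that REVEAL automates can be sketched in miniature: sample a stand-in "expensive" simulator, fit a least-squares polynomial ROM, quantify its accuracy, and refine (here, by raising the polynomial degree) until a tolerance is met. The simulator, design points, and tolerance are all assumptions for illustration, not REVEAL's actual methods:

```python
def simulator(x):
    # Stand-in for an expensive high-fidelity simulation (assumption).
    return x**3 - 2*x

def fit_poly(xs, ys, degree):
    """Least-squares polynomial fit via the normal equations."""
    n = degree + 1
    # Normal matrix A and right-hand side b.
    A = [[sum(x**(i + j) for x in xs) for j in range(n)] for i in range(n)]
    b = [sum(y * x**i for x, y in zip(xs, ys)) for i in range(n)]
    # Gaussian elimination with partial pivoting.
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c in range(col, n):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    coeffs = [0.0] * n
    for i in reversed(range(n)):
        coeffs[i] = (b[i] - sum(A[i][j] * coeffs[j]
                                for j in range(i + 1, n))) / A[i][i]
    return coeffs

def rom(coeffs, x):
    """Evaluate the reduced-order model at x."""
    return sum(c * x**i for i, c in enumerate(coeffs))

# Iterative loop: raise the ROM's complexity until it meets a tolerance.
xs = [i / 4 for i in range(-8, 9)]          # sampled design points
ys = [simulator(x) for x in xs]
for degree in range(1, 6):
    coeffs = fit_poly(xs, ys, degree)
    err = max(abs(rom(coeffs, x) - y) for x, y in zip(xs, ys))
    if err < 1e-6:
        break
print(degree)
```

    A real ROM builder would evaluate accuracy on held-out samples and choose among several regression families rather than a single polynomial.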

  9. Business Solutions Case Study: Marketing Zero Energy Homes: LifeStyle Homes, Melbourne, Florida

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    None

    Building America research has shown that high-performance homes can potentially give builders an edge in the marketplace and can boost sales. But it doesn't happen automatically. It requires a tailored, easy-to-understand marketing campaign and sometimes a little flair. This case study highlights LifeStyle Homes’ successful marketing approach for their SunSmart home package, which has helped to boost sales for the company. SunSmart marketing includes a modified logo, weekly blog, social media, traditional advertising, website, and sales staff training. Marketing focuses on quality, durability, healthy indoor air, and energy efficiency with an emphasis on the surety of third-party verification and the scientific approach to developing the SunSmart package. With the introduction of SunSmart, LifeStyle began an early recovery, nearly doubling sales in 2010; SunSmart sales now exceed 300 homes, including more than 20 zero energy homes. Completed homes in 2014 far outpaced the national (19%) and southern census region (27%) recovery rates for the same period. As technology improves and evolves, this builder will continue to collaborate with Building America.

  10. Extending gene ontology with gene association networks.

    PubMed

    Peng, Jiajie; Wang, Tao; Wang, Jixuan; Wang, Yadong; Chen, Jin

    2016-04-15

    Gene ontology (GO) is a widely used resource to describe the attributes of gene products. However, automatic GO maintenance remains difficult because of the complex logical reasoning and the biological knowledge required that are not explicitly represented in the GO. Existing studies either construct the whole GO based on network data or only infer relations between existing GO terms. None is designed to add new terms automatically to the existing GO. We propose a new algorithm, GOExtender, to efficiently identify all the connected gene pairs labeled by the same parent GO terms. GOExtender is used to predict new GO terms with biological network data, and to connect them to the existing GO. Evaluation tests on the biological process and cellular component categories of different GO releases showed that GOExtender can extend new GO terms automatically based on the biological network. Furthermore, we applied GOExtender to the recent release of GO and discovered new GO terms with strong support from literature. Software and a supplementary document are available at www.msu.edu/%7Ejinchen/GOExtender (contact: jinchen@msu.edu or ydwang@hit.edu.cn). Supplementary data are available at Bioinformatics online.
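    GOExtender's core step, as described in the abstract, is identifying connected gene pairs labeled by the same parent GO terms. A minimal sketch under toy assumptions (the network, annotations, and parent relation below are invented for illustration):

```python
from itertools import combinations

# Hypothetical toy inputs: a gene network, gene -> GO-term annotations,
# and a term -> parent-term relation (all assumptions for illustration).
edges = {("geneA", "geneB"), ("geneB", "geneC"), ("geneA", "geneD")}
annotations = {
    "geneA": {"GO:1"}, "geneB": {"GO:2"},
    "geneC": {"GO:3"}, "geneD": {"GO:4"},
}
parent = {"GO:1": "GO:10", "GO:2": "GO:10", "GO:3": "GO:10", "GO:4": "GO:20"}

def connected(g1, g2):
    """True if the two genes share an edge in the network."""
    return (g1, g2) in edges or (g2, g1) in edges

def pairs_by_parent():
    """Group connected gene pairs whose annotations share a parent GO term."""
    result = {}
    for g1, g2 in combinations(sorted(annotations), 2):
        if not connected(g1, g2):
            continue
        shared = {parent[t] for t in annotations[g1]} & \
                 {parent[t] for t in annotations[g2]}
        for p in shared:
            result.setdefault(p, []).append((g1, g2))
    return result

print(pairs_by_parent())
```

    In the algorithm proper, such groups of pairs would seed candidate new terms to be attached under the shared parent.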

  11. Approaching the axiomatic enrichment of the Gene Ontology from a lexical perspective.

    PubMed

    Quesada-Martínez, Manuel; Mikroyannidi, Eleni; Fernández-Breis, Jesualdo Tomás; Stevens, Robert

    2015-09-01

    The main goal of this work is to measure how lexical regularities in biomedical ontology labels can be used for the automatic creation of formal relationships between classes, and to evaluate the results of applying our approach to the Gene Ontology (GO). In recent years, we have developed a method for the lexical analysis of regularities in biomedical ontology labels, and we showed that such labels can present a high degree of regularity. In this work, we extend our method with a cross-product extension (CPE) metric, which estimates the potential interest of a specific regularity for axiomatic enrichment, using information on exact matches in external ontologies. The GO consortium recently enriched the GO by using so-called cross-product extensions. Cross-products are generated by establishing axioms that relate a given GO class with classes from the GO or other biomedical ontologies. We apply our method to the GO and study how its lexical analysis can identify and reconstruct the cross-products that are defined by the GO consortium. The labels of GO classes are highly regular in lexical terms, and exact matches with labels of external ontologies affect 80% of the GO classes. The CPE metric reveals that 31.48% of the classes that exhibit regularities have fragments that exactly match classes in the two external ontologies selected for our experiment, namely the Cell Ontology and the Chemical Entities of Biological Interest ontology, and that 18.90% of them are fully decomposable into smaller parts. Our results show that the CPE metric permits our method to detect GO cross-product extensions with a mean recall of 62% and a mean precision of 28%. The study is completed with an analysis of false positives to explain this precision value.
We think that our results support the claim that our lexical approach can contribute to the axiomatic enrichment of biomedical ontologies and that it can provide new insights into the engineering of biomedical ontologies.
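    The notion of a label being "fully decomposable into smaller parts" that match external ontology labels can be illustrated with a greedy longest-match split. The external label set and the greedy strategy below are assumptions; the paper's CPE metric is more involved:

```python
# Sketch of the decomposability check (illustrative only): can a label be
# split into fragments that exactly match labels from external ontologies?
external_labels = {"t cell", "apoptosis", "glucose"}  # e.g. from CL + ChEBI

def decompose(label):
    """Greedy left-to-right split of a label into known external labels."""
    words = label.lower().split()
    parts, i = [], 0
    while i < len(words):
        for j in range(len(words), i, -1):       # try longest match first
            frag = " ".join(words[i:j])
            if frag in external_labels:
                parts.append(frag)
                i = j
                break
        else:
            return None                          # not fully decomposable
    return parts

print(decompose("T cell apoptosis"))
```

    A fully decomposable label such as this one is the kind of candidate the CPE metric would flag as promising for a cross-product axiom.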

  12. Ontology-based data integration between clinical and research systems.

    PubMed

    Mate, Sebastian; Köpcke, Felix; Toddenroth, Dennis; Martin, Marcus; Prokosch, Hans-Ulrich; Bürkle, Thomas; Ganslandt, Thomas

    2015-01-01

    Data from the electronic medical record comprise numerous structured but uncoded elements, which are not linked to standard terminologies. Reuse of such data for secondary research purposes has gained in importance recently. However, the identification of relevant data elements and the creation of database jobs for extraction, transformation and loading (ETL) are challenging: With current methods such as data warehousing, it is not feasible to efficiently maintain and reuse semantically complex data extraction and transformation routines. We present an ontology-supported approach to overcome this challenge by making use of abstraction: Instead of defining ETL procedures at the database level, we use ontologies to organize and describe the medical concepts of both the source system and the target system. Instead of using unique, specifically developed SQL statements or ETL jobs, we define declarative transformation rules within ontologies and illustrate how these constructs can then be used to automatically generate SQL code to perform the desired ETL procedures. This demonstrates how a suitable level of abstraction may not only aid the interpretation of clinical data, but can also foster the reutilization of methods for unlocking it.
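    The idea of generating SQL from declarative transformation rules can be sketched as follows; the rule schema, table names, and value mapping are hypothetical, not taken from the paper:

```python
# Hypothetical declarative transformation rules, as they might be stated
# inside an ontology (all names here are illustrative assumptions).
rules = [
    {
        "target_table": "i2b2_observation",
        "target_column": "concept_cd",
        "source_table": "his_lab",
        "source_column": "test_code",
        "value_map": {"GLUC": "LOINC:2345-7"},
    },
]

def generate_sql(rule):
    """Render one declarative rule as an ETL INSERT ... SELECT statement."""
    cases = " ".join(
        f"WHEN '{src}' THEN '{dst}'" for src, dst in rule["value_map"].items()
    )
    return (
        f"INSERT INTO {rule['target_table']} ({rule['target_column']}) "
        f"SELECT CASE {rule['source_column']} {cases} END "
        f"FROM {rule['source_table']};"
    )

for r in rules:
    print(generate_sql(r))
```

    The point of the abstraction is that the rule, not the SQL, is the maintained artifact: regenerating the ETL job after a schema or terminology change is mechanical.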

  13. Where to search top-K biomedical ontologies?

    PubMed

    Oliveira, Daniela; Butt, Anila Sahar; Haller, Armin; Rebholz-Schuhmann, Dietrich; Sahay, Ratnesh

    2018-03-20

    Searching for precise terms and terminological definitions in the biomedical data space is problematic, as researchers find overlapping, closely related and even equivalent concepts in a single or multiple ontologies. Search engines that retrieve ontological resources often suggest an extensive list of search results for a given input term, which leads to the tedious task of selecting the best-fit ontological resource (class or property) for the input term and reduces user confidence in the retrieval engines. A systematic evaluation of these search engines is necessary to understand their strengths and weaknesses in different search requirements. We have implemented seven comparable Information Retrieval ranking algorithms to search through ontologies and compared them against four search engines for ontologies. Free-text queries have been performed, the outcomes have been judged by experts and the ranking algorithms and search engines have been evaluated against the expert-based ground truth (GT). In addition, we propose a probabilistic GT that is developed automatically to provide deeper insights and confidence to the expert-based GT as well as evaluating a broader range of search queries. The main outcome of this work is the identification of key search factors for biomedical ontologies together with search requirements and a set of recommendations that will help biomedical experts and ontology engineers to select the best-suited retrieval mechanism in their search scenarios. We expect that this evaluation will allow researchers and practitioners to apply the current search techniques more reliably and that it will help them to select the right solution for their daily work. The source code (of seven ranking algorithms), ground truths and experimental results are available at https://github.com/danielapoliveira/bioont-search-benchmark.
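    One classic Information Retrieval ranking algorithm such a benchmark might include is TF-IDF over ontology labels. A minimal sketch with a toy label index (the labels and tokenization are assumptions, not the study's actual setup):

```python
import math
from collections import Counter

# Toy "ontology index": class label -> tokenized label (assumption).
labels = {
    "Myocardial infarction": ["myocardial", "infarction"],
    "Cerebral infarction":   ["cerebral", "infarction"],
    "Myocardium":            ["myocardium"],
}

def tfidf_rank(query):
    """Rank labels by summed TF-IDF weight of shared query tokens."""
    n = len(labels)
    # Document frequency of each token across all labels.
    df = Counter(tok for toks in labels.values() for tok in set(toks))
    scores = {}
    for name, toks in labels.items():
        s = sum(math.log(n / df[t]) + 1.0
                for t in query.lower().split() if t in toks)
        if s > 0:
            scores[name] = s
    return sorted(scores, key=scores.get, reverse=True)

print(tfidf_rank("myocardial infarction"))
```

    Rankings produced this way would then be scored against the expert-based ground truth, e.g. with precision-at-k or NDCG.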

  14. Towards Automatic Threat Recognition

    DTIC Science & Technology

    2006-12-01

    Presentation outline (Forschungsinstitut für Kommunikation, Informationsverarbeitung und Ergonomie, FGAN; Informationstechnik und Führungssysteme, KIE): preliminaries about information fusion; the system ontology; unification as processing principle; back to the example; conclusion and outlook.

  15. Integrating Information in Biological Ontologies and Molecular Networks to Infer Novel Terms.

    PubMed

    Li, Le; Yip, Kevin Y

    2016-12-15

    Currently most terms and term-term relationships in Gene Ontology (GO) are defined manually, which creates cost, consistency and completeness issues. Recent studies have demonstrated the feasibility of inferring GO automatically from biological networks, which represents an important complementary approach to GO construction. These methods (NeXO and CliXO) are unsupervised, which means 1) they cannot use the information contained in the existing GO, 2) the way they integrate biological networks may not optimize the accuracy, and 3) they are not customized to infer the three different sub-ontologies of GO. Here we present a semi-supervised method called Unicorn that extends these previous methods to tackle the three problems. Unicorn uses a sub-tree of an existing GO sub-ontology as a training set to learn the parameters for integrating multiple networks. Cross-validation results show that Unicorn reliably inferred the left-out parts of each specific GO sub-ontology. In addition, by training Unicorn with an old version of GO together with biological networks, it successfully re-discovered some terms and term-term relationships present only in a new version of GO. Unicorn also successfully inferred some novel terms that were not contained in GO but have biological meanings well-supported by the literature. Source code of Unicorn is available at http://yiplab.cse.cuhk.edu.hk/unicorn/.

  16. An ontological system based on MODIS images to assess ecosystem functioning of Natura 2000 habitats: A case study for Quercus pyrenaica forests

    NASA Astrophysics Data System (ADS)

    Pérez-Luque, A. J.; Pérez-Pérez, R.; Bonet-García, F. J.; Magaña, P. J.

    2015-05-01

    The implementation of the Natura 2000 network requires methods to assess the conservation status of habitats. This paper shows a methodological approach that combines the use of (satellite) Earth observation with ontologies to monitor Natura 2000 habitats and assess their functioning. We have created an ontological system called Savia that can describe both the ecosystem functioning and the behaviour of abiotic factors in a Natura 2000 habitat. This system is able to automatically download images from MODIS products, create indicators and compute temporal trends for them. We have developed an ontology that takes into account the different concepts and relations about indicators and temporal trends, and the spatio-temporal components of the datasets. All the information generated from datasets and MODIS images is stored in a knowledge base according to the ontology. Users can formulate complex questions using a SPARQL end-point. This system has been tested and validated in a case study that uses Quercus pyrenaica Willd. forests as a target habitat in Sierra Nevada (Spain), a Natura 2000 site. We assess ecosystem functioning using NDVI. The selected abiotic factor is snow cover. Savia provides useful data regarding these two variables and reflects relationships between them.
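    The indicator-and-trend step can be illustrated with a simple slope test over an annual NDVI series. The statistic and the sample values below are assumptions, since the abstract does not specify Savia's actual trend method:

```python
# Minimal sketch: fit an ordinary-least-squares slope to an annual
# NDVI series and label the temporal trend (illustrative only).
def trend(values):
    n = len(values)
    xs = range(n)
    mx, my = sum(xs) / n, sum(values) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, values)) / \
            sum((x - mx) ** 2 for x in xs)
    return "increasing" if slope > 0 else "decreasing" if slope < 0 else "flat"

ndvi = [0.61, 0.63, 0.62, 0.66, 0.68]   # hypothetical annual values
print(trend(ndvi))
```

    In a system like Savia, such a trend label would be asserted into the knowledge base as an individual linked to its habitat and time span, where SPARQL queries can reach it.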

  17. A method of constructing geo-object ontology in disaster system for prevention and decrease

    NASA Astrophysics Data System (ADS)

    Li, Bin; Liu, Jiping; Shi, Lihong; Wang, Zhenfeng

    2009-10-01

    A formal system that can clearly express a given entity or piece of information is needed to represent geographical concepts. In addition, rules explaining the interrelationships and interactions between different components are also required. Therefore, the concept of the geo-object ontology is introduced. It is a shared, formalized and explicit specification of the conceptual knowledge system in a concrete application field of spatial information science. It constitutes a hierarchical structure derived from the concept classification system of the geographical domain. Its concepts can be described by properties, and property sets can form a multi-dimensional vector space. Geographic space is composed of different types of geographic entities, and a geographic concept is formed by a series of geographic entities with the same properties and behaviours. Moreover, each geographic entity can be mapped to an object, and each object has its spatial properties, temporal information, topology, and semantic relationships with other objects. The biggest difference between general information ontologies and geo-ontologies is that the latter have spatial characteristics. During the construction of a geo-object ontology, important components such as geographic type, spatial relations, spatial entity types, coordinates, and time should be included for further research. Here, taking disaster as an example and using Protégé and OWL, we investigate a combined approach to constructing the geo-object ontology, partly manual (performed by domain experts) and partly semi-automatic, ultimately to serve ontology-driven geographic information retrieval.

  18. Realizing parameterless automatic classification of remote sensing imagery using ontology engineering and cyberinfrastructure techniques

    NASA Astrophysics Data System (ADS)

    Sun, Ziheng; Fang, Hui; Di, Liping; Yue, Peng

    2016-09-01

    Fully automatic image classification without any input parameter values has long been an elusive goal for remote sensing experts, who usually spend hours tuning the input parameters of classification algorithms in order to obtain the best results. With the rapid development of knowledge engineering and cyberinfrastructure, many data processing and knowledge reasoning capabilities have become accessible, shareable and interoperable online. Building on these recent improvements, this paper presents the idea of parameterless automatic classification, which requires only an image as input and automatically outputs a labeled vector. No parameters or operations are needed from end consumers. An approach is proposed to realize this idea. It adopts an ontology database to store experiences of tuning values for classifiers. A sample database is used to record training samples of image segments. Geoprocessing Web services are used as functional blocks to carry out the basic classification steps. Workflow technology turns the overall image classification into a fully automatic process. A Web-based prototype system named PACS (Parameterless Automatic Classification System) is implemented. A number of images were fed into the system for evaluation. The results show that the approach can automatically classify remote sensing images with fairly good average accuracy, and indicate that the classified results will be more accurate if the two databases are of higher quality. Once the databases accumulate as much experience and as many samples as an expert possesses, the approach should be able to obtain results of similar quality to those a human expert can produce.
Since the approach is fully automatic and parameterless, it can not only relieve remote sensing workers of heavy, time-consuming parameter tuning, but also significantly shorten the waiting time for consumers and make it easier for them to engage in image classification activities. Currently, the approach has been used only on high-resolution optical three-band remote sensing imagery. The feasibility of applying it to other kinds of remote sensing images, or of involving additional bands in classification, will be studied in future work.
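    The "experience database" idea, i.e. reusing previously tuned parameter values instead of asking the user for them, can be sketched as a metadata lookup. All names, sensors, and parameter values below are hypothetical:

```python
# Hypothetical "experience base" mimicking the paper's ontology database:
# past tuning experiences keyed by sensor and land-cover context.
EXPERIENCES = [
    {"sensor": "GF-1", "scene": "urban",    "classifier": "SVM",
     "params": {"C": 10.0, "gamma": 0.1}},
    {"sensor": "GF-1", "scene": "farmland", "classifier": "SVM",
     "params": {"C": 1.0, "gamma": 0.5}},
]

def lookup_parameters(sensor, scene):
    """Return stored classifier and parameter values matching the metadata."""
    for exp in EXPERIENCES:
        if exp["sensor"] == sensor and exp["scene"] == scene:
            return exp["classifier"], exp["params"]
    return None  # no match: fall back to defaults or request new samples

print(lookup_parameters("GF-1", "farmland"))
```

    In the full system an ontology reasoner, rather than exact key matching, would select the closest stored experience, which is what makes the knowledge base's quality decisive for accuracy.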

  19. Speeding up ontology creation of scientific terms

    NASA Astrophysics Data System (ADS)

    Bermudez, L. E.; Graybeal, J.

    2005-12-01

    An ontology is a formal specification of a controlled vocabulary. Ontologies are composed of classes (similar to categories), individuals (members of classes) and properties (attributes of the individuals). Having vocabularies expressed in a formal specification like the Web Ontology Language (OWL) enables interoperability, because OWL can be interpreted by software programs. Two main, non-exclusive strategies exist for constructing an ontology: a top-down approach and a bottom-up approach. The former creates the top classes (main concepts) first and then finds the required subclasses and individuals. The latter starts from the individuals and then finds shared properties that motivate the creation of classes. At the Marine Metadata Interoperability (MMI) Initiative we used a bottom-up approach to create ontologies from simple vocabularies (those that are not expressed in a conceptual way). We found that the vocabularies were available in different formats (relational databases, plain files, HTML, XML, PDF) and were sometimes composed of thousands of terms, making ontology creation a very time-consuming activity. To expedite the conversion process we created a tool, VOC2OWL, that takes a vocabulary in a table-like structure (CSV or TAB format) and a conversion-property file, and automatically creates an ontology. We identified two basic structures of simple vocabularies: flat vocabularies (e.g., a phone directory) and hierarchical vocabularies (e.g., taxonomies). The property file defines a list of attributes for the conversion process for each structure type. The attributes include metadata information (title, description, subject, contributor, urlForMoreInformation), conversion flags (treatAsHierarchy, generateAutoIds) and other conversion information needed to create the ontology (columnForPrimaryClass, columnsToCreateClassesFrom, fileIn, fileOut, namespace, format).
We created more than 50 ontologies and generated more than 250,000 statements (triples). These ontologies allowed domain experts to create 800 relations, from which 2,200 further relations among different vocabularies could be inferred, at the MMI workshop "Advancing Domain Vocabularies" held in Boulder in August 2005.
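    A table-to-ontology conversion in the spirit of VOC2OWL, for a flat vocabulary, can be sketched in a few lines. The column names, namespace, and Turtle layout are assumptions for illustration, not the tool's actual behavior:

```python
import csv
import io

# A tiny flat vocabulary in the table-like form the tool expects
# (column names are illustrative assumptions).
vocab_csv = """term,definition
salinity,Amount of dissolved salt in water
turbidity,Cloudiness of a fluid
"""

def vocab_to_owl(text, namespace="http://example.org/vocab#",
                 primary_class="Term"):
    """Emit each row as an OWL individual of one primary class (Turtle)."""
    lines = [
        "@prefix : <%s> ." % namespace,
        "@prefix owl: <http://www.w3.org/2002/07/owl#> .",
        "@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .",
        ":%s a owl:Class ." % primary_class,
    ]
    for row in csv.DictReader(io.StringIO(text)):
        lines.append(':%s a :%s ; rdfs:comment "%s" .'
                     % (row["term"], primary_class, row["definition"]))
    return "\n".join(lines)

print(vocab_to_owl(vocab_csv))
```

    A hierarchical vocabulary would need an extra column naming each term's parent, emitted as rdfs:subClassOf triples instead of individuals; that is the kind of switch the treatAsHierarchy flag controls.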

  20. Semantics-informed geological maps: Conceptual modeling and knowledge encoding

    NASA Astrophysics Data System (ADS)

    Lombardo, Vincenzo; Piana, Fabrizio; Mimmo, Dario

    2018-07-01

    This paper introduces a novel, semantics-informed geologic mapping process, whose application domain is the production of a synthetic geologic map of a large administrative region. A number of approaches concerning the expression of geologic knowledge through UML schemata and ontologies have been around for more than a decade. These approaches have yielded resources that concern specific domains, such as, e.g., lithology. We develop a conceptual model that aims at building a digital encoding of several domains of geologic knowledge, in order to support the interoperability of the sources. We apply the devised terminological base to the classification of the elements of a geologic map of the Italian Western Alps and northern Apennines (Piemonte region). The digitally encoded knowledge base is a merged set of ontologies, called OntoGeonous. The encoding process identifies the objects of the semantic encoding, the geologic units, gathers the relevant information about such objects from authoritative resources, such as GeoSciML (giving priority to the application schemata reported in the INSPIRE Encoding Cookbook), and expresses the statements by means of axioms encoded in the Web Ontology Language (OWL). To support interoperability, OntoGeonous interlinks the general concepts by referring to the upper level of the SWEET ontology (developed by NASA), and imports knowledge that is already encoded in ontological format (e.g., the Simple Lithology ontology). Machine-readable knowledge allows for consistency checking and for classification of the geological map data through algorithms of automatic reasoning.

  1. Integrating systems biology models and biomedical ontologies

    PubMed Central

    2011-01-01

    Background Systems biology is an approach to biology that emphasizes the structure and dynamic behavior of biological systems and the interactions that occur within them. To succeed, systems biology crucially depends on the accessibility and integration of data across domains and levels of granularity. Biomedical ontologies were developed to facilitate such an integration of data and are often used to annotate biosimulation models in systems biology. Results We provide a framework to integrate representations of in silico systems biology with those of in vivo biology as described by biomedical ontologies and demonstrate this framework using the Systems Biology Markup Language. We developed the SBML Harvester software that automatically converts annotated SBML models into OWL and we apply our software to those biosimulation models that are contained in the BioModels Database. We utilize the resulting knowledge base for complex biological queries that can bridge levels of granularity, verify models based on the biological phenomenon they represent and provide a means to establish a basic qualitative layer on which to express the semantics of biosimulation models. Conclusions We establish an information flow between biomedical ontologies and biosimulation models and we demonstrate that the integration of annotated biosimulation models and biomedical ontologies enables the verification of models as well as expressive queries. Establishing a bi-directional information flow between systems biology and biomedical ontologies has the potential to enable large-scale analyses of biological systems that span levels of granularity from molecules to organisms. PMID:21835028

  2. Quality assurance of the gene ontology using abstraction networks.

    PubMed

    Ochs, Christopher; Perl, Yehoshua; Halper, Michael; Geller, James; Lomax, Jane

    2016-06-01

    The gene ontology (GO) is used extensively in the field of genomics. Like other large and complex ontologies, quality assurance (QA) efforts for GO's content can be laborious and time consuming. Abstraction networks (AbNs) are summarization networks that reveal and highlight high-level structural and hierarchical aggregation patterns in an ontology. They have been shown to successfully support QA work in the context of various ontologies. Two kinds of AbNs, called the area taxonomy and the partial-area taxonomy, are developed for GO hierarchies and derived specifically for the biological process (BP) hierarchy. Within this framework, several QA heuristics, based on the identification of groups of anomalous terms which exhibit certain taxonomy-defined characteristics, are introduced. Such groups are expected to have higher error rates when compared to other terms. Thus, by focusing QA efforts on anomalous terms one would expect to find relatively more erroneous content. By automatically identifying these potential problem areas within an ontology, time and effort will be saved during manual reviews of GO's content. BP is used as a testbed, with samples of three kinds of anomalous BP terms chosen for a taxonomy-based QA review. Additional heuristics for QA are demonstrated. From the results of this QA effort, it is observed that different kinds of inconsistencies in the modeling of GO can be exposed with the use of the proposed heuristics. For comparison, the results of QA work on a sample of terms chosen from GO's general population are presented.

  3. Knowledge Evolution in Distributed Geoscience Datasets and the Role of Semantic Technologies

    NASA Astrophysics Data System (ADS)

    Ma, X.

    2014-12-01

    Knowledge evolves in geoscience, and the evolution is reflected in datasets. In a context with distributed data sources, the evolution of knowledge may cause considerable challenges to data management and re-use. For example, a short news item published in 2009 (Mascarelli, 2009) revealed the geoscience community's concern that the International Commission on Stratigraphy's change to the definition of Quaternary may bring heavy reworking of geologic maps. Now we are in the era of the World Wide Web, and geoscience knowledge is increasingly modeled and encoded in the form of ontologies and vocabularies by using semantic technologies. Accordingly, knowledge evolution leads to a consequence called ontology dynamics. Flouris et al. (2008) summarized 10 topics of general ontology changes/dynamics such as: ontology mapping, morphism, evolution, debugging and versioning, etc. Ontology dynamics makes impacts at several stages of a data life cycle and causes challenges, such as: the request for reworking of the extant data in a data center, semantic mismatch among data sources, differentiated understanding of a same piece of dataset between data providers and data users, as well as error propagation in cross-discipline data discovery and re-use (Ma et al., 2014). This presentation will analyze the best practices in the geoscience community so far and summarize a few recommendations to reduce the negative impacts of ontology dynamics in a data life cycle, including: communities of practice and collaboration on ontology and vocabulary building, linking data records to standardized terms, and methods for (semi-)automatic reworking of datasets using semantic technologies. References: Flouris, G., Manakanatas, D., Kondylakis, H., Plexousakis, D., Antoniou, G., 2008. Ontology change: classification and survey. The Knowledge Engineering Review 23 (2), 117-152. Ma, X., Fox, P., Rozell, E., West, P., Zednik, S., 2014. 
Ontology dynamics in a data life cycle: Challenges and recommendations from a Geoscience Perspective. Journal of Earth Science 25 (2), 407-412. Mascarelli, A.L., 2009. Quaternary geologists win timescale vote. Nature 459, 624.

  4. Semi-automatic Data Integration using Karma

    NASA Astrophysics Data System (ADS)

    Garijo, D.; Kejriwal, M.; Pierce, S. A.; Houser, P. I. Q.; Peckham, S. D.; Stanko, Z.; Hardesty Lewis, D.; Gil, Y.; Pennington, D. D.; Knoblock, C.

    2017-12-01

    Data integration applications are ubiquitous in scientific disciplines. A state-of-the-art data integration system accepts both a set of data sources and a target ontology as input, and semi-automatically maps the data sources in terms of concepts and relationships in the target ontology. Mappings can be both complex and highly domain-specific. Once such a semantic model, expressing the mapping using community-wide standard, is acquired, the source data can be stored in a single repository or database using the semantics of the target ontology. However, acquiring the mapping is a labor-prone process, and state-of-the-art artificial intelligence systems are unable to fully automate the process using heuristics and algorithms alone. Instead, a more realistic goal is to develop adaptive tools that minimize user feedback (e.g., by offering good mapping recommendations), while at the same time making it intuitive and easy for the user to both correct errors and to define complex mappings. We present Karma, a data integration system that has been developed over multiple years in the information integration group at the Information Sciences Institute, a research institute at the University of Southern California's Viterbi School of Engineering. Karma is a state-of-the-art data integration tool that supports an interactive graphical user interface, and has been featured in multiple domains over the last five years, including geospatial, biological, humanities and bibliographic applications. Karma allows a user to import their own ontology and datasets using widely used formats such as RDF, XML, CSV and JSON, can be set up either locally or on a server, supports a native backend database for prototyping queries, and can even be seamlessly integrated into external computational pipelines, including those ingesting data via streaming data sources, Web APIs and SQL databases. 
We illustrate a Karma workflow at a conceptual level, along with a live demo, and show use cases of Karma specifically for the geosciences. In particular, we show how Karma can be used intuitively to obtain the mapping model between case study data sources and a publicly available and expressive target ontology that has been designed to capture a broad set of concepts in geoscience with standardized, easily searchable names.
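    The kind of source-to-ontology mapping described above can be sketched in miniature. The column names, prefixes, and ontology terms below are invented for illustration and are not Karma's actual API; they only show the shape of a column-to-concept mapping that emits triples:

```python
import csv
import io

# Hypothetical column-to-ontology mapping of the kind an interactive
# tool would help a user build; terms and prefixes are illustrative.
COLUMN_MAP = {
    "station": "geo:SamplingStation",   # the subject column maps to a class
    "temp_c":  "obs:hasTemperature",    # a data column maps to a property
}

def rows_to_triples(csv_text, subject_col="station"):
    """Turn tabular rows into simple subject-predicate-object triples."""
    triples = []
    for row in csv.DictReader(io.StringIO(csv_text)):
        subject = f"ex:{row[subject_col]}"
        triples.append((subject, "rdf:type", COLUMN_MAP[subject_col]))
        for col, value in row.items():
            if col != subject_col:
                triples.append((subject, COLUMN_MAP[col], value))
    return triples

data = "station,temp_c\nA1,18.5\nB2,21.0\n"
for t in rows_to_triples(data):
    print(t)
```

    The point of a system like Karma is precisely that the `COLUMN_MAP` step, which is hand-written here, is learned and refined interactively.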

  5. Partial automation of database processing of simulation outputs from L-systems models of plant morphogenesis.

    PubMed

    Chen, Yi- Ping Phoebe; Hanan, Jim

    2002-01-01

    Models of plant architecture allow us to explore how genotype-environment interactions affect the development of plant phenotypes. Such models generate masses of data organised in complex hierarchies. This paper presents a generic system for creating and automatically populating a relational database from data generated by the widely used L-system approach to modelling plant morphogenesis. Techniques from compiler technology are applied to generate attributes (new fields) in the database, simplifying query development for the recursively structured branching relationship. Use of biological terminology in an interactive query builder contributes towards making the system biologist-friendly.
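    The core idea, turning a bracketed L-system string into relational rows and then querying the recursive branching relationship, can be sketched with the standard library; the parsing convention below is a simplification, not the paper's actual system:

```python
import sqlite3

def branches(lstring):
    """Parse a bracketed L-system string (e.g. 'F[F]F') into
    (id, parent_id) rows, one row per module 'F'."""
    rows, stack, next_id = [], [None], 0
    for ch in lstring:
        if ch == "F":                 # new module continuing the current axis
            next_id += 1
            rows.append((next_id, stack[-1]))
            stack[-1] = next_id
        elif ch == "[":               # start a branch from the current module
            stack.append(stack[-1])
        elif ch == "]":               # return to the branching point
            stack.pop()
    return rows

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE module (id INTEGER PRIMARY KEY, parent INTEGER)")
con.executemany("INSERT INTO module VALUES (?, ?)", branches("F[F[F]F]F"))

# A recursive query over the branching relationship: maximum depth.
depth = con.execute("""
    WITH RECURSIVE d(id, depth) AS (
        SELECT id, 0 FROM module WHERE parent IS NULL
        UNION ALL
        SELECT m.id, d.depth + 1 FROM module m JOIN d ON m.parent = d.id)
    SELECT MAX(depth) FROM d""").fetchone()[0]
print(depth)  # deepest modules in F[F[F]F]F sit two branches down
```

    The recursive common table expression stands in for the derived attributes the paper generates to simplify such queries.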

  6. Systems definition study for shuttle demonstration flights of large space structures, Volume 2: Technical Report

    NASA Technical Reports Server (NTRS)

    1979-01-01

    The development of large space structure (LSS) technology is discussed, with emphasis on space fabricated structures which are automatically manufactured in space from sheet-strip materials and assembled on-orbit. It is concluded that an LSS flight demonstration using an Automated Beam Builder and the orbiter as a construction base could be performed in the 1983-1984 time period. The estimated cost is $24 million, exclusive of shuttle launch costs. During the mission, a simple space platform could be constructed on-orbit to accommodate user requirements associated with Earth-viewing and materials-exposure experiment needs.

  7. Research on the Intensity Analysis and Result Visualization of Construction Land in Urban Planning

    NASA Astrophysics Data System (ADS)

    Cui, J.; Dong, B.; Li, J.; Li, L.

    2017-09-01

    As a fundamental task in urban planning, the intensity analysis of construction land involves much repetitive data processing that is prone to errors and loss of data precision, and current practice lacks efficient methods and tools for visualizing the analysis results. In this research a portable tool was developed using the Model Builder technique embedded in ArcGIS to provide automatic data processing and rapid result visualization for this work. A series of basic modules provided by ArcGIS are linked together to form a complete data processing chain in the tool. Once the required data are imported, the analysis results and related maps and graphs, including the intensity values, the zoning map, and the skyline analysis map, are produced automatically. Finally, the tool is installation-free and can be deployed quickly between planning teams.
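    The repetitive calculation such a tool automates can be illustrated with a toy floor-area-ratio computation; the thresholds and parcel figures below are invented for the example and are not the zoning standard the paper uses:

```python
# Illustrative version of the repetitive step a Model Builder chain
# automates: compute floor-area ratio (FAR) per parcel and assign an
# intensity zone. Thresholds here are invented for the example.
def far(gross_floor_area, parcel_area):
    return gross_floor_area / parcel_area

def intensity_zone(ratio):
    if ratio < 1.0:
        return "low"
    if ratio < 2.5:
        return "medium"
    return "high"

parcels = [("P1", 4200.0, 6000.0), ("P2", 9000.0, 3000.0)]
for pid, gfa, area in parcels:
    print(pid, intensity_zone(far(gfa, area)))
```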

  8. 76 FR 2145 - Masco Builder Cabinet Group Including On-Site Leased Workers From Reserves Network, Jackson, OH...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-01-12

    ...,287B; TA-W-71,287C] Masco Builder Cabinet Group Including On-Site Leased Workers From Reserves Network, Jackson, OH; Masco Builder Cabinet Group, Waverly, OH; Masco Builder Cabinet Group, Seal Township, OH; Masco Builder Cabinet Group, Seaman, OH; Amended Certification Regarding Eligibility To Apply for Worker...

  9. A Builder's Guide to Super Good Cents Construction and Sales.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    OSU Extension Energy Program; United States. Bonneville Power Administration.

    This builder's guide describes the Super Good Cents® program and the benefits available to participating builders. It explains the program standards and the typical building techniques used by Super Good Cents builders. Finally, the guide tells how you can participate and answers many of the questions asked by builders about the Super Good Cents program.

  10. Using Ontologies to Interlink Linguistic Annotations and Improve Their Accuracy

    ERIC Educational Resources Information Center

    Pareja-Lora, Antonio

    2016-01-01

    For the new approaches to language e-learning (e.g. language blended learning, language autonomous learning or mobile-assisted language learning) to succeed, some automatic functions for error correction (for instance, in exercises) will have to be included in the long run in the corresponding environments and/or applications. A possible way to…

  11. Using a User-Interactive QA System for Personalized E-Learning

    ERIC Educational Resources Information Center

    Hu, Dawei; Chen, Wei; Zeng, Qingtian; Hao, Tianyong; Min, Feng; Wenyin, Liu

    2008-01-01

    A personalized e-learning framework based on a user-interactive question-answering (QA) system is proposed, in which a user-modeling approach is used to capture personal information of students and a personalized answer extraction algorithm is proposed for personalized automatic answering. In our approach, a topic ontology (or concept hierarchy)…

  12. Builders Challenge High Performance Builder Spotlight - Martha Rose Construction, Inc., Seattle, Washington

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    None

    2008-01-01

    Building America/Builders Challenge fact sheet on Martha Rose Construction, an energy-efficient home builder in marine climate using the German Passiv Haus design, improved insulation, and solar photovoltaics.

  13. A Case Study on Sepsis Using PubMed and Deep Learning for Ontology Learning.

    PubMed

    Arguello Casteleiro, Mercedes; Maseda Fernandez, Diego; Demetriou, George; Read, Warren; Fernandez Prieto, Maria Jesus; Des Diz, Julio; Nenadic, Goran; Keane, John; Stevens, Robert

    2017-01-01

    We investigate the application of distributional semantics models for facilitating unsupervised extraction of biomedical terms from unannotated corpora. Term extraction is used as the first step of an ontology learning process that aims at (semi-)automatic annotation of biomedical concepts and relations from more than 300K PubMed titles and abstracts. We experimented both with traditional distributional semantics methods, such as Latent Semantic Analysis (LSA) and Latent Dirichlet Allocation (LDA), and with the neural language models CBOW and Skip-gram from Deep Learning. The evaluation concentrates on sepsis, a major life-threatening condition, and shows that the Deep Learning models outperform LSA and LDA with much higher precision.
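    The shared intuition behind all of these models, that terms with similar contexts have similar meanings, can be shown with a bare-bones co-occurrence sketch (not LSA, LDA, or Skip-gram themselves, and the toy corpus is invented):

```python
import math
from collections import defaultdict

def cooc_vectors(sentences, window=2):
    """Count co-occurrences of each word within a +/- window context."""
    vecs = defaultdict(lambda: defaultdict(int))
    for sent in sentences:
        toks = sent.lower().split()
        for i, w in enumerate(toks):
            for j in range(max(0, i - window), min(len(toks), i + window + 1)):
                if i != j:
                    vecs[w][toks[j]] += 1
    return vecs

def cosine(u, v):
    dot = sum(u.get(k, 0) * v.get(k, 0) for k in set(u) | set(v))
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

corpus = [
    "sepsis is a severe infection of the blood",
    "bacteremia is a bacterial infection of the blood",
    "the trial enrolled healthy adult volunteers",
]
v = cooc_vectors(corpus)
# Terms that occur in similar contexts end up with similar vectors:
print(cosine(v["sepsis"], v["bacteremia"]) > cosine(v["sepsis"], v["volunteers"]))
```

    Real distributional models replace these raw counts with matrix factorisation (LSA), topic inference (LDA), or learned embeddings (CBOW/Skip-gram), but the similarity computation is the same in spirit.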

  14. Populating the Semantic Web by Macro-reading Internet Text

    NASA Astrophysics Data System (ADS)

    Mitchell, Tom M.; Betteridge, Justin; Carlson, Andrew; Hruschka, Estevam; Wang, Richard

    A key question regarding the future of the semantic web is "how will we acquire structured information to populate the semantic web on a vast scale?" One approach is to enter this information manually. A second approach is to take advantage of pre-existing databases, and to develop common ontologies, publishing standards, and reward systems to make this data widely accessible. We consider here a third approach: developing software that automatically extracts structured information from unstructured text present on the web. We also describe preliminary results demonstrating that machine learning algorithms can learn to extract tens of thousands of facts to populate a diverse ontology, with imperfect but reasonably good accuracy.
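    The macro-reading idea, applying simple extraction patterns across many pages and trusting only facts seen repeatedly, can be sketched as follows; the pattern and sentences are invented toys, not the actual learned extractors:

```python
import re

# A toy version of pattern-based macro-reading: apply one Hearst-style
# lexical pattern across many sentences and keep only facts whose
# redundancy across the corpus exceeds a threshold.
PATTERN = re.compile(r"(\w+) is a (\w+)")

def extract_facts(sentences, min_count=2):
    counts = {}
    for s in sentences:
        for inst, cls in PATTERN.findall(s.lower()):
            counts[(inst, cls)] = counts.get((inst, cls), 0) + 1
    return {fact for fact, n in counts.items() if n >= min_count}

web_text = [
    "Paris is a city on the Seine.",
    "Everyone knows Paris is a city.",
    "Paris is a delight.",           # noise: seen only once, filtered out
    "Python is a language.",         # also seen only once
]
print(extract_facts(web_text))
```

    Redundancy across the web is what lets such weak extractors reach reasonable accuracy at scale.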

  15. Formalized Conflicts Detection Based on the Analysis of Multiple Emails: An Approach Combining Statistics and Ontologies

    NASA Astrophysics Data System (ADS)

    Zakaria, Chahnez; Curé, Olivier; Salzano, Gabriella; Smaïli, Kamel

    In Computer Supported Cooperative Work (CSCW), it is crucial for project leaders to detect conflicting situations as early as possible. Generally, this task is performed manually by studying a set of documents exchanged between team members. In this paper, we propose a full-fledged automatic solution that identifies documents, subjects and actors involved in relational conflicts. Our approach detects conflicts in emails, probably the most popular type of documents in CSCW, but the methods used can handle other text-based documents. These methods rely on the combination of statistical and ontological operations. The proposed solution is decomposed into several steps: (i) we enrich a simple negative emotion ontology with terms occurring in the corpus of emails, (ii) we categorize each conflicting email according to the concepts of this ontology and (iii) we identify emails, subjects and team members involved in conflicting emails using possibilistic description logic and a set of proposed measures. Each of these steps is evaluated and validated on concrete examples. Moreover, this approach's framework is generic and can be easily adapted to domains other than conflicts, e.g. security issues, and extended with operations making use of our proposed set of measures.
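    Step (ii), categorising emails against a negative-emotion ontology, can be sketched minimally; the concepts, terms, and emails below are invented stand-ins for the enriched ontology the paper builds:

```python
# Minimal sketch: match emails against a tiny negative-emotion
# "ontology" (concept -> lexicalisations), then list the messages and
# actors involved in conflict. All terms and emails are illustrative.
EMOTION_ONTOLOGY = {
    "anger":       {"furious", "unacceptable", "outrageous"},
    "frustration": {"again", "ignored", "stuck"},
}

def categorise(email_body):
    """Return the emotion concepts whose terms occur in the email."""
    words = set(email_body.lower().split())
    return {c for c, terms in EMOTION_ONTOLOGY.items() if words & terms}

emails = [
    ("alice", "this delay is unacceptable and outrageous"),
    ("bob",   "thanks the patch looks good"),
    ("alice", "my review was ignored again"),
]
conflicts = [(who, categorise(body)) for who, body in emails if categorise(body)]
print(conflicts)
```

    The paper's actual pipeline adds statistical enrichment of the ontology and possibilistic reasoning on top of this kind of lexical matching.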

  16. Ontology-Based Data Integration between Clinical and Research Systems

    PubMed Central

    Mate, Sebastian; Köpcke, Felix; Toddenroth, Dennis; Martin, Marcus; Prokosch, Hans-Ulrich

    2015-01-01

    Data from the electronic medical record comprise numerous structured but uncoded elements, which are not linked to standard terminologies. Reuse of such data for secondary research purposes has gained in importance recently. However, the identification of relevant data elements and the creation of database jobs for extraction, transformation and loading (ETL) are challenging: With current methods such as data warehousing, it is not feasible to efficiently maintain and reuse semantically complex data extraction and transformation routines. We present an ontology-supported approach to overcome this challenge by making use of abstraction: Instead of defining ETL procedures at the database level, we use ontologies to organize and describe the medical concepts of both the source system and the target system. Instead of using unique, specifically developed SQL statements or ETL jobs, we define declarative transformation rules within ontologies and illustrate how these constructs can then be used to automatically generate SQL code to perform the desired ETL procedures. This demonstrates how a suitable level of abstraction may not only aid the interpretation of clinical data, but can also foster the reutilization of methods for unlocking it. PMID:25588043
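    The step from declarative rules to generated SQL can be sketched as follows; the rule structure, table names, and recodings are invented for illustration and are not the paper's actual ontology constructs:

```python
# Hypothetical declarative rules of the kind an ontology could hold:
# each rule maps a source table/column (plus an optional value
# recoding) onto a target concept, and the SQL is generated from the
# rule rather than hand-written. All names here are illustrative.
RULES = [
    {"concept": "Gender", "source": ("patients", "sex"),
     "recode": {"m": "male", "f": "female"}},
    {"concept": "BirthDate", "source": ("patients", "dob"), "recode": None},
]

def rule_to_sql(rule, target="research_facts"):
    table, column = rule["source"]
    expr = column
    if rule["recode"]:
        cases = " ".join(f"WHEN '{k}' THEN '{v}'"
                         for k, v in rule["recode"].items())
        expr = f"CASE {column} {cases} END"
    return (f"INSERT INTO {target} (concept, value) "
            f"SELECT '{rule['concept']}', {expr} FROM {table};")

for r in RULES:
    print(rule_to_sql(r))
```

    Maintaining the rules, rather than the generated SQL, is what makes the transformation routines reusable.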

  17. The environment ontology in 2016: bridging domains with increased scope, semantic density, and interoperation

    DOE PAGES

    Buttigieg, Pier Luigi; Pafilis, Evangelos; Lewis, Suzanna E.; ...

    2016-09-23

    Background: The Environment Ontology (ENVO; http://www.environmentontology.org/), first described in 2013, is a resource and research target for the semantically controlled description of environmental entities. The ontology's initial aim was the representation of the biomes, environmental features, and environmental materials pertinent to genomic and microbiome-related investigations. However, the need for environmental semantics is common to a multitude of fields, and ENVO's use has steadily grown since its initial description. We have thus expanded, enhanced, and generalised the ontology to support its increasingly diverse applications. Methods: We have updated our development suite to promote expressivity, consistency, and speed: we now develop ENVO in the Web Ontology Language (OWL) and employ templating methods to accelerate class creation. We have also taken steps to better align ENVO with the Open Biological and Biomedical Ontologies (OBO) Foundry principles and interoperate with existing OBO ontologies. Further, we applied text-mining approaches to extract habitat information from the Encyclopedia of Life and automatically create experimental habitat classes within ENVO. Results: Relative to its state in 2013, ENVO's content, scope, and implementation have been enhanced and much of its existing content revised for improved semantic representation. ENVO now offers representations of habitats, environmental processes, anthropogenic environments, and entities relevant to environmental health initiatives and the global Sustainable Development Agenda for 2030. Several branches of ENVO have been used to incubate and seed new ontologies in previously unrepresented domains such as food and agronomy. The current release version of the ontology, in OWL format, is available at http://purl.obolibrary.org/obo/envo.owl. 
Conclusions: ENVO has been shaped into an ontology which bridges multiple domains including biomedicine, natural and anthropogenic ecology, 'omics, and socioeconomic development. Through continued interactions with our users and partners, particularly those performing data archiving and synthesis, we anticipate that ENVO's growth will accelerate in 2017. As always, we invite further contributions and collaboration to advance the semantic representation of the environment, ranging from geographic features and environmental materials, across habitats and ecosystems, to everyday objects in household settings.
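    The templating methods mentioned under Methods can be illustrated with a toy generator that stamps out OWL functional-style axioms from a table of terms; the IRIs and labels below are invented, and real ENVO development uses dedicated tooling rather than this sketch:

```python
# A minimal stand-in for ontology templating: stamp out class
# declarations, subclass axioms, and labels from a row of data.
# IRIs and labels are invented for illustration.
TEMPLATE = ('Declaration(Class(<{iri}>))\n'
            'SubClassOf(<{iri}> <{parent}>)\n'
            'AnnotationAssertion(rdfs:label <{iri}> "{label}")')

def stamp_classes(rows, parent):
    """One filled-in template per (iri, label) row."""
    return [TEMPLATE.format(iri=iri, parent=parent, label=label)
            for iri, label in rows]

habitats = [("http://example.org/ENVO_X1", "kelp forest"),
            ("http://example.org/ENVO_X2", "salt marsh")]
axioms = stamp_classes(habitats, "http://example.org/ENVO_habitat")
print(axioms[0])
```

    Templating of this kind is what lets text-mined habitat information from the Encyclopedia of Life be turned into candidate classes at scale.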

  18. The environment ontology in 2016: bridging domains with increased scope, semantic density, and interoperation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Buttigieg, Pier Luigi; Pafilis, Evangelos; Lewis, Suzanna E.

    Background: The Environment Ontology (ENVO; http://www.environmentontology.org/), first described in 2013, is a resource and research target for the semantically controlled description of environmental entities. The ontology's initial aim was the representation of the biomes, environmental features, and environmental materials pertinent to genomic and microbiome-related investigations. However, the need for environmental semantics is common to a multitude of fields, and ENVO's use has steadily grown since its initial description. We have thus expanded, enhanced, and generalised the ontology to support its increasingly diverse applications. Methods: We have updated our development suite to promote expressivity, consistency, and speed: we now develop ENVO in the Web Ontology Language (OWL) and employ templating methods to accelerate class creation. We have also taken steps to better align ENVO with the Open Biological and Biomedical Ontologies (OBO) Foundry principles and interoperate with existing OBO ontologies. Further, we applied text-mining approaches to extract habitat information from the Encyclopedia of Life and automatically create experimental habitat classes within ENVO. Results: Relative to its state in 2013, ENVO's content, scope, and implementation have been enhanced and much of its existing content revised for improved semantic representation. ENVO now offers representations of habitats, environmental processes, anthropogenic environments, and entities relevant to environmental health initiatives and the global Sustainable Development Agenda for 2030. Several branches of ENVO have been used to incubate and seed new ontologies in previously unrepresented domains such as food and agronomy. The current release version of the ontology, in OWL format, is available at http://purl.obolibrary.org/obo/envo.owl. 
Conclusions: ENVO has been shaped into an ontology which bridges multiple domains including biomedicine, natural and anthropogenic ecology, 'omics, and socioeconomic development. Through continued interactions with our users and partners, particularly those performing data archiving and synthesis, we anticipate that ENVO's growth will accelerate in 2017. As always, we invite further contributions and collaboration to advance the semantic representation of the environment, ranging from geographic features and environmental materials, across habitats and ecosystems, to everyday objects in household settings.

  19. The environment ontology in 2016: bridging domains with increased scope, semantic density, and interoperation.

    PubMed

    Buttigieg, Pier Luigi; Pafilis, Evangelos; Lewis, Suzanna E; Schildhauer, Mark P; Walls, Ramona L; Mungall, Christopher J

    2016-09-23

    The Environment Ontology (ENVO; http://www.environmentontology.org/ ), first described in 2013, is a resource and research target for the semantically controlled description of environmental entities. The ontology's initial aim was the representation of the biomes, environmental features, and environmental materials pertinent to genomic and microbiome-related investigations. However, the need for environmental semantics is common to a multitude of fields, and ENVO's use has steadily grown since its initial description. We have thus expanded, enhanced, and generalised the ontology to support its increasingly diverse applications. We have updated our development suite to promote expressivity, consistency, and speed: we now develop ENVO in the Web Ontology Language (OWL) and employ templating methods to accelerate class creation. We have also taken steps to better align ENVO with the Open Biological and Biomedical Ontologies (OBO) Foundry principles and interoperate with existing OBO ontologies. Further, we applied text-mining approaches to extract habitat information from the Encyclopedia of Life and automatically create experimental habitat classes within ENVO. Relative to its state in 2013, ENVO's content, scope, and implementation have been enhanced and much of its existing content revised for improved semantic representation. ENVO now offers representations of habitats, environmental processes, anthropogenic environments, and entities relevant to environmental health initiatives and the global Sustainable Development Agenda for 2030. Several branches of ENVO have been used to incubate and seed new ontologies in previously unrepresented domains such as food and agronomy. The current release version of the ontology, in OWL format, is available at http://purl.obolibrary.org/obo/envo.owl . ENVO has been shaped into an ontology which bridges multiple domains including biomedicine, natural and anthropogenic ecology, 'omics, and socioeconomic development. 
Through continued interactions with our users and partners, particularly those performing data archiving and synthesis, we anticipate that ENVO's growth will accelerate in 2017. As always, we invite further contributions and collaboration to advance the semantic representation of the environment, ranging from geographic features and environmental materials, across habitats and ecosystems, to everyday objects in household settings.

  20. A Semantic Approach for Knowledge Discovery to Help Mitigate Habitat Loss in the Gulf of Mexico

    NASA Astrophysics Data System (ADS)

    Ramachandran, R.; Maskey, M.; Graves, S.; Hardin, D.

    2008-12-01

    Noesis is a meta-search engine and a resource aggregator that uses domain ontologies to provide scoped search capabilities. Ontologies enable Noesis to help users refine their searches for information on the open web and in hidden web locations such as data catalogues with standardized, but discipline-specific, vocabularies. Through its ontologies Noesis provides a guided refinement of search queries which produces complete and accurate searches while reducing the user's burden to experiment with different search strings. All search results are organized by categories (e.g., all results from Google are grouped together) which may be selected or omitted according to the desire of the user. During the past two years ontologies were developed for sea grasses in the Gulf of Mexico and were used to support a habitat restoration demonstration project. Currently these ontologies are being augmented to address the special characteristics of mangroves. These new ontologies will extend the demonstration project to broader regions of the Gulf including protected mangrove locations in coastal Mexico. Noesis contributes to the decision making process by producing a comprehensive list of relevant resources based on the semantic information contained in the ontologies. Ontologies are organized in tree-like taxonomies, where the child nodes represent the Specializations and the parent nodes represent the Generalizations of a node or concept. Specializations can be used to provide more detailed search, while generalizations are used to make the search broader. Ontologies are also used to link two syntactically different terms to one semantic concept (synonyms). Appending a synonym to the query expands the search, thus providing better search coverage. Every concept has a set of properties that are neither in the same inheritance hierarchy (Specializations / Generalizations) nor equivalent (synonyms). 
These are called Related Concepts and they are captured in the ontology through property relationships. By using Related Concepts users can search for resources with respect to a particular property. Noesis automatically generates searches that include all of these capabilities, removing the burden from the user and producing broader and more accurate search results. This presentation will demonstrate the features of Noesis and describe its application to habitat studies in the Gulf of Mexico.
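    The query-expansion behaviour described above can be sketched with a toy ontology entry; the concepts and relations below are invented stand-ins mirroring the synonym / specialization / generalization structure, not Noesis's actual ontologies:

```python
# Sketch of ontology-driven query expansion. The entry is illustrative:
# synonyms and specializations always widen the query; generalizations
# are added only when the user asks to broaden the search.
ONTOLOGY = {
    "seagrass": {
        "synonyms": ["submerged aquatic vegetation"],
        "specializations": ["turtle grass", "shoal grass"],
        "generalizations": ["marine flora"],
    }
}

def expand_query(term, broaden=False):
    entry = ONTOLOGY.get(term, {})
    terms = ([term] + entry.get("synonyms", [])
             + entry.get("specializations", []))
    if broaden:
        terms += entry.get("generalizations", [])
    return " OR ".join(f'"{t}"' for t in terms)

print(expand_query("seagrass"))
```

    Generating the expanded query automatically is what removes the burden of experimenting with search strings from the user.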

  1. Integrating Information in Biological Ontologies and Molecular Networks to Infer Novel Terms

    PubMed Central

    Li, Le; Yip, Kevin Y.

    2016-01-01

    Currently most terms and term-term relationships in Gene Ontology (GO) are defined manually, which creates cost, consistency and completeness issues. Recent studies have demonstrated the feasibility of inferring GO automatically from biological networks, which represents an important complementary approach to GO construction. These methods (NeXO and CliXO) are unsupervised, which means 1) they cannot use the information contained in existing GO, 2) the way they integrate biological networks may not optimize the accuracy, and 3) they are not customized to infer the three different sub-ontologies of GO. Here we present a semi-supervised method called Unicorn that extends these previous methods to tackle the three problems. Unicorn uses a sub-tree of an existing GO sub-ontology as the training portion to learn parameters in integrating multiple networks. Cross-validation results show that Unicorn reliably inferred the left-out parts of each specific GO sub-ontology. In addition, by training Unicorn with an old version of GO together with biological networks, it successfully re-discovered some terms and term-term relationships present only in a new version of GO. Unicorn also successfully inferred some novel terms that were not contained in GO but have biological meanings well-supported by the literature. Availability: Source code of Unicorn is available at http://yiplab.cse.cuhk.edu.hk/unicorn/. PMID:27976738

  2. Builders Challenge High Performance Builder Spotlight: Yavapai College, Chino Valley, Arizona

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    None

    2009-12-22

    Building America Builders Challenge fact sheet on Yavapai College of Chino Valley, Arizona. These college students built a Building America Builders Challenge house that achieved a remarkably low HERS score of -3 with a tight building envelope.

  3. Towards symbiosis in knowledge representation and natural language processing for structuring clinical practice guidelines.

    PubMed

    Weng, Chunhua; Payne, Philip R O; Velez, Mark; Johnson, Stephen B; Bakken, Suzanne

    2014-01-01

    The successful adoption by clinicians of evidence-based clinical practice guidelines (CPGs) contained in clinical information systems requires efficient translation of free-text guidelines into computable formats. Natural language processing (NLP) has the potential to improve the efficiency of such translation. However, it is laborious to develop NLP to structure free-text CPGs using existing formal knowledge representations (KR). In response to this challenge, this vision paper discusses the value and feasibility of supporting symbiosis in text-based knowledge acquisition (KA) and KR. We compare two ontologies: (1) an ontology manually created by domain experts for CPG eligibility criteria and (2) an upper-level ontology derived from a semantic pattern-based approach for automatic KA from CPG eligibility criteria text. Then we discuss the strengths and limitations of interweaving KA and NLP for KR purposes and important considerations for achieving the symbiosis of KR and NLP for structuring CPGs to achieve evidence-based clinical practice.

  4. RuleGO: a logical rules-based tool for description of gene groups by means of Gene Ontology

    PubMed Central

    Gruca, Aleksandra; Sikora, Marek; Polanski, Andrzej

    2011-01-01

    Genome-wide expression profiles obtained with the use of DNA microarray technology provide an abundance of experimental data on biological and molecular processes. Such amounts of data need to be further analyzed and interpreted in order to draw biological conclusions from the experimental results. The analysis requires a lot of experience and is usually a time-consuming process; thus, various annotation databases are frequently used to improve the whole process of analysis. Here, we present RuleGO, a web-based application that allows the user to describe gene groups on the basis of logical rules that include Gene Ontology (GO) terms in their premises. The application yields rules that reflect the co-appearance of GO terms describing the genes supported by the rules. The ontology level and the number of co-appearing GO terms are adjusted automatically; the user limits only the space of possible solutions. The RuleGO application is freely available at http://rulego.polsl.pl/. PMID:21715384
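    The core notion of a logical rule whose premises are GO terms can be sketched as a conjunction checked against gene annotations; the annotations below are invented toys, not RuleGO's rule-induction algorithm:

```python
# Toy version of a logical rule over GO annotations: the rule's
# premises are a conjunction of GO terms, and its support is the set
# of genes annotated with all of them. Annotations are illustrative.
ANNOTATIONS = {
    "geneA": {"GO:0006915", "GO:0008219"},   # apoptosis, cell death
    "geneB": {"GO:0006915", "GO:0008219"},
    "geneC": {"GO:0007049"},                 # cell cycle
}

def rule_support(premises, genes):
    """Genes annotated with every GO term in the rule's premises."""
    return {g for g in genes if premises <= ANNOTATIONS.get(g, set())}

rule = {"GO:0006915", "GO:0008219"}
print(sorted(rule_support(rule, ANNOTATIONS)))
```

    A rule-induction system searches for such conjunctions automatically and reports those with high support within the analysed gene group.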

  5. Ontological Foundations for Tracking Data Quality through the Internet of Things.

    PubMed

    Ceusters, Werner; Bona, Jonathan

    2016-01-01

    Amongst the positive outcomes expected from the Internet of Things for Health are longitudinal patient records that are more complete and less erroneous by complementing manual data entry with automatic data feeds from sensors. Unfortunately, devices are fallible too. Quality control procedures such as inspection, testing and maintenance can prevent devices from producing errors. The additional approach envisioned here is to establish constant data quality monitoring through analytics procedures on patient data that exploit not only the ontological principles ascribed to patients and their bodily features, but also to observation and measurement processes in which devices and patients participate, including the, perhaps erroneous, representations that are generated. Using existing realism-based ontologies, we propose a set of categories that analytics procedures should be able to reason with and highlight the importance of unique identification of not only patients, caregivers and devices, but of everything involved in those measurements. This approach supports the thesis that the majority of what tends to be viewed as 'metadata' are actually data about first-order entities.

  6. Using semantic technologies and the OSU ontology for modelling context and activities in multi-sensory surveillance systems

    NASA Astrophysics Data System (ADS)

    Gómez A, Héctor F.; Martínez-Tomás, Rafael; Arias Tapia, Susana A.; Rincón Zamorano, Mariano

    2014-04-01

    Automatic systems that monitor human behaviour for detecting security problems are a challenge today. Previously, our group defined the Horus framework, which is a modular architecture for the integration of multi-sensor monitoring stages. In this work, the structure and technologies required for the high-level semantic stages of Horus are proposed, and the associated methodological principles established with the aim of recognising specific behaviours and situations. Our methodology distinguishes three semantic levels of events: low level (tied to the sensors), medium level (tied to the context), and high level (target behaviours). The ontology for surveillance and ubiquitous computing has been used to integrate ontologies from specific domains, and together with semantic technologies it has facilitated the modelling and implementation of scenes and situations by reusing components. A home context and a supermarket context were modelled following this approach, where three suspicious activities were monitored via different virtual sensors. The experiments demonstrate that our proposals facilitate the rapid prototyping of systems of this kind.

  7. Builders Challenge High Performance Builder Spotlight - Masco Environments for Living, Las Vegas, Nevada

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    None

    2009-01-01

    Building America Builders Challenge fact sheet on Masco’s Environments for Living Certified Green demo home at the 2009 International Builders Show in Las Vegas. The home has a Home Energy Rating System (HERS) index score of 44, a right-sized air conditi

  8. Textpresso: An Ontology-Based Information Retrieval and Extraction System for Biological Literature

    PubMed Central

    Müller, Hans-Michael; Kenny, Eimear E

    2004-01-01

    We have developed Textpresso, a new text-mining system for scientific literature whose capabilities go far beyond those of a simple keyword search engine. Textpresso's two major elements are a collection of the full text of scientific articles split into individual sentences, and the implementation of categories of terms for which a database of articles and individual sentences can be searched. The categories are classes of biological concepts (e.g., gene, allele, cell or cell group, phenotype, etc.) and classes that relate two objects (e.g., association, regulation, etc.) or describe one (e.g., biological process, etc.). Together they form a catalog of types of objects and concepts called an ontology. After this ontology is populated with terms, the whole corpus of articles and abstracts is marked up to identify terms of these categories. The current ontology comprises 33 categories of terms. A search engine enables the user to search for one or a combination of these tags and/or keywords within a sentence or document, and as the ontology allows word meaning to be queried, it is possible to formulate semantic queries. Full text access increases recall of biological data types from 45% to 95%. Extraction of particular biological facts, such as gene-gene interactions, can be accelerated significantly by ontologies, with Textpresso automatically performing nearly as well as expert curators at identifying sentences; in searches for two uniquely named genes and an interaction term, the ontology confers a 3-fold increase of search efficiency. Textpresso currently focuses on Caenorhabditis elegans literature, with 3,800 full text articles and 16,000 abstracts. The lexicon of the ontology contains 14,500 entries, each of which includes all versions of a specific word or phrase, and it includes all categories of the Gene Ontology database. 
Textpresso is a useful curation tool, as well as search engine for researchers, and can readily be extended to other organism-specific corpora of text. Textpresso can be accessed at http://www.textpresso.org or via WormBase at http://www.wormbase.org. PMID:15383839
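    Category-tagged retrieval of the kind Textpresso performs can be sketched in miniature; the lexicon and sentences below are invented stand-ins for its 33-category ontology:

```python
# Sketch of ontology-tagged sentence retrieval: each sentence is tagged
# with the categories whose terms it contains, and a query combines
# categories. The two-entry lexicon here is a tiny invented stand-in.
LEXICON = {
    "gene": {"lin-12", "daf-16"},
    "regulation": {"activates", "represses"},
}

def tag(sentence):
    words = set(sentence.lower().replace(".", "").split())
    return {cat for cat, terms in LEXICON.items() if words & terms}

def search(sentences, categories):
    """Sentences tagged with every requested category."""
    return [s for s in sentences if categories <= tag(s)]

corpus = [
    "daf-16 activates stress response genes.",
    "The worms were grown at 20 degrees.",
    "lin-12 is expressed in the vulva.",
]
print(search(corpus, {"gene", "regulation"}))
```

    Searching by category combinations, rather than by keywords alone, is what lets such a system retrieve candidate interaction sentences directly.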

  9. Builders Challenge High Performance Builder Spotlight: David Weekley Homes, Houston, Texas

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    None

    2009-12-22

    Building America Builders Challenge fact sheet on David Weekley Homes of Houston, Texas. The builder plans homes as a "system," with features such as wood-framed walls that are air-sealed then insulated with R-13 unfaced fiberglass batts plus an external covering of R-2 polyisocyanurate rigid foam sheathing.

  10. Practices and Processes of Leading High Performance Home Builders in the Upper Midwest

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Von Thoma, E.; Ojczyk, C.

    2012-12-01

The NorthernSTAR Building America Partnership team proposed this study to gain insight into the business, sales, and construction processes of successful high performance builders. The knowledge gained by understanding the high performance strategies used by individual builders, as well as the process each followed to move from traditional builder to high performance builder, will be beneficial in proposing more in-depth research to yield specific action items that help the industry at large transform to high performance new home construction. This investigation identified the best practices of three successful high performance builders in the upper Midwest. In-depth field analyses of the performance levels of their homes, their business models, and their strategies for market acceptance were conducted. All three builders commonly seek ENERGY STAR certification on their homes and implement strategies that would allow them to meet the requirements of the Building America Builders Challenge program. Their desire for continuous improvement, willingness to seek outside assistance, and ambition to be leaders in their field are common themes. Problem solving to overcome challenges was accepted as part of doing business. It was concluded that crossing the gap from code-based building to high performance building was a natural evolution for these leading builders.

  11. Evaluation of the U.S. Department of Energy Challenge Home Program Certification of Production Builders

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kerrigan, P.; Loomis, H.

    2014-09-01

The purpose of this project was to evaluate integrated packages of advanced measures in individual test homes to assess their performance with respect to Building America Program goals, specifically compliance with the DOE Challenge Home Program. BSC consulted on the construction of five test houses by three Cold Climate production builders in three separate US cities. BSC worked with the builders to develop a design package tailored to the cost-related impacts for each builder; the resulting design packages therefore vary from builder to builder. BSC provided support through this research project on the design, construction, and performance testing of the five test homes. Overall, the builders have concluded that the energy related upgrades (either through the prescriptive or performance path) represent reasonable upgrades. The builders commented that while not every improvement in specification was cost effective (as in a reasonable payback period), many were improvements that could improve the marketability of the homes and serve to attract prospective homeowners who value energy efficiency. However, the builders did express reservations about the associated checklists and added certifications. An increase in administrative time was observed with all builders. The checklists and certifications also inherently increase cost due to: 1. Adding services to the scope of work for various trades, such as the HERS rater and HVAC contractor; 2. Increased material costs related to the checklists, especially the EPA Indoor airPLUS and EPA WaterSense(R) Efficient Hot Water Distribution requirements.

  12. An ontology-driven, diagnostic modeling system.

    PubMed

    Haug, Peter J; Ferraro, Jeffrey P; Holmen, John; Wu, Xinzi; Mynam, Kumar; Ebert, Matthew; Dean, Nathan; Jones, Jason

    2013-06-01

To present a system that uses knowledge stored in a medical ontology to automate the development of diagnostic decision support systems. To illustrate its function through an example focused on the development of a tool for diagnosing pneumonia. We developed a system that automates the creation of diagnostic decision-support applications. It relies on a medical ontology to direct the acquisition of clinical data from a clinical data warehouse and uses an automated analytic system to apply a sequence of machine learning algorithms that create applications for diagnostic screening. We refer to this system as the ontology-driven diagnostic modeling system (ODMS). We tested this system using samples of patient data collected in Salt Lake City emergency rooms and stored in Intermountain Healthcare's enterprise data warehouse. The system was used in the preliminary development steps of a tool to identify patients with pneumonia in the emergency department. This tool was compared with a manually created diagnostic tool derived from a curated dataset. The manually created tool is currently in clinical use. The automatically created tool had an area under the receiver operating characteristic curve of 0.920 (95% CI 0.916 to 0.924), compared with 0.944 (95% CI 0.942 to 0.947) for the manually created tool. Initial testing of the ODMS demonstrates promising accuracy for the highly automated results and illustrates the route to model improvement. The use of medical knowledge, embedded in ontologies, to direct the initial development of diagnostic computing systems appears feasible.
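The area under the ROC curve reported here (0.920 vs. 0.944) is equivalent to the Mann-Whitney statistic: the probability that a randomly chosen positive case scores above a randomly chosen negative one. A minimal sketch, with illustrative labels and scores rather than the study's data:

```python
def auc(labels, scores):
    """Area under the ROC curve via pairwise comparison (ties count 0.5)."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Illustrative data: 1 = pneumonia, 0 = no pneumonia.
labels = [1, 1, 1, 0, 0, 0]
scores = [0.9, 0.8, 0.4, 0.5, 0.3, 0.1]
result = auc(labels, scores)
```

The O(n^2) pairwise form is fine for a sketch; production code would use a rank-based formulation or a library routine.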

  13. Mapping from Space - Ontology Based Map Production Using Satellite Imageries

    NASA Astrophysics Data System (ADS)

    Asefpour Vakilian, A.; Momeni, M.

    2013-09-01

Determination of the maximum ability for feature extraction from satellite imagery based on an ontology procedure using cartographic feature determination is the main objective of this research. Therefore, a special ontology has been developed to extract the maximum volume of information available in different high resolution satellite imagery and compare it to the map information layers required at each specific scale according to the unified specification for surveying and mapping. Ontology seeks to provide an explicit and comprehensive classification of entities in all spheres of being. This study proposes a new method for automatic maximum map feature extraction and reconstruction from high resolution satellite images. For example, in order to extract building blocks to produce maps at 1:5000 scale and smaller, the road networks located around the building blocks should be determined. Thus, a new building index has been developed based on concepts obtained from the ontology. Building blocks have been extracted with a completeness of about 83%. Then, road networks have been extracted and reconstructed to create a uniform network with less discontinuity. In this case, building blocks have been extracted with proper performance and the false positive value from the confusion matrix was reduced by about 7%. Results showed that vegetation cover and water features were extracted completely (100%) and about 71% of limits were extracted. The proposed method also has the ability to produce a map at the largest possible scale, equal to or smaller than 1:5000, from any multispectral high resolution satellite imagery.
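The quality figures quoted above (about 83% completeness, a roughly 7% change in false positives) come straight from a 2x2 confusion matrix. A minimal sketch of those two metrics, with illustrative counts rather than the paper's:

```python
def completeness(tp, fn):
    """Fraction of true features that were extracted (recall)."""
    return tp / (tp + fn)

def false_positive_rate(fp, tn):
    """Fraction of non-features wrongly extracted as features."""
    return fp / (fp + tn)

# Illustrative counts for extracted building blocks (not the study's).
tp, fn, fp, tn = 83, 17, 12, 88
c = completeness(tp, fn)
fpr = false_positive_rate(fp, tn)
```

With these counts, completeness is 0.83 and the false positive rate 0.12; reducing false positives moves `fp` into `tn` without touching completeness.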

  15. Automatic Generation of Analogy Questions for Student Assessment: An Ontology-Based Approach

    ERIC Educational Resources Information Center

    Alsubait, Tahani; Parsia, Bijan; Sattler, Uli

    2012-01-01

    Different computational models for generating analogies of the form "A is to B as C is to D" have been proposed over the past 35 years. However, analogy generation is a challenging problem that requires further research. In this article, we present a new approach for generating analogies in Multiple Choice Question (MCQ) format that can be used…

  16. Impact of Machine-Translated Text on Entity and Relationship Extraction

    DTIC Science & Technology

    2014-12-01

1. Introduction. Using social network analysis tools is an important asset in...semantic modeling software to automatically build detailed network models from unstructured text. Contour imports unstructured text and then maps the text onto an existing ontology of frames at the sentence level, using FrameNet, a structured language model, and through Semantic Role Labeling (SRL

  17. Data fusion and classification using a hybrid intrinsic cellular inference network

    NASA Astrophysics Data System (ADS)

    Woodley, Robert; Walenz, Brett; Seiffertt, John; Robinette, Paul; Wunsch, Donald

    2010-04-01

Hybrid Intrinsic Cellular Inference Network (HICIN) is designed for battlespace decision support applications. We developed an automatic method of generating hypotheses for an entity-attribute classifier. A domain-specific ontology was used to automatically generate categories for data classification. Heterogeneous data are clustered using an Adaptive Resonance Theory (ART) inference engine on a sample (unclassified) data set, the Lahman baseball database. The actual data are immaterial to the architecture; however, parallels in the data can easily be drawn (e.g., "Team" maps to organization, "Runs scored/allowed" to a measure of organization performance (positive/negative), "Payroll" to organization resources, etc.). Results show that HICIN classifiers create known inferences from the heterogeneous data. These inferences are not explicitly stated in the ontological description of the domain and are strictly data driven. HICIN uses data uncertainty handling, based on subjective logic, to reduce errors in the classification. The belief mass allows evidence from multiple sources to be mathematically combined to increase or discount an assertion. In military operations the ability to reduce uncertainty will be vital in the data fusion operation.
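The subjective-logic evidence combination mentioned above works on opinions of the form (belief, disbelief, uncertainty). A sketch of the standard cumulative fusion (consensus) operator, assuming each opinion satisfies b + d + u = 1 and the two sources are not both fully certain; the opinion values are hypothetical:

```python
def fuse(op_a, op_b):
    """Cumulatively fuse two subjective-logic opinions (b, d, u)."""
    b1, d1, u1 = op_a
    b2, d2, u2 = op_b
    k = u1 + u2 - u1 * u2          # normalisation; assumes u1, u2 not both 0
    return ((b1 * u2 + b2 * u1) / k,
            (d1 * u2 + d2 * u1) / k,
            (u1 * u2) / k)

# Two hypothetical sources weighing in on the same assertion.
b, d, u = fuse((0.6, 0.1, 0.3), (0.4, 0.2, 0.4))
```

Fusing independent sources shrinks the uncertainty component, which is exactly the "increase or discount an assertion" behaviour the abstract describes.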

  18. Automated Tracking and Quantification of Autistic Behavioral Symptoms Using Microsoft Kinect.

    PubMed

    Kang, Joon Young; Kim, Ryunhyung; Kim, Hyunsun; Kang, Yeonjune; Hahn, Susan; Fu, Zhengrui; Khalid, Mamoon I; Schenck, Enja; Thesen, Thomas

    2016-01-01

The prevalence of autism spectrum disorder (ASD) has risen significantly in the last ten years, and today, roughly 1 in 68 children has been diagnosed. One hallmark set of symptoms in this disorder is stereotypical motor movements. These repetitive movements may include spinning, body-rocking, or hand-flapping, amongst others. Despite the growing number of individuals affected by autism, an effective, accurate method of automatically quantifying such movements remains unavailable. This has negative implications for assessing the outcome of ASD intervention and drug studies. Here we present a novel approach to detecting autistic symptoms using the Microsoft Kinect v.2 to objectively and automatically quantify autistic body movements. The Kinect camera was used to film 12 actors performing three separate stereotypical motor movements each. Visual Gesture Builder (VGB) was implemented to analyze the skeletal structures in these recordings using a machine learning approach. In addition, movement detection was hard-coded in Matlab. Manual grading was used to confirm the validity and reliability of the VGB and Matlab analyses. We found that both methods were able to detect autistic body movements with high probability. The machine learning approach yielded the highest detection rates, supporting its use in automatically quantifying complex autistic behaviors with multi-dimensional input.
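A hard-coded detector of the kind the abstract contrasts with machine learning might flag repetitive motion by counting direction reversals in a joint's trajectory. This is a minimal stand-in, not the study's Matlab code; the threshold and trajectories are illustrative:

```python
def count_reversals(xs):
    """Count sign changes in the first difference of a 1-D trajectory."""
    deltas = [b - a for a, b in zip(xs, xs[1:]) if b != a]
    return sum(d1 * d2 < 0 for d1, d2 in zip(deltas, deltas[1:]))

def looks_repetitive(xs, min_reversals=4):
    """Flag a trajectory as repetitive if it reverses direction often."""
    return count_reversals(xs) >= min_reversals

flapping = [0, 1, 0, 1, 0, 1, 0]   # oscillating hand height (illustrative)
resting  = [0, 0, 1, 2, 3, 3, 4]   # steady, non-repetitive movement
```

Real skeletal data is 3-D and noisy, so a production detector would smooth the signal and look at oscillation frequency, not raw reversals.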

  19. G-Bean: an ontology-graph based web tool for biomedical literature retrieval

    PubMed Central

    2014-01-01

Background: Currently, most people use NCBI's PubMed to search the MEDLINE database, an important bibliographical information source for life science and biomedical information. However, PubMed has some drawbacks that make it difficult to find relevant publications pertaining to users' individual intentions, especially for non-expert users. To ameliorate the disadvantages of PubMed, we developed G-Bean, a graph based biomedical search engine, to search biomedical articles in MEDLINE database more efficiently. Methods: G-Bean addresses PubMed's limitations with three innovations: (1) Parallel document index creation: a multithreaded index creation strategy is employed to generate the document index for G-Bean in parallel; (2) Ontology-graph based query expansion: an ontology graph is constructed by merging four major UMLS (Version 2013AA) vocabularies, MeSH, SNOMEDCT, CSP and AOD, to cover all concepts in National Library of Medicine (NLM) database; a Personalized PageRank algorithm is used to compute concept relevance in this ontology graph and the Term Frequency - Inverse Document Frequency (TF-IDF) weighting scheme is used to re-rank the concepts. The top 500 ranked concepts are selected for expanding the initial query to retrieve more accurate and relevant information; (3) Retrieval and re-ranking of documents based on user's search intention: after the user selects any article from the existing search results, G-Bean analyzes user's selections to determine his/her true search intention and then uses more relevant and more specific terms to retrieve additional related articles. The new articles are presented to the user in the order of their relevance to the already selected articles. Results: Performance evaluation with 106 OHSUMED benchmark queries shows that G-Bean returns more relevant results than PubMed does when using these queries to search the MEDLINE database. 
PubMed could not even return any search result for some OHSUMED queries because it failed to form the appropriate Boolean query statement automatically from the natural language query strings. G-Bean is available at http://bioinformatics.clemson.edu/G-Bean/index.php. Conclusions: G-Bean addresses PubMed's limitations with ontology-graph based query expansion, automatic document indexing, and user search intention discovery. It shows significant advantages in finding relevant articles from the MEDLINE database to meet the information need of the user. PMID:25474588
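The TF-IDF weighting G-Bean uses to re-rank expansion concepts is the standard term frequency times inverse document frequency. A minimal sketch over a tiny illustrative corpus (not G-Bean's actual index or vocabulary):

```python
import math

# Illustrative mini-corpus: each document is a list of concept terms.
docs = [
    ["seizure", "epilepsy", "therapy"],
    ["epilepsy", "gene"],
    ["therapy", "outcome"],
]

def tf_idf(term, doc, docs):
    """TF-IDF weight of a term in one document of the corpus."""
    tf = doc.count(term) / len(doc)            # term frequency
    df = sum(term in d for d in docs)          # document frequency
    idf = math.log(len(docs) / df)             # inverse document frequency
    return tf * idf
```

Rarer terms get higher weights, so concepts specific to a query rise in the ranking while ubiquitous ones sink, which is the point of the re-ranking step.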

  1. COMODI: an ontology to characterise differences in versions of computational models in biology.

    PubMed

    Scharm, Martin; Waltemath, Dagmar; Mendes, Pedro; Wolkenhauer, Olaf

    2016-07-11

    Open model repositories provide ready-to-reuse computational models of biological systems. Models within those repositories evolve over time, leading to different model versions. Taken together, the underlying changes reflect a model's provenance and thus can give valuable insights into the studied biology. Currently, however, changes cannot be semantically interpreted. To improve this situation, we developed an ontology of terms describing changes in models. The ontology can be used by scientists and within software to characterise model updates at the level of single changes. When studying or reusing a model, these annotations help with determining the relevance of a change in a given context. We manually studied changes in selected models from BioModels and the Physiome Model Repository. Using the BiVeS tool for difference detection, we then performed an automatic analysis of changes in all models published in these repositories. The resulting set of concepts led us to define candidate terms for the ontology. In a final step, we aggregated and classified these terms and built the first version of the ontology. We present COMODI, an ontology needed because COmputational MOdels DIffer. It empowers users and software to describe changes in a model on the semantic level. COMODI also enables software to implement user-specific filter options for the display of model changes. Finally, COMODI is a step towards predicting how a change in a model influences the simulation results. COMODI, coupled with our algorithm for difference detection, ensures the transparency of a model's evolution, and it enhances the traceability of updates and error corrections. COMODI is encoded in OWL. It is openly available at http://comodi.sems.uni-rostock.de/ .
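The per-change annotation COMODI enables can be illustrated with a toy difference detector: compare two versions of a model's parameter set and attach a term to each change. The dictionary encoding and the term names below are assumptions for illustration, not BiVeS's format or actual COMODI classes:

```python
def diff(old, new):
    """List (element, change-term) pairs between two model versions."""
    changes = []
    for key in old.keys() - new.keys():
        changes.append((key, "ParameterDeleted"))
    for key in new.keys() - old.keys():
        changes.append((key, "ParameterInserted"))
    for key in old.keys() & new.keys():
        if old[key] != new[key]:
            changes.append((key, "ValueChanged"))
    return sorted(changes)

# Two hypothetical versions of a model's parameters.
v1 = {"k1": 0.1, "k2": 2.0}
v2 = {"k1": 0.1, "k2": 2.5, "k3": 1.0}
changes = diff(v1, v2)
```

Attaching a typed term to each change, rather than a raw textual diff, is what makes the updates filterable and semantically interpretable.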

  2. A novel approach for connecting temporal-ontologies with blood flow simulations.

    PubMed

    Weichert, F; Mertens, C; Walczak, L; Kern-Isberner, G; Wagner, M

    2013-06-01

In this paper an approach for developing a temporal domain ontology for biomedical simulations is introduced. The ideas are presented in the context of simulations of blood flow in aneurysms using the Lattice Boltzmann Method. The advantages of using ontologies are manifold. On the one hand, ontologies have been proven able to provide specialized medical knowledge, e.g., key parameters for simulations. On the other hand, based on a set of rules and the use of a reasoner, a system can be constructed for checking the plausibility of medical simulations as well as tracking their outcomes. Likewise, results of simulations, including data derived from them, can be stored and communicated in a way that can be understood by computers, and this set of results can later be analyzed. At the same time, the ontologies provide a way to exchange knowledge between researchers. Lastly, this approach can also be seen as a black-box abstraction of the internals of the simulation for the biomedical researcher. The approach is able to provide the complete parameter sets for simulations, part of the corresponding results and part of their analysis, as well as, e.g., geometry and boundary conditions. These inputs can be transferred to different simulation methods for comparison. Variations on the provided parameters can automatically be used to drive these simulations. Using a rule base, unphysical inputs or outputs of the simulation can be detected and communicated to the physician in a suitable and familiar way. An example of an instantiation of the blood flow simulation ontology and exemplary rules for plausibility checking are given. Copyright © 2013 Elsevier Inc. All rights reserved.

  3. Ontology-based configuration of problem-solving methods and generation of knowledge-acquisition tools: application of PROTEGE-II to protocol-based decision support.

    PubMed

    Tu, S W; Eriksson, H; Gennari, J H; Shahar, Y; Musen, M A

    1995-06-01

    PROTEGE-II is a suite of tools and a methodology for building knowledge-based systems and domain-specific knowledge-acquisition tools. In this paper, we show how PROTEGE-II can be applied to the task of providing protocol-based decision support in the domain of treating HIV-infected patients. To apply PROTEGE-II, (1) we construct a decomposable problem-solving method called episodic skeletal-plan refinement, (2) we build an application ontology that consists of the terms and relations in the domain, and of method-specific distinctions not already captured in the domain terms, and (3) we specify mapping relations that link terms from the application ontology to the domain-independent terms used in the problem-solving method. From the application ontology, we automatically generate a domain-specific knowledge-acquisition tool that is custom-tailored for the application. The knowledge-acquisition tool is used for the creation and maintenance of domain knowledge used by the problem-solving method. The general goal of the PROTEGE-II approach is to produce systems and components that are reusable and easily maintained. This is the rationale for constructing ontologies and problem-solving methods that can be composed from a set of smaller-grained methods and mechanisms. This is also why we tightly couple the knowledge-acquisition tools to the application ontology that specifies the domain terms used in the problem-solving systems. Although our evaluation is still preliminary, for the application task of providing protocol-based decision support, we show that these goals of reusability and easy maintenance can be achieved. We discuss design decisions and the tradeoffs that have to be made in the development of the system.

  4. Builder 1 & C: Naval Training Command Rate Training Manual. Revised 1973.

    ERIC Educational Resources Information Center

    Naval Training Command, Pensacola, FL.

    The training manual is designed to help Navy personnel meet the occupational qualifications for advancement to Builder First Class and Chief Builder. The introductory chapter provides information to aid personnel in their preparation for advancement and outlines the scope of the Builder rating and the types of billets to which he can be assigned.…

  5. Space Construction Automated Fabrication Experiment Definition Study (SCAFEDS), part 3. Volume 2: Study results

    NASA Technical Reports Server (NTRS)

    1979-01-01

    The detailed results of all part 3 study tasks are presented. Selected analysis was performed on the beam builder conceptual design. The functions of the beam builder and a ground test beam builder were defined. Jig and fixture concepts were developed and the developmental plans of the beam builder were expounded.

  7. Automatically calibrating admittances in KATE's autonomous launch operations model

    NASA Technical Reports Server (NTRS)

    Morgan, Steve

    1992-01-01

This report documents a 1000-line Symbolics LISP program that automatically calibrates all 15 fluid admittances in KATE's Autonomous Launch Operations (ALO) model. (KATE is Kennedy Space Center's Knowledge-based Autonomous Test Engineer, a diagnosis and repair expert system created for use on the Space Shuttle's various fluid flow systems.) As a new KATE application, the calibrator described here breaks new ground for KSC's Artificial Intelligence Lab by allowing KATE to both control and measure the hardware she supervises. By automating a formerly manual process, the calibrator: (1) saves the ALO model builder untold amounts of labor; (2) enables quick repairs after workmen accidentally adjust ALO's hand valves; and (3) frees the modeler to pursue new KATE applications that previously were too complicated. Also reported are suggestions for enhancing the program: (1) to calibrate ALO's TV cameras, pumps, and sensor tolerances; and (2) to calibrate devices in other KATE models, such as the shuttle's LOX and Environment Control System (ECS).
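Calibrating a fluid admittance amounts to solving the model's flow law for the admittance given a measurement. The square-root law Q = A * sqrt(dP) used below is a common choice for such models but is an assumption here, not taken from the report, and the measurement values are illustrative:

```python
import math

def calibrate_admittance(flow, pressure_drop):
    """Solve Q = A * sqrt(dP) for the admittance A from one measurement.

    Assumes the square-root flow law (an assumption, not KATE's
    documented model) and a strictly positive pressure drop.
    """
    return flow / math.sqrt(pressure_drop)

# Illustrative measurement: flow of 6.0 units at a pressure drop of 4.0.
A = calibrate_admittance(flow=6.0, pressure_drop=4.0)
```

Once A is known, the model can predict flow for any commanded pressure drop, which is what lets a calibrator re-tune the model after a hand valve is adjusted.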

  8. Meteorological disaster management and assessment system design and implementation

    NASA Astrophysics Data System (ADS)

    Tang, Wei; Luo, Bin; Wu, Huanping

    2009-09-01

Disaster prevention and mitigation have received growing attention from the Chinese government as the national economy has developed in recent years. Traditional disaster management exhibits several problems, such as chaotic data management, a low level of informatization, and poor data sharing. To improve information capability in disaster management, the Meteorological Disaster Management and Assessment System (MDMAS) was developed and is introduced in this paper. MDMAS uses a three-tier C/S architecture comprising the application layer, data layer, and service layer. Current functions of MDMAS include typhoon and rainstorm assessment, disaster data query and statistics, and automatic cartography for disaster management. The typhoon and rainstorm assessment models can be used both for pre-disaster assessment and for post-disaster assessment. The automatic cartography is implemented with ArcGIS Geoprocessing and ModelBuilder. In practice, MDMAS has been used to provide warning information, disaster assessments, and service products. MDMAS is an efficient tool for meteorological disaster management and assessment and can provide decision support for disaster prevention and mitigation.

  10. Insight: An ontology-based integrated database and analysis platform for epilepsy self-management research.

    PubMed

    Sahoo, Satya S; Ramesh, Priya; Welter, Elisabeth; Bukach, Ashley; Valdez, Joshua; Tatsuoka, Curtis; Bamps, Yvan; Stoll, Shelley; Jobst, Barbara C; Sajatovic, Martha

    2016-10-01

    We present Insight as an integrated database and analysis platform for epilepsy self-management research as part of the national Managing Epilepsy Well Network. Insight is the only available informatics platform for accessing and analyzing integrated data from multiple epilepsy self-management research studies with several new data management features and user-friendly functionalities. The features of Insight include, (1) use of Common Data Elements defined by members of the research community and an epilepsy domain ontology for data integration and querying, (2) visualization tools to support real time exploration of data distribution across research studies, and (3) an interactive visual query interface for provenance-enabled research cohort identification. The Insight platform contains data from five completed epilepsy self-management research studies covering various categories of data, including depression, quality of life, seizure frequency, and socioeconomic information. The data represents over 400 participants with 7552 data points. The Insight data exploration and cohort identification query interface has been developed using Ruby on Rails Web technology and open source Web Ontology Language Application Programming Interface to support ontology-based reasoning. We have developed an efficient ontology management module that automatically updates the ontology mappings each time a new version of the Epilepsy and Seizure Ontology is released. The Insight platform features a Role-based Access Control module to authenticate and effectively manage user access to different research studies. User access to Insight is managed by the Managing Epilepsy Well Network database steering committee consisting of representatives of all current collaborating centers of the Managing Epilepsy Well Network. 
New research studies are being continuously added to the Insight database and the size as well as the unique coverage of the dataset allows investigators to conduct aggregate data analysis that will inform the next generation of epilepsy self-management studies. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  11. Measuring the Evolution of Ontology Complexity: The Gene Ontology Case Study

    PubMed Central

    Dameron, Olivier; Bettembourg, Charles; Le Meur, Nolwenn

    2013-01-01

    Ontologies support automatic sharing, combination and analysis of life sciences data. They undergo regular curation and enrichment. We studied the impact of an ontology's evolution on its structural complexity. As a case study we used the sixty monthly releases between January 2008 and December 2012 of the Gene Ontology and its three independent branches, i.e. biological processes (BP), cellular components (CC) and molecular functions (MF). For each case, we measured complexity by computing metrics related to the size, the node connectivity and the hierarchical structure. The number of classes and relations increased monotonically for each branch, with different growth rates. BP and CC had similar connectivity, superior to that of MF. Connectivity increased monotonically for BP, decreased for CC and remained stable for MF, with a marked increase for the three branches in November and December 2012. Hierarchy-related measures showed that CC and MF had similar proportions of leaves, average depths and average heights. BP had a lower proportion of leaves, and a higher average depth and average height. For BP and MF, the late 2012 increase of connectivity resulted in an increase of the average depth and average height and a decrease of the proportion of leaves, indicating that a major enrichment effort of the intermediate-level hierarchy occurred. The variation of the number of classes and relations in an ontology does not provide enough information about the evolution of its complexity. However, connectivity and hierarchy-related metrics revealed different patterns of values as well as of evolution for the three branches of the Gene Ontology. CC was similar to BP in terms of connectivity, and similar to MF in terms of hierarchy. Overall, BP complexity increased, CC was refined with the addition of leaves providing a finer level of annotations but decreasing slightly its complexity, and MF complexity remained stable. PMID:24146805
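
    The hierarchy metrics discussed above (proportion of leaves, average depth) can be sketched on a toy is-a DAG; the class names and edges below are invented for illustration and are not Gene Ontology content.

    ```python
    # Hierarchy metrics on a toy is-a DAG (edges point child -> parent).
    import networkx as nx

    g = nx.DiGraph([
        ("leaf_a", "mid_1"), ("leaf_b", "mid_1"),
        ("mid_1", "root"), ("leaf_c", "root"),
    ])
    root = "root"

    # Leaves are terms with no children (no incoming is-a edges here).
    leaves = [n for n in g.nodes if g.in_degree(n) == 0]
    leaf_ratio = len(leaves) / g.number_of_nodes()

    # Depth of a term = shortest path length from the term up to the root.
    depths = {n: nx.shortest_path_length(g, n, root) for n in g.nodes}
    avg_depth = sum(depths.values()) / len(depths)

    print(leaf_ratio)  # 3 leaves out of 5 terms -> 0.6
    print(avg_depth)   # (2 + 2 + 1 + 1 + 0) / 5 -> 1.2
    ```

    Tracking these two numbers across releases is enough to reproduce the qualitative pattern the study reports: adding leaves lowers average depth per node share, while enriching intermediate levels raises it.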

  12. Incorporation of composite defects from ultrasonic NDE into CAD and FE models

    NASA Astrophysics Data System (ADS)

    Bingol, Onur Rauf; Schiefelbein, Bryan; Grandin, Robert J.; Holland, Stephen D.; Krishnamurthy, Adarsh

    2017-02-01

    Fiber-reinforced composites are widely used in the aerospace industry due to their combined properties of high strength and low weight. However, owing to their complex structure, it is difficult to assess the impact of manufacturing defects and service damage on their residual life. While ultrasonic testing (UT) is the preferred NDE method to identify the presence of defects in composites, there are no reasonable ways to model the damage and evaluate the structural integrity of composites. We have developed an automated framework to incorporate flaws and known composite damage automatically into a finite element analysis (FEA) model of composites, ultimately aiding in assessing the residual life of composites and making informed decisions regarding repairs. The framework can be used to generate a layer-by-layer 3D structural CAD model of the composite laminates, replicating their manufacturing process. Outlines of structural defects, such as delaminations, are automatically detected from UT of the laminate and are incorporated into the CAD model between the appropriate layers. In addition, the framework allows for direct structural analysis of the resulting 3D CAD models with defects by automatically applying the appropriate boundary conditions. In this paper, we show a working proof-of-concept for the composite model builder with capabilities of incorporating delaminations between laminate layers and automatically preparing the CAD model for structural analysis using FEA software.

  13. Semi-Automated Annotation of Biobank Data Using Standard Medical Terminologies in a Graph Database.

    PubMed

    Hofer, Philipp; Neururer, Sabrina; Goebel, Georg

    2016-01-01

    Data describing biobank resources frequently contain unstructured free-text information or insufficient coding standards. (Bio-) medical ontologies like the Orphanet Rare Diseases Ontology (ORDO) or the Human Disease Ontology (DOID) provide a high number of concepts, synonyms and entity relationship properties. Such standard terminologies increase quality and granularity of input data by adding comprehensive semantic background knowledge from validated entity relationships. Moreover, cross-references between terminology concepts facilitate data integration across databases using different coding standards. In order to encourage the use of standard terminologies, our aim is to identify and link relevant concepts with free-text diagnosis inputs within a biobank registry. Relevant concepts are selected automatically by lexical matching and SPARQL queries against an RDF triplestore. To ensure correctness of annotations, proposed concepts have to be confirmed by medical data administration experts before they are entered into the registry database. Relevant (bio-) medical terminologies describing diseases and phenotypes were identified and stored in a graph database which was tied to a local biobank registry. Concept recommendations during data input trigger a structured description of medical data and facilitate data linkage between heterogeneous systems.
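
    The lexical-matching step described above can be sketched with a SPARQL query over a tiny in-memory RDF graph that stands in for the terminology triplestore; the IRIs and labels below are made up and are not actual ORDO/DOID content.

    ```python
    # Match free-text diagnosis input against terminology labels via SPARQL.
    import rdflib

    g = rdflib.Graph()
    g.parse(data="""
    @prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
    @prefix ex:   <http://example.org/disease/> .
    ex:1 rdfs:label "marfan syndrome" .
    ex:2 rdfs:label "gaucher disease" .
    """, format="turtle")

    QUERY = """
    PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
    SELECT ?c ?label WHERE { ?c rdfs:label ?label . }
    """

    def candidate_concepts(free_text: str):
        """Return concept IRIs whose label occurs in the free-text diagnosis."""
        hits = []
        for c, label in g.query(QUERY):
            if str(label) in free_text.lower():
                hits.append(str(c))
        return hits

    print(candidate_concepts("Suspected Marfan syndrome, follow-up"))
    # -> ['http://example.org/disease/1']
    ```

    In the described workflow such candidates would only be proposed, not written to the registry, until a medical data administration expert confirms them.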

  14. Building America Top Innovations 2012: Reduced Call-Backs with High-Performance Production Builders

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    none,

    2013-01-01

    This Building America Top Innovations profile describes ways Building America teams have helped builders cut call-backs. A Harvard University study found builders who worked with Building America had a 50% drop in call-backs. One builder reported a 50-fold reduction in the incidence of pipe freezing, a 50% reduction in drywall cracking, and a 60% decline in call-backs.

  15. Retrieving definitional content for ontology development.

    PubMed

    Smith, L; Wilbur, W J

    2004-12-01

    Ontology construction requires an understanding of the meaning and usage of its encoded concepts. While definitions found in dictionaries or glossaries may be adequate for many concepts, the actual usage in expert writing could be a better source of information for many others. The goal of this paper is to describe an automated procedure for finding definitional content in expert writing. The approach uses machine learning on phrasal features to learn when sentences in a book contain definitional content, as determined by their similarity to glossary definitions provided in the same book. The end result is not a concise definition of a given concept, but for each sentence, a predicted probability that it contains information relevant to a definition. The approach is evaluated automatically for terms with explicit definitions, and manually for terms with no available definition.
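
    The learning step described above can be sketched as a toy sentence classifier scored on shallow phrasal cues. The training sentences and features below are invented for illustration; the paper trained on sentences matched against a book's own glossary definitions.

    ```python
    # Toy definitional-content scorer: bag-of-ngrams + logistic regression.
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.linear_model import LogisticRegression

    sentences = [
        "An ontology is a formal specification of a conceptualization.",
        "A gene is a unit of heredity.",
        "We ran the experiment three times.",
        "The weather was poor that day.",
    ]
    is_definition = [1, 1, 0, 0]

    # Bigrams capture definitional phrasal patterns such as "is a".
    vec = CountVectorizer(ngram_range=(1, 2))
    X = vec.fit_transform(sentences)
    clf = LogisticRegression().fit(X, is_definition)

    # Predicted probability that a new sentence carries definitional content.
    test = vec.transform(["A biobank is a repository of biological samples."])
    print(clf.predict_proba(test)[0, 1])
    ```

    As in the paper, the output is not a concise definition but a per-sentence probability, which a downstream ontology curator can use to rank candidate sentences.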

  16. Context-Based Tourism Information Filtering with a Semantic Rule Engine

    PubMed Central

    Lamsfus, Carlos; Martin, David; Alzua-Sorzabal, Aurkene; López-de-Ipiña, Diego; Torres-Manzanera, Emilio

    2012-01-01

    This paper presents the CONCERT framework, a push/filter information consumption paradigm, based on a rule-based semantic contextual information system for tourism. CONCERT suggests a specific insight of the notion of context from a human mobility perspective. It focuses on the particular characteristics and requirements of travellers and addresses the drawbacks found in other approaches. Additionally, CONCERT suggests the use of digital broadcasting as push communication technology, whereby tourism information is disseminated to mobile devices. This information is then automatically filtered by a network of ontologies and offered to tourists on the screen. The results obtained in the experiments carried out show evidence that the information disseminated through digital broadcasting can be manipulated by the network of ontologies, providing contextualized information that produces user satisfaction. PMID:22778584

  17. Context-based tourism information filtering with a semantic rule engine.

    PubMed

    Lamsfus, Carlos; Martin, David; Alzua-Sorzabal, Aurkene; López-de-Ipiña, Diego; Torres-Manzanera, Emilio

    2012-01-01

    This paper presents the CONCERT framework, a push/filter information consumption paradigm, based on a rule-based semantic contextual information system for tourism. CONCERT suggests a specific insight of the notion of context from a human mobility perspective. It focuses on the particular characteristics and requirements of travellers and addresses the drawbacks found in other approaches. Additionally, CONCERT suggests the use of digital broadcasting as push communication technology, whereby tourism information is disseminated to mobile devices. This information is then automatically filtered by a network of ontologies and offered to tourists on the screen. The results obtained in the experiments carried out show evidence that the information disseminated through digital broadcasting can be manipulated by the network of ontologies, providing contextualized information that produces user satisfaction.

  18. Building America Case Study: Meeting DOE Challenge Home Program Certification, Chicago, Illinois; Denver, Colorado; Devens, Massachusetts (Fact Sheet)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    The purpose of this project was to evaluate integrated packages of advanced measures in individual test homes to assess their performance with respect to Building America Program goals, specifically compliance with the DOE Challenge Home Program. BSC consulted on the construction of five test houses by three Cold Climate production builders in three separate US cities. BSC worked with the builders to develop a design package tailored to the cost-related impacts for each builder. Therefore, the resulting design packages do vary from builder to builder. BSC provided support through this research project on the design, construction and performance testing of the five test homes. Overall, the builders have concluded that the energy related upgrades (either through the prescriptive or performance path) represent reasonable upgrades. The builders commented that while not every improvement in specification was cost effective (as in a reasonable payback period), many were improvements that could improve the marketability of the homes and serve to attract more energy efficiency discerning prospective homeowners. However, the builders did express reservations on the associated checklists and added certifications. An increase in administrative time was observed with all builders. The checklists and certifications also inherently increase cost due to: 1. Adding services to the scope of work for various trades, such as HERS Rater, HVAC contractor. 2. Increased material costs related to the checklists, especially the EPA Indoor airPLUS and EPA WaterSense Efficient Hot Water Distribution requirement.

  19. Buildings classification from airborne LiDAR point clouds through OBIA and ontology driven approach

    NASA Astrophysics Data System (ADS)

    Tomljenovic, Ivan; Belgiu, Mariana; Lampoltshammer, Thomas J.

    2013-04-01

    In recent years, airborne Light Detection and Ranging (LiDAR) data have proved to be a valuable information resource for a vast number of applications, ranging from land cover mapping to individual surface feature extraction from complex urban environments. To extract information from LiDAR data, users apply prior knowledge. Unfortunately, there is no consistent initiative for structuring this knowledge into data models that can be shared and reused across different applications and domains. The absence of such models poses great challenges to data interpretation, data fusion and integration, as well as information transferability. The intention of this work is to describe the design, development and deployment of an ontology-based system to classify buildings from airborne LiDAR data. The novelty of this approach consists of the development of a domain ontology that specifies explicitly the knowledge used to extract features from airborne LiDAR data. The overall goal of this approach is to investigate the possibility of classifying features of interest from LiDAR data by means of a domain ontology. The proposed workflow is applied to the building extraction process for the region of "Biberach an der Riss" in southern Germany. Strip-adjusted and georeferenced airborne LiDAR data is processed based on geometrical and radiometric signatures stored within the point cloud. Region-growing segmentation algorithms are applied and segmented regions are exported to the GeoJSON format. Subsequently, the data is imported into the ontology-based reasoning process used to automatically classify exported features of interest. Based on the ontology it becomes possible to define domain concepts, associated properties and relations. As a consequence, the resulting specific body of knowledge restricts possible interpretation variants. Moreover, ontologies are machine-readable, and thus it is possible to run reasoning on top of them. 
Available reasoners (FACT++, JESS, Pellet) are used to check the consistency of the developed ontologies, and logical reasoning is performed to infer implicit relations between defined concepts. The ontology for the definition of a building is specified using the Web Ontology Language (OWL), the most widely used ontology language, which is based on Description Logics (DL). DL allows the description of internal properties of modelled concepts (roof typology, shape, area, height, etc.) and relationships between objects (IS_A, MEMBER_OF/INSTANCE_OF). It captures terminological knowledge (TBox) as well as assertional knowledge (ABox), which represents facts about concept instances, i.e. the buildings in the airborne LiDAR data. Classification accuracy is assessed by comparing the calculated classification results against ground truth data generated by visual interpretation, in terms of precision and recall. The advantages of this approach are: (i) flexibility, (ii) transferability, and (iii) extendibility, i.e. the ontology can be extended with further concepts, data properties and object properties.
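
The kind of classification rule such an ontology encodes (a DL restriction along the lines of "Building: segment with sufficient area, a roof-like planar fit, and height above ground") can be sketched in plain Python; the attribute names and thresholds below are illustrative assumptions, not values from the paper.

```python
# Plain-Python analogue of an OWL class restriction for "Building".
from dataclasses import dataclass

@dataclass
class Segment:
    area_m2: float        # footprint area of the segmented region
    planarity: float      # 0..1, fraction of points fitting a roof plane
    mean_height_m: float  # mean height above ground level

def is_building(s: Segment) -> bool:
    # All three conditions must hold, mirroring an intersection of
    # property restrictions in the ontology.
    return s.area_m2 >= 10.0 and s.planarity >= 0.8 and s.mean_height_m >= 2.5

print(is_building(Segment(area_m2=85.0, planarity=0.93, mean_height_m=6.1)))  # True
print(is_building(Segment(area_m2=4.0, planarity=0.95, mean_height_m=0.3)))   # False
```

The ontology-based version has the advantage the abstract claims: the same restriction, once stated in OWL, can be checked by an off-the-shelf reasoner and extended with new concepts without changing procedural code.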

  20. Semantic Modelling of Digital Forensic Evidence

    NASA Astrophysics Data System (ADS)

    Kahvedžić, Damir; Kechadi, Tahar

    The reporting of digital investigation results is traditionally carried out in prose, and a large investigation may require successive communication of findings between different parties. Popular forensic suites aid in the reporting process by storing provenance and positional data but do not automatically encode why the evidence is considered important. In this paper we introduce an evidence management methodology to encode the semantic information of evidence. A structured vocabulary of terms, an ontology, is used to model the results in a logical and predefined manner. The descriptions are application independent and automatically organised. The encoded descriptions aim to help the investigation in the task of report writing and evidence communication and can be used in addition to existing evidence management techniques.

  1. Human microRNA target analysis and gene ontology clustering by GOmir, a novel stand-alone application

    PubMed Central

    Roubelakis, Maria G; Zotos, Pantelis; Papachristoudis, Georgios; Michalopoulos, Ioannis; Pappa, Kalliopi I; Anagnou, Nicholas P; Kossida, Sophia

    2009-01-01

    Background microRNAs (miRNAs) are single-stranded RNA molecules of about 20–23 nucleotides in length found in a wide variety of organisms. miRNAs regulate gene expression by interacting with target mRNAs at specific sites in order to induce cleavage of the message or inhibit translation. Predicting or verifying mRNA targets of specific miRNAs is a difficult process of great importance. Results GOmir is a novel stand-alone application consisting of two separate tools: JTarget and TAGGO. JTarget integrates miRNA target prediction and functional analysis by combining the predicted target genes from the TargetScan, miRanda, RNAhybrid and PicTar computational tools as well as the experimentally supported targets from TarBase, and also provides a full gene description and functional analysis for each target gene. On the other hand, the TAGGO application is designed to automatically group gene ontology annotations, taking advantage of the Gene Ontology (GO), in order to extract the main attributes of sets of proteins. GOmir represents a new tool incorporating two separate Java applications integrated into one stand-alone Java application. Conclusion GOmir (by using up to five different databases) introduces miRNA predicted targets accompanied by (a) a full gene description, (b) functional analysis and (c) detailed gene ontology clustering. Additionally, a reverse search initiated by a potential target can also be conducted. GOmir can be freely downloaded from BRFAA. PMID:19534746

  2. A User-Centric Knowledge Creation Model in a Web of Object-Enabled Internet of Things Environment

    PubMed Central

    Kibria, Muhammad Golam; Fattah, Sheik Mohammad Mostakim; Jeong, Kwanghyeon; Chong, Ilyoung; Jeong, Youn-Kwae

    2015-01-01

    User-centric service features in a Web of Object-enabled Internet of Things environment can be provided by using a semantic ontology that classifies and integrates objects on the World Wide Web as well as shares and merges context-aware information and accumulated knowledge. The semantic ontology is applied on a Web of Object platform to virtualize the real world physical devices and information to form virtual objects that represent the features and capabilities of devices in the virtual world. Detailed information and functionalities of multiple virtual objects are combined with service rules to form composite virtual objects that offer context-aware knowledge-based services, where context awareness plays an important role in enabling automatic modification of the system to reconfigure the services based on the context. Converting the raw data into meaningful information and connecting the information to form the knowledge and storing and reusing the objects in the knowledge base can both be expressed by semantic ontology. In this paper, a knowledge creation model that synchronizes a service logistic model and a virtual world knowledge model on a Web of Object platform has been proposed. To realize the context-aware knowledge-based service creation and execution, a conceptual semantic ontology model has been developed and a prototype has been implemented for a use case scenario of emergency service. PMID:26393609

  3. Human microRNA target analysis and gene ontology clustering by GOmir, a novel stand-alone application.

    PubMed

    Roubelakis, Maria G; Zotos, Pantelis; Papachristoudis, Georgios; Michalopoulos, Ioannis; Pappa, Kalliopi I; Anagnou, Nicholas P; Kossida, Sophia

    2009-06-16

    microRNAs (miRNAs) are single-stranded RNA molecules of about 20-23 nucleotides in length found in a wide variety of organisms. miRNAs regulate gene expression by interacting with target mRNAs at specific sites in order to induce cleavage of the message or inhibit translation. Predicting or verifying mRNA targets of specific miRNAs is a difficult process of great importance. GOmir is a novel stand-alone application consisting of two separate tools: JTarget and TAGGO. JTarget integrates miRNA target prediction and functional analysis by combining the predicted target genes from the TargetScan, miRanda, RNAhybrid and PicTar computational tools as well as the experimentally supported targets from TarBase, and also provides a full gene description and functional analysis for each target gene. On the other hand, the TAGGO application is designed to automatically group gene ontology annotations, taking advantage of the Gene Ontology (GO), in order to extract the main attributes of sets of proteins. GOmir represents a new tool incorporating two separate Java applications integrated into one stand-alone Java application. GOmir (by using up to five different databases) introduces miRNA predicted targets accompanied by (a) a full gene description, (b) functional analysis and (c) detailed gene ontology clustering. Additionally, a reverse search initiated by a potential target can also be conducted. GOmir can be freely downloaded from BRFAA.

  4. Design of a Golf Swing Injury Detection and Evaluation open service platform with Ontology-oriented clustering case-based reasoning mechanism.

    PubMed

    Ku, Hao-Hsiang

    2015-01-01

    Nowadays, people can easily use a smartphone to get the information and services they want. Hence, this study designs and proposes a Golf Swing Injury Detection and Evaluation open service platform with an Ontology-oriented clustering case-based reasoning mechanism, called GoSIDE, based on Arduino and the Open Service Gateway initiative (OSGi). GoSIDE is a three-tier architecture composed of Mobile Users, Application Servers and a Cloud-based Digital Convergence Server. A mobile user has a smartphone and Kinect sensors to detect the user's golf swing actions and to interact with iDTV. An application server runs the Intelligent Golf Swing Posture Analysis Model (iGoSPAM) to check a user's golf swing actions and to alert the user when erroneous actions occur. The Cloud-based Digital Convergence Server provides Ontology-oriented Clustering Case-based Reasoning (CBR) for Quality of Experience (OCC4QoE), which is designed to provide QoE services by QoE-based ontology strategies, rules and events for this user. Furthermore, GoSIDE will automatically trigger OCC4QoE and deliver popular rules for a new user. Experimental results illustrate that GoSIDE can provide appropriate detections for golfers. Finally, GoSIDE can serve as a reference model for researchers and engineers.

  5. A User-Centric Knowledge Creation Model in a Web of Object-Enabled Internet of Things Environment.

    PubMed

    Kibria, Muhammad Golam; Fattah, Sheik Mohammad Mostakim; Jeong, Kwanghyeon; Chong, Ilyoung; Jeong, Youn-Kwae

    2015-09-18

    User-centric service features in a Web of Object-enabled Internet of Things environment can be provided by using a semantic ontology that classifies and integrates objects on the World Wide Web as well as shares and merges context-aware information and accumulated knowledge. The semantic ontology is applied on a Web of Object platform to virtualize the real world physical devices and information to form virtual objects that represent the features and capabilities of devices in the virtual world. Detailed information and functionalities of multiple virtual objects are combined with service rules to form composite virtual objects that offer context-aware knowledge-based services, where context awareness plays an important role in enabling automatic modification of the system to reconfigure the services based on the context. Converting the raw data into meaningful information and connecting the information to form the knowledge and storing and reusing the objects in the knowledge base can both be expressed by semantic ontology. In this paper, a knowledge creation model that synchronizes a service logistic model and a virtual world knowledge model on a Web of Object platform has been proposed. To realize the context-aware knowledge-based service creation and execution, a conceptual semantic ontology model has been developed and a prototype has been implemented for a use case scenario of emergency service.

  6. Generation of Test Questions from RDF Files Using PYTHON and SPARQL

    NASA Astrophysics Data System (ADS)

    Omarbekova, Assel; Sharipbay, Altynbek; Barlybaev, Alibek

    2017-02-01

    This article describes the development of a system for the automatic generation of test questions from a knowledge base. The work is applied in nature and provides detailed examples of ontology development and the implementation of SPARQL queries over RDF documents. It also describes the implementation of the question-generating program in the Python programming language, including the necessary libraries for working with RDF files.
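
    The pipeline the article describes (SPARQL over an RDF file, then templating each result into a question) can be sketched as follows; the tiny graph, predicate and question template are invented for illustration and are not from the article's knowledge base.

    ```python
    # Generate test questions from subject-property pairs in an RDF graph.
    import rdflib

    g = rdflib.Graph()
    g.parse(data="""
    @prefix ex: <http://example.org/> .
    ex:Python ex:paradigm "multi-paradigm" .
    ex:Haskell ex:paradigm "purely functional" .
    """, format="turtle")

    QUERY = """
    PREFIX ex: <http://example.org/>
    SELECT ?lang ?answer WHERE { ?lang ex:paradigm ?answer . }
    ORDER BY ?lang
    """

    questions = []
    for lang, answer in g.query(QUERY):
        name = lang.split("/")[-1]  # local name of the subject IRI
        questions.append((f"What is the programming paradigm of {name}?", str(answer)))

    for q, a in questions:
        print(q, "->", a)
    ```

    Each SPARQL result row yields one question/answer pair, so extending the question bank is a matter of adding triples or query patterns rather than editing the generator.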

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    none,

    This builder built fourteen homes in the Gordon Estates subdivision that achieved Challenge Home certification with HERS 38–58 on an affordable budget for homeowners. Every Mandalay home in the development also met the National Green Building Standard gold level. The Gordon Estates subdivision is also serving as a showcase of energy efficiency, and Mandalay is hosting education workshops for realtors, state and local officials, other builders, students, potential homeowners, and the public. The builder won a 2013 Housing Innovation Award in the affordable builder category.

  8. Evaluation of the U.S. Department of Energy Challenge Home Program Certification of Production Builders

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kerrigan, P.; Loomis, H.

    2014-09-01

    The purpose of this project was to evaluate integrated packages of advanced measures in individual test homes to assess their performance with respect to Building America program goals, specifically compliance with the DOE Challenge Home Program. BSC consulted on the construction of five test houses by three cold climate production builders in three U.S. cities and worked with the builders to develop a design package tailored to the cost-related impacts for each builder. Also, BSC provided support through performance testing of the five test homes. Overall, the builders have concluded that the energy related upgrades (either through the prescriptive or performance path) represent reasonable upgrades. The builders commented that while not every improvement in specification was cost effective (as in a reasonable payback period), many were improvements that could improve the marketability of the homes and serve to attract more energy efficiency discerning prospective homeowners. However, the builders did express reservations on the associated checklists and added certifications. An increase in administrative time was observed with all builders. The checklists and certifications also inherently increase cost due to: adding services to the scope of work for various trades, such as HERS Rater, HVAC contractor; and increased material costs related to the checklists, especially the EPA Indoor airPLUS and EPA WaterSense® Efficient Hot Water Distribution requirement.

  9. Knowledge transfer to builders in post-disaster housing reconstruction in West-Sumatra of Indonesia

    NASA Astrophysics Data System (ADS)

    Hidayat, Benny; Afif, Zal

    2017-11-01

    Housing is the sector most affected by disasters, as could be observed after the 2009 earthquake in West Sumatra province, Indonesia. As in the wider Indonesian construction industry, post-disaster housing reconstruction is influenced by the knowledge and skills of builders or laborers, locally known as `tukang'. After the earthquake, trainings were held to transfer knowledge about earthquake-safe house structures to the builders in the post-disaster reconstruction. This study examined the effectiveness of the training in terms of the builders' understanding and application of the new knowledge. Ten semi-structured interviews with builders were conducted in this study. The results indicate that builders with prior housing construction experience can absorb and understand the new knowledge about earthquake-safe house structures. A combination of lecturing and practice sessions also helps the builders to understand the knowledge. However, the findings of this research also suggest a problem in the implementation of the new knowledge: utilization of earthquake-safe house structures may lead to a rise in house cost. As a result, some house owners prefer to save money rather than adopt the new knowledge.

  10. Builders Challenge High Performance Builder Spotlight: Artistic Homes, Albuquerque, New Mexico

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    None

    2009-12-22

    Building America Builders Challenge fact sheet on Artistic Homes of Albuquerque, New Mexico. Standard features of their homes include advanced framed 2x6 24-inch on center walls, R-21 blown insulation in the walls, and high-efficiency windows.

  11. Ontological Encoding of GeoSciML and INSPIRE geological standard vocabularies and schemas: application to geological mapping

    NASA Astrophysics Data System (ADS)

    Lombardo, Vincenzo; Piana, Fabrizio; Mimmo, Dario; Fubelli, Giandomenico; Giardino, Marco

    2016-04-01

    Encoding of geologic knowledge in formal languages is an ambitious task, aiming at the interoperability and organic representation of geological data, and semantic characterization of geologic maps. Initiatives such as GeoScience Markup Language (last version is GeoSciML 4, 2015[1]) and INSPIRE "Data Specification on Geology" (an operative simplification of GeoSciML, last version is 3.0 rc3, 2013[2]), as well as the recent terminological shepherding of the Geoscience Terminology Working Group (GTWG[3]) have been promoting information exchange of the geologic knowledge. There have also been limited attempts to encode the knowledge in a machine-readable format, especially in the lithology domain (see e.g. the CGI_Lithology ontology[4]), but a comprehensive ontological model that connect the several knowledge sources is still lacking. This presentation concerns the "OntoGeonous" initiative, which aims at encoding the geologic knowledge, as expressed through the standard vocabularies, schemas and data models mentioned above, through a number of interlinked computational ontologies, based on the languages of the Semantic Web and the paradigm of Linked Open Data. The initiative proceeds in parallel with a concrete case study, concerning the setting up of a synthetic digital geological map of the Piemonte region (NW Italy), named "GEOPiemonteMap" (developed by the CNR Institute of Geosciences and Earth Resources, CNR IGG, Torino), where the description and classification of GeologicUnits has been supported by the modeling and implementation of the ontologies. 
We have devised a tripartite ontological model called OntoGeonous that consists of: 1) an ontology of the geologic features (in particular, GeologicUnit, GeomorphologicFeature, and GeologicStructure[5], modeled from the definitions and UML schemata of CGI vocabularies[6], GeoScienceML and INSPIRE, and aligned with the Planetary realm of NASA SWEET ontology[7]), 2) an ontology of the Earth materials (as defined by the SimpleLithology CGI vocabulary and aligned as a subclass of the Substance class in NASA SWEET ontology), and 3) an ontology of the MappedFeatures (as defined in the Representation sub-taxonomy of the NASA SWEET ontology). The latter correspond to the concrete elements of the map, with their geometry (polygons, lines) and geographical coordinates. The ontology model has been developed by taking into account applications primarily concerning the needs of geological mapping; nevertheless, the model is general enough to be applied to other contexts. In particular, we show how the automatic reasoning capabilities of the ontology system can be employed in tasks of unit definition and input filling of the map database and for supporting geologists in thematic re-classification of the map instances (e.g. for coloring tasks). ---------------------------------------- [1] http://www.geosciml.org [2] http://inspire.jrc.ec.europa.eu/documents/Data_Specifications/INSPIRE_DataSpecification_GE_v3.0rc3.pdf [3] http://www.cgi-iugs.org/tech_collaboration/geoscience_terminology_working_group.html [4] https://www.seegrid.csiro.au/subversion/CGI_CDTGVocabulary/trunk/OwlWork/CGI_Lithology.owl [5] We are currently neglecting the encoding of the geologic events, left as a future work. [6] http://resource.geosciml.org/vocabulary/cgi/201211/ [7] Web site: https://sweet.jpl.nasa.gov, Di Giuseppe et al., 2013, SWEET ontology coverage for earth system sciences, http://www.ics.uci.edu/~ndigiuse/Nicholas_DiGiuseppe/Research_files/digiuseppe14.pdf; S. Barahmand et al. 
2009, A Survey on SWEET Ontologies and their Applications, http://www-scf.usc.edu/~taheriya/reports/csci586-report.pdf
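One of the tasks the abstract mentions is thematic re-classification of map instances (e.g. for coloring). A minimal sketch of how a subclass hierarchy like the one OntoGeonous encodes could drive such a task, in plain Python with invented class names (not the actual OntoGeonous IRIs):

```python
# Hypothetical subclass axioms: child class -> parent class.
SUBCLASS_OF = {
    "LithostratigraphicUnit": "GeologicUnit",
    "AllostratigraphicUnit": "GeologicUnit",
    "GeologicUnit": "GeologicFeature",
}

def ancestors(cls):
    """Walk the subclass chain upward (transitive closure)."""
    out = []
    while cls in SUBCLASS_OF:
        cls = SUBCLASS_OF[cls]
        out.append(cls)
    return out

def recolor(mapped_features, colors):
    """Assign each polygon the colour of its most specific class
    (or ancestor class) that appears in the colour scheme."""
    result = {}
    for feature_id, cls in mapped_features.items():
        for candidate in [cls] + ancestors(cls):
            if candidate in colors:
                result[feature_id] = colors[candidate]
                break
    return result

polygons = {"poly_1": "LithostratigraphicUnit", "poly_2": "AllostratigraphicUnit"}
scheme = {"GeologicUnit": "#c2b280"}  # colour defined only at the superclass
print(recolor(polygons, scheme))  # both polygons inherit the superclass colour
```

A reasoner over the real ontologies would compute the same kind of transitive subsumption, but over OWL axioms rather than a hand-written dictionary.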

  12. Green Builder Coalition

    Science.gov Websites


  13. Builders Challenge High Performance Builder Spotlight - Artistic Homes, Albuquerque, NM

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    None

    2009-01-01

    Building America Builders Challenge fact sheet on Artistic Homes of Albuquerque, New Mexico. Describes the first true zero E-scale home in a hot-dry climate with ducts inside, R-50 attic insulation, roof-mounted photovoltaic power system, and solar thermal water heating.

  14. Building America Case Study: New Town Builders' Power of Zero Energy Center, Denver, Colorado (Brochure)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    New Town Builders, a builder of energy efficient homes in Denver, Colorado, offers a zero energy option for all the homes it builds. To attract a wide range of potential homebuyers to its energy efficient homes, New Town Builders created a 'Power of Zero Energy Center' linked to its model home in the Stapleton community of Denver. This case study presents New Town Builders' marketing approach, which is targeted to appeal to homebuyers' emotions rather than overwhelming homebuyers with scientific details about the technology. The exhibits in the Power of Zero Energy Center focus on reduced energy expenses for the homeowner, improved occupant comfort, the reputation of the builder, and the fact that homebuyers need not sacrifice desired design features to achieve zero net energy in the home. The case study also contains customer and realtor testimonials related to the effectiveness of the Center in influencing homebuyers to purchase a zero energy home.

  15. New Whole-House Solutions Case Study: New Town Builders' Power of Zero Energy Center - Denver, Colorado

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    None

    New Town Builders, a builder of energy efficient homes in Denver, Colorado, offers a zero energy option for all the homes it builds. To attract a wide range of potential homebuyers to its energy efficient homes, New Town Builders created a "Power of Zero Energy Center" linked to its model home in the Stapleton community. This case study presents New Town Builders' marketing approach, which is targeted to appeal to homebuyers' emotions rather than overwhelming homebuyers with scientific details about the technology. The exhibits in the Power of Zero Energy Center focus on reduced energy expenses for the homeowner, improved occupant comfort, the reputation of the builder, and the fact that homebuyers need not sacrifice desired design features to achieve zero net energy in the home. This case study also contains customer and realtor testimonials related to the effectiveness of the Center in influencing homebuyers to purchase a zero energy home.

  16. Pharmacogenomic knowledge representation, reasoning and genome-based clinical decision support based on OWL 2 DL ontologies.

    PubMed

    Samwald, Matthias; Miñarro Giménez, Jose Antonio; Boyce, Richard D; Freimuth, Robert R; Adlassnig, Klaus-Peter; Dumontier, Michel

    2015-02-22

    Every year, hundreds of thousands of patients experience treatment failure or adverse drug reactions (ADRs), many of which could be prevented by pharmacogenomic testing. However, the primary knowledge needed for clinical pharmacogenomics is currently dispersed over disparate data structures and captured in unstructured or semi-structured formalizations. This is a source of potential ambiguity and complexity, making it difficult to create reliable information technology systems for enabling clinical pharmacogenomics. We developed Web Ontology Language (OWL) ontologies and automated reasoning methodologies to meet the following goals: 1) provide a simple and concise formalism for representing pharmacogenomic knowledge, 2) find errors and insufficient definitions in pharmacogenomic knowledge bases, 3) automatically assign alleles and phenotypes to patients, 4) match patients to clinically appropriate pharmacogenomic guidelines and clinical decision support messages and 5) facilitate the detection of inconsistencies and overlaps between pharmacogenomic treatment guidelines from different sources. We evaluated different reasoning systems and tested our approach with a large collection of publicly available genetic profiles. Our methodology proved to be a novel and useful choice for representing, analyzing and using pharmacogenomic data. The Genomic Clinical Decision Support (Genomic CDS) ontology represents 336 SNPs with 707 variants; 665 haplotypes related to 43 genes; 22 rules related to drug-response phenotypes; and 308 clinical decision support rules. OWL reasoning identified CDS rules with overlapping target populations but differing treatment recommendations. Only a modest number of clinical decision support rules were triggered for a collection of 943 public genetic profiles. We found significant performance differences across available OWL reasoners.
The ontology-based framework we developed can be used to represent, organize and reason over the growing wealth of pharmacogenomic knowledge, as well as to identify errors, inconsistencies and insufficient definitions in source data sets or individual patient data. Our study highlights both advantages and potential practical issues with such an ontology-based approach.
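Two of the reasoning tasks this abstract describes can be sketched without any OWL machinery: matching a patient's inferred phenotypes to CDS rules, and flagging rule pairs whose target populations overlap but whose recommendations differ. The rule contents below are invented examples, not actual Genomic CDS axioms:

```python
# Hypothetical CDS rules: a target population (set of phenotypes)
# and a recommendation. Contents are illustrative only.
RULES = [
    {"id": "r1", "population": {"CYP2C19 poor metabolizer"},
     "advice": "consider alternative to clopidogrel"},
    {"id": "r2", "population": {"CYP2C19 poor metabolizer"},
     "advice": "standard clopidogrel dose"},
]

def triggered(patient_phenotypes, rules):
    """A rule fires when its whole target population description
    is contained in the patient's phenotype set."""
    return [r["id"] for r in rules if r["population"] <= patient_phenotypes]

def conflicting_pairs(rules):
    """Rule pairs with overlapping populations but different advice,
    analogous to the overlaps the OWL reasoner detected."""
    pairs = []
    for i, a in enumerate(rules):
        for b in rules[i + 1:]:
            if a["population"] & b["population"] and a["advice"] != b["advice"]:
                pairs.append((a["id"], b["id"]))
    return pairs

patient = {"CYP2C19 poor metabolizer", "VKORC1 low-dose phenotype"}
print(triggered(patient, RULES))   # ['r1', 'r2']
print(conflicting_pairs(RULES))    # [('r1', 'r2')]
```

The OWL 2 DL version gains over this sketch exactly where sets fall short: class expressions can describe populations intensionally (e.g. by genotype-to-phenotype inference) rather than by enumeration.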

  17. Eagle-i: Making Invisible Resources, Visible

    PubMed Central

    Haendel, M.; Wilson, M.; Torniai, C.; Segerdell, E.; Shaffer, C.; Frost, R.; Bourges, D.; Brownstein, J.; McInnerney, K.

    2010-01-01

    RP-134 The eagle-i Consortium – Dartmouth College, Harvard Medical School, Jackson State University, Morehouse School of Medicine, Montana State University, Oregon Health and Science University (OHSU), the University of Alaska, the University of Hawaii, and the University of Puerto Rico – aims to make invisible resources for scientific research visible by developing a searchable network of resource repositories at research institutions nationwide. Now in early development, it is hoped that the system will scale beyond the consortium at the end of the two-year pilot. Data Model & Ontology: The eagle-i ontology development team at the OHSU Library is generating the data model and ontologies necessary for resource indexing and querying. Our indexing system will enable cores and research labs to represent resources within a defined vocabulary, leading to more effective searches and better linkage between data types. This effort is being guided by active discussions within the ontology community (http://RRontology.tk) bringing together relevant preexisting ontologies in a logical framework. The goal of these discussions is to provide context for interoperability and domain-wide standards for resource types used throughout biomedical research. Research community feedback is welcomed. Architecture Development, led by a team at Harvard, includes four main components: tools for data collection, management and curation; an institutional resource repository; a federated network; and a central search application. Each participating institution will populate and manage their repository locally, using data collection and curation tools. To help improve search performance, data tools will support the semi-automatic annotation of resources. A central search application will use a federated protocol to broadcast queries to all repositories and display aggregated results. 
The search application will leverage the eagle-i ontologies to help guide users to valid queries via auto-suggestions and taxonomy browsing and improve search result quality via concept-based search and synonym expansion. Website: http://eagle-i.org. NIH/NCRR ARRA award #U24RR029825

  18. Bridging the phenotypic and genetic data useful for integrated breeding through a data annotation using the Crop Ontology developed by the crop communities of practice

    PubMed Central

    Shrestha, Rosemary; Matteis, Luca; Skofic, Milko; Portugal, Arllet; McLaren, Graham; Hyman, Glenn; Arnaud, Elizabeth

    2012-01-01

    The Crop Ontology (CO) of the Generation Challenge Program (GCP) (http://cropontology.org/) is developed for the Integrated Breeding Platform (IBP) (http://www.integratedbreeding.net/) by several centers of The Consultative Group on International Agricultural Research (CGIAR): Bioversity, CIMMYT, CIP, ICRISAT, IITA, and IRRI. Integrated breeding necessitates that breeders access genotypic and phenotypic data related to a given trait. The CO provides validated trait names used by the crop communities of practice (CoP) for harmonizing the annotation of phenotypic and genotypic data and thus supporting data accessibility and discovery through web queries. The trait information is complemented by descriptions of the measurement methods, scales, and images. The trait dictionaries used to produce the Integrated Breeding (IB) fieldbooks are synchronized with the CO terms for an automatic annotation of the phenotypic data measured in the field. The IB fieldbook provides breeders with direct access to the CO to get additional descriptive information on the traits. Ontologies and trait dictionaries are online for cassava, chickpea, common bean, groundnut, maize, Musa, potato, rice, sorghum, and wheat. Online curation and annotation tools (http://cropontology.org) facilitate direct maintenance of the trait information and production of trait dictionaries by the crop communities. An important feature is the cross referencing of CO terms with the Crop database trait ID and with their synonyms in Plant Ontology (PO) and Trait Ontology (TO). Web links between cross referenced terms in CO provide online access to data annotated with similar ontological terms, particularly the genetic data in Gramene (Cornell University) or the evaluation and climatic data in the Global Repository of evaluation trials of the Climate Change, Agriculture and Food Security programme (CCAFS). Cross-referencing and annotation will be further applied in the IBP. PMID:22934074

  19. Wildlife Scenario Builder and User's Guide (Version 1.0, Beta Test)

    EPA Science Inventory

    Cover of the Wildlife Scenario Builder User's Manual The Wildlife Scenario Builder (WSB) was developed to improve the quality of wildlif...

  20. Automated compound classification using a chemical ontology.

    PubMed

    Bobach, Claudia; Böhme, Timo; Laube, Ulf; Püschel, Anett; Weber, Lutz

    2012-12-29

    Classification of chemical compounds into compound classes by using structure derived descriptors is a well-established method to aid the evaluation and abstraction of compound properties in chemical compound databases. MeSH and recently ChEBI are examples of chemical ontologies that provide a hierarchical classification of compounds into general compound classes of biological interest based on their structural as well as property or use features. In these ontologies, compounds have been assigned manually to their respective classes. However, with the ever increasing possibilities to extract new compounds from text documents using name-to-structure tools and considering the large number of compounds deposited in databases, automated and comprehensive chemical classification methods are needed to avoid the error-prone and time-consuming manual classification of compounds. In the present work we implement principles and methods to construct a chemical ontology of classes that shall support the automated, high-quality compound classification in chemical databases or text documents. While SMARTS expressions have already been used to define chemical structure class concepts, in the present work we have extended the expressive power of such class definitions by expanding their structure-based reasoning logic. Thus, to achieve the required precision and granularity of chemical class definitions, sets of SMARTS class definitions are connected by OR and NOT logical operators. In addition, AND logic has been implemented to allow the concomitant use of flexible atom lists and stereochemistry definitions. The resulting chemical ontology is a multi-hierarchical taxonomy of concept nodes connected by directed, transitive relationships. A proposal for a rule-based definition of chemical classes has been made that allows chemical compound classes to be defined more precisely than before.
The proposed structure-based reasoning logic makes it possible to translate chemistry expert knowledge into a computer-interpretable form, preventing erroneous compound assignments and allowing automatic compound classification. The automated assignment of compounds in databases, compound structure files or text documents to their related ontology classes is possible through the integration with a chemical structure search engine. As an application example, the annotation of chemical structure files with a prototypic ontology is demonstrated.
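The boolean composition of class definitions the abstract describes can be illustrated with a toy sketch: individual structure-pattern matchers (stand-ins for the SMARTS expressions a real cheminformatics toolkit would evaluate) combined with AND / OR / NOT into a class definition. The feature names and molecules are invented:

```python
# Each "pattern" here just tests a precomputed feature set; in the
# real system it would be a SMARTS match against the structure.
def has(feature):
    return lambda mol: feature in mol["features"]

def AND(*preds): return lambda mol: all(p(mol) for p in preds)
def OR(*preds):  return lambda mol: any(p(mol) for p in preds)
def NOT(pred):   return lambda mol: not pred(mol)

# Class definition: "carboxylic acid or ester, but not an amide"
acid_or_ester_not_amide = AND(
    OR(has("carboxylic_acid"), has("ester")),
    NOT(has("amide")),
)

aspirin = {"name": "aspirin",
           "features": {"carboxylic_acid", "ester", "aromatic_ring"}}
acetamide = {"name": "acetamide", "features": {"amide"}}
print(acid_or_ester_not_amide(aspirin))    # True
print(acid_or_ester_not_amide(acetamide))  # False
```

The point of the paper's extension is precisely that single SMARTS patterns cannot express such combined definitions with the needed precision, while the logical composition can.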

  1. Automated compound classification using a chemical ontology

    PubMed Central

    2012-01-01

    Background Classification of chemical compounds into compound classes by using structure derived descriptors is a well-established method to aid the evaluation and abstraction of compound properties in chemical compound databases. MeSH and recently ChEBI are examples of chemical ontologies that provide a hierarchical classification of compounds into general compound classes of biological interest based on their structural as well as property or use features. In these ontologies, compounds have been assigned manually to their respective classes. However, with the ever increasing possibilities to extract new compounds from text documents using name-to-structure tools and considering the large number of compounds deposited in databases, automated and comprehensive chemical classification methods are needed to avoid the error-prone and time-consuming manual classification of compounds. Results In the present work we implement principles and methods to construct a chemical ontology of classes that shall support the automated, high-quality compound classification in chemical databases or text documents. While SMARTS expressions have already been used to define chemical structure class concepts, in the present work we have extended the expressive power of such class definitions by expanding their structure-based reasoning logic. Thus, to achieve the required precision and granularity of chemical class definitions, sets of SMARTS class definitions are connected by OR and NOT logical operators. In addition, AND logic has been implemented to allow the concomitant use of flexible atom lists and stereochemistry definitions. The resulting chemical ontology is a multi-hierarchical taxonomy of concept nodes connected by directed, transitive relationships. Conclusions A proposal for a rule-based definition of chemical classes has been made that allows chemical compound classes to be defined more precisely than before.
The proposed structure-based reasoning logic makes it possible to translate chemistry expert knowledge into a computer-interpretable form, preventing erroneous compound assignments and allowing automatic compound classification. The automated assignment of compounds in databases, compound structure files or text documents to their related ontology classes is possible through the integration with a chemical structure search engine. As an application example, the annotation of chemical structure files with a prototypic ontology is demonstrated. PMID:23273256

  2. TGF-beta signaling proteins and the Protein Ontology.

    PubMed

    Arighi, Cecilia N; Liu, Hongfang; Natale, Darren A; Barker, Winona C; Drabkin, Harold; Blake, Judith A; Smith, Barry; Wu, Cathy H

    2009-05-06

    The Protein Ontology (PRO) is designed as a formal and principled Open Biomedical Ontologies (OBO) Foundry ontology for proteins. The components of PRO extend from a classification of proteins on the basis of evolutionary relationships at the homeomorphic level to the representation of the multiple protein forms of a gene, including those resulting from alternative splicing, cleavage and/or post-translational modifications. Focusing specifically on the TGF-beta signaling proteins, we describe the building, curation, usage and dissemination of PRO. PRO is manually curated on the basis of PrePRO, an automatically generated file with content derived from standard protein data sources. Manual curation ensures that the treatment of the protein classes and the internal and external relationships conform to the PRO framework. The current release of PRO is based upon experimental data from mouse and human proteins wherein equivalent protein forms are represented by single terms. In addition to the PRO ontology, the annotation of PRO terms is released as a separate PRO association file, which contains, for each given PRO term, an annotation from the experimentally characterized sub-types as well as the corresponding database identifiers and sequence coordinates. The annotations are added in the form of relationship to other ontologies. Whenever possible, equivalent forms in other species are listed to facilitate cross-species comparison. Splice and allelic variants, gene fusion products and modified protein forms are all represented as entities in the ontology. Therefore, PRO provides for the representation of protein entities and a resource for describing the associated data. This makes PRO useful both for proteomics studies where isoforms and modified forms must be differentiated, and for studies of biological pathways, where representations need to take account of the different ways in which the cascade of events may depend on specific protein modifications. 
PRO provides a framework for the formal representation of protein classes and protein forms in the OBO Foundry. It is designed to enable data retrieval and integration and machine reasoning at the molecular level of proteins, thereby facilitating cross-species comparisons, pathway analysis, disease modeling and the generation of new hypotheses.

  3. Using Ontologies for the Online Recognition of Activities of Daily Living†

    PubMed Central

    2018-01-01

    The recognition of activities of daily living has been an important research area in recent years. The process of activity recognition aims to recognize the actions of one or more people in a smart environment, in which a set of sensors has been deployed. Usually, all the events produced during each activity are taken into account to develop the classification models. However, the instant in which an activity started is unknown in a real environment. Therefore, only the most recent events are usually used. In this paper, we use statistics to determine the most appropriate length of that interval for each type of activity. In addition, we use ontologies to automatically generate features that serve as the input for the supervised learning algorithms that produce the classification model. The features are formed by combining the entities in the ontology, such as concepts and properties. The results obtained show a significant increase in the accuracy of the classification models generated with respect to the classical approach, in which only the state of the sensors is taken into account. Moreover, the results obtained in a simulation of a real environment under an event-based segmentation also show an improvement in most activities. PMID:29662011
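The feature-generation idea, combining ontology entities such as concepts and properties into classifier inputs, can be sketched as follows. The concept and property names are invented for illustration; the paper derives them from its ontology:

```python
from itertools import product

# Hypothetical ontology entities.
CONCEPTS = ["Kettle", "Fridge", "Tap"]
PROPERTIES = ["isOn", "isOpen"]

# The feature space is the cross product of properties and concepts:
# each (property, concept) pair becomes one boolean feature.
FEATURES = [f"{p}({c})" for p, c in product(PROPERTIES, CONCEPTS)]

def featurize(events):
    """Turn a window of recent (concept, property) sensor events
    into a 0/1 feature vector for the supervised learner."""
    observed = {f"{prop}({concept})" for concept, prop in events}
    return [1 if f in observed else 0 for f in FEATURES]

window = [("Kettle", "isOn"), ("Tap", "isOpen")]
print(featurize(window))  # [1, 0, 0, 0, 0, 1]
```

A vector like this would then be fed to any standard supervised learning algorithm; the gain reported in the abstract comes from the semantic features, not from the choice of learner.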

  4. Interoperable cross-domain semantic and geospatial framework for automatic change detection

    NASA Astrophysics Data System (ADS)

    Kuo, Chiao-Ling; Hong, Jung-Hong

    2016-01-01

    With the increasingly diverse types of geospatial data established over the last few decades, semantic interoperability in integrated applications has attracted much interest in the field of Geographic Information System (GIS). This paper proposes a new strategy and framework to process cross-domain geodata at the semantic level. This framework leverages the semantic equivalence of concepts between domains through a bridge ontology and facilitates the integrated use of different domain data, which has long been considered an essential strength of GIS but is impeded by the lack of understanding of the semantics implicit in the data. We choose the task of change detection to demonstrate how the introduction of ontology concepts can make such integration possible. We analyze the common properties of geodata and change detection factors, then construct rules and summarize possible change scenarios for making final decisions. The use of topographic map data to detect changes in land use shows promising results in terms of efficiency and level of automation. We believe the ontology-oriented approach enables a new way of integrating data across different domains from the perspective of semantic interoperability, and even opens a new dimension for future GIS.
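The bridge-ontology idea can be sketched minimally: once each domain's classes are mapped to shared concepts, a change rule compares parcels at the semantic level rather than by string equality. The mappings and the rule below are invented illustrations, not the paper's actual rule set:

```python
# Hypothetical bridge mappings: domain class -> shared ontology concept.
TOPO_MAP = {"woods": "Forest", "paddy": "Cropland"}
LANDUSE_MAP = {"forest land": "Forest", "agricultural land": "Cropland"}

def detect_change(topo_class, landuse_class):
    """Report a change when the two sources map to different
    shared concepts for the same parcel; 'unknown' when no
    semantic bridge exists for one of the classes."""
    a = TOPO_MAP.get(topo_class)
    b = LANDUSE_MAP.get(landuse_class)
    if a is None or b is None:
        return "unknown"
    return "changed" if a != b else "unchanged"

print(detect_change("woods", "forest land"))        # unchanged
print(detect_change("woods", "agricultural land"))  # changed
```

Note that comparing the raw class labels ("woods" vs "forest land") would always report a change; the bridge concepts are what make the comparison meaningful.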

  5. Federated Access to Cyber Observables for Detection of Targeted Attacks

    DTIC Science & Technology

    2014-10-01

    each manages. The DQNs also utilize an intelligent information extraction capability for automatically suggesting mappings from text found in audit ... Harmelen, and others, “OWL web ontology language overview,” W3C Recomm., vol. 10, no. 2004–03, p. 10, 2004. [4] D. Miller and B. Pearson, Security... [Online]. Available: http://www.disa.mil/Services/Information-Assurance/HBS/HBSS. [21] S. Zanikolas and R. Sakellariou, “A taxonomy of grid

  6. 78 FR 42122 - Bridge Builder Trust and Olive Street Investment Advisers, LLC; Notice of Application

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-07-15

    ... SECURITIES AND EXCHANGE COMMISSION [Investment Company Act Release No. 30592; 812-14118] Bridge...: Bridge Builder Trust (the ``Trust'') and Olive Street Investment Advisers (the ``Adviser'') (collectively... the requested order as a Fund is the Bridge Builder Bond Fund. For purposes of the requested order...

  7. Automating generation of textual class definitions from OWL to English.

    PubMed

    Stevens, Robert; Malone, James; Williams, Sandra; Power, Richard; Third, Allan

    2011-05-17

    Text definitions for entities within bio-ontologies are a cornerstone of the effort to gain a consensus in understanding and usage of those ontologies. Writing these definitions is, however, a considerable effort and there is often a lag between specification of the main part of an ontology (logical descriptions and definitions of entities) and the development of the text-based definitions. The goal of natural language generation (NLG) from ontologies is to take the logical description of entities and generate fluent natural language. The application described here uses NLG to automatically provide text-based definitions from an ontology that has logical descriptions of its entities, so avoiding the bottleneck of authoring these definitions by hand. To produce the descriptions, the program collects all the axioms relating to a given entity, groups them according to common structure, realises each group through an English sentence, and assembles the resulting sentences into a paragraph, to form as 'coherent' a text as possible without human intervention. Sentence generation is accomplished using a generic grammar based on logical patterns in OWL, together with a lexicon for realising atomic entities. We have tested our output for the Experimental Factor Ontology (EFO) using a simple survey strategy to explore the fluency of the generated text and how well it conveys the underlying axiomatisation. Two rounds of survey and improvement show that overall the generated English definitions are found to convey the intended meaning of the axiomatisation in a satisfactory manner. The surveys also suggested that one form of generated English will not be universally liked; that intrusion of too much 'formal ontology' was not liked; and that too much explicit exposure of OWL semantics was also not liked. Our prototype tools can generate reasonable paragraphs of English text that can act as definitions. 
The definitions were found acceptable by our survey and, as a result, the developers of EFO are sufficiently satisfied with the output that the generated definitions have been incorporated into EFO. Whilst not a substitute for hand-written textual definitions, our generated definitions are a useful starting point. An on-line version of the NLG text definition tool can be found at http://swat.open.ac.uk/tools/. The questionnaire and sample generated text definitions may be found at http://mcs.open.ac.uk/nlg/SWAT/bio-ontologies.html.
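The pipeline the abstract outlines (collect the axioms about an entity, group them by common structure, realise each group as a sentence, assemble a paragraph) can be sketched in miniature. The axiom tuples and sentence templates are invented stand-ins for the OWL patterns and grammar the tool actually uses:

```python
# Hypothetical axioms: (axiom type, subject, object).
AXIOMS = [
    ("subClassOf", "cell line", "material entity"),
    ("subClassOf", "cell line", "experimental factor"),
    ("partOf", "cell line", "cell culture"),
]

# One template per axiom type; objects in a group share a sentence.
TEMPLATES = {
    "subClassOf": "A {s} is a kind of {objects}.",
    "partOf": "A {s} is part of {objects}.",
}

def realise(entity, axioms):
    """Group this entity's axioms by type, realise each group as an
    English sentence, and join the sentences into a paragraph."""
    groups = {}
    for kind, subj, obj in axioms:
        if subj == entity:
            groups.setdefault(kind, []).append(obj)
    sentences = [
        TEMPLATES[kind].format(s=entity, objects=" and ".join(objs))
        for kind, objs in groups.items()
    ]
    return " ".join(sentences)

print(realise("cell line", AXIOMS))
```

The grouping step is what keeps the output from reading as one sentence per axiom; aggregating same-type axioms into a single sentence is a standard NLG move the abstract alludes to.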

  8. Querying XML Data with SPARQL

    NASA Astrophysics Data System (ADS)

    Bikakis, Nikos; Gioldasis, Nektarios; Tsinaraki, Chrisa; Christodoulakis, Stavros

    SPARQL is today the standard access language for Semantic Web data. In recent years, XML databases have also acquired industrial importance due to the widespread applicability of XML in the Web. In this paper we present a framework that bridges the heterogeneity gap and creates an interoperable environment where SPARQL queries are used to access XML databases. Our approach assumes that fairly generic mappings between ontology constructs and XML Schema constructs have been automatically derived or manually specified. The mappings are used to automatically translate SPARQL queries to semantically equivalent XQuery queries which are used to access the XML databases. We present the algorithms and the implementation of the SPARQL2XQuery framework, which is used for answering SPARQL queries over XML databases.
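The mapping-driven translation idea can be sketched for one trivial case: given a mapping from ontology properties to XML paths, rewrite a single SPARQL triple pattern into an XQuery FLWOR expression. The mapping table and query shapes are deliberately minimal illustrations; SPARQL2XQuery itself handles full SPARQL:

```python
# Hypothetical mapping: ontology property -> XML element path.
MAPPING = {
    "ex:title": "book/title",
    "ex:author": "book/author",
}

def triple_to_xquery(subject_var, predicate, object_var, doc="catalog.xml"):
    """Translate the triple pattern ?subject predicate ?object into
    an XQuery expression over the mapped XML structure.
    (object_var would bind the result in a full translation; it is
    kept here only to preserve the triple-pattern shape.)"""
    root, leaf = MAPPING[predicate].split("/")
    return (
        f'for ${subject_var} in doc("{doc}")//{root}\n'
        f'return ${subject_var}/{leaf}'
    )

print(triple_to_xquery("b", "ex:title", "t"))
```

Real translation must additionally handle joins between triple patterns, OPTIONAL/UNION, filters, and result-set construction, which is where the framework's algorithms do their work.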

  9. Automatic annotation of protein motif function with Gene Ontology terms.

    PubMed

    Lu, Xinghua; Zhai, Chengxiang; Gopalakrishnan, Vanathi; Buchanan, Bruce G

    2004-09-02

    Conserved protein sequence motifs are short stretches of amino acid sequence patterns that potentially encode the function of proteins. Several sequence pattern searching algorithms and programs exist for identifying candidate protein motifs at the whole genome level. However, a much needed and important task is to determine the functions of the newly identified protein motifs. The Gene Ontology (GO) project is an endeavor to annotate the function of genes or protein sequences with terms from a dynamic, controlled vocabulary and these annotations serve well as a knowledge base. This paper presents methods to mine the GO knowledge base and use the association between the GO terms assigned to a sequence and the motifs matched by the same sequence as evidence for predicting the functions of novel protein motifs automatically. The task of assigning GO terms to protein motifs is viewed as both a binary classification and information retrieval problem, where PROSITE motifs are used as samples for model training and functional prediction. The mutual information of a motif and a GO term association is found to be a very useful feature. We take advantage of the known motifs to train a logistic regression classifier, which allows us to combine mutual information with other frequency-based features and obtain a probability of correct association. The trained logistic regression model has intuitively meaningful and logically plausible parameter values, and performs very well empirically according to our evaluation criteria. In this research, different methods for automatic annotation of protein motifs have been investigated. Empirical results demonstrated that the methods have great potential for detecting and augmenting information about the functions of newly discovered candidate protein motifs.
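The mutual-information feature the abstract highlights measures how informative a motif's presence is about a GO term, estimated from co-occurrence counts over annotated sequences. A self-contained sketch with invented counts (the real feature is computed over the GO/PROSITE knowledge base):

```python
from math import log2

def mutual_information(n_both, n_motif, n_term, n_total):
    """MI between the binary events 'sequence matches the motif'
    and 'sequence is annotated with the GO term', from counts."""
    mi = 0.0
    for motif_hit in (True, False):
        for term_hit in (True, False):
            # Marginal counts for this cell of the 2x2 table.
            n_m = n_motif if motif_hit else n_total - n_motif
            n_t = n_term if term_hit else n_total - n_term
            # Joint count for this cell.
            if motif_hit and term_hit:
                joint = n_both
            elif motif_hit:
                joint = n_motif - n_both
            elif term_hit:
                joint = n_term - n_both
            else:
                joint = n_total - n_motif - n_term + n_both
            if joint == 0:
                continue  # 0 * log(0) contributes nothing
            p_joint = joint / n_total
            mi += p_joint * log2(p_joint / ((n_m / n_total) * (n_t / n_total)))
    return mi

# Strong association: motif and term co-occur in 40 of 100 sequences,
# while each occurs in 50 overall.
print(round(mutual_information(n_both=40, n_motif=50, n_term=50, n_total=100), 3))
```

In the paper this value is one input feature among several; the logistic regression classifier combines it with frequency-based features to output a probability of correct motif-to-term association.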

  10. BiobankConnect: software to rapidly connect data elements for pooled analysis across biobanks using ontological and lexical indexing.

    PubMed

    Pang, Chao; Hendriksen, Dennis; Dijkstra, Martijn; van der Velde, K Joeri; Kuiper, Joel; Hillege, Hans L; Swertz, Morris A

    2015-01-01

    Pooling data across biobanks is necessary to increase statistical power, reveal more subtle associations, and synergize the value of data sources. However, searching for desired data elements among the thousands of available elements and harmonizing differences in terminology, data collection, and structure, is arduous and time consuming. To speed up biobank data pooling we developed BiobankConnect, a system to semi-automatically match desired data elements to available elements by: (1) annotating the desired elements with ontology terms using BioPortal; (2) automatically expanding the query for these elements with synonyms and subclass information using OntoCAT; (3) automatically searching available elements for these expanded terms using Lucene lexical matching; and (4) shortlisting relevant matches sorted by matching score. We evaluated BiobankConnect using human curated matches from EU-BioSHaRE, searching for 32 desired data elements in 7461 available elements from six biobanks. We found 0.75 precision at rank 1 and 0.74 recall at rank 10 compared to a manually curated set of relevant matches. In addition, best matches chosen by BioSHaRE experts ranked first in 63.0% and in the top 10 in 98.4% of cases, indicating that our system has the potential to significantly reduce manual matching work. BiobankConnect provides an easy user interface to significantly speed up the biobank harmonization process. It may also prove useful for other forms of biomedical data integration. All the software can be downloaded as a MOLGENIS open source app from http://www.github.com/molgenis, with a demo available at http://www.biobankconnect.org. © The Author 2014. Published by Oxford University Press on behalf of the American Medical Informatics Association.
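Steps (2)-(4) of the pipeline (expand the query with synonyms, lexically match the expanded terms against available elements, shortlist by score) can be roughed out as follows. BiobankConnect uses BioPortal, OntoCAT and Lucene for these steps; the synonym table and overlap scoring here are simplified stand-ins:

```python
# Hypothetical ontology-derived synonym table.
SYNONYMS = {"hypertension": {"high blood pressure"}}

def expand(query):
    """Step 2: expand the desired element's name with synonyms."""
    terms = {query}
    for word, syns in SYNONYMS.items():
        if word in query:
            terms |= syns
    return terms

def score(query_terms, element):
    """Step 3: crude lexical match, fraction of a query term's
    tokens found in the available element (Lucene stand-in)."""
    tokens = set(element.lower().split())
    return max(len(tokens & set(t.split())) / len(set(t.split()))
               for t in query_terms)

def shortlist(query, elements, k=2):
    """Step 4: rank available elements and keep the top k."""
    terms = expand(query)
    return sorted(elements, key=lambda e: score(terms, e), reverse=True)[:k]

available = ["history of high blood pressure", "systolic pressure", "smoking status"]
print(shortlist("hypertension", available))
```

The payoff of the expansion step is visible even here: without the synonym, "hypertension" has no token overlap with any available element and every candidate scores zero.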

  11. BiobankConnect: software to rapidly connect data elements for pooled analysis across biobanks using ontological and lexical indexing

    PubMed Central

    Pang, Chao; Hendriksen, Dennis; Dijkstra, Martijn; van der Velde, K Joeri; Kuiper, Joel; Hillege, Hans L; Swertz, Morris A

    2015-01-01

    Objective Pooling data across biobanks is necessary to increase statistical power, reveal more subtle associations, and synergize the value of data sources. However, searching for desired data elements among the thousands of available elements and harmonizing differences in terminology, data collection, and structure, is arduous and time consuming. Materials and methods To speed up biobank data pooling we developed BiobankConnect, a system to semi-automatically match desired data elements to available elements by: (1) annotating the desired elements with ontology terms using BioPortal; (2) automatically expanding the query for these elements with synonyms and subclass information using OntoCAT; (3) automatically searching available elements for these expanded terms using Lucene lexical matching; and (4) shortlisting relevant matches sorted by matching score. Results We evaluated BiobankConnect using human curated matches from EU-BioSHaRE, searching for 32 desired data elements in 7461 available elements from six biobanks. We found 0.75 precision at rank 1 and 0.74 recall at rank 10 compared to a manually curated set of relevant matches. In addition, best matches chosen by BioSHaRE experts ranked first in 63.0% and in the top 10 in 98.4% of cases, indicating that our system has the potential to significantly reduce manual matching work. Conclusions BiobankConnect provides an easy user interface to significantly speed up the biobank harmonization process. It may also prove useful for other forms of biomedical data integration. All the software can be downloaded as a MOLGENIS open source app from http://www.github.com/molgenis, with a demo available at http://www.biobankconnect.org. PMID:25361575

  12. A rule-based ontological framework for the classification of molecules

    PubMed Central

    2014-01-01

    Background A variety of key activities within life sciences research involves integrating and intelligently managing large amounts of biochemical information. Semantic technologies provide an intuitive way to organise and sift through these rapidly growing datasets via the design and maintenance of ontology-supported knowledge bases. To this end, OWL—a W3C standard declarative language— has been extensively used in the deployment of biochemical ontologies that can be conveniently organised using the classification facilities of OWL-based tools. One of the most established ontologies for the chemical domain is ChEBI, an open-access dictionary of molecular entities that supplies high quality annotation and taxonomical information for biologically relevant compounds. However, ChEBI is being manually expanded which hinders its potential to grow due to the limited availability of human resources. Results In this work, we describe a prototype that performs automatic classification of chemical compounds. The software we present implements a sound and complete reasoning procedure of a formalism that extends datalog and builds upon an off-the-shelf deductive database system. We capture a wide range of chemical classes that are not expressible with OWL-based formalisms such as cyclic molecules, saturated molecules and alkanes. Furthermore, we describe a surface ‘less-logician-like’ syntax that allows application experts to create ontological descriptions of complex biochemical objects without prior knowledge of logic. In terms of performance, a noticeable improvement is observed in comparison with previous approaches. Our evaluation has discovered subsumptions that are missing from the manually curated ChEBI ontology as well as discrepancies with respect to existing subclass relations. We illustrate thus the potential of an ontology language suitable for the life sciences domain that exhibits a favourable balance between expressive power and practical feasibility. 
Conclusions Our proposed methodology can form the basis of an ontology-mediated application to assist biocurators in the production of complete and error-free taxonomies. Moreover, such a tool could contribute to a more rapid development of the ChEBI ontology and to the efforts of the ChEBI team to make annotated chemical datasets available to the public. From a modelling point of view, our approach could stimulate the adoption of a different and expressive reasoning paradigm based on rules for which state-of-the-art and highly optimised reasoners are available; it could thus pave the way for the representation of a broader spectrum of life sciences and biomedical knowledge. PMID:24735706
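
    A toy illustration of rule-based structural classification in the spirit of this framework (the actual system uses a datalog extension over a deductive database; the atom labels and the union-find cycle test are our invention): a molecule is classified as cyclic when its bond graph contains a ring.

```python
def is_cyclic(atoms, bonds):
    """Classify a molecule as cyclic if its bond graph contains a cycle,
    detected with a union-find pass over the bonds."""
    parent = {a: a for a in atoms}
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path compression
            x = parent[x]
        return x
    for a, b in bonds:
        ra, rb = find(a), find(b)
        if ra == rb:          # bond joins two already-connected atoms -> ring
            return True
        parent[ra] = rb
    return False

# cyclohexane-like ring vs. a linear chain (hypothetical atom labels)
ring = [("C1", "C2"), ("C2", "C3"), ("C3", "C4"), ("C4", "C5"), ("C5", "C6"), ("C6", "C1")]
chain = [("C1", "C2"), ("C2", "C3")]
print(is_cyclic({f"C{i}" for i in range(1, 7)}, ring))   # True
print(is_cyclic({"C1", "C2", "C3"}, chain))              # False
```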

  13. A rule-based ontological framework for the classification of molecules.

    PubMed

    Magka, Despoina; Krötzsch, Markus; Horrocks, Ian

    2014-01-01

    A variety of key activities within life sciences research involves integrating and intelligently managing large amounts of biochemical information. Semantic technologies provide an intuitive way to organise and sift through these rapidly growing datasets via the design and maintenance of ontology-supported knowledge bases. To this end, OWL, a W3C standard declarative language, has been extensively used in the deployment of biochemical ontologies that can be conveniently organised using the classification facilities of OWL-based tools. One of the most established ontologies for the chemical domain is ChEBI, an open-access dictionary of molecular entities that supplies high quality annotation and taxonomical information for biologically relevant compounds. However, ChEBI is being manually expanded, which hinders its potential to grow due to the limited availability of human resources. In this work, we describe a prototype that performs automatic classification of chemical compounds. The software we present implements a sound and complete reasoning procedure of a formalism that extends datalog and builds upon an off-the-shelf deductive database system. We capture a wide range of chemical classes that are not expressible with OWL-based formalisms such as cyclic molecules, saturated molecules and alkanes. Furthermore, we describe a surface 'less-logician-like' syntax that allows application experts to create ontological descriptions of complex biochemical objects without prior knowledge of logic. In terms of performance, a noticeable improvement is observed in comparison with previous approaches. Our evaluation has discovered subsumptions that are missing from the manually curated ChEBI ontology as well as discrepancies with respect to existing subclass relations. We thus illustrate the potential of an ontology language suitable for the life sciences domain that exhibits a favourable balance between expressive power and practical feasibility.
Our proposed methodology can form the basis of an ontology-mediated application to assist biocurators in the production of complete and error-free taxonomies. Moreover, such a tool could contribute to a more rapid development of the ChEBI ontology and to the efforts of the ChEBI team to make annotated chemical datasets available to the public. From a modelling point of view, our approach could stimulate the adoption of a different and expressive reasoning paradigm based on rules for which state-of-the-art and highly optimised reasoners are available; it could thus pave the way for the representation of a broader spectrum of life sciences and biomedical knowledge.

  14. A Therapy System for Post-Traumatic Stress Disorder Using a Virtual Agent and Virtual Storytelling to Reconstruct Traumatic Memories.

    PubMed

    Tielman, Myrthe L; Neerincx, Mark A; Bidarra, Rafael; Kybartas, Ben; Brinkman, Willem-Paul

    2017-08-01

    Although post-traumatic stress disorder (PTSD) is well treatable, many people do not get the desired treatment due to barriers to care (such as stigma and cost). This paper presents a system that bridges this gap by enabling patients to follow therapy at home. A therapist is only involved remotely, to monitor progress and serve as a safety net. With this system, patients can recollect their memories in a digital diary and recreate them in a 3D WorldBuilder. Throughout the therapy, a virtual agent is present to inform and guide patients through the sessions, employing an ontology-based question module for recollecting traumatic memories to further elicit a detailed memory recollection. In a usability study with former PTSD patients (n = 4), these questions were found useful for memory recollection. Moreover, the usability of the whole system was rated positively. This system has the potential to be a valuable addition to the spectrum of PTSD treatments, offering a novel type of home therapy assisted by a virtual agent.

  15. 76 FR 19466 - Masco Builder Cabinet Group Including On-Site Leased Workers From Reserves Network, Reliable...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-04-07

    ... Builder Cabinet Group Including On-Site Leased Workers From Reserves Network, Reliable Staffing, and Third Dimension Waverly, OH; Masco Builder Cabinet Group Including On-Site Leased Workers From Reserves Network... Group including on-site leased workers from Reserves Network, Jackson, Ohio. The workers produce...

  16. 76 FR 13226 - Amended Certification Regarding Eligibility To Apply for Worker Adjustment Assistance

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-03-10

    ... BUILDER CABINET GROUP INCLUDING ON-SITE LEASED WORKERS FROM RESERVES NETWORK AND RELIABLE STAFFING WAVERLY, OHIO. TA-W-71,287B MASCO BUILDER CABINET GROUP INCLUDING ON-SITE LEASED WORKERS FROM RESERVES NETWORK... Builder Cabinet Group, including on-site leased workers from Reserves Network and Reliable Staffing...

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    none,

    This builder took home the Grand Winner prize in the Custom Builder category in the 2014 Housing Innovation Awards for its high performance building science approach. The builder used insulated concrete form blocks to create the insulated crawlspace foundation for its first DOE Zero Energy Ready Home, the first net zero energy new home certified in the state of California.

  18. 46 CFR 388.4 - Criteria for grant of a waiver.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... unduly adversely affect— (i) United States vessel builders; or (ii) The coastwise trade business of any person who employs vessels built in the United States in that business. (2) The determination of “unduly... builders: Whether a potentially affected U.S. vessel builder has a history of construction of similar...

  19. Energy Conservation for the Home Builder: A Course for Residential Builders. Course Outline and Instructional Materials.

    ERIC Educational Resources Information Center

    Koenigshofer, Daniel R.

    Background information, handouts and related instructional materials comprise this manual for conducting a course on energy conservation for home builders. Information presented in the five- and ten-hour course is intended to help residential contractors make appropriate and cost-effective decisions in constructing energy-efficient dwellings.…

  20. An integrative approach for measuring semantic similarities using gene ontology.

    PubMed

    Peng, Jiajie; Li, Hongxiang; Jiang, Qinghua; Wang, Yadong; Chen, Jin

    2014-01-01

    Gene Ontology (GO) provides rich information and a convenient way to study gene functional similarity, which has been successfully used in various applications. However, the existing GO-based similarity measures are limited because each considers only a subset of the information in GO. An appropriate integration of the existing measures, taking more of the information in GO into account, is therefore needed. We propose a novel integrative measure called InteGO2 that automatically selects appropriate seed measures and then integrates them using a metaheuristic search method. The experiment results show that InteGO2 significantly improves the performance of gene similarity in human, Arabidopsis and yeast on both the molecular function and biological process GO categories. InteGO2 computes gene-to-gene similarities more accurately than the tested existing measures and has high robustness. The supplementary document and software are available at http://mlg.hit.edu.cn:8082/.
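
    A hedged sketch of the integration idea: combine the scores of several seed measures for one gene pair. The annotation table and the two seed measures are invented, and plain averaging stands in for InteGO2's metaheuristic search:

```python
def integrate(measures, gene_a, gene_b):
    """Aggregate the scores of the selected seed measures for one gene pair
    (here a simple mean; InteGO2 tunes the combination by metaheuristic search)."""
    scores = [m(gene_a, gene_b) for m in measures]
    return sum(scores) / len(scores)

# two toy seed measures over a made-up GO annotation table
annotations = {"geneA": {"GO:1", "GO:2"}, "geneB": {"GO:2", "GO:3"}}
jaccard = lambda a, b: len(annotations[a] & annotations[b]) / len(annotations[a] | annotations[b])
overlap = lambda a, b: len(annotations[a] & annotations[b]) / min(len(annotations[a]), len(annotations[b]))

print(round(integrate([jaccard, overlap], "geneA", "geneB"), 3))
```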

  1. Ontology-based content analysis of US patent applications from 2001-2010.

    PubMed

    Weber, Lutz; Böhme, Timo; Irmer, Matthias

    2013-01-01

    Ontology-based semantic text analysis methods make it possible to automatically extract knowledge relationships and data from text documents. In this review, we have applied these technologies to the systematic analysis of pharmaceutical patents. Hierarchical concepts from the knowledge domains of chemical compounds, diseases and proteins were used to annotate full-text US patent applications that deal with pharmacological activities of chemical compounds and were filed in the years 2001-2010. Compounds claimed in these applications have been classified into their respective compound classes to review the distribution of scaffold types or general compound classes such as natural products in a time-dependent manner. Similarly, the target proteins and claimed utility of the compounds have been classified and the most relevant were extracted. The method presented allows the discovery of the main areas of innovation as well as emerging fields of patenting activities, providing a broad statistical basis for competitor analysis and decision-making efforts.
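
    The annotation-and-roll-up step can be sketched as follows; the compound names and the two-level hierarchy are made-up placeholders for the real chemical ontology:

```python
# hypothetical child -> parent links in a compound-class hierarchy
parents = {"aspirin": "salicylate", "salicylate": "natural product"}

def annotate(text, lexicon=parents):
    """Find known concepts mentioned in the text, then add all of their
    ancestor classes so counts can be aggregated at any level."""
    found = {c for c in lexicon if c in text.lower()}
    closure = set(found)
    for c in found:
        while c in parents:          # walk up the hierarchy
            c = parents[c]
            closure.add(c)
    return closure

print(sorted(annotate("A method of treating pain with aspirin ...")))
```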

  2. Cell illustrator 4.0: a computational platform for systems biology.

    PubMed

    Nagasaki, Masao; Saito, Ayumu; Jeong, Euna; Li, Chen; Kojima, Kaname; Ikeda, Emi; Miyano, Satoru

    2011-01-01

    Cell Illustrator is a software platform for Systems Biology that uses the concept of Petri net for modeling and simulating biopathways. It is intended for biological scientists working at bench. The latest version of Cell Illustrator 4.0 uses Java Web Start technology and is enhanced with new capabilities, including: automatic graph grid layout algorithms using ontology information; tools using Cell System Markup Language (CSML) 3.0 and Cell System Ontology 3.0; parameter search module; high-performance simulation module; CSML database management system; conversion from CSML model to programming languages (FORTRAN, C, C++, Java, Python and Perl); import from SBML, CellML, and BioPAX; and, export to SVG and HTML. Cell Illustrator employs an extension of hybrid Petri net in an object-oriented style so that biopathway models can include objects such as DNA sequence, molecular density, 3D localization information, transcription with frame-shift, translation with codon table, as well as biochemical reactions.
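
    A minimal discrete Petri-net step function illustrating the modeling concept (Cell Illustrator itself uses an object-oriented hybrid Petri net; the reaction and token counts below are invented):

```python
def fire(marking, transitions):
    """Fire every enabled transition once, in list order.
    A transition is a pair (inputs, outputs) of place -> token-count dicts."""
    m = dict(marking)
    for inputs, outputs in transitions:
        if all(m.get(p, 0) >= n for p, n in inputs.items()):
            for p, n in inputs.items():
                m[p] -= n
            for p, n in outputs.items():
                m[p] = m.get(p, 0) + n
    return m

# toy reaction: enzyme E + substrate S -> E + product P
transitions = [({"E": 1, "S": 1}, {"E": 1, "P": 1})]
m = {"E": 1, "S": 3, "P": 0}
for _ in range(3):
    m = fire(m, transitions)
print(m)   # {'E': 1, 'S': 0, 'P': 3}
```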

  3. Formalizing Evidence Type Definitions for Drug-Drug Interaction Studies to Improve Evidence Base Curation.

    PubMed

    Utecht, Joseph; Brochhausen, Mathias; Judkins, John; Schneider, Jodi; Boyce, Richard D

    2017-01-01

    In this research we aim to demonstrate that an ontology-based system can categorize potential drug-drug interaction (PDDI) evidence items into complex types based on a small set of simple questions. Such a method could increase the transparency and reliability of PDDI evidence evaluation, while also reducing the variations in content and seriousness ratings present in PDDI knowledge bases. We extended the DIDEO ontology with 44 formal evidence type definitions. We then manually annotated the evidence types of 30 evidence items. We tested an RDF/OWL representation of answers to a small number of simple questions about each of these 30 evidence items and showed that automatic inference can determine the detailed evidence types based on this small number of simpler questions. These results show proof-of-concept for a decision support infrastructure that frees the evidence evaluator from mastering relatively complex written evidence type definitions.
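
    The question-to-type inference can be caricatured as a decision rule over yes/no answers; the questions and type names below are invented, whereas the real system encodes 44 formal evidence type definitions in DIDEO and derives types by OWL reasoning:

```python
def classify_evidence(answers):
    """Map answers to a few simple yes/no questions onto a detailed
    evidence type (hypothetical type names, for illustration only)."""
    if answers["clinical_trial"]:
        if answers["drug_drug"]:
            return "clinical DDI trial"
        return "clinical pharmacokinetic trial"
    if answers["in_vitro"]:
        return "in vitro metabolism study"
    return "case report"

print(classify_evidence({"clinical_trial": True, "drug_drug": True, "in_vitro": False}))
```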

  4. CulTO: An Ontology-Based Annotation Tool for Data Curation in Cultural Heritage

    NASA Astrophysics Data System (ADS)

    Garozzo, R.; Murabito, F.; Santagati, C.; Pino, C.; Spampinato, C.

    2017-08-01

    This paper proposes CulTO, a software tool relying on a computational ontology for Cultural Heritage domain modelling, with a specific focus on religious historical buildings, for supporting cultural heritage experts in their investigations. It is specifically intended to support annotation, automatic indexing, classification and curation of photographic data and text documents of historical buildings. CulTO also serves as a useful tool for Historical Building Information Modeling (H-BIM) by enabling semantic 3D data modeling and further enrichment with non-geometrical information of historical buildings through the inclusion of new concepts about historical documents, images, decay or deformation evidence as well as decorative elements into BIM platforms. CulTO is the result of a joint research effort between the Laboratory of Surveying and Architectural Photogrammetry "Luigi Andreozzi" and the PeRCeiVe Lab (Pattern Recognition and Computer Vision Lab) of the University of Catania.

  5. Cell Illustrator 4.0: a computational platform for systems biology.

    PubMed

    Nagasaki, Masao; Saito, Ayumu; Jeong, Euna; Li, Chen; Kojima, Kaname; Ikeda, Emi; Miyano, Satoru

    2010-01-01

    Cell Illustrator is a software platform for Systems Biology that uses the concept of Petri net for modeling and simulating biopathways. It is intended for biological scientists working at bench. The latest version of Cell Illustrator 4.0 uses Java Web Start technology and is enhanced with new capabilities, including: automatic graph grid layout algorithms using ontology information; tools using Cell System Markup Language (CSML) 3.0 and Cell System Ontology 3.0; parameter search module; high-performance simulation module; CSML database management system; conversion from CSML model to programming languages (FORTRAN, C, C++, Java, Python and Perl); import from SBML, CellML, and BioPAX; and, export to SVG and HTML. Cell Illustrator employs an extension of hybrid Petri net in an object-oriented style so that biopathway models can include objects such as DNA sequence, molecular density, 3D localization information, transcription with frame-shift, translation with codon table, as well as biochemical reactions.

  6. Semantic Analysis of Email Using Domain Ontologies and WordNet

    NASA Technical Reports Server (NTRS)

    Berrios, Daniel C.; Keller, Richard M.

    2005-01-01

    The problem of capturing and accessing knowledge in paper form has been supplanted by a problem of providing structure to vast amounts of electronic information. Systems that can construct semantic links for natural language documents like email messages automatically will be a crucial element of semantic email tools. We have designed an information extraction process that can leverage the knowledge already contained in an existing semantic web, recognizing references in email to existing nodes in a network of ontology instances by using linguistic knowledge and knowledge of the structure of the semantic web. We developed a heuristic score that uses several forms of evidence to detect references in email to existing nodes in the Semanticorganizer repository's network. While these scores cannot directly support automated probabilistic inference, they can be used to rank nodes by relevance and link those deemed most relevant to email messages.
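
    A sketch of a multi-evidence heuristic score of this kind; the weights, node fields, and example email are invented, and the paper's actual scoring function differs:

```python
def score(email_text, node, w_name=2.0, w_alias=1.0):
    """Weighted sum of evidence that an email refers to a semantic-network
    node: full-name mentions plus alias mentions (hypothetical weights)."""
    t = email_text.lower()
    s = 0.0
    if node["name"].lower() in t:
        s += w_name
    s += w_alias * sum(1 for a in node["aliases"] if a.lower() in t)
    return s

# made-up nodes and email for illustration
nodes = [
    {"name": "Mars Polar Lander", "aliases": ["MPL"]},
    {"name": "Deep Space 2", "aliases": ["DS2"]},
]
email = "Status update on the Mars Polar Lander (MPL) descent imaging."
ranked = sorted(nodes, key=lambda n: score(email, n), reverse=True)
print(ranked[0]["name"])
```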

  7. Semantic representation of reported measurements in radiology.

    PubMed

    Oberkampf, Heiner; Zillner, Sonja; Overton, James A; Bauer, Bernhard; Cavallaro, Alexander; Uder, Michael; Hammon, Matthias

    2016-01-22

    In radiology, a vast amount of diverse data is generated, and unstructured reporting is standard. Hence, much useful information is trapped in free-text form, and often lost in translation and transmission. One relevant source of free-text data consists of reports covering the assessment of changes in tumor burden, which are needed for the evaluation of cancer treatment success. Any change of lesion size is a critical factor in follow-up examinations. It is difficult to retrieve specific information from unstructured reports and to compare them over time. Therefore, a prototype was implemented that demonstrates the structured representation of findings, allowing selective review in consecutive examinations and thus more efficient comparison over time. We developed a semantic Model for Clinical Information (MCI) based on existing ontologies from the Open Biological and Biomedical Ontologies (OBO) library. MCI is used for the integrated representation of measured image findings and medical knowledge about the normal size of anatomical entities. An integrated view of the radiology findings is realized by a prototype implementation of a ReportViewer. Further, RECIST (Response Evaluation Criteria In Solid Tumors) guidelines are implemented by SPARQL queries on MCI. The evaluation is based on two data sets of German radiology reports: An oncologic data set consisting of 2584 reports on 377 lymphoma patients and a mixed data set consisting of 6007 reports on diverse medical and surgical patients. All measurement findings were automatically classified as abnormal/normal using formalized medical background knowledge, i.e., knowledge that has been encoded into an ontology. A radiologist evaluated 813 classifications as correct or incorrect. All unclassified findings were evaluated as incorrect. The proposed approach allows the automatic classification of findings with an accuracy of 96.4 % for oncologic reports and 92.9 % for mixed reports. 
The ReportViewer permits efficient comparison of measured findings from consecutive examinations. The implementation of RECIST guidelines with SPARQL enhances the quality of the selection and comparison of target lesions as well as the corresponding treatment response evaluation. The developed MCI enables an accurate integrated representation of reported measurements and medical knowledge. Thus, measurements can be automatically classified and integrated in different decision processes. The structured representation is suitable for improved integration of clinical findings during decision-making. The proposed ReportViewer provides a longitudinal overview of the measurements.
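
    The abnormal/normal classification step can be sketched as a lookup against formalized normal-size knowledge; MCI does this with OBO ontologies and SPARQL, and the size bounds below are invented placeholders, not medical reference values:

```python
# hypothetical upper bounds (mm) standing in for ontology-encoded knowledge
NORMAL_MAX_MM = {"lymph node": 10, "spleen": 120}

def classify(entity, size_mm, normal=NORMAL_MAX_MM):
    """Classify a measured finding against the normal-size bound for its
    anatomical entity; findings without a bound stay unclassified."""
    if entity not in normal:
        return "unclassified"
    return "abnormal" if size_mm > normal[entity] else "normal"

print(classify("lymph node", 23))   # abnormal
print(classify("spleen", 95))       # normal
```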

  8. ABodyBuilder: Automated antibody structure prediction with data-driven accuracy estimation

    PubMed Central

    Leem, Jinwoo; Dunbar, James; Georges, Guy; Shi, Jiye; Deane, Charlotte M.

    2016-01-01

    Computational modeling of antibody structures plays a critical role in therapeutic antibody design. Several antibody modeling pipelines exist, but no freely available methods currently model nanobodies, provide estimates of expected model accuracy, or highlight potential issues with the antibody's experimental development. Here, we describe our automated antibody modeling pipeline, ABodyBuilder, designed to overcome these issues. The algorithm itself follows the standard 4 steps of template selection, orientation prediction, complementarity-determining region (CDR) loop modeling, and side chain prediction. ABodyBuilder then annotates the 'confidence' of the model as a probability that a component of the antibody (e.g., CDRL3 loop) will be modeled within a root-mean-square deviation threshold. It also flags structural motifs on the model that are known to cause issues during in vitro development. ABodyBuilder was tested on 4 separate datasets, including the 11 antibodies from the Antibody Modeling Assessment-II competition. ABodyBuilder builds models that are of similar quality to other methodologies, with sub-Angstrom predictions for the 'canonical' CDR loops. Its ability to model nanobodies, and rapidly generate models (∼30 seconds per model) widens its potential usage. ABodyBuilder can also help users in decision-making for the development of novel antibodies because it provides model confidence and potential sequence liabilities. ABodyBuilder is freely available at http://opig.stats.ox.ac.uk/webapps/abodybuilder. PMID:27392298

  9. 46 CFR 308.409 - Standard form of War Risk Builder's Risk Insurance Policy, Form MA-283.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 46 Shipping 8 2011-10-01 2011-10-01 false Standard form of War Risk Builder's Risk Insurance Policy, Form MA-283. 308.409 Section 308.409 Shipping MARITIME ADMINISTRATION, DEPARTMENT OF TRANSPORTATION EMERGENCY OPERATIONS WAR RISK INSURANCE War Risk Builder's Risk Insurance § 308.409 Standard form...

  10. 46 CFR 308.409 - Standard form of War Risk Builder's Risk Insurance Policy, Form MA-283.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 46 Shipping 8 2013-10-01 2013-10-01 false Standard form of War Risk Builder's Risk Insurance Policy, Form MA-283. 308.409 Section 308.409 Shipping MARITIME ADMINISTRATION, DEPARTMENT OF TRANSPORTATION EMERGENCY OPERATIONS WAR RISK INSURANCE War Risk Builder's Risk Insurance § 308.409 Standard form...

  11. 46 CFR 308.409 - Standard form of War Risk Builder's Risk Insurance Policy, Form MA-283.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 46 Shipping 8 2010-10-01 2010-10-01 false Standard form of War Risk Builder's Risk Insurance Policy, Form MA-283. 308.409 Section 308.409 Shipping MARITIME ADMINISTRATION, DEPARTMENT OF TRANSPORTATION EMERGENCY OPERATIONS WAR RISK INSURANCE War Risk Builder's Risk Insurance § 308.409 Standard form...

  12. 46 CFR 308.409 - Standard form of War Risk Builder's Risk Insurance Policy, Form MA-283.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 46 Shipping 8 2012-10-01 2012-10-01 false Standard form of War Risk Builder's Risk Insurance Policy, Form MA-283. 308.409 Section 308.409 Shipping MARITIME ADMINISTRATION, DEPARTMENT OF TRANSPORTATION EMERGENCY OPERATIONS WAR RISK INSURANCE War Risk Builder's Risk Insurance § 308.409 Standard form...

  13. 46 CFR 308.409 - Standard form of War Risk Builder's Risk Insurance Policy, Form MA-283.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 46 Shipping 8 2014-10-01 2014-10-01 false Standard form of War Risk Builder's Risk Insurance Policy, Form MA-283. 308.409 Section 308.409 Shipping MARITIME ADMINISTRATION, DEPARTMENT OF TRANSPORTATION EMERGENCY OPERATIONS WAR RISK INSURANCE War Risk Builder's Risk Insurance § 308.409 Standard form...

  14. On combining image-based and ontological semantic dissimilarities for medical image retrieval applications

    PubMed Central

    Kurtz, Camille; Depeursinge, Adrien; Napel, Sandy; Beaulieu, Christopher F.; Rubin, Daniel L.

    2014-01-01

    Computer-assisted image retrieval applications can assist radiologists by identifying similar images in archives as a means to providing decision support. In the classical case, images are described using low-level features extracted from their contents, and an appropriate distance is used to find the best matches in the feature space. However, using low-level image features to fully capture the visual appearance of diseases is challenging and the semantic gap between these features and the high-level visual concepts in radiology may impair the system performance. To deal with this issue, the use of semantic terms to provide high-level descriptions of radiological image contents has recently been advocated. Nevertheless, most of the existing semantic image retrieval strategies are limited by two factors: they require manual annotation of the images using semantic terms and they ignore the intrinsic visual and semantic relationships between these annotations during the comparison of the images. Based on these considerations, we propose an image retrieval framework based on semantic features that relies on two main strategies: (1) automatic “soft” prediction of ontological terms that describe the image contents from multi-scale Riesz wavelets and (2) retrieval of similar images by evaluating the similarity between their annotations using a new term dissimilarity measure, which takes into account both image-based and ontological term relations. The combination of these strategies provides a means of accurately retrieving similar images in databases based on image annotations and can be considered as a potential solution to the semantic gap problem. We validated this approach in the context of the retrieval of liver lesions from computed tomographic (CT) images and annotated with semantic terms of the RadLex ontology. 
The relevance of the retrieval results was assessed using two protocols: evaluation relative to a dissimilarity reference standard defined for pairs of images on a 25-image dataset, and evaluation relative to the diagnoses of the retrieved images on a 72-image dataset. A normalized discounted cumulative gain (NDCG) score of more than 0.92 was obtained with the first protocol, while AUC scores of more than 0.77 were obtained with the second protocol. This automatic approach could provide real-time decision support to radiologists by showing them similar images with associated diagnoses and, where available, responses to therapies. PMID:25036769
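
    A sketch of a term dissimilarity mixing an image-based component with an ontological one, echoing the combination idea above; the terms, toy hierarchy, and equal weighting are assumptions:

```python
# made-up two-level term hierarchy (child -> parent)
parents = {"hyperenhancing lesion": "lesion", "hypoenhancing lesion": "lesion"}

def onto_dist(a, b):
    """0 for identical terms, 1 for siblings in the toy hierarchy, else 2."""
    if a == b:
        return 0.0
    return 1.0 if parents.get(a) == parents.get(b) else 2.0

def combined_dissim(a, b, image_dist, alpha=0.5):
    """Convex combination of an image-based distance and the ontological
    distance between two annotation terms (alpha is an assumed weight)."""
    return alpha * image_dist + (1 - alpha) * onto_dist(a, b)

d = combined_dissim("hyperenhancing lesion", "hypoenhancing lesion", image_dist=0.4)
print(d)   # 0.7
```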

  15. 77 FR 49003 - Notice of Proposed Information Collection for Public Comment: Request for Acceptance of Changes...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-08-15

    ... changes may affect the value shown on the HUD commitment. HUD requires the builder to use form HUD-92577... changes may affect the value shown on the HUD commitment. HUD requires the builder to use form HUD-92577... specifications for proposed construction properties as required by homebuyers or determined by the builder use...

  16. Transitioning to High Performance Homes: Successes and Lessons Learned From Seven Builders

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Widder, Sarah H.; Kora, Angela R.; Baechler, Michael C.

    2013-03-01

    As homebuyers are becoming increasingly concerned about rising energy costs and the impact of fossil fuels as a major source of greenhouse gases, the returning new home market is beginning to demand energy-efficient and comfortable high-performance homes. In response to this, some innovative builders are gaining market share because they are able to market their homes’ comfort, better indoor air quality, and aesthetics, in addition to energy efficiency. The success and marketability of these high-performance homes is creating a builder demand for house plans and information about how to design, build, and sell their own low-energy homes. To help make these and other builders more successful in the transition to high-performance construction techniques, Pacific Northwest National Laboratory (PNNL) partnered with seven interested builders in the hot humid and mixed humid climates to provide technical and design assistance through two building science firms, Florida Home Energy and Resources Organization (FL HERO) and Calcs-Plus, and a designer that offers a line of stock plans designed specifically for energy efficiency, called Energy Smart Home Plans (ESHP). This report summarizes the findings of research on cost-effective high-performance whole-house solutions, focusing on real-world implementation and challenges and identifying effective solutions. The ensuing sections provide project background; profile each of the builders who participated in the program; describe their houses’ construction characteristics and key challenges the builders encountered during the construction and transaction process; and present primary lessons learned to be applied to future projects. As a result of this technical assistance, 17 homes have been built featuring climate-appropriate efficient envelopes, ducts in conditioned space, and correctly sized and controlled heating, ventilation, and air-conditioning systems.
In addition, most builders intend to integrate high-performance features into most or all their homes in the future. As these seven builders have demonstrated, affordable, high-performance homes are possible, but require attention to detail and flexibility in design to accommodate specific regional geographic or market-driven constraints that can increase cost. With better information regarding how energy-efficiency trade-offs or design choices affect overall home performance, builders can make informed decisions regarding home design and construction to minimize cost without sacrificing performance and energy savings.

  17. Kinase Pathway Database: An Integrated Protein-Kinase and NLP-Based Protein-Interaction Resource

    PubMed Central

    Koike, Asako; Kobayashi, Yoshiyuki; Takagi, Toshihisa

    2003-01-01

    Protein kinases play a crucial role in the regulation of cellular functions. Various kinds of information about these molecules are important for understanding signaling pathways and organism characteristics. We have developed the Kinase Pathway Database, an integrated database involving major completely sequenced eukaryotes. It contains the classification of protein kinases and their functional conservation, ortholog tables among species, protein–protein, protein–gene, and protein–compound interaction data, domain information, and structural information. It also provides an automatic pathway graphic image interface. The protein, gene, and compound interactions are automatically extracted from abstracts for all genes and proteins by natural-language processing (NLP). The method of automatic extraction uses phrase patterns and the GENA protein, gene, and compound name dictionary, which was developed by our group. With this database, pathways are easily compared among species using data with more than 47,000 protein interactions and protein kinase ortholog tables. The database is available for querying and browsing at http://kinasedb.ontology.ims.u-tokyo.ac.jp/. PMID:12799355
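
    Dictionary-plus-pattern extraction of this kind can be sketched with a regular expression; the pattern and protein names are invented stand-ins for the GENA dictionary and the paper's phrase patterns:

```python
import re

DICTIONARY = {"MEK1", "ERK2", "RAF1"}   # stand-in for the GENA name dictionary
PATTERN = re.compile(r"(\w+)\s+(?:phosphorylates|activates|binds)\s+(\w+)")

def extract(sentence):
    """Return (agent, target) interaction pairs where both names occur in
    the dictionary, mimicking pattern-plus-dictionary NLP extraction."""
    return [(a, b) for a, b in PATTERN.findall(sentence)
            if a in DICTIONARY and b in DICTIONARY]

print(extract("MEK1 phosphorylates ERK2 in stimulated cells."))
```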

  18. Building the Knowledge Base to Support the Automatic Animation Generation of Chinese Traditional Architecture

    NASA Astrophysics Data System (ADS)

    Wei, Gongjin; Bai, Weijing; Yin, Meifang; Zhang, Songmao

    We present a practice of applying the Semantic Web technologies in the domain of Chinese traditional architecture. A knowledge base consisting of one ontology and four rule bases is built to support the automatic generation of animations that demonstrate the construction of various Chinese timber structures based on the user's input. Different Semantic Web formalisms are used, e.g., OWL DL, SWRL and Jess, to capture the domain knowledge, including the wooden components needed for a given building, construction sequence, and the 3D size and position of every piece of wood. Our experience in exploiting the current Semantic Web technologies in real-world application systems indicates their prominent advantages (such as the reasoning facilities and modeling tools) as well as the limitations (such as low efficiency).

  19. DOE Zero Energy Ready Home Case Study: Sterling Brook Custom Homes — Village Park Eco Home, Double Park, TX

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    This builder won a Custom Builder honor in the 2014 Housing Innovation Awards for this showcase home that serves as an energy-efficient model home for the custom home builder: 1,300 visitors toured the home, thousands more learned about the home’s advanced construction via the webpage, YouTube, Twitter, Facebook, Instagram, and Pinterest.

  20. Cost and time study for constructing raised wood floor systems in the Gulf Coast Region of the United States

    Treesearch

    Marie Del Bianco; David B. McKeever; Lance Barta

    2012-01-01

    This report is the result of a co-operative effort between the USDA Forest Service, Forest Products Laboratory (FPL) Advanced Housing Research Center, the National Association of Home Builders (NAHB) Research Center, and builder members of the Metropolitan Mobile and Baldwin County Home Builders Associations. The study was undertaken to further knowledge that will...

  1. Transforming Junior Leader Development: Developing the Next Generation of Pentathletes

    DTIC Science & Technology

    2007-12-14

    ...and include the strategic leader competency of world-class warrior. Army Leaders serve as a role model, team builder, warrior, influencer... builder; 4. Upholds the highest standards of duty, honor, integrity, and character. Leadership and learning are indispensable to each other... national goals; c. Innovative thinkers, self-aware, culturally astute, diplomatically aware, and cohesive team builders; d. Torchbearers of the highest

  2. PANTHER: a browsable database of gene products organized by biological function, using curated protein family and subfamily classification.

    PubMed

    Thomas, Paul D; Kejariwal, Anish; Campbell, Michael J; Mi, Huaiyu; Diemer, Karen; Guo, Nan; Ladunga, Istvan; Ulitsky-Lazareva, Betty; Muruganujan, Anushya; Rabkin, Steven; Vandergriff, Jody A; Doremieux, Olivier

    2003-01-01

    The PANTHER database was designed for high-throughput analysis of protein sequences. One of the key features is a simplified ontology of protein function, which allows browsing of the database by biological functions. Biologist curators have associated the ontology terms with groups of protein sequences rather than individual sequences. Statistical models (Hidden Markov Models, or HMMs) are built from each of these groups. The advantage of this approach is that new sequences can be automatically classified as they become available. To ensure accurate functional classification, HMMs are constructed not only for families, but also for functionally distinct subfamilies. Multiple sequence alignments and phylogenetic trees, including curator-assigned information, are available for each family. The current version of the PANTHER database includes training sequences from all organisms in the GenBank non-redundant protein database, and the HMMs have been used to classify gene products across the entire genomes of human and Drosophila melanogaster. The ontology terms and protein families and subfamilies, as well as Drosophila gene classifications, can be browsed and searched for free. Due to outstanding contractual obligations, access to human gene classifications and to protein family trees and multiple sequence alignments will temporarily require a nominal registration fee. PANTHER is publicly available on the web at http://panther.celera.com.

  3. Resident Space Object Characterization and Behavior Understanding via Machine Learning and Ontology-based Bayesian Networks

    NASA Astrophysics Data System (ADS)

    Furfaro, R.; Linares, R.; Gaylor, D.; Jah, M.; Walls, R.

    2016-09-01

    In this paper, we present an end-to-end approach that employs machine learning techniques and Ontology-based Bayesian Networks (BN) to characterize the behavior of resident space objects. State-of-the-art machine learning architectures (e.g. Extreme Learning Machines, Convolutional Deep Networks) are trained on physical models to learn the Resident Space Object (RSO) features in the vectorized energy and momentum states and parameters. The mapping from measurements to vectorized energy and momentum states and parameters enables behavior characterization via clustering in the feature space and subsequent RSO classification. Additionally, Space Object Behavioral Ontologies (SOBO) are employed to define and capture the domain knowledge-base (KB) and BNs are constructed from the SOBO in a semi-automatic fashion to execute probabilistic reasoning over conclusions drawn from trained classifiers and/or directly from processed data. Such an approach enables integrating machine learning classifiers and probabilistic reasoning to support higher-level decision making for space domain awareness applications. The innovation here is to use these methods (which have enjoyed great success in other domains) in synergy to enable a "from data to discovery" paradigm by facilitating the linkage and fusion of large and disparate sources of information via a Big Data Science and Analytics framework.

  4. Mining and Utilizing Dataset Relevancy from Oceanographic Dataset (MUDROD) Metadata, Usage Metrics, and User Feedback to Improve Data Discovery and Access

    NASA Astrophysics Data System (ADS)

    Li, Y.; Jiang, Y.; Yang, C. P.; Armstrong, E. M.; Huang, T.; Moroni, D. F.; McGibbney, L. J.

    2016-12-01

    Big oceanographic data have been produced, archived and made available online, but finding the right data for scientific research and application development is still a significant challenge. A long-standing problem in data discovery is how to find the interrelationships between keywords and data, as well as the intrarelationships within each. Most previous research attempted to solve this problem by building domain-specific ontology either manually or through automatic machine learning techniques. The former is costly, labor intensive and hard to keep up-to-date, while the latter is prone to noise and may be difficult for humans to understand. Large-scale user behavior data modelling represents a largely untapped, unique, and valuable source for discovering semantic relationships among domain-specific vocabulary. In this article, we propose a search engine framework for mining and utilizing dataset relevancy from oceanographic dataset metadata, user behaviors, and existing ontology. The objective is to improve discovery accuracy of oceanographic data and reduce the time scientists need to discover, download and reformat data for their projects. Experiments and a search example show that the proposed search engine helps both scientists and general users search with better ranking results, recommendation, and ontology navigation.

  5. The language of gene ontology: a Zipf's law analysis.

    PubMed

    Kalankesh, Leila Ranandeh; Stevens, Robert; Brass, Andy

    2012-06-07

    Most major genome projects and sequence databases provide a GO annotation of their data, either automatically or through human annotators, creating a large corpus of data written in the language of GO. Texts written in natural language show a statistical power law behaviour, Zipf's law, the exponent of which can provide useful information on the nature of the language being used. We have therefore explored the hypothesis that collections of GO annotations will show similar statistical behaviours to natural language. Annotations from the Gene Ontology Annotation project were found to follow Zipf's law. Surprisingly, the measured power law exponents were consistently different between annotation captured using the three GO sub-ontologies in the corpora (function, process and component). On filtering the corpora using GO evidence codes we found that the value of the measured power law exponent responded in a predictable way as a function of the evidence codes used to support the annotation. Techniques from computational linguistics can provide new insights into the annotation process. GO annotations show similar statistical behaviours to those seen in natural language with measured exponents that provide a signal which correlates with the nature of the evidence codes used to support the annotations, suggesting that the measured exponent might provide a signal regarding the information content of the annotation.
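The rank-frequency analysis described above can be sketched as a least-squares fit in log-log space: for a Zipfian distribution f(r) ~ r^(-s), log(frequency) is linear in log(rank) with slope -s. The toy corpus below is invented for illustration and is not the paper's GO annotation data.

```python
import math
from collections import Counter

def zipf_exponent(tokens):
    """Estimate the Zipf exponent s from a token list.

    Sort term frequencies descending, then fit log(frequency) against
    log(rank) by ordinary least squares; the exponent is minus the slope.
    """
    freqs = sorted(Counter(tokens).values(), reverse=True)
    xs = [math.log(r) for r in range(1, len(freqs) + 1)]
    ys = [math.log(f) for f in freqs]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return -slope

# Toy "annotation corpus" with frequencies roughly proportional to 1/rank
corpus = ["binding"] * 100 + ["kinase"] * 50 + ["membrane"] * 33 + ["transport"] * 25
print(round(zipf_exponent(corpus), 2))  # → 1.0
```

Comparing the exponent measured on corpora filtered by evidence code, as the paper does, only requires running this fit on each filtered corpus separately.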

  6. EuroPhenome: a repository for high-throughput mouse phenotyping data

    PubMed Central

    Morgan, Hugh; Beck, Tim; Blake, Andrew; Gates, Hilary; Adams, Niels; Debouzy, Guillaume; Leblanc, Sophie; Lengger, Christoph; Maier, Holger; Melvin, David; Meziane, Hamid; Richardson, Dave; Wells, Sara; White, Jacqui; Wood, Joe; de Angelis, Martin Hrabé; Brown, Steve D. M.; Hancock, John M.; Mallon, Ann-Marie

    2010-01-01

    The broad aim of biomedical science in the postgenomic era is to link genomic and phenotype information to allow deeper understanding of the processes leading from genomic changes to altered phenotype and disease. The EuroPhenome project (http://www.EuroPhenome.org) is a comprehensive resource for raw and annotated high-throughput phenotyping data arising from projects such as EUMODIC. EUMODIC is gathering data from the EMPReSSslim pipeline (http://www.empress.har.mrc.ac.uk/) which is performed on inbred mouse strains and knock-out lines arising from the EUCOMM project. The EuroPhenome interface allows the user to access the data via the phenotype or genotype. It also allows the user to access the data in a variety of ways, including graphical display, statistical analysis and access to the raw data via web services. The raw phenotyping data captured in EuroPhenome is annotated by an annotation pipeline which automatically identifies statistically different mutants from the appropriate baseline and assigns ontology terms for that specific test. Mutant phenotypes can be quickly identified using two EuroPhenome tools: PhenoMap, a graphical representation of statistically relevant phenotypes, and mining for a mutant using ontology terms. To assist with data definition and cross-database comparisons, phenotype data is annotated using combinations of terms from biological ontologies. PMID:19933761

  7. Automating generation of textual class definitions from OWL to English

    PubMed Central

    2011-01-01

    Background Text definitions for entities within bio-ontologies are a cornerstone of the effort to gain a consensus in understanding and usage of those ontologies. Writing these definitions is, however, a considerable effort and there is often a lag between specification of the main part of an ontology (logical descriptions and definitions of entities) and the development of the text-based definitions. The goal of natural language generation (NLG) from ontologies is to take the logical description of entities and generate fluent natural language. The application described here uses NLG to automatically provide text-based definitions from an ontology that has logical descriptions of its entities, so avoiding the bottleneck of authoring these definitions by hand. Results To produce the descriptions, the program collects all the axioms relating to a given entity, groups them according to common structure, realises each group through an English sentence, and assembles the resulting sentences into a paragraph, to form as ‘coherent’ a text as possible without human intervention. Sentence generation is accomplished using a generic grammar based on logical patterns in OWL, together with a lexicon for realising atomic entities. We have tested our output for the Experimental Factor Ontology (EFO) using a simple survey strategy to explore the fluency of the generated text and how well it conveys the underlying axiomatisation. Two rounds of survey and improvement show that overall the generated English definitions are found to convey the intended meaning of the axiomatisation in a satisfactory manner. The surveys also suggested that one form of generated English will not be universally liked; that intrusion of too much ‘formal ontology’ was not liked; and that too much explicit exposure of OWL semantics was also not liked. Conclusions Our prototype tools can generate reasonable paragraphs of English text that can act as definitions. 
The definitions were found acceptable by our survey and, as a result, the developers of EFO are sufficiently satisfied with the output that the generated definitions have been incorporated into EFO. Whilst not a substitute for hand-written textual definitions, our generated definitions are a useful starting point. Availability An on-line version of the NLG text definition tool can be found at http://swat.open.ac.uk/tools/. The questionnaire and sample generated text definitions may be found at http://mcs.open.ac.uk/nlg/SWAT/bio-ontologies.html. PMID:21624160
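The group-then-realise strategy described above can be sketched as follows. The axiom triples, relation names, and sentence templates here are invented for illustration; a real implementation would read OWL axioms via a parser and realise them with a proper grammar and lexicon.

```python
from collections import defaultdict

# Toy axioms about one entity, in (subject, relation, object) form.
AXIOMS = [
    ("cell line", "subClassOf", "material entity"),
    ("cell line", "derives_from", "cell"),
    ("cell line", "derives_from", "organism"),
]

# One English template per logical pattern (hypothetical wording).
TEMPLATES = {
    "subClassOf": "A {s} is a kind of {o}.",
    "derives_from": "A {s} derives from {objects}.",
}

def realise(entity, axioms):
    """Group axioms about one entity by relation, realise each group as a
    sentence, and assemble the sentences into a paragraph."""
    groups = defaultdict(list)
    for s, rel, o in axioms:
        if s == entity:
            groups[rel].append(o)
    sentences = []
    for rel, objects in groups.items():
        joined = " and ".join(objects)  # aggregate a group into one sentence
        sentences.append(TEMPLATES[rel].format(s=entity, o=objects[0], objects=joined))
    return " ".join(sentences)

print(realise("cell line", AXIOMS))
# → A cell line is a kind of material entity. A cell line derives from cell and organism.
```

Grouping before realisation is what keeps the output readable: three axioms collapse into two sentences instead of three repetitive ones.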

  8. COMICS: Cartoon Visualization of Omics Data in Spatial Context Using Anatomical Ontologies

    PubMed Central

    2017-01-01

    COMICS is an interactive and open-access web platform for integration and visualization of molecular expression data in anatomograms of zebrafish, carp, and mouse model systems. Anatomical ontologies are used to map omics data across experiments and between an experiment and a particular visualization in a data-dependent manner. COMICS is built on top of several existing resources. Zebrafish and mouse anatomical ontologies with their controlled vocabulary (CV) and defined hierarchy are used with the ontoCAT R package to aggregate data for comparison and visualization. Libraries from the QGIS geographical information system are used with the R packages “maps” and “maptools” to visualize and interact with molecular expression data in anatomical drawings of the model systems. COMICS allows users to upload their own data from omics experiments, using any gene or protein nomenclature they wish, as long as CV terms are used to define anatomical regions or developmental stages. Common nomenclatures such as ZFIN gene names and UniProt accessions receive additional support. COMICS can be used to generate publication-quality visualizations of gene and protein expression across experiments. Unlike previous tools that have used anatomical ontologies to interpret imaging data in several animal models, including zebrafish, COMICS is designed to take spatially resolved data generated by dissection or fractionation and display this data in visually clear anatomical representations rather than large data tables. COMICS is optimized for ease-of-use, with a minimalistic web interface and automatic selection of the appropriate visual representation depending on the input data. PMID:29083911

  9. COMICS: Cartoon Visualization of Omics Data in Spatial Context Using Anatomical Ontologies.

    PubMed

    Travin, Dmitrii; Popov, Iaroslav; Guler, Arzu Tugce; Medvedev, Dmitry; van der Plas-Duivesteijn, Suzanne; Varela, Monica; Kolder, Iris C R M; Meijer, Annemarie H; Spaink, Herman P; Palmblad, Magnus

    2018-01-05

    COMICS is an interactive and open-access web platform for integration and visualization of molecular expression data in anatomograms of zebrafish, carp, and mouse model systems. Anatomical ontologies are used to map omics data across experiments and between an experiment and a particular visualization in a data-dependent manner. COMICS is built on top of several existing resources. Zebrafish and mouse anatomical ontologies with their controlled vocabulary (CV) and defined hierarchy are used with the ontoCAT R package to aggregate data for comparison and visualization. Libraries from the QGIS geographical information system are used with the R packages "maps" and "maptools" to visualize and interact with molecular expression data in anatomical drawings of the model systems. COMICS allows users to upload their own data from omics experiments, using any gene or protein nomenclature they wish, as long as CV terms are used to define anatomical regions or developmental stages. Common nomenclatures such as ZFIN gene names and UniProt accessions receive additional support. COMICS can be used to generate publication-quality visualizations of gene and protein expression across experiments. Unlike previous tools that have used anatomical ontologies to interpret imaging data in several animal models, including zebrafish, COMICS is designed to take spatially resolved data generated by dissection or fractionation and display this data in visually clear anatomical representations rather than large data tables. COMICS is optimized for ease-of-use, with a minimalistic web interface and automatic selection of the appropriate visual representation depending on the input data.

  10. Semantics based approach for analyzing disease-target associations.

    PubMed

    Kaalia, Rama; Ghosh, Indira

    2016-08-01

    A complex disease is caused by heterogeneous biological interactions between genes and their products along with the influence of environmental factors. There have been many attempts for understanding the cause of these diseases using experimental, statistical and computational methods. In the present work the objective is to address the challenge of representation and integration of information from heterogeneous biomedical aspects of a complex disease using a semantics-based approach. Semantic web technology is used to design Disease Association Ontology (DAO-db) for representation and integration of disease associated information with diabetes as the case study. The functional associations of disease genes are integrated using RDF graphs of DAO-db. Three semantic web based scoring algorithms (PageRank, HITS (Hyperlink Induced Topic Search) and HITS with semantic weights) are used to score the gene nodes on the basis of their functional interactions in the graph. Disease Association Ontology for Diabetes (DAO-db) provides a standard ontology-driven platform for describing genes, proteins, pathways involved in diabetes and for integrating functional associations from various interaction levels (gene-disease, gene-pathway, gene-function, gene-cellular component and protein-protein interactions). An automatic instance loader module, developed in the present work, helps in adding instances to DAO-db on a large scale. Our ontology provides a framework for querying and analyzing the disease associated information in the form of RDF graphs. The above developed methodology is used to predict novel potential targets involved in diabetes disease from the long list of loose (statistically associated) gene-disease associations. Copyright © 2016 Elsevier Inc. All rights reserved.
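The graph-scoring step described above can be illustrated with a plain PageRank over a toy association graph. The node names and edges below are invented; the actual system scores gene nodes in RDF graphs built from DAO-db and also uses HITS variants, which are not modelled here.

```python
def pagerank(edges, damping=0.85, iters=50):
    """Iterative PageRank over a directed edge list [(src, dst), ...]."""
    nodes = {n for e in edges for n in e}
    out = {n: [d for s, d in edges if s == n] for n in nodes}
    rank = {n: 1.0 / len(nodes) for n in nodes}
    for _ in range(iters):
        new = {n: (1 - damping) / len(nodes) for n in nodes}
        for n in nodes:
            targets = out[n] or list(nodes)  # dangling nodes spread rank evenly
            share = damping * rank[n] / len(targets)
            for t in targets:
                new[t] += share
        rank = new
    return rank

# Hypothetical gene/pathway association edges, for illustration only.
edges = [("TCF7L2", "Wnt signalling"), ("INS", "insulin secretion"),
         ("IRS1", "insulin secretion"), ("insulin secretion", "INS")]
scores = pagerank(edges)
print(max(scores, key=scores.get))  # → insulin secretion
```

A node with many in-links from well-ranked nodes accumulates the highest score, which is the intuition behind ranking disease-gene candidates by their position in the functional interaction graph.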

  11. Defining datasets and creating data dictionaries for quality improvement and research in chronic disease using routinely collected data: an ontology-driven approach.

    PubMed

    de Lusignan, Simon; Liaw, Siaw-Teng; Michalakidis, Georgios; Jones, Simon

    2011-01-01

    The burden of chronic disease is increasing, and research and quality improvement will be less effective if case finding strategies are suboptimal. To describe an ontology-driven approach to case finding in chronic disease and how this approach can be used to create a data dictionary and make the codes used in case finding transparent. A five-step process: (1) identifying a reference coding system or terminology; (2) using an ontology-driven approach to identify cases; (3) developing metadata that can be used to identify the extracted data; (4) mapping the extracted data to the reference terminology; and (5) creating the data dictionary. Hypertension is presented as an exemplar. A patient with hypertension can be represented by a range of codes including diagnostic, history and administrative. Metadata can link the coding system and data extraction queries to the correct data mapping and translation tool, which then maps it to the equivalent code in the reference terminology. The code extracted, the term, its domain and subdomain, and the name of the data extraction query can then be automatically grouped and published online as a readily searchable data dictionary. An exemplar is available online at www.clininf.eu/qickd-data-dictionary.html. Adopting an ontology-driven approach to case finding could improve the quality of disease registers and of research based on routine data. It would offer considerable advantages over using limited datasets to define cases. This approach should be considered by those involved in research and quality improvement projects which utilise routine data.
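Steps 3 to 5 of the process described above can be sketched as a small mapping exercise: metadata names the query, local codes are mapped to the reference terminology, and the results are emitted as data-dictionary rows. The local codes, reference code, and term below are illustrative placeholders, not the actual mappings from the exemplar dictionary.

```python
# Hypothetical reference terminology and local-to-reference code map.
REFERENCE = {"38341003": "Hypertensive disorder"}
LOCAL_TO_REFERENCE = {"G20..": "38341003", "662O.": "38341003"}

def data_dictionary_rows(query_name, domain, local_codes):
    """Build searchable data-dictionary rows: each extracted local code is
    tagged with its query metadata and mapped to the reference terminology."""
    rows = []
    for code in local_codes:
        ref = LOCAL_TO_REFERENCE.get(code)
        rows.append({
            "query": query_name,
            "domain": domain,
            "local_code": code,
            "reference_code": ref,
            "reference_term": REFERENCE.get(ref, "unmapped"),
        })
    return rows

rows = data_dictionary_rows("hypertension_case_finding", "diagnosis", ["G20..", "662O."])
print(rows[0]["reference_term"])  # → Hypertensive disorder
```

Publishing such rows makes the case-finding codes transparent: anyone can see which local codes fed a register and which reference concept each one was mapped to.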

  12. KneeTex: an ontology-driven system for information extraction from MRI reports.

    PubMed

    Spasić, Irena; Zhao, Bo; Jones, Christopher B; Button, Kate

    2015-01-01

    In the realm of knee pathology, magnetic resonance imaging (MRI) has the advantage of visualising all structures within the knee joint, which makes it a valuable tool for increasing diagnostic accuracy and planning surgical treatments. Therefore, clinical narratives found in MRI reports convey valuable diagnostic information. A range of studies have proven the feasibility of natural language processing for information extraction from clinical narratives. However, no study focused specifically on MRI reports in relation to knee pathology, possibly due to the complexity of knee anatomy and a wide range of conditions that may be associated with different anatomical entities. In this paper we describe KneeTex, an information extraction system that operates in this domain. As an ontology-driven information extraction system, KneeTex makes active use of an ontology to strongly guide and constrain text analysis. We used automatic term recognition to facilitate the development of a domain-specific ontology with sufficient detail and coverage for text mining applications. In combination with the ontology, high regularity of the sublanguage used in knee MRI reports allowed us to model its processing by a set of sophisticated lexico-semantic rules with minimal syntactic analysis. The main processing steps involve named entity recognition combined with coordination, enumeration, ambiguity and co-reference resolution, followed by text segmentation. Ontology-based semantic typing is then used to drive the template filling process. We adopted an existing ontology, TRAK (Taxonomy for RehAbilitation of Knee conditions), for use within KneeTex. The original TRAK ontology expanded from 1,292 concepts, 1,720 synonyms and 518 relationship instances to 1,621 concepts, 2,550 synonyms and 560 relationship instances. This provided KneeTex with a very fine-grained lexico-semantic knowledge base, which is highly attuned to the given sublanguage. 
Information extraction results were evaluated on a test set of 100 MRI reports. A gold standard consisted of 1,259 filled template records with the following slots: finding, finding qualifier, negation, certainty, anatomy and anatomy qualifier. KneeTex extracted information with precision of 98.00 %, recall of 97.63 % and F-measure of 97.81 %, the values of which are in line with human-like performance. KneeTex is an open-source, stand-alone application for information extraction from narrative reports that describe an MRI scan of the knee. Given an MRI report as input, the system outputs the corresponding clinical findings in the form of JavaScript Object Notation objects. The extracted information is mapped onto TRAK, an ontology that formally models knowledge relevant for the rehabilitation of knee conditions. As a result, formally structured and coded information allows for complex searches to be conducted efficiently over the original MRI reports, thereby effectively supporting epidemiologic studies of knee conditions.

  13. Data Model Management for Space Information Systems

    NASA Technical Reports Server (NTRS)

    Hughes, J. Steven; Crichton, Daniel J.; Ramirez, Paul; Mattmann, Chris

    2006-01-01

    The Reference Architecture for Space Information Management (RASIM) suggests the separation of the data model from software components to promote the development of flexible information management systems. RASIM allows the data model to evolve independently from the software components and results in a robust implementation that remains viable as the domain changes. However, the development and management of data models within RASIM are difficult and time consuming tasks involving the choice of a notation, the capture of the model, its validation for consistency, and the export of the model for implementation. Current limitations to this approach include the lack of ability to capture comprehensive domain knowledge, the loss of significant modeling information during implementation, the lack of model visualization and documentation capabilities, and exports being limited to one or two schema types. The advent of the Semantic Web and its demand for sophisticated data models has addressed this situation by providing a new level of data model management in the form of ontology tools. In this paper we describe the use of a representative ontology tool to capture and manage a data model for a space information system. The resulting ontology is implementation independent. Novel on-line visualization and documentation capabilities are available automatically, and the ability to export to various schemas can be added through tool plug-ins. In addition, the ingestion of data instances into the ontology allows validation of the ontology and results in a domain knowledge base. Semantic browsers are easily configured for the knowledge base. For example the export of the knowledge base to RDF/XML and RDFS/XML and the use of open source metadata browsers provide ready-made user interfaces that support both text- and facet-based search. 
This paper will present the Planetary Data System (PDS) data model as a use case and describe the import of the data model into an ontology tool. We will also describe the current effort to provide interoperability with the European Space Agency (ESA)/Planetary Science Archive (PSA) which is critically dependent on a common data model.

  14. SORTA: a system for ontology-based re-coding and technical annotation of biomedical phenotype data.

    PubMed

    Pang, Chao; Sollie, Annet; Sijtsma, Anna; Hendriksen, Dennis; Charbon, Bart; de Haan, Mark; de Boer, Tommy; Kelpin, Fleur; Jetten, Jonathan; van der Velde, Joeri K; Smidt, Nynke; Sijmons, Rolf; Hillege, Hans; Swertz, Morris A

    2015-01-01

    There is an urgent need to standardize the semantics of biomedical data values, such as phenotypes, to enable comparative and integrative analyses. However, it is unlikely that all studies will use the same data collection protocols. As a result, retrospective standardization is often required, which involves matching of original (unstructured or locally coded) data to widely used coding or ontology systems such as SNOMED CT (clinical terms), ICD-10 (International Classification of Disease) and HPO (Human Phenotype Ontology). This data curation is usually a time-consuming process performed by a human expert. To help mechanize this process, we have developed SORTA, a computer-aided system for rapidly encoding free text or locally coded values to a formal coding system or ontology. SORTA matches original data values (uploaded in semicolon delimited format) to a target coding system (uploaded in Excel spreadsheet, OWL ontology web language or OBO open biomedical ontologies format). It then semi-automatically shortlists candidate codes for each data value using Lucene and n-gram based matching algorithms, and can also learn from matches chosen by human experts. We evaluated SORTA's applicability in two use cases. For the LifeLines biobank, we used SORTA to recode 90 000 free text values (including 5211 unique values) about physical exercise to MET (Metabolic Equivalent of Task) codes. For the CINEAS clinical symptom coding system, we used SORTA to map to HPO, enriching HPO when necessary (315 terms matched so far). Out of the shortlists at rank 1, we found a precision/recall of 0.97/0.98 in LifeLines and of 0.58/0.45 in CINEAS. More importantly, users found the tool both a major time saver and a quality improvement because SORTA reduced the chances of human mistakes. Thus, SORTA can dramatically ease data (re)coding tasks and we believe it will prove useful for many more projects. 
Database URL: http://molgenis.org/sorta or as an open source download from http://www.molgenis.org/wiki/SORTA. © The Author(s) 2015. Published by Oxford University Press.
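The n-gram shortlisting idea can be sketched with a simple bigram (Dice) similarity: score each candidate term against a free-text value by character-bigram overlap, then keep the top matches. This is an illustration only; SORTA's real matcher combines Lucene with n-gram algorithms and learns from expert choices, none of which is modelled here, and the term list and query value are invented.

```python
def bigrams(s):
    """Return the set of character bigrams of a lowercased string."""
    s = s.lower()
    return {s[i:i + 2] for i in range(len(s) - 1)}

def dice(a, b):
    """Dice coefficient over bigram sets: 2|A∩B| / (|A|+|B|), in [0, 1]."""
    ga, gb = bigrams(a), bigrams(b)
    return 2 * len(ga & gb) / (len(ga) + len(gb))

def shortlist(value, terms, k=3):
    """Rank candidate coding-system terms by similarity to a free-text value."""
    return sorted(terms, key=lambda t: dice(value, t), reverse=True)[:k]

# Toy candidate terms for a physical-exercise coding task.
terms = ["walking", "cycling", "swimming", "weight lifting"]
print(shortlist("walkin", terms, k=1))  # → ['walking']
```

Because bigram overlap is robust to typos and truncations (here "walkin" still ranks "walking" first), it works well for shortlisting; a human or a learned re-ranker then confirms the final code, as in SORTA's workflow.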

  15. SORTA: a system for ontology-based re-coding and technical annotation of biomedical phenotype data

    PubMed Central

    Pang, Chao; Sollie, Annet; Sijtsma, Anna; Hendriksen, Dennis; Charbon, Bart; de Haan, Mark; de Boer, Tommy; Kelpin, Fleur; Jetten, Jonathan; van der Velde, Joeri K.; Smidt, Nynke; Sijmons, Rolf; Hillege, Hans; Swertz, Morris A.

    2015-01-01

    There is an urgent need to standardize the semantics of biomedical data values, such as phenotypes, to enable comparative and integrative analyses. However, it is unlikely that all studies will use the same data collection protocols. As a result, retrospective standardization is often required, which involves matching of original (unstructured or locally coded) data to widely used coding or ontology systems such as SNOMED CT (clinical terms), ICD-10 (International Classification of Disease) and HPO (Human Phenotype Ontology). This data curation is usually a time-consuming process performed by a human expert. To help mechanize this process, we have developed SORTA, a computer-aided system for rapidly encoding free text or locally coded values to a formal coding system or ontology. SORTA matches original data values (uploaded in semicolon delimited format) to a target coding system (uploaded in Excel spreadsheet, OWL ontology web language or OBO open biomedical ontologies format). It then semi-automatically shortlists candidate codes for each data value using Lucene and n-gram based matching algorithms, and can also learn from matches chosen by human experts. We evaluated SORTA’s applicability in two use cases. For the LifeLines biobank, we used SORTA to recode 90 000 free text values (including 5211 unique values) about physical exercise to MET (Metabolic Equivalent of Task) codes. For the CINEAS clinical symptom coding system, we used SORTA to map to HPO, enriching HPO when necessary (315 terms matched so far). Out of the shortlists at rank 1, we found a precision/recall of 0.97/0.98 in LifeLines and of 0.58/0.45 in CINEAS. More importantly, users found the tool both a major time saver and a quality improvement because SORTA reduced the chances of human mistakes. Thus, SORTA can dramatically ease data (re)coding tasks and we believe it will prove useful for many more projects. 
Database URL: http://molgenis.org/sorta or as an open source download from http://www.molgenis.org/wiki/SORTA PMID:26385205
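The n-gram shortlisting step described above can be sketched in a few lines. This is a minimal, illustrative Dice-style bigram similarity, not SORTA's actual Lucene-backed implementation; the function names and example codes are invented:

```python
from collections import Counter

def ngrams(text, n=2):
    """Split a lowercased string into character n-grams, padded at the ends."""
    text = f" {text.lower()} "
    return [text[i:i + n] for i in range(len(text) - n + 1)]

def ngram_similarity(a, b, n=2):
    """Dice-style overlap of character n-gram multisets, in [0, 1]."""
    ca, cb = Counter(ngrams(a, n)), Counter(ngrams(b, n))
    overlap = sum((ca & cb).values())
    return 2 * overlap / (sum(ca.values()) + sum(cb.values()))

def shortlist(value, codes, k=3):
    """Rank candidate codes for one original data value, best first."""
    return sorted(codes, key=lambda c: ngram_similarity(value, c), reverse=True)[:k]
```

A human expert would then confirm or reject the top-ranked candidate; a production system such as SORTA can additionally learn from those decisions.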

  16. Business Metrics for High-Performance Homes: A Colorado Springs Case Study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Beach, R.; Jones, A.

This report explores the correlation between energy efficiency and the business success of home builders by examining a data set of builders and homes in the Colorado Springs, Colorado, market between 2006 and 2014. During this time, the Great Recession of 2007 to 2009 occurred, and new-home sales plummeted both nationally and in Colorado Springs. What is evident from an analysis of builders and homes in Colorado Springs is that builders who had Home Energy Rating System (HERS) ratings performed on some or all of their homes during the Recession remained in business during this challenging economic period. Many builders who did not have HERS ratings performed on their homes at that time went out of business or left the area. From the analysis presented in this report, it is evident that a correlation exists between energy efficiency and the business success of home builders, although the reasons for this correlation remain largely anecdotal and not yet clearly understood.

  17. An empirical analysis of an innovative application for an underutilized resource: small-diameter roundwood in recreational buildings

    Treesearch

    Randall Cantrell

    2004-01-01

    Builders were surveyed to explore perceptions regarding small-diameter roundwood (SDR). The study empirically tests a model of builders’ attitudes and opinions about using SDR as a building material in recreational buildings. Findings suggest that, of the 130 builders surveyed, most are likely to use SDR in recreational buildings when it meets the following criteria: 1...

  18. Overview of the gene ontology task at BioCreative IV.

    PubMed

    Mao, Yuqing; Van Auken, Kimberly; Li, Donghui; Arighi, Cecilia N; McQuilton, Peter; Hayman, G Thomas; Tweedie, Susan; Schaeffer, Mary L; Laulederkind, Stanley J F; Wang, Shur-Jen; Gobeill, Julien; Ruch, Patrick; Luu, Anh Tuan; Kim, Jung-Jae; Chiang, Jung-Hsien; Chen, Yu-De; Yang, Chia-Jung; Liu, Hongfang; Zhu, Dongqing; Li, Yanpeng; Yu, Hong; Emadzadeh, Ehsan; Gonzalez, Graciela; Chen, Jian-Ming; Dai, Hong-Jie; Lu, Zhiyong

    2014-01-01

Gene ontology (GO) annotation is a common task among model organism databases (MODs) for capturing gene function data from journal articles. It is a time-consuming and labor-intensive task, and is thus often considered one of the bottlenecks in literature curation. There is a growing need for semiautomated or fully automated GO curation techniques that will help database curators to rapidly and accurately identify gene function information in full-length articles. Despite multiple attempts in the past, few studies have proven to be useful with regard to assisting real-world GO curation. The shortage of sentence-level training data and of opportunities for interaction between text-mining developers and GO curators has limited advances in algorithm development and corresponding use in practical circumstances. To this end, we organized a text-mining challenge task for literature-based GO annotation in BioCreative IV. More specifically, we developed two subtasks: (i) to automatically locate text passages that contain GO-relevant information (a text retrieval task) and (ii) to automatically identify relevant GO terms for the genes in a given article (a concept-recognition task). With support from five MODs, we provided teams with >4000 unique text passages that served as the basis for each GO annotation in our task data. Such evidence text has long been recognized as critical for text-mining algorithm development but was never made available because of the high cost of curation. In total, seven teams participated in the challenge task. From the team results, we conclude that the state of the art in automatically mining GO terms from literature has improved over the past decade, although much progress is still needed for computer-assisted GO curation.
Future work should focus on addressing remaining technical challenges for improved performance of automatic GO concept recognition and incorporating practical benefits of text-mining tools into real-world GO annotation. http://www.biocreative.org/tasks/biocreative-iv/track-4-GO/. Published by Oxford University Press 2014. This work is written by US Government employees and is in the public domain in the US.
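For the concept-recognition subtask, the simplest baseline is exact lexicon lookup. The sketch below uses a tiny invented lexicon; a real system would load the full Gene Ontology and handle synonyms and term variation:

```python
import re

# Hypothetical mini-lexicon; a real system would load the full Gene Ontology.
GO_LEXICON = {
    "apoptosis": "GO:0006915",
    "dna repair": "GO:0006281",
    "cell cycle": "GO:0007049",
}

def recognize_go_terms(passage):
    """Naive concept recognition: case-insensitive whole-phrase lexicon lookup."""
    text = passage.lower()
    found = []
    for term, go_id in GO_LEXICON.items():
        if re.search(r"\b" + re.escape(term) + r"\b", text):
            found.append((term, go_id))
    return sorted(found)
```

Baselines like this miss the term variation and implicit mentions that made the challenge hard, which is why the participating teams used richer retrieval and machine-learning approaches.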

  19. Integrated G and C Implementation within IDOS: A Simulink Based Reusable Launch Vehicle Simulation

    NASA Technical Reports Server (NTRS)

    Fisher, Joseph E.; Bevacqua, Tim; Lawrence, Douglas A.; Zhu, J. Jim; Mahoney, Michael

    2003-01-01

The implementation of multiple Integrated Guidance and Control (IG&C) algorithms per flight phase within a vehicle simulation poses a daunting task: coordinating algorithm interactions with the other G&C components and with vehicle subsystems. Currently being developed by Universal Space Lines LLC (USL) under contract from NASA, the Integrated Development and Operations System (IDOS) contains a high-fidelity Simulink vehicle simulation, which provides a means to test cutting-edge G&C technologies. Combining the modularity of this vehicle simulation with Simulink's built-in primitive blocks provides a quick way to implement algorithms. To add discrete-event functionality to the unfinished IDOS simulation, Vehicle Event Manager (VEM) and Integrated Vehicle Health Monitoring (IVHM) subsystems were created to provide discrete-event and pseudo-health-monitoring processing capabilities. Matlab's Stateflow is used to create the IVHM and Event Manager subsystems and to implement a supervisory logic controller, referred to as the Auto-commander, as part of the IG&C, which coordinates control system adaptation and reconfiguration and selects the control and guidance algorithms for a given flight phase. Manual creation of the Stateflow charts for all of these subsystems is a tedious and time-consuming process. The Stateflow Auto-builder was therefore developed as a Matlab-based software tool for the automatic generation of a Stateflow chart from information contained in a database. This paper describes the IG&C, VEM and IVHM implementations in IDOS. In addition, this paper describes the Stateflow Auto-builder.
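The idea of generating a state chart from database records can be illustrated schematically. The sketch below (in Python rather than Matlab, with invented states and events) just assembles a transition table, whereas the real Auto-builder emits Stateflow charts:

```python
# Hypothetical database rows describing flight-phase transitions
# (state and event names invented for illustration).
rows = [
    {"state": "Ascent", "event": "MECO", "next": "Coast"},
    {"state": "Coast", "event": "EntryInterface", "next": "Entry"},
    {"state": "Entry", "event": "ChuteDeploy", "next": "Landing"},
]

def build_chart(rows):
    """Collect the state set and transition table implied by the rows."""
    states = sorted({r["state"] for r in rows} | {r["next"] for r in rows})
    transitions = [(r["state"], r["event"], r["next"]) for r in rows]
    return {"states": states, "transitions": transitions}
```

An auto-builder in this spirit lets the chart stay in sync with the database instead of being maintained by hand.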

  20. docBUILDER - Building Your Useful Metadata for Earth Science Data and Services.

    NASA Astrophysics Data System (ADS)

    Weir, H. M.; Pollack, J.; Olsen, L. M.; Major, G. R.

    2005-12-01

    The docBUILDER tool, created by NASA's Global Change Master Directory (GCMD), assists the scientific community in efficiently creating quality data and services metadata. Metadata authors are asked to complete five required fields to ensure enough information is provided for users to discover the data and related services they seek. After the metadata record is submitted to the GCMD, it is reviewed for semantic and syntactic consistency. Currently, two versions are available - a Web-based tool accessible with most browsers (docBUILDERweb) and a stand-alone desktop application (docBUILDERsolo). The Web version is available through the GCMD website, at http://gcmd.nasa.gov/User/authoring.html. This version has been updated and now offers: personalized templates to ease entering similar information for multiple data sets/services; automatic population of Data Center/Service Provider URLs based on the selected center/provider; three-color support to indicate required, recommended, and optional fields; an editable text window containing the XML record, to allow for quick editing; and improved overall performance and presentation. The docBUILDERsolo version offers the ability to create metadata records on a computer wherever you are. Except for installation and the occasional update of keywords, data/service providers are not required to have an Internet connection. This freedom will allow users with portable computers (Windows, Mac, and Linux) to create records in field campaigns, whether in Antarctica or the Australian Outback. This version also offers a spell-checker, in addition to all of the features found in the Web version.
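The requirement that five fields be completed before submission amounts to a simple validation pass. The field names below are assumed for illustration and are not necessarily the GCMD's actual required fields:

```python
# Hypothetical required metadata fields (names invented; docBUILDER's
# actual five required fields may differ).
REQUIRED_FIELDS = ["Entry_Title", "Parameters", "Data_Center", "Summary", "Metadata_Name"]

def missing_fields(record):
    """Return the required fields that are absent or empty in a draft record."""
    return [f for f in REQUIRED_FIELDS if not record.get(f)]
```

A tool like docBUILDER can use such a check to flag required fields (e.g., in a distinct color) before the record is submitted for semantic and syntactic review.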

  1. My Corporis Fabrica: an ontology-based tool for reasoning and querying on complex anatomical models

    PubMed Central

    2014-01-01

Background Multiple models of anatomy have been developed independently and for different purposes. In particular, 3D graphical models are especially useful for visualizing the different organs composing the human body, while ontologies such as the FMA (Foundational Model of Anatomy) are symbolic models that provide a unified formal description of anatomy. Despite its comprehensive coverage of anatomical structures, the lack of formal descriptions of anatomical functions in the FMA limits its usage in many applications. In addition, the absence of a connection between 3D models and anatomical ontologies makes it difficult and time-consuming to set up and access the anatomical content of complex 3D objects. Results First, we provide a new ontology of anatomy called My Corporis Fabrica (MyCF), which conforms to the FMA but extends it by making explicit how anatomical structures are composed, how they contribute to functions, and also how they can be related to complex 3D objects. Second, we have equipped MyCF with automatic reasoning capabilities that enable model checking and the answering of complex queries. We illustrate the added value of such a declarative approach for interactive simulation and visualization as well as for teaching applications. Conclusions The novel vision of ontologies that we have developed in this paper enables a declarative assembly of different models to obtain composed models that are guaranteed to be anatomically valid while capturing the complexity of human anatomy. The main interest of this approach is its declarativity, which makes it possible for domain experts to enrich the knowledge base at any moment through simple editors, without having to change the algorithmic machinery.
This gives the MyCF software environment the flexibility to process and add semantics for various applications that incorporate not only 3D geometric models representing anatomical entities but also symbolic information such as anatomical functions. PMID:24936286
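One flavor of reasoning such an anatomical ontology supports is transitive closure over the part-of relation. The miniature hierarchy below is invented for illustration and is far simpler than MyCF's actual OWL-based machinery:

```python
# Hypothetical miniature "part-of" hierarchy in the spirit of MyCF/FMA.
PART_OF = {
    "left ventricle": "heart",
    "heart": "thoracic cavity",
    "thoracic cavity": "thorax",
}

def ancestors(entity, part_of=PART_OF):
    """Transitive closure of the part-of relation for one entity."""
    chain = []
    while entity in part_of:
        entity = part_of[entity]
        chain.append(entity)
    return chain

def is_part_of(a, b):
    """Query: is a (transitively) part of b?"""
    return b in ancestors(a)
```

Declaring only the direct part-of facts and deriving the rest is what lets domain experts extend the knowledge base without touching the reasoning code.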

  2. LapOntoSPM: an ontology for laparoscopic surgeries and its application to surgical phase recognition.

    PubMed

    Katić, Darko; Julliard, Chantal; Wekerle, Anna-Laura; Kenngott, Hannes; Müller-Stich, Beat Peter; Dillmann, Rüdiger; Speidel, Stefanie; Jannin, Pierre; Gibaud, Bernard

    2015-09-01

The rise of intraoperative information threatens to outpace our ability to process it. Context-aware systems, which filter information to automatically adapt to the current needs of the surgeon, are necessary to fully profit from computerized surgery. To attain context awareness, representation of medical knowledge is crucial. However, most existing systems do not represent knowledge in a reusable way, which also hinders reuse of data. Our purpose is therefore to make our computational models of medical knowledge sharable, extensible and interoperable with established knowledge representations, in the form of the LapOntoSPM ontology. To show its usefulness, we apply it to situation interpretation, i.e., the recognition of surgical phases based on surgical activities. Considering best practices in ontology engineering and building on our ontology for laparoscopy, we formalized the workflow of laparoscopic adrenalectomies, cholecystectomies and pancreatic resections in the framework of OntoSPM, a new standard for surgical process models. Furthermore, we provide a rule-based situation interpretation algorithm based on SQWRL to recognize surgical phases using the ontology. The system was evaluated on ground-truth data from 19 manually annotated surgeries. The aim was to show that the phase recognition capabilities are equal to those of a specialized solution. The recognition rates of the new system were equal to those of the specialized one. However, the time needed to interpret a situation rose from 0.5 to 1.8 s on average, which is still viable for practical application. We successfully integrated medical knowledge for laparoscopic surgeries into OntoSPM, facilitating knowledge and data sharing. This is especially important for reproducibility of results and unbiased comparison of recognition algorithms. The associated recognition algorithm was adapted to the new representation without any loss of classification power.
The work is an important step toward standardized knowledge and data representation in the field of context awareness and thus toward unified benchmark data sets.
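Rule-based phase recognition of the kind described (activities in, phase out) can be caricatured as a lookup over activity patterns. The rules below are invented and stand in for the SQWRL rules evaluated over the ontology:

```python
# Hypothetical rules mapping an observed surgical activity (verb, target)
# to a phase, loosely analogous to SQWRL rules over LapOntoSPM.
PHASE_RULES = [
    ("inserting", "trocar", "access"),
    ("dissecting", "gallbladder", "dissection"),
    ("clipping", "cystic duct", "clipping and cutting"),
]

def recognize_phase(verb, target):
    """Return the phase of the first rule matching the activity."""
    for rule_verb, rule_target, phase in PHASE_RULES:
        if verb == rule_verb and target == rule_target:
            return phase
    return "unknown"
```

Keeping such rules in the ontology rather than in code is what allows the same recognition algorithm to be reused across procedure types.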

  3. Arntz Builders, Inc. Information Sheet

    EPA Pesticide Factsheets

    Arntz Builders, Inc. (the Company) is located in Novato, California. The settlement involves renovation activities conducted at property constructed prior to 1978, located in San Francisco, California.

  4. Rodan Builders, Inc. Information Sheet

    EPA Pesticide Factsheets

    Rodan Builders, Inc. (the Company) is located in Burlingame, California. The settlement involves renovation activities conducted at property constructed prior to 1978, located in San Francisco, California.

  5. Towards a social and context-aware multi-sensor fall detection and risk assessment platform.

    PubMed

    De Backere, F; Ongenae, F; Van den Abeele, F; Nelis, J; Bonte, P; Clement, E; Philpott, M; Hoebeke, J; Verstichel, S; Ackaert, A; De Turck, F

    2015-09-01

For elderly people, fall incidents are life-changing events that can lead to degradation or even loss of autonomy. Current fall detection systems are not integrated and are often associated with undetected falls and/or false alarms. In this paper, a social- and context-aware multi-sensor platform is presented, which integrates information gathered by a plethora of fall detection systems and sensors at the home of the elderly, using a cloud-based solution built around an ontology. Within the ontology, both static and dynamic information is captured to model the situation of a specific patient and his/her (in)formal caregivers. This integrated contextual information allows the platform to automatically and continuously assess the fall risk of the elderly, to more accurately detect falls and identify false alarms, and to automatically notify the appropriate caregiver, e.g., based on location or current task. The main advantage of the proposed platform is that multiple fall detection systems and sensors can be integrated, as they can be easily plugged in based on the specific needs of the patient. The combination of several systems and sensors leads to a more reliable system with better accuracy. The proof of concept was tested using the visualizer, which enables better analysis of the data flow within the back-end, and using the portable testbed, which is equipped with several different sensors. Copyright © 2014 Elsevier Ltd. All rights reserved.
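The notification step (choose the appropriate caregiver based on location and current task) can be sketched as a simple selection rule. The data layout, field names and thresholds below are assumptions for illustration, not the platform's actual ontology-driven logic:

```python
# Hypothetical caregiver records (names, fields and distances invented).
caregivers = [
    {"name": "daughter", "role": "informal", "distance_m": 5000, "busy": False},
    {"name": "neighbour", "role": "informal", "distance_m": 40, "busy": True},
    {"name": "nurse", "role": "formal", "distance_m": 800, "busy": False},
]

def notify_for_fall(caregivers, max_distance_m=1000):
    """Pick the nearest free caregiver within range; else escalate to a formal one."""
    available = [c for c in caregivers
                 if not c["busy"] and c["distance_m"] <= max_distance_m]
    if available:
        return min(available, key=lambda c: c["distance_m"])["name"]
    formal = [c for c in caregivers if c["role"] == "formal"]
    return formal[0]["name"] if formal else None
```

In the actual platform this decision is driven by the context captured in the ontology, so the rule adapts as caregivers' locations and tasks change.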

  6. Semantic metadata application for information resources systematization in water spectroscopy. A. Fazliev (1), A. Privezentsev (1), J. Tennyson (2); (1) Institute of Atmospheric Optics SB RAS, Tomsk, Russia; (2) University College London, London, UK

    NASA Astrophysics Data System (ADS)

    Fazliev, A.

    2009-04-01

The information and knowledge layers of an information-computational system for water spectroscopy are described. Semantic metadata for all the tasks of the domain information model that form the basis of these layers have been studied. The principle of semantic metadata determination and the mechanisms of its use during information systematization in molecular spectroscopy are presented, as is the software developed for working with semantic metadata. Formation of a domain model in the framework of the Semantic Web is based on an explicit specification of its conceptualization or, in other words, on its ontologies. Formation of the conceptualization for molecular spectroscopy was described in Refs. [1, 2]. In these works, two chains of tasks are selected as a zeroth approximation for the knowledge-domain description: a chain of direct tasks and a chain of inverse tasks. The solution schemes of these tasks define the approximation of the data layer of the knowledge-domain conceptualization. The properties of spectroscopy task solutions lead to a step-by-step extension of the molecular spectroscopy conceptualization; the information layer of the information system corresponds to this extension. An advantage of a molecular spectroscopy model designed in the form of task chains is that one can explicitly define data and metadata at each step of the solution of these chained tasks. The metadata structure (task solution properties) in the knowledge domain also has the form of a chain, in which the input data and metadata of the previous task become metadata of the following tasks. The term metadata is used here in its narrow sense: metadata are the properties of spectroscopy task solutions. Semantic metadata, represented with the help of OWL [3], are formed automatically and are individuals of classes (the A-box). The union of the T-box and A-box is an ontology that can be processed by an inference engine.
In this work we analyzed the formation of individuals of molecular spectroscopy applied ontologies, as well as the software used for their creation by means of the OWL DL language. The results of this work are presented in the form of an information layer and a knowledge layer in the W@DIS information system [4].

1 FORMATION OF INDIVIDUALS OF THE WATER SPECTROSCOPY APPLIED ONTOLOGY

The applied task ontology contains an explicit description of the input and output data of the physical tasks solved in the two chains of molecular spectroscopy tasks. Besides the physical concepts related to spectroscopy task solutions, an information source, a key concept of the knowledge-domain information model, is also used. Each solution of a knowledge-domain task is linked to an information source, which contains a reference to the published task solution, the molecule and the task solution properties. Each information source allows a certain knowledge-domain task solution contained in the information system to be identified. The classes of the water spectroscopy applied ontology are formed on the basis of the molecular spectroscopy concept taxonomy; they are defined by constraints on the properties of the selected conceptualization. Extension of the applied ontology in the W@DIS information system follows two scenarios. Individuals (ontology facts, or axioms) are formed when a task solution is uploaded into the information system. Ontology operations that involve the molecular spectroscopy taxonomy and individuals are performed solely by the user; for this purpose the Protege ontology editor was used. Software was designed and implemented for the formation, processing and visualization of knowledge-domain task individuals. The method of individual formation determines the sequence of steps by which the individuals of the created ontology are generated. Task solution properties (metadata) have qualitative and quantitative values.
Qualitative metadata describe the qualitative side of a task, such as the solution method or other information that can be explicitly specified by the object properties of the OWL DL language. Quantitative metadata describe quantitative properties of a task solution, such as minimal and maximal data values or other information that can be obtained explicitly by programmed algorithmic operations; these metadata correspond to the DatatypeProperty properties of the OWL specification language. Quantitative metadata can be obtained automatically during data upload into the information system. Since ObjectProperty values are objects, processing of qualitative metadata requires logical constraints. For tasks solved within the W@DIS ICS, qualitative metadata can be formed automatically (for example, in the spectral-function calculation task). The methods used to translate qualitative metadata into quantitative ones amount to a coarsened representation of knowledge in the knowledge domain. The existence of two ways of obtaining data is a key point in the formation of the applied ontology of a molecular spectroscopy task: the experimental method (metadata for experimental data contain a description of the equipment, the experiment conditions and so on) at the initial stage, with inverse task solutions at the following stages; and the calculation method (metadata for calculated data are closely related to the metadata used to describe the physical and mathematical models of molecular spectroscopy).

2 SOFTWARE FOR ONTOLOGY OPERATION

Data collection in the water spectroscopy information system is organized as a workflow that contains such operations as information source creation, entry of bibliographic data on publications, formation of the uploaded data schema and so on. Metadata are generated in the information source as well. Two methods are used for their formation: automatic metadata generation and manual metadata generation (performed by the user).
Software support for the actions related to metadata formation is provided by the META+ module. The functions of the META+ module can be divided into two groups: those needed by the software developer and those needed by a user of the information system. The META+ module functions needed by the developer are:
1. creation of the taxonomy (T-boxes) of applied ontology classes of knowledge-domain tasks;
2. creation of instances of task classes;
3. creation of task data schemes in the form of an XML pattern based on XML syntax (the XML pattern is developed for the instance generator and created according to certain rules imposed on the generator implementation);
4. implementation of metadata value calculation algorithms;
5. creation of a request interface and additional knowledge-processing functions for the solution of these tasks;
6. unification of the created functions and interfaces into one information system.
The following sequence is universal for the generation of the individuals of task classes that form chains. Special interfaces for managing user operations are provided to the software developer in the META+ module, including means for updating qualitative metadata values when data are re-uploaded to an information source.
The list of functions needed by the end user includes:
- visualization and editing of data sets, taking their metadata into account (e.g., display of the number of unique bands in transitions for a certain data source);
- export of OWL/RDF models from the information system to the environment in XML syntax;
- visualization of instances of classes of applied ontology tasks in molecular spectroscopy;
- import of OWL/RDF models into the information system and their integration with the domain vocabulary;
- formation of additional knowledge in the knowledge domain for the construction of ontological instances of task classes, using GTML formats and their processing;
- formation of additional knowledge in the knowledge domain for the construction of instances of task classes, using software algorithms for data-set processing;
- semantic search through an interface that formulates questions as connected triplets in order to obtain an adequate answer.

3 STRUCTURE OF THE META+ MODULE

The META+ software module that provides the above functions contains the following components:
- a knowledge base that stores the semantic metadata and taxonomies of the information system;
- the software libraries POWL and RAP [5], created by third-party developers, which provide access to the ontological storage;
- function classes and libraries that form the core of the module and perform the tasks of formation, storage and visualization of class instances;
- configuration files and module patterns that allow one to adjust and organize the operation of the different functional blocks.
The META+ module also contains scripts and patterns implemented according to the rules of the W@DIS information system development environment:
- scripts for interaction with the environment by means of the software core of the information system.
These scripts provide web-oriented interactive communication;
- patterns for visualizing the functionality realized by the scripts.

The software core of the scientific information-computational system W@DIS is built with the MVC (Model-View-Controller) design pattern, which allows the application logic to be separated from its representation. It realizes the interaction of three logical components, providing interactivity with the environment via the Web and performing preprocessing. The functions of the «Controller» logical component are realized by scripts designed according to the rules imposed by the software core of the information system; each script is a definite object-oriented class with an obligatory initiation method called "start". The functions that present the results of domain application operation (the «View» component) are sets of HTML patterns that visualize those results with the help of additional constructions processed by the software core of the system. Besides interacting with the software core of the scientific information system, this module also works with the configuration files of the software core and its database. Such an organization provides closer integration with the software core and a deeper, more adequate connection with operating-system support.

4 CONCLUSION

In this work the problems of semantic metadata creation in an information system oriented toward information representation in the area of molecular spectroscopy have been discussed. The method of forming semantic metadata and the associated functions, as well as the realization and structure of the META+ module, have been described. The architecture of the META+ module is closely tied to the existing software of the "Molecular spectroscopy" scientific information system. The module is implemented using modern approaches to the development of web-oriented applications.
It uses the existing application interfaces. The developed software allows us to:
- perform automatic metadata annotation of calculated task solutions directly in the information system;
- perform automatic metadata annotation of task solutions whose results are uploaded from outside the information system, forming an instance of the solved task on the basis of the entry data;
- use ontological instances of task solutions to identify data in the viewing, comparison and search tasks solved by the information system;
- export applied task ontologies for work with them by external means;
- solve the semantic search task according to a pattern, using a question-answer interface.

5 ACKNOWLEDGEMENT

The authors are grateful to RFBR for financial support of the development of the distributed information system for molecular spectroscopy.

REFERENCES

1. A. D. Bykov, A. Z. Fazliev, N. N. Filippov, A. V. Kozodoev, A. I. Privezentsev, L. N. Sinitsa, M. V. Tonkov and M. Yu. Tretyakov. Distributed information system on atmospheric spectroscopy // Geophysical Research Abstracts, SRef-ID: 1607-7962/gra/EGU2007-A-01906, 2007, v. 9, p. 01906.
2. A. I. Privezentsev, A. Z. Fazliev. Applied task ontology for molecular spectroscopy information resources systematization // Proceedings of the 9th Russian scientific conference "Electronic libraries: advanced methods and technologies, electronic collections" (RCDL'2007), Pereslavl-Zalesskii, 2007, part 1, pp. 201-210.
3. OWL Web Ontology Language Semantics and Abstract Syntax, W3C Recommendation, 10 February 2004, http://www.w3.org/TR/2004/REC-owl-semantics-20040210/
4. W@DIS information system, http://wadis.saga.iao.ru
5. RAP library, http://www4.wiwiss.fu-berlin.de/bizer/rdfapi/
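The automatic extraction of quantitative metadata during upload, as described in this record, amounts to computing summary properties of the uploaded data set. A minimal sketch follows; the field names and the tuple layout are assumptions for illustration, not the W@DIS schema:

```python
# Sketch of automatic quantitative-metadata generation at upload time:
# records are assumed to be (wavenumber, intensity) pairs.
def quantitative_metadata(records):
    """Compute DatatypeProperty-style summary metadata for an uploaded data set."""
    wavenumbers = [w for w, _ in records]
    return {
        "minWavenumber": min(wavenumbers),
        "maxWavenumber": max(wavenumbers),
        "recordCount": len(records),
    }
```

Qualitative metadata (solution method, equipment, experiment conditions) cannot be derived this way and, as the record notes, are either entered manually or constrained logically through object properties.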

  7. USS Cal Builders, Inc. Information Sheet

    EPA Pesticide Factsheets

    USS Cal Builders, Inc. (the Company) is located in Stanton, California. The settlement involves renovation activities conducted at property constructed prior to 1978, located in San Francisco, California.

  8. Ontology-Based Architecture for Intelligent Transportation Systems Using a Traffic Sensor Network.

    PubMed

    Fernandez, Susel; Hadfi, Rafik; Ito, Takayuki; Marsa-Maestre, Ivan; Velasco, Juan R

    2016-08-15

Intelligent transportation systems are a set of technological solutions used to improve the performance and safety of road transportation. A crucial element for the success of these systems is the exchange of information, not only between vehicles but also among other components in the road infrastructure, through different applications. One of the most important information sources in systems of this kind is sensors. Sensors can be located within vehicles or can be part of the infrastructure, such as bridges, roads or traffic signs. Sensors can provide information related to weather conditions and the traffic situation, which is useful for improving the driving process. To facilitate the exchange of information between the different applications that use sensor data, a common framework of knowledge is needed to allow interoperability. In this paper an ontology-driven architecture to improve the driving environment through a traffic sensor network is proposed. The system performs different tasks automatically to increase driver safety and comfort using the information provided by the sensors.
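The kind of automatic task the architecture performs (fusing sensor readings into a driver advisory) can be sketched as a threshold rule. The sensor names and thresholds below are invented for illustration; in the paper this logic is driven by the ontology rather than hard-coded:

```python
# Hypothetical fusion of two roadside sensor readings into a driving advisory
# (sensor names and thresholds are assumptions for illustration).
def driving_advisory(visibility_m, road_friction):
    """Map fused weather/road sensor readings to an advisory level."""
    if visibility_m < 100 or road_friction < 0.3:
        return "reduce speed"
    if visibility_m < 400 or road_friction < 0.5:
        return "caution"
    return "normal"
```

Expressing such rules over a shared ontology is what lets vehicles and infrastructure applications interpret each other's sensor data consistently.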

  9. Ontology aided modeling of organic reaction mechanisms with flexible and fragment based XML markup procedures.

    PubMed

    Sankar, Punnaivanam; Aghila, Gnanasekaran

    2007-01-01

Mechanism models for primary organic reactions are developed, encoding the structural fragments undergoing substitution, addition, elimination and rearrangement. In the proposed models, every structural component of a mechanistic pathway is represented with a flexible, fragment-based markup technique in XML syntax. A significant feature of the system is the encoding of electron movements along with the other components, such as charges, partial charges, half-bonded species, lone-pair electrons, free radicals and reaction arrows, needed for a complete representation of a reaction mechanism. The rendering of reaction schemes described with the proposed methodology is achieved with a concise XML extension language interoperating with the structure markup. A reaction scheme is visualized as 2D graphics in a browser by converting it into an SVG document, enabling the layouts chemists conventionally expect. An automatic representation of complex reaction mechanism patterns is achieved by reusing the knowledge in chemical ontologies and by developing artificial intelligence components in terms of axioms.
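The flavor of fragment-based XML markup for a mechanistic step can be illustrated with a tiny constructed example. The element and attribute names below are invented for illustration and are not the paper's actual schema:

```python
import xml.etree.ElementTree as ET

# Hypothetical fragment-based markup for one mechanistic step
# (element and attribute names invented, not the paper's schema).
step = ET.Element("step", kind="nucleophilic-substitution")
frag = ET.SubElement(step, "fragment", role="electrophile")
ET.SubElement(frag, "atom", symbol="C", charge="+1")
# Electron movement is encoded explicitly, alongside the structural fragments.
arrow = ET.SubElement(step, "electron-arrow")
arrow.set("from", "lone-pair:O")
arrow.set("to", "bond:C-O")
xml_text = ET.tostring(step, encoding="unicode")
```

A downstream renderer would translate such markup into SVG, placing curved arrows and charges in the layout chemists expect.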

  10. Ontology-Based Architecture for Intelligent Transportation Systems Using a Traffic Sensor Network

    PubMed Central

    Fernandez, Susel; Hadfi, Rafik; Ito, Takayuki; Marsa-Maestre, Ivan; Velasco, Juan R.

    2016-01-01

Intelligent transportation systems are a set of technological solutions used to improve the performance and safety of road transportation. A crucial element for the success of these systems is the exchange of information, not only between vehicles but also among other components in the road infrastructure, through different applications. One of the most important information sources in systems of this kind is sensors. Sensors can be located within vehicles or can be part of the infrastructure, such as bridges, roads or traffic signs. Sensors can provide information related to weather conditions and the traffic situation, which is useful for improving the driving process. To facilitate the exchange of information between the different applications that use sensor data, a common framework of knowledge is needed to allow interoperability. In this paper an ontology-driven architecture to improve the driving environment through a traffic sensor network is proposed. The system performs different tasks automatically to increase driver safety and comfort using the information provided by the sensors. PMID:27537878

  11. CHARMM-GUI 10 Years for Biomolecular Modeling and Simulation

    PubMed Central

    Jo, Sunhwan; Cheng, Xi; Lee, Jumin; Kim, Seonghoon; Park, Sang-Jun; Patel, Dhilon S.; Beaven, Andrew H.; Lee, Kyu Il; Rui, Huan; Roux, Benoît; MacKerell, Alexander D.; Klauda, Jeffrey B.; Qi, Yifei

    2017-01-01

    CHARMM-GUI, http://www.charmm-gui.org, is a web-based graphical user interface that prepares complex biomolecular systems for molecular simulations. CHARMM-GUI creates input files for a number of programs including CHARMM, NAMD, GROMACS, AMBER, GENESIS, LAMMPS, Desmond, OpenMM, and CHARMM/OpenMM. Since its original development in 2006, CHARMM-GUI has been widely adopted for various purposes and now contains a number of different modules designed to set up a broad range of simulations: (1) PDB Reader & Manipulator, Glycan Reader, and Ligand Reader & Modeler for reading and modifying molecules; (2) Quick MD Simulator, Membrane Builder, Nanodisc Builder, HMMM Builder, Monolayer Builder, Micelle Builder, and Hex Phase Builder for building all-atom simulation systems in various environments; (3) PACE CG Builder and Martini Maker for building coarse-grained simulation systems; (4) DEER Facilitator and MDFF/xMDFF Utilizer for experimentally guided simulations; (5) Implicit Solvent Modeler, PBEQ-Solver, and GCMC/BD Ion Simulator for implicit solvent related calculations; (6) Ligand Binder for ligand solvation and binding free energy simulations; and (7) Drude Prepper for preparation of simulations with the CHARMM Drude polarizable force field. Recently, new modules have been integrated into CHARMM-GUI, such as Glycolipid Modeler for generation of various glycolipid structures, and LPS Modeler for generation of lipopolysaccharide structures from various Gram-negative bacteria. These new features together with existing modules are expected to facilitate advanced molecular modeling and simulation thereby leading to an improved understanding of the molecular details of the structure and dynamics of complex biomolecular systems. Here, we briefly review these capabilities and discuss potential future directions in the CHARMM-GUI development project. PMID:27862047

  12. CHARMM-GUI 10 years for biomolecular modeling and simulation.

    PubMed

    Jo, Sunhwan; Cheng, Xi; Lee, Jumin; Kim, Seonghoon; Park, Sang-Jun; Patel, Dhilon S; Beaven, Andrew H; Lee, Kyu Il; Rui, Huan; Park, Soohyung; Lee, Hui Sun; Roux, Benoît; MacKerell, Alexander D; Klauda, Jeffrey B; Qi, Yifei; Im, Wonpil

    2017-06-05

    CHARMM-GUI, http://www.charmm-gui.org, is a web-based graphical user interface that prepares complex biomolecular systems for molecular simulations. CHARMM-GUI creates input files for a number of programs including CHARMM, NAMD, GROMACS, AMBER, GENESIS, LAMMPS, Desmond, OpenMM, and CHARMM/OpenMM. Since its original development in 2006, CHARMM-GUI has been widely adopted for various purposes and now contains a number of different modules designed to set up a broad range of simulations: (1) PDB Reader & Manipulator, Glycan Reader, and Ligand Reader & Modeler for reading and modifying molecules; (2) Quick MD Simulator, Membrane Builder, Nanodisc Builder, HMMM Builder, Monolayer Builder, Micelle Builder, and Hex Phase Builder for building all-atom simulation systems in various environments; (3) PACE CG Builder and Martini Maker for building coarse-grained simulation systems; (4) DEER Facilitator and MDFF/xMDFF Utilizer for experimentally guided simulations; (5) Implicit Solvent Modeler, PBEQ-Solver, and GCMC/BD Ion Simulator for implicit solvent related calculations; (6) Ligand Binder for ligand solvation and binding free energy simulations; and (7) Drude Prepper for preparation of simulations with the CHARMM Drude polarizable force field. Recently, new modules have been integrated into CHARMM-GUI, such as Glycolipid Modeler for generation of various glycolipid structures, and LPS Modeler for generation of lipopolysaccharide structures from various Gram-negative bacteria. These new features together with existing modules are expected to facilitate advanced molecular modeling and simulation thereby leading to an improved understanding of the structure and dynamics of complex biomolecular systems. Here, we briefly review these capabilities and discuss potential future directions in the CHARMM-GUI development project. © 2016 Wiley Periodicals, Inc.

  13. [Character accentuations as a criterion for psychological risks in the professional activity of the builders of main gas pipelines in the conditions of arctic].

    PubMed

    Korneeva, Ia A; Simonova, N N

    2015-01-01

    The article is devoted to the study of character accentuations as a criterion of psychological risk in the professional activity of builders of main gas pipelines in the Arctic. The aim was to study the severity of character accentuations in rotation-employed builders of main gas pipelines, as conditioned by their professional activity, as well as the personal resources available to overcome these destructive effects. The study involved 70 rotation-employed builders of trunk pipelines working in the Tyumen Region (shift duration: 52 days), aged 23 to 59 years (mean age 34.9 ± 8.1), with work experience from 0.5 to 14 years (mean 4.42 ± 3.1). Study methods: questionnaires, psychological testing, and participant observation; one-sample Student's t-test, multiple regression analysis, and stepwise analysis. The work revealed differences in the expression of character accentuations between builders of trunk pipelines with less and with more than five years of rotation work experience. It was determined that builders of main gas pipelines working on rotation in the Arctic with more pronounced character accentuations mainly use the psychological defenses of compensation, substitution, and denial, and show an average level of flexibility as a regulatory process.

  14. Muscular development and lean body weight in body builders and weight lifters.

    PubMed

    Katch, V L; Katch, F I; Moffatt, R; Gittleson, M

    1980-01-01

    The extent of extreme muscular development was studied in 39 males identified as body builders (N = 18), power weight lifters (N = 13), and Olympic weight lifters (N = 8). Body composition and anthropometric data, including calculations of pre-excess-muscle body weight (scale weight minus excess muscle), were obtained. The lean body weights and percent fat of the subjects were: body builders, 74.6 kg, 9.3%; power weight lifters, 73.3 kg, 9.1%; and Olympic weight lifters, 68.2 kg, 10.8%. No group differences were present in frame size, percent fat, lean body weight, skinfolds, or diameter measurements. The only group differences were in the shoulder, chest, biceps (relaxed and flexed), and forearm girths; in each case the body builders were larger. Calculations of excess muscle by the Behnke method revealed that the body builders had 15.6 kg of excess muscle, the power weight lifters 14.8 kg, and the Olympic weight lifters 13.1 kg. Somatographic comparisons revealed only slight differences between the groups, while differences from reference man were substantial.

  15. Toward more transparent and reproducible omics studies through a common metadata checklist and data publications.

    PubMed

    Kolker, Eugene; Özdemir, Vural; Martens, Lennart; Hancock, William; Anderson, Gordon; Anderson, Nathaniel; Aynacioglu, Sukru; Baranova, Ancha; Campagna, Shawn R; Chen, Rui; Choiniere, John; Dearth, Stephen P; Feng, Wu-Chun; Ferguson, Lynnette; Fox, Geoffrey; Frishman, Dmitrij; Grossman, Robert; Heath, Allison; Higdon, Roger; Hutz, Mara H; Janko, Imre; Jiang, Lihua; Joshi, Sanjay; Kel, Alexander; Kemnitz, Joseph W; Kohane, Isaac S; Kolker, Natali; Lancet, Doron; Lee, Elaine; Li, Weizhong; Lisitsa, Andrey; Llerena, Adrian; Macnealy-Koch, Courtney; Marshall, Jean-Claude; Masuzzo, Paola; May, Amanda; Mias, George; Monroe, Matthew; Montague, Elizabeth; Mooney, Sean; Nesvizhskii, Alexey; Noronha, Santosh; Omenn, Gilbert; Rajasimha, Harsha; Ramamoorthy, Preveen; Sheehan, Jerry; Smarr, Larry; Smith, Charles V; Smith, Todd; Snyder, Michael; Rapole, Srikanth; Srivastava, Sanjeeva; Stanberry, Larissa; Stewart, Elizabeth; Toppo, Stefano; Uetz, Peter; Verheggen, Kenneth; Voy, Brynn H; Warnich, Louise; Wilhelm, Steven W; Yandl, Gregory

    2014-01-01

    Biological processes are fundamentally driven by complex interactions between biomolecules. Integrated high-throughput omics studies enable multifaceted views of cells, organisms, or their communities. With the advent of new post-genomics technologies, omics studies are becoming increasingly prevalent; yet the full impact of these studies can only be realized through data harmonization, sharing, meta-analysis, and integrated research. These essential steps require consistent generation, capture, and distribution of metadata. To ensure transparency, facilitate data harmonization, and maximize reproducibility and usability of life sciences studies, we propose a simple common omics metadata checklist. The proposed checklist is built on the rich ontologies and standards already in use by the life sciences community. The checklist will serve as a common denominator to guide experimental design, capture important parameters, and be used as a standard format for stand-alone data publications. The omics metadata checklist and data publications will create efficient linkages between omics data and knowledge-based life sciences innovation and, importantly, allow for appropriate attribution to data generators and infrastructure science builders in the post-genomics era. We ask that the life sciences community test the proposed omics metadata checklist and data publications and provide feedback for their use and improvement.
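    A checklist of this kind lends itself to simple automated validation. The sketch below is illustrative only: the field names are hypothetical stand-ins, not the actual fields of the proposed omics metadata checklist.

```python
# Hypothetical checklist validation: flag required metadata fields that a
# submitted record is still missing before data publication.
REQUIRED_FIELDS = {"study_id", "organism", "assay_type", "instrument", "contact"}

def missing_fields(metadata):
    """Return, sorted, the required checklist fields absent from a record."""
    return sorted(REQUIRED_FIELDS - metadata.keys())

record = {"study_id": "S1", "organism": "E. coli", "assay_type": "proteomics"}
gaps = missing_fields(record)  # fields still needed before publication
```

    Running such a check at submission time is one way a common checklist turns into consistent metadata capture.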

  16. Toward More Transparent and Reproducible Omics Studies Through a Common Metadata Checklist and Data Publications.

    PubMed

    Kolker, Eugene; Özdemir, Vural; Martens, Lennart; Hancock, William; Anderson, Gordon; Anderson, Nathaniel; Aynacioglu, Sukru; Baranova, Ancha; Campagna, Shawn R; Chen, Rui; Choiniere, John; Dearth, Stephen P; Feng, Wu-Chun; Ferguson, Lynnette; Fox, Geoffrey; Frishman, Dmitrij; Grossman, Robert; Heath, Allison; Higdon, Roger; Hutz, Mara H; Janko, Imre; Jiang, Lihua; Joshi, Sanjay; Kel, Alexander; Kemnitz, Joseph W; Kohane, Isaac S; Kolker, Natali; Lancet, Doron; Lee, Elaine; Li, Weizhong; Lisitsa, Andrey; Llerena, Adrian; MacNealy-Koch, Courtney; Marshall, Jean-Claude; Masuzzo, Paola; May, Amanda; Mias, George; Monroe, Matthew; Montague, Elizabeth; Mooney, Sean; Nesvizhskii, Alexey; Noronha, Santosh; Omenn, Gilbert; Rajasimha, Harsha; Ramamoorthy, Preveen; Sheehan, Jerry; Smarr, Larry; Smith, Charles V; Smith, Todd; Snyder, Michael; Rapole, Srikanth; Srivastava, Sanjeeva; Stanberry, Larissa; Stewart, Elizabeth; Toppo, Stefano; Uetz, Peter; Verheggen, Kenneth; Voy, Brynn H; Warnich, Louise; Wilhelm, Steven W; Yandl, Gregory

    2013-12-01

    Biological processes are fundamentally driven by complex interactions between biomolecules. Integrated high-throughput omics studies enable multifaceted views of cells, organisms, or their communities. With the advent of new post-genomics technologies, omics studies are becoming increasingly prevalent; yet the full impact of these studies can only be realized through data harmonization, sharing, meta-analysis, and integrated research. These essential steps require consistent generation, capture, and distribution of metadata. To ensure transparency, facilitate data harmonization, and maximize reproducibility and usability of life sciences studies, we propose a simple common omics metadata checklist. The proposed checklist is built on the rich ontologies and standards already in use by the life sciences community. The checklist will serve as a common denominator to guide experimental design, capture important parameters, and be used as a standard format for stand-alone data publications. The omics metadata checklist and data publications will create efficient linkages between omics data and knowledge-based life sciences innovation and, importantly, allow for appropriate attribution to data generators and infrastructure science builders in the post-genomics era. We ask that the life sciences community test the proposed omics metadata checklist and data publications and provide feedback for their use and improvement.

  17. OWLing Clinical Data Repositories With the Ontology Web Language

    PubMed Central

    Pastor, Xavier; Lozano, Esther

    2014-01-01

    Background The health sciences are based upon information. Clinical information is usually stored and managed by physicians with precarious tools, such as spreadsheets. The biomedical domain is more complex than other domains that have adopted information and communication technologies as pervasive business tools. Moreover, medicine continuously changes its corpus of knowledge because of new discoveries and the rearrangements in the relationships among concepts. This scenario makes it especially difficult to offer good tools to answer the professional needs of researchers and constitutes a barrier that needs innovation to discover useful solutions. Objective The objective was to design and implement a framework for the development of clinical data repositories, capable of facing the continuous change in the biomedicine domain and minimizing the technical knowledge required from final users. Methods We combined knowledge management tools and methodologies with relational technology. We present an ontology-based approach that is flexible and efficient for dealing with complexity and change, integrated with a solid relational storage and a Web graphical user interface. Results Onto Clinical Research Forms (OntoCRF) is a framework for the definition, modeling, and instantiation of data repositories. It does not need any database design or programming. All required information to define a new project is explicitly stated in ontologies. Moreover, the user interface is built automatically on the fly as Web pages, whereas data are stored in a generic repository. This allows for immediate deployment and population of the database as well as instant online availability of any modification. Conclusions OntoCRF is a complete framework to build data repositories with a solid relational storage. 
Driven by ontologies, OntoCRF is more flexible and efficient in dealing with complexity and change than traditional systems, and it does not require highly skilled technical staff, which facilitates the engineering of clinical software systems. PMID:25599697
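    The "user interface built automatically on the fly" idea can be sketched as form generation driven purely by a declarative definition. The spec format and field names below are illustrative assumptions, not OntoCRF's actual ontology representation.

```python
# Hypothetical sketch of ontology-driven UI generation: the form definition
# lives entirely in data, so changing the definition changes the rendered
# form with no database redesign or programming.
def build_form(spec):
    """Render an HTML form from a declarative field specification."""
    rows = []
    for field in spec["fields"]:
        rows.append('<label>{label}<input name="{name}" type="{type}"></label>'
                    .format(**field))
    return "<form>\n" + "\n".join(rows) + "\n</form>"

crf_spec = {"fields": [
    {"name": "weight_kg", "label": "Weight (kg)", "type": "number"},
    {"name": "visit_date", "label": "Visit date", "type": "date"},
]}
html = build_form(crf_spec)
```

    Adding a field to `crf_spec` is immediately reflected in the generated page, mirroring the instant-deployment property described above.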

  18. OWLing Clinical Data Repositories With the Ontology Web Language.

    PubMed

    Lozano-Rubí, Raimundo; Pastor, Xavier; Lozano, Esther

    2014-08-01

    The health sciences are based upon information. Clinical information is usually stored and managed by physicians with precarious tools, such as spreadsheets. The biomedical domain is more complex than other domains that have adopted information and communication technologies as pervasive business tools. Moreover, medicine continuously changes its corpus of knowledge because of new discoveries and the rearrangements in the relationships among concepts. This scenario makes it especially difficult to offer good tools to answer the professional needs of researchers and constitutes a barrier that needs innovation to discover useful solutions. The objective was to design and implement a framework for the development of clinical data repositories, capable of facing the continuous change in the biomedicine domain and minimizing the technical knowledge required from final users. We combined knowledge management tools and methodologies with relational technology. We present an ontology-based approach that is flexible and efficient for dealing with complexity and change, integrated with a solid relational storage and a Web graphical user interface. Onto Clinical Research Forms (OntoCRF) is a framework for the definition, modeling, and instantiation of data repositories. It does not need any database design or programming. All required information to define a new project is explicitly stated in ontologies. Moreover, the user interface is built automatically on the fly as Web pages, whereas data are stored in a generic repository. This allows for immediate deployment and population of the database as well as instant online availability of any modification. OntoCRF is a complete framework to build data repositories with a solid relational storage. Driven by ontologies, OntoCRF is more flexible and efficient in dealing with complexity and change than traditional systems, and it does not require highly skilled technical staff, which facilitates the engineering of clinical software systems.

  19. Towards Web 3.0: taxonomies and ontologies for medical education -- a systematic review.

    PubMed

    Blaum, Wolf E; Jarczweski, Anne; Balzer, Felix; Stötzner, Philip; Ahlers, Olaf

    2013-01-01

    For curricular development and mapping, as well as for orientation within the growing supply of learning resources in medical education, the Semantic Web ("Web 3.0") offers a low-threshold, effective tool that enables the identification of content-related items across system boundaries. Replacing the manual linking currently required with automatically generated links based on content and semantics requires a suitably structured vocabulary for a machine-readable description of object content. The aim of this study is to compile the existing taxonomies and ontologies used for annotating medical content and learning resources, to compare them using selected criteria, and to verify their suitability in the context described above. Based on a systematic literature search, existing taxonomies and ontologies for the description of medical learning resources were identified. Through web searches and/or direct contact with the respective editors, each of the structured vocabularies thus identified was examined with regard to topic, structure, language, scope, maintenance, and technology. In addition, suitability for use in the Semantic Web was verified. Among 20 identified publications, 14 structured vocabularies were identified, which differed considerably in language, scope, currency, and maintenance. None of the identified vocabularies fulfilled the necessary criteria for describing the content of medical curricula and learning resources in the German-speaking world. On the way to Web 3.0, a significant problem lies in the selection and use of an appropriate German vocabulary for the machine-readable description of object content. Possible solutions include the development, translation, and/or combination of existing vocabularies, possibly including partial translations of English vocabularies.

  20. Technology Solutions Case Study: Zero Energy Ready Home and the Challenge of Hot Water on Demand

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    None

    Production builders in the Stapleton community of Denver, Colorado, now build 2,300-ft² or larger homes that earn the U.S. Environmental Protection Agency (EPA) ENERGY STAR® through the Certified Homes Program, Version 3. These builders are repositioning to build comparably sized homes to the standards of the U.S. Department of Energy’s (DOE’s) Zero Energy Ready Home (ZERH) program. Most ZERH criteria align closely with ENERGY STAR and are familiar to these builders.

  1. The agent-based spatial information semantic grid

    NASA Astrophysics Data System (ADS)

    Cui, Wei; Zhu, YaQiong; Zhou, Yong; Li, Deren

    2006-10-01

    Analyzing the characteristics of multi-agent systems and geographic ontology, the concept of the Agent-based Spatial Information Semantic Grid (ASISG) is defined and its architecture is advanced. ASISG is composed of multi-agents and geographic ontology. The multi-agent system comprises User Agents, a General Ontology Agent, Geo-Agents, Broker Agents, Resource Agents, Spatial Data Analysis Agents, Spatial Data Access Agents, a Task Execution Agent, and a Monitor Agent. The ASISG architecture has three layers: the fabric layer, the grid management layer, and the application layer. The fabric layer, composed of the Data Access Agent, Resource Agent, and Geo-Agent, encapsulates the data of spatial information systems and exposes a conceptual interface to the grid management layer. The grid management layer, composed of the General Ontology Agent, Task Execution Agent, Monitor Agent, and Data Analysis Agent, uses a hybrid method to manage all resources registered with the General Ontology Agent, which is described by a general ontology system. The hybrid method combines resource dissemination and resource discovery: dissemination pushes resources from Local Ontology Agents to the General Ontology Agent, and discovery pulls resources from the General Ontology Agent to Local Ontology Agents. A Local Ontology Agent is derived from a particular domain and describes the semantic information of a local GIS. The Local Ontology Agents can be filtered to construct a virtual organization that provides a global scheme. The virtual organization lightens users' burdens because they need not search for information site by site manually. The application layer, composed of the User Agent, Geo-Agent, and Task Execution Agent, offers a corresponding interface to a domain user.
The functions that ASISG should provide are: 1) Integration of different spatial information systems at the semantic level: the grid management layer establishes a virtual environment that seamlessly integrates all GIS nodes. 2) Semantic-level search and query: when the resource management system searches data across different spatial information systems, it transfers the meaning of the different Local Ontology Agents rather than accessing data directly. 3) Transparent data access: users can access information from remote sites as if from a local disk, because the General Ontology Agent automatically links data through the Data Agents, which connect ontology concepts to GIS data. 4) The capability of processing massive spatial data: storing, accessing, and managing massive spatial data from TB to PB scale; efficiently analyzing and processing spatial data to produce models, information, and knowledge; and providing 3D and multimedia visualization services. 5) The capability of high-performance computing and processing of spatial information: solving spatial problems with high precision, high quality, and on a large scale, and processing spatial information in real time or on schedule, with high speed and high efficiency. 6) The capability of sharing spatial resources: distributed heterogeneous spatial information resources are shared, integrated, and made interoperable at the semantic level, so as to make the best use of spatial information resources such as computing resources, storage devices, spatial data (integrated from GIS, RS, and GPS), spatial applications and services, and GIS platforms. 7) The capability of integrating legacy GIS systems: ASISG can not only be used to construct new advanced spatial application systems but can also integrate legacy GIS systems, preserving extensibility and inheritance and protecting users' investments. 8) The capability of collaboration:
large-scale spatial information applications and services always involve different departments in different geographic places, so remote and uniform services are needed. 9) The capability of supporting the integration of heterogeneous systems: large-scale spatial information systems are always synthetic applications, so ASISG should provide interoperation and consistency by adopting open, applied technology standards. 10) The capability of adapting to dynamic change: business requirements, application patterns, management strategies, and IT products change endlessly in any department, so ASISG should be self-adaptive. Two examples are provided in this paper; they show in detail how to design a semantic grid based on multi-agent systems and ontology. In conclusion, a semantic grid for spatial information systems can improve the integration and interoperability of spatial information grids.
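    The push/pull pattern between Local Ontology Agents and the General Ontology Agent can be sketched as a concept registry. Class and method names below are illustrative assumptions, not the ASISG implementation.

```python
# Hypothetical sketch of resource dissemination (push) and discovery (pull):
# local agents register the concepts they can serve with a central registry,
# and queries are answered by concept rather than by site.
class GeneralOntologyAgent:
    def __init__(self):
        self.registry = {}  # concept -> set of providing local agents

    def disseminate(self, agent_id, concepts):
        """Push: a local agent registers the concepts it can serve."""
        for concept in concepts:
            self.registry.setdefault(concept, set()).add(agent_id)

    def discover(self, concept):
        """Pull: find which local agents can answer for a concept."""
        return self.registry.get(concept, set())

goa = GeneralOntologyAgent()
goa.disseminate("local-gis-A", ["RoadNetwork", "LandUse"])
goa.disseminate("local-gis-B", ["RoadNetwork"])
providers = goa.discover("RoadNetwork")
```

    This is what spares users from searching site by site: one semantic query to the registry replaces manual per-site lookups.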

  2. 15. Detail view of Boston Bridge Works builders plate, north ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    15. Detail view of Boston Bridge Works builders plate, north portal brace, looking southwest - India Point Railroad Bridge, Spanning Seekonk River between Providence & East Providence, Providence, Providence County, RI

  3. James Williamson d/b/a Golden Triangle Builders Information Sheet

    EPA Pesticide Factsheets

    James Williamson d/b/a Golden Triangle Builders (the Company) is located in Pittsburgh, Pennsylvania. The settlement involves renovation activities conducted at property constructed prior to 1978, located in Pittsburgh, Pennsylvania.

  4. Kotai Antibody Builder: automated high-resolution structural modeling of antibodies.

    PubMed

    Yamashita, Kazuo; Ikeda, Kazuyoshi; Amada, Karlou; Liang, Shide; Tsuchiya, Yuko; Nakamura, Haruki; Shirai, Hiroki; Standley, Daron M

    2014-11-15

    Kotai Antibody Builder is a Web service for tertiary structural modeling of antibody variable regions. It consists of three main steps: hybrid template selection by sequence alignment and canonical rules, 3D rendering of alignments, and CDR-H3 loop modeling. For the last step, in addition to the rule-based heuristics used to build the initial model, a refinement option is available that uses fragment assembly followed by knowledge-based scoring. Using targets from the Second Antibody Modeling Assessment, we demonstrate that Kotai Antibody Builder generates models with an overall accuracy equal to that of the best-performing semi-automated predictors using expert knowledge. Kotai Antibody Builder is available at http://kotaiab.org. Contact: standley@ifrec.osaka-u.ac.jp. © The Author 2014. Published by Oxford University Press. All rights reserved.

  5. The Misuse of Anabolic-Androgenic Steroids among Iranian Recreational Male Body-Builders and Their Related Psycho-Socio-Demographic factors

    PubMed Central

    ANGOORANI, Hooman; HALABCHI, Farzin

    2015-01-01

    Background: The high prevalence and potential side effects of anabolic-androgenic steroid (AAS) misuse by athletes have made it a major public health concern. Epidemiological studies on the abuse of such drugs are essential for developing effective preventive drug control programs in the sports community. This study aimed to investigate the prevalence of AAS abuse and its association with some psycho-socio-demographic factors in Iranian male recreational body-builders. Methods: Between March and October 2011, 906 recreational male body-builders from 103 randomly selected bodybuilding clubs in Tehran, Iran participated in this study. Psycho-socio-demographic factors including age, job, average family income, family size, sport experience (months), weekly duration of sporting activity (h), purpose of participation in sporting activity, mental health, and body image (via the General Health Questionnaire and the Multidimensional Body-Self Relations Questionnaire, respectively), as well as history of AAS use, were obtained in interviews using questionnaires. Results: Participants were all recreational male body-builders [mean age (SD): 25.7 (7.1), range 14–56 yr]. Self-reported AAS abuse was registered in 150 body-builders (16.6%). Among the different psycho-socio-demographic factors, only family income and sport experience were inversely associated with AAS abuse. Conclusion: The lifetime prevalence of AAS abuse is relatively high among recreational body-builders based on their self-report. Some psycho-socio-demographic factors, including family income and sport experience, may influence the prevalence of AAS abuse. PMID:26811817

  6. The Misuse of Anabolic-Androgenic Steroids among Iranian Recreational Male Body-Builders and Their Related Psycho-Socio-Demographic factors.

    PubMed

    Angoorani, Hooman; Halabchi, Farzin

    2015-12-01

    The high prevalence and potential side effects of anabolic-androgenic steroid (AAS) misuse by athletes have made it a major public health concern. Epidemiological studies on the abuse of such drugs are essential for developing effective preventive drug control programs in the sports community. This study aimed to investigate the prevalence of AAS abuse and its association with some psycho-socio-demographic factors in Iranian male recreational body-builders. Between March and October 2011, 906 recreational male body-builders from 103 randomly selected bodybuilding clubs in Tehran, Iran participated in this study. Psycho-socio-demographic factors including age, job, average family income, family size, sport experience (months), weekly duration of sporting activity (h), purpose of participation in sporting activity, mental health, and body image (via the General Health Questionnaire and the Multidimensional Body-Self Relations Questionnaire, respectively), as well as history of AAS use, were obtained in interviews using questionnaires. Participants were all recreational male body-builders [mean age (SD): 25.7 (7.1), range 14-56 yr]. Self-reported AAS abuse was registered in 150 body-builders (16.6%). Among the different psycho-socio-demographic factors, only family income and sport experience were inversely associated with AAS abuse. The lifetime prevalence of AAS abuse is relatively high among recreational body-builders based on their self-report. Some psycho-socio-demographic factors, including family income and sport experience, may influence the prevalence of AAS abuse.

  7. Automated 3D Damaged Cavity Model Builder for Lower Surface Acreage Tile on Orbiter

    NASA Technical Reports Server (NTRS)

    Belknap, Shannon; Zhang, Michael

    2013-01-01

    The 3D Automated Thermal Tool for Damaged Acreage Tile Math Model builder was developed to perform quick and accurate 3D thermal analyses of damaged lower-surface acreage tiles and the structures beneath the damaged locations on a Space Shuttle Orbiter. The 3D model builder creates both TRASYS geometric math models (GMMs) and SINDA thermal math models (TMMs) to simulate an idealized damaged cavity in the damaged tile(s). The GMMs are processed in TRASYS to generate radiation conductors between the surfaces in the cavity. The radiation conductors are inserted into the TMMs, which are processed in SINDA to generate temperature histories for all of the nodes on each layer of the TMM. The invention allows a thermal analyst to create a 3D model of a damaged lower-surface tile on the orbiter quickly and accurately. The 3D model builder can generate a GMM and the corresponding TMM in one or two minutes, with the damaged cavity included in the tile material. A separate program creates a configuration file, which takes a couple of minutes to edit. This configuration file is read by the model builder program to determine the location of the damage and the correct tile type, tile thickness, structure thickness, and SIP thickness for the damage, so that the model builder program can build an accurate model at the specified location. Once the models are built, they are processed by TRASYS and SINDA.
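    The configuration-file step can be sketched as below. The section and key names are hypothetical stand-ins for illustration; the tool's actual configuration format is not described in the record.

```python
# Hypothetical sketch of the config-driven step: the builder reads damage
# location and material parameters from a configuration file and returns
# the parameters it needs to build the model at the specified location.
import configparser

CONFIG = """
[damage]
location = lower-surface-tile-417
tile_type = LI-900
tile_thickness_in = 2.0
structure_thickness_in = 0.063
sip_thickness_in = 0.16
"""

def read_damage_config(text):
    """Parse a damage-description config into model-builder parameters."""
    cp = configparser.ConfigParser()
    cp.read_string(text)
    section = cp["damage"]
    return {
        "location": section["location"],
        "tile_type": section["tile_type"],
        "tile_thickness": section.getfloat("tile_thickness_in"),
        "sip_thickness": section.getfloat("sip_thickness_in"),
    }

model_params = read_damage_config(CONFIG)
```

    Editing a few keys in such a file, rather than rebuilding geometry by hand, is what keeps model generation down to minutes.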

  8. e-Stars Template Builder

    NASA Technical Reports Server (NTRS)

    Cox, Brian

    2003-01-01

    e-Stars Template Builder is a computer program that implements a concept of enabling users to rapidly gain access to information on projects of NASA's Jet Propulsion Laboratory. The information about a given project is not stored in a database but rather in a network that follows the project as it develops. e-Stars Template Builder resides on a server computer and uses Practical Extraction and Reporting Language (PERL) scripts to create "e-STARS node templates," software constructs that allow for project-specific configurations. The software resides on the server and does not require specific software on the user's machine other than an Internet browser. e-Stars Template Builder is compatible with the Windows, Macintosh, and UNIX operating systems. A user invokes e-Stars Template Builder from a browser window. Operations that can be performed by the user include the creation of child processes and the addition of links and descriptions of documentation to existing pages or nodes. Through this addition of child processes of nodes, a network that reflects the development of a project is generated.

  9. A Fuzzy Description Logic with Automatic Object Membership Measurement

    NASA Astrophysics Data System (ADS)

    Cai, Yi; Leung, Ho-Fung

    In this paper, we propose a fuzzy description logic named fom-DL by combining the classical view in cognitive psychology with fuzzy set theory. We also propose a formal mechanism for automatically determining object membership in concepts, which is lacking in previous work on fuzzy description logics. In this mechanism, object membership is based on the defining properties in the concept definition and the properties in the object description. Moreover, whereas previous works cannot express qualitative measurements of an object possessing a property, we introduce two kinds of properties, named N-property and L-property, which are quantitative and qualitative measurements, respectively, of an object possessing a property. The subsumption and implication of concepts and properties are also explored in our work. We believe this work is useful to the Semantic Web community for reasoning about the fuzzy membership of objects in concepts in fuzzy ontologies.
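
The abstract's idea of deriving an object's membership in a concept from the degrees to which it possesses the concept's defining properties can be illustrated with a standard fuzzy-set sketch. This is not the paper's formal mechanism; the minimum (Goedel t-norm) aggregation and all property names are assumptions for illustration only.

```python
# Minimal sketch (not the paper's formal mechanism) of computing an object's
# membership in a concept from property measurements: each defining property
# contributes a degree in [0, 1], and the concept membership is taken as
# their minimum (Goedel t-norm). Property names and degrees are invented.

def concept_membership(defining_properties, object_description):
    """Return the minimum, over the concept's defining properties, of the
    degree to which the object possesses each property (0.0 if absent)."""
    return min(object_description.get(p, 0.0) for p in defining_properties)

tall_tree = ["tall", "woody"]                       # defining properties
oak = {"tall": 0.8, "woody": 1.0, "green": 0.6}     # object description

print(concept_membership(tall_tree, oak))  # -> 0.8
```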

  10. Best Practices Case Study: Devoted Builders, LLC, Mediterranean Villas, Pasco, WA

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    None

    2010-12-01

    Devoted Builders of Kennewick, WA worked with Building America's BIRA team to achieve the 50% Federal tax credit level energy savings on 81 homes at its Mediterranean Villas community in eastern Washington.

  11. Lost in translation? A multilingual Query Builder improves the quality of PubMed queries: a randomised controlled trial.

    PubMed

    Schuers, Matthieu; Joulakian, Mher; Kerdelhué, Gaetan; Segas, Léa; Grosjean, Julien; Darmoni, Stéfan J; Griffon, Nicolas

    2017-07-03

    MEDLINE is the most widely used medical bibliographic database in the world. Most of its citations are in English, which can be an obstacle for some researchers to access the information the database contains. We created a multilingual query builder to facilitate access to the PubMed subset using a language other than English. The aim of our study was to assess the impact of this multilingual query builder on the quality of PubMed queries for non-native English-speaking physicians and medical researchers. A randomised controlled study was conducted among French-speaking general practice residents. We designed a multilingual query builder to facilitate information retrieval, based on available MeSH translations and providing users with both an interface and a controlled vocabulary in their own language. Participating residents were randomly allocated either the French or the English version of the query builder and asked to translate 12 short medical questions into MeSH queries. The main outcome was the quality of the query. Two librarians, blinded to the study arm, independently evaluated each query using a modified published classification that differentiated eight types of errors. Twenty residents used the French version of the query builder and 22 used the English version; 492 queries were analysed. There were significantly more perfect queries in the French group than in the English group (37.9% vs. 17.9%; p < 0.01). Members of the English group also took significantly longer than members of the French group to build each query (194 s vs. 128 s; p < 0.01). This multilingual query builder is an effective tool to improve the quality of PubMed queries, in particular for researchers whose first language is not English.

  12. 49 CFR 179.220-25 - Stamping.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ...: Material ASTM A240-316L. Shell thickness Shell 0.167 in. Head thickness Head 0.150 in. Tank builders initials ABC. Date of original test 00-0000. Outer shell: Material ASTM A285-C. Tank builders initials WYZ...

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    None

    This builder certified its first DOE Zero Energy Ready Home and won a Production Builder honor in the 2014 Housing Innovation Awards. It is the first home in the world to use a tri-generation system to supply electricity, heating, and cooling on site.

  14. 6. BUILDER'S PLATE ON WEST TRUSS: 'MOSELEY IRON BUILDING WORKS, ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    6. BUILDER'S PLATE ON WEST TRUSS: 'MOSELEY IRON BUILDING WORKS, BOSTON 1888, PATENTED 1881 TO T.W.E. MOSELEY' - Upper Pacific Mills Bridge, Moved to Merrimack College, North Andover, MA, Lawrence, Essex County, MA

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    None

    Ducts in conditioned space (DCS) represent a high-priority measure for moving the next generation of new homes to the Zero Net Energy performance level. Various strategies exist for incorporating ducts within the conditioned thermal envelope. To support this activity, in 2013 the Pacific Gas & Electric Company initiated a project with Davis Energy Group (lead for the Building America team, Alliance for Residential Building Innovation) to solicit builder involvement in California to participate in field demonstrations of various DCS strategies. Builders were given incentives and design support in exchange for providing site access for construction observation, diagnostic testing, and builder survey feedback. Information from the project was designed to feed into California's 2016 Title 24 process, but also to serve as an initial mechanism to engage builders in more high-performance construction strategies. This Building America project complemented information collected in the California project with BEopt simulations of DCS performance in hot/dry climate regions.

  16. Building America Best Practices Series Volume 12: Builders Challenge Guide to 40% Whole-House Energy Savings in the Cold and Very Cold Climates

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baechler, Michael C.; Gilbride, Theresa L.; Hefty, Marye G.

    2011-02-01

    This best practices guide is the twelfth in a series of guides for builders produced by PNNL for the U.S. Department of Energy’s Building America program. This guide book is a resource to help builders design and construct homes that are among the most energy-efficient available, while addressing issues such as building durability, indoor air quality, and occupant health, safety, and comfort. With the measures described in this guide, builders in the cold and very cold climates can build homes that have whole-house energy savings of 40% over the Building America benchmark with no added overall costs for consumers. The best practices described in this document are based on the results of research and demonstration projects conducted by Building America’s research teams. Building America brings together the nation’s leading building scientists with over 300 production builders to develop, test, and apply innovative, energy-efficient construction practices. Building America builders have found they can build homes that meet these aggressive energy-efficiency goals at no net increased cost to the homeowners. Currently, Building America homes achieve energy savings 40% greater than the Building America benchmark home (a home built to mid-1990s building practices, roughly equivalent to the 1993 Model Energy Code). The recommendations in this document meet or exceed the requirements of the 2009 IECC and 2009 IRC, and those requirements are highlighted in the text. This document will be distributed via the DOE Building America website: www.buildingamerica.gov.

  17. 18. Cast-iron bridge plate citing builder, date, and names of ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    18. Cast-iron bridge plate citing builder, date, and names of bridge committee members. Jack Boucher, photographer, 1977 - Neshanic Station Lenticular Truss Bridge, State Route 567, spanning South Branch of Raritan River, Neshanic Station, Somerset County, NJ

  18. JOB BUILDER remote batch processing subsystem

    NASA Technical Reports Server (NTRS)

    Orlov, I. G.; Orlova, T. L.

    1980-01-01

    The functions of the JOB BUILDER remote batch processing subsystem are described. Instructions are given for using it as a component of a display system developed by personnel of the System Programming Laboratory, Institute of Space Research, USSR Academy of Sciences.

  19. Towards a semantic web of paleoclimatology

    NASA Astrophysics Data System (ADS)

    Emile-Geay, J.; Eshleman, J. A.

    2012-12-01

    The paleoclimate record is information-rich, yet significant technical barriers currently stand in the way of using it to automatically answer scientific questions. Here we make the case for a universal format to structure paleoclimate data. A simple example demonstrates the scientific utility of such a self-contained way of organizing coral data and metadata in the Matlab language. This example is generalized to a universal ontology that may form the backbone of an open-source, open-access and crowd-sourced paleoclimate database. Its key attributes are: 1. Parsability: the format is self-contained (hence machine-readable), and would therefore enable a semantic web of paleoclimate information. 2. Universality: the format is platform-independent (readable on all computers and operating systems) and language-independent (readable in major programming languages). 3. Extensibility: the format requires a minimum set of fields to appropriately define a paleoclimate record, but allows the database to grow organically as more records are added, or, equally important, as more metadata are added to existing records. 4. Citability: the format enables the automatic citation of peer-reviewed articles as well as data citations whenever a data record is used for analysis, making due recognition of scientific work an automatic part and foundational principle of paleoclimate data analysis. 5. Ergonomy: the format will be easy to use, update and manage. This structure is designed to enable semantic searches and is expected to help accelerate discovery in all workflows where paleoclimate data are used. Practical steps towards the implementation of such a system at the community level are then discussed.
    Figure caption: Preliminary ontology describing relationships between the data and metadata fields of the Nurhati et al. [2011] climate record. Several fields are viewed as instances of larger classes (ProxyClass, Site, Reference), which would allow computers to perform operations on all records within a specific class (e.g. if the measurement type is δ18O, if the proxy class is 'Tree Ring Width', or if the resolution is less than 3 months). All records in such a database would be bound to each other by similar links, allowing machines to automatically process any form of query involving existing information. Such a design would also allow growth, by adding records and/or additional information about each record.
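
The self-contained, extensible record structure the abstract argues for can be sketched as a simple serializable object: a required core (proxy class, site, reference, resolution, data) plus freely added metadata. The field names below follow the abstract's classes, but the exact schema is an assumption.

```python
# Sketch of a self-contained ("parsable") paleoclimate record with a
# required core and extensible metadata, as the abstract proposes. The
# schema, site, and values are illustrative assumptions, not real data.
import json

record = {
    "proxyClass": "Coral d18O",
    "site": {"name": "ExampleAtoll", "lat": 5.87, "lon": -162.13},
    "reference": {"citation": "Nurhati et al. [2011]"},
    "resolutionMonths": 1,
    "data": {"year": [1998.0, 1998.083], "value": [-4.91, -4.87]},
    # extensibility: extra metadata can be added without breaking readers
    "analyticalPrecision": 0.07,
}

serialized = json.dumps(record)  # platform- and language-independent
print(json.loads(serialized)["proxyClass"])
```

Because the record carries its own metadata, a semantic query ("all records with resolution under 3 months") reduces to filtering on fields every record is guaranteed to have.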

  20. Unfreezing the behaviour of two orb spiders.

    PubMed

    Zschokke, S; Vollrath, F

    1995-12-01

    Spiders' webs reflect the builder's behaviour patterns, yet there are aspects of the construction behaviour that cannot be "read" from the geometry of the finished web alone. Using computerised image analysis, we developed an automatic surveillance method to track a spider's path during web-building. We thus collected data on web geometry, movement pattern and time allocation for two orb-weaving spiders, the cribellate Uloborus walckenaerius and the ecribellate Araneus diadematus. Representatives of these two species built webs of similar geometry, but they used different movement patterns both spatially (which we describe qualitatively) and temporally (which we analyse quantitatively). Most importantly, temporal analysis showed that the two spiders differed significantly in some but not all web-building stages; from this we deduce that Uloborus, unlike Araneus, was constrained by the speed of silk production during construction of its capture spiral but not its auxiliary spiral.

  1. A domain-centric solution to functional genomics via dcGO Predictor

    PubMed Central

    2013-01-01

    Background Computational/manual annotations of protein functions are one of the first routes to making sense of a newly sequenced genome. Protein domain predictions form an essential part of this annotation process. This is due to the natural modularity of proteins, with domains as structural, evolutionary and functional units. Sometimes two, three, or more adjacent domains (called supra-domains) are the operational unit responsible for a function, e.g. via a binding site at the interface. These supra-domains have contributed to functional diversification in higher organisms. Traditionally, functional ontologies have been applied to individual proteins rather than to families of related domains and supra-domains. We expect, however, that functional signals can to some extent be carried by protein domains and supra-domains, and consequently used in function prediction and functional genomics. Results Here we present a domain-centric Gene Ontology (dcGO) perspective. We generalize a framework for automatically inferring ontological terms associated with domains and supra-domains from full-length sequence annotations. This general framework has been applied specifically to primary protein-level annotations from UniProtKB-GOA, generating GO term associations with SCOP domains and supra-domains. The resulting 'dcGO Predictor' can be used to provide functional annotation to protein sequences. The functional annotation of sequences in the Critical Assessment of Function Annotation (CAFA) has been used as a valuable opportunity to validate our method and to be assessed by the community. The functional annotation of all completely sequenced genomes has demonstrated the potential for domain-centric GO enrichment analysis to yield functional insights into newly sequenced or yet-to-be-annotated genomes. 
The generalized framework we have presented has also been applied to other domain classifications such as InterPro and Pfam, and to other ontologies such as the mammalian phenotype and disease ontologies. The dcGO and its predictor are available at http://supfam.org/SUPERFAMILY/dcGO, including an enrichment analysis tool. Conclusions As functional units, domains offer a unique perspective on function prediction regardless of whether proteins are multi-domain or single-domain. The 'dcGO Predictor' holds great promise for contributing to a domain-centric functional understanding of genomes in the next-generation sequencing era. PMID:23514627
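
GO enrichment analysis of the kind mentioned above is commonly based on the hypergeometric test: given a population of annotated units, it asks how surprising it is to see so many carriers of a term in a sample. The abstract does not state which statistic the dcGO tool uses, so the following stdlib sketch is an illustrative assumption.

```python
# Sketch of the hypergeometric test commonly behind GO enrichment analysis.
# Whether dcGO uses exactly this statistic is not stated in the abstract;
# this is an illustrative assumption.
from math import comb

def enrichment_pvalue(N, K, n, x):
    """P(X >= x) for X ~ Hypergeometric: N units in the population,
    K of them annotated with the term, n units sampled, x annotated hits."""
    denom = comb(N, n)
    return sum(comb(K, k) * comb(N - K, n - k)
               for k in range(x, min(K, n) + 1)) / denom

# e.g. all 5 sampled domains carry a GO term held by 5 of 10 domains overall
p = enrichment_pvalue(N=10, K=5, n=5, x=5)
print(p)  # 1/252, about 0.00397
```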

  2. Enabling Interoperability - Supporting a Diversity of Search Paradigms Using Shared Ontologies and Federated Registries

    NASA Astrophysics Data System (ADS)

    Hughes, J. S.; Crichton, D. J.; Hardman, S. H.; Mattman, C. A.; Ramirez, P. M.

    2009-12-01

    Experience suggests that no single search paradigm will meet all of a community’s search requirements. Traditional forms-based search is still considered critical by a significant percentage of most science communities. However, text-based and facet-based search are improving the community’s perception that search can be easy and that the data are available and can be located. Finally, semantic search promises ways to find data that were not conceived of when the metadata was first captured and organized. This situation suggests that successful science information systems must be able to deploy new search applications quickly, efficiently, and often for ad-hoc purposes. Federated registries allow data to be packaged or associated with their metadata and managed as simple registry objects. Standard reference models for federated registries now exist that ensure registry objects are uniquely identified at registration and that versioning, classification, and cataloging are addressed automatically. Distributed but locally governed, federated registries also provide notification of registry events and federated query, linking, and replication of registry objects. Key principles for shared ontology development in the space sciences are that the ontology remain independent of its implementation and be extensible, flexible and scalable. The dichotomy between digital things and physical/conceptual things in the domain needs to be unified under a standard model, such as the Open Archival Information System (OAIS) Information Object. Finally, the fact must be accepted that ontology development is a difficult task that requires time, patience and experts in both the science domain and information modeling. The Planetary Data System (PDS) has adopted this architecture for its next-generation information system, PDS 2010. 
The authors will report on progress, briefly describe key elements, and illustrate how the new system will be phased into operations to handle both legacy and new science data. In particular, the shared ontology is being used to drive system implementation through the generation of standards documents and software configuration files. The resulting information system will help meet the expectations of modern scientists by providing more of the information interconnectedness, correlative science, and system interoperability that they desire. Fig. 1: Data-driven architecture.

  3. Structure-based classification and ontology in chemistry

    PubMed Central

    2012-01-01

    Background Recent years have seen an explosion in the availability of data in the chemistry domain. With this information explosion, however, retrieving relevant results from the available information, and organising those results, become even harder problems. Computational processing is essential to filter and organise the available resources so as to better facilitate the work of scientists. Ontologies encode expert domain knowledge in a hierarchically organised machine-processable format. One such ontology for the chemical domain is ChEBI. ChEBI provides a classification of chemicals based on their structural features and a role or activity-based classification. An example of a structure-based class is 'pentacyclic compound' (compounds containing five-ring structures), while an example of a role-based class is 'analgesic', since many different chemicals can act as analgesics without sharing structural features. Structure-based classification in chemistry exploits elegant regularities and symmetries in the underlying chemical domain. As yet, there has been neither a systematic analysis of the types of structural classification in use in chemistry nor a comparison to the capabilities of available technologies. Results We analyze the different categories of structural classes in chemistry, presenting a list of patterns for features found in class definitions. We compare these patterns of class definition to tools which allow for automation of hierarchy construction within cheminformatics and within logic-based ontology technology, going into detail in the latter case with respect to the expressive capabilities of the Web Ontology Language and recent extensions for modelling structured objects. Finally we discuss the relationships and interactions between cheminformatics approaches and logic-based approaches. 
Conclusion Systems that perform intelligent reasoning tasks on chemistry data require a diverse set of underlying computational utilities including algorithmic, statistical and logic-based tools. For the task of automatic structure-based classification of chemical entities, essential to managing the vast swathes of chemical data being brought online, systems which are capable of hybrid reasoning combining several different approaches are crucial. We provide a thorough review of the available tools and methodologies, and identify areas of open research. PMID:22480202

  4. BRIDGE BUILDER WILLIAM FLINN’S “CAMP & BRIDGE BUILDING OUTFIT”. INTERIOR ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    BRIDGE BUILDER WILLIAM FLINN’S “CAMP & BRIDGE BUILDING OUTFIT”. INTERIOR VIEW SHOWING LABORERS AT MEAL TIME. - Clear Fork of Brazos River Suspension Bridge, Spanning Clear Fork of Brazos River at County Route 179, Albany, Shackelford County, TX

  5. DETAIL OF CORNERSTONE, WHICH STATES "J.J. DANIELS, BUILDER 1861." NOTE ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    DETAIL OF CORNERSTONE, WHICH STATES "J.J. DANIELS, BUILDER 1861." NOTE ALSO IRON STRAP AT EAST CORNER OF ABUTMENT. - Jackson Covered Bridge, Spanning Sugar Creek, CR 775N (Changed from Spanning Sugar Creek), Bloomingdale, Parke County, IN

  6. ExAtlas: An interactive online tool for meta-analysis of gene expression data.

    PubMed

    Sharov, Alexei A; Schlessinger, David; Ko, Minoru S H

    2015-12-01

    We have developed ExAtlas, an online software tool for meta-analysis and visualization of gene expression data. In contrast to existing software tools, ExAtlas compares multi-component data sets and generates results for all combinations (e.g. all gene expression profiles versus all Gene Ontology annotations). ExAtlas handles both users' own data and data extracted semi-automatically from the public repository (GEO/NCBI database). ExAtlas provides a variety of tools for meta-analyses: (1) standard meta-analysis (fixed effects, random effects, z-score, and Fisher's methods); (2) analyses of global correlations between gene expression data sets; (3) gene set enrichment; (4) gene set overlap; (5) gene association by expression profile; (6) gene specificity; and (7) statistical analysis (ANOVA, pairwise comparison, and PCA). ExAtlas produces graphical outputs, including heatmaps, scatter-plots, bar-charts, and three-dimensional images. Some of the most widely used public data sets (e.g. GNF/BioGPS, Gene Ontology, KEGG, GAD phenotypes, BrainScan, ENCODE ChIP-seq, and protein-protein interaction) are pre-loaded and can be used for functional annotations.
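
Of the meta-analysis options listed, Fisher's method is the simplest to state: k independent p-values are combined via X = -2 Σ ln p_i, which follows a chi-square distribution with 2k degrees of freedom under the null. The sketch below (not ExAtlas code) uses the closed-form chi-square survival function available for even degrees of freedom, so it needs only the standard library.

```python
# Fisher's method for combining independent p-values (illustrative sketch,
# not ExAtlas source code). For df = 2k the chi-square survival function
# has the closed form exp(-x/2) * sum_{i<k} (x/2)^i / i!.
from math import exp, log

def fisher_combined(pvalues):
    k = len(pvalues)
    x = -2.0 * sum(log(p) for p in pvalues)
    half = x / 2.0
    term, total = 1.0, 0.0
    for i in range(k):          # accumulate sum_{i<k} half**i / i!
        total += term
        term *= half / (i + 1)
    return exp(-half) * total   # P(chi2_{2k} >= x)

print(fisher_combined([0.01, 0.02, 0.03]))  # small combined p, about 5e-4
```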

  7. Reasoning about Resources and Hierarchical Tasks Using OWL and SWRL

    NASA Astrophysics Data System (ADS)

    Elenius, Daniel; Martin, David; Ford, Reginald; Denker, Grit

    Military training and testing events are highly complex affairs, potentially involving dozens of legacy systems that need to interoperate in a meaningful way. There are superficial interoperability concerns (such as two systems not sharing the same messaging formats), but also substantive problems such as different systems not sharing the same understanding of the terrain, positions of entities, and so forth. We describe our approach to facilitating such events: describe the systems and requirements in great detail using ontologies, and use automated reasoning to automatically find and help resolve problems. The complexity of our problem took us to the limits of what one can do with OWL, and we needed to introduce some innovative techniques of using and extending it. We describe our novel ways of using SWRL and discuss its limitations as well as extensions to it that we found necessary or desirable. Another innovation is our representation of hierarchical tasks in OWL, and an engine that reasons about them. Our task ontology has proved to be a very flexible and expressive framework to describe requirements on resources and their capabilities in order to achieve some purpose.

  8. Word add-in for ontology recognition: semantic enrichment of scientific literature.

    PubMed

    Fink, J Lynn; Fernicola, Pablo; Chandran, Rahul; Parastatidis, Savas; Wade, Alex; Naim, Oscar; Quinn, Gregory B; Bourne, Philip E

    2010-02-24

    In the current era of scientific research, efficient communication of information is paramount. As such, the nature of scholarly and scientific communication is changing; cyberinfrastructure is now absolutely necessary and new media are allowing information and knowledge to be more interactive and immediate. One approach to making knowledge more accessible is the addition of machine-readable semantic data to scholarly articles. The Word add-in presented here will assist authors in this effort by automatically recognizing and highlighting words or phrases that are likely information-rich, allowing authors to associate semantic data with those words or phrases, and to embed that data in the document as XML. The add-in and source code are publicly available at http://www.codeplex.com/UCSDBioLit. The Word add-in for ontology term recognition makes it possible for an author to add semantic data to a document as it is being written and it encodes these data using XML tags that are effectively a standard in life sciences literature. Allowing authors to mark-up their own work will help increase the amount and quality of machine-readable literature metadata.
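
The core operation the add-in performs, recognizing ontology terms and embedding semantic data as XML around them, can be sketched in a few lines. The tag name, attribute, and the idea of a simple term-to-ID lookup are assumptions for illustration; the add-in's actual mark-up scheme is not detailed in the abstract.

```python
# Illustrative sketch (not the add-in's implementation) of wrapping
# recognized ontology terms in XML tags carrying semantic identifiers.
# The <term id="..."> mark-up and the lookup table are assumptions.
import re

ontology = {"apoptosis": "GO:0006915", "kinase": "GO:0016301"}

def tag_terms(text):
    pattern = re.compile(
        r"\b(" + "|".join(map(re.escape, ontology)) + r")\b", re.IGNORECASE
    )
    return pattern.sub(
        lambda m: f'<term id="{ontology[m.group(1).lower()]}">{m.group(1)}</term>',
        text,
    )

print(tag_terms("The kinase regulates apoptosis."))
```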

  9. Automated Gene Ontology annotation for anonymous sequence data.

    PubMed

    Hennig, Steffen; Groth, Detlef; Lehrach, Hans

    2003-07-01

    Gene Ontology (GO) is the most widely accepted attempt to construct a unified and structured vocabulary for the description of genes and their products in any organism. Annotation by GO terms is performed in most of the current genome projects, which besides generality has the advantage of being very convenient for computer based classification methods. However, direct use of GO in small sequencing projects is not easy, especially for species not commonly represented in public databases. We present a software package (GOblet), which performs annotation based on GO terms for anonymous cDNA or protein sequences. It uses the species independent GO structure and vocabulary together with a series of protein databases collected from various sites, to perform a detailed GO annotation by sequence similarity searches. The sensitivity and the reference protein sets can be selected by the user. GOblet runs automatically and is available as a public service on our web server. The paper also addresses the reliability of automated GO annotations by using a reference set of more than 6000 human proteins. The GOblet server is accessible at http://goblet.molgen.mpg.de.
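
The annotation-by-similarity strategy described above (transfer GO terms from reference proteins whose similarity to the query exceeds a user-selected threshold) can be sketched as follows. The protein IDs, scores, and annotations are invented; GOblet itself runs real sequence similarity searches against curated protein databases.

```python
# Sketch of similarity-based GO annotation as described in the abstract:
# transfer GO terms from reference proteins scoring above a user-selected
# similarity threshold. All IDs and scores here are invented examples.

reference_annotations = {
    "P12345": {"GO:0005524", "GO:0016301"},
    "Q99999": {"GO:0016301"},
    "O00000": {"GO:0005634"},
}

def annotate(similarity_hits, threshold=0.5):
    """similarity_hits: {protein_id: similarity score in [0, 1]}.
    Returns the union of GO terms from hits at or above the threshold."""
    terms = set()
    for pid, score in similarity_hits.items():
        if score >= threshold:
            terms |= reference_annotations.get(pid, set())
    return terms

hits = {"P12345": 0.9, "Q99999": 0.6, "O00000": 0.2}
print(sorted(annotate(hits)))  # terms from the two confident hits only
```

Raising the threshold trades sensitivity for reliability, which mirrors the user-selectable sensitivity mentioned in the abstract.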

  10. A semantic-web oriented representation of the clinical element model for secondary use of electronic health records data.

    PubMed

    Tao, Cui; Jiang, Guoqian; Oniki, Thomas A; Freimuth, Robert R; Zhu, Qian; Sharma, Deepak; Pathak, Jyotishman; Huff, Stanley M; Chute, Christopher G

    2013-05-01

    The clinical element model (CEM) is an information model designed for representing clinical information in electronic health records (EHR) systems across organizations. The current representation of CEMs does not support formal semantic definitions and therefore it is not possible to perform reasoning and consistency checking on derived models. This paper introduces our efforts to represent the CEM specification using the Web Ontology Language (OWL). The CEM-OWL representation connects the CEM content with the Semantic Web environment, which provides authoring, reasoning, and querying tools. This work may also facilitate the harmonization of the CEMs with domain knowledge represented in terminology models as well as other clinical information models such as the openEHR archetype model. We have created the CEM-OWL meta ontology based on the CEM specification. A convertor has been implemented in Java to automatically translate detailed CEMs from XML to OWL. A panel evaluation has been conducted, and the results show that the OWL modeling can faithfully represent the CEM specification and represent patient data.
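
The XML-to-OWL translation step can be illustrated with a toy converter. This is not the paper's Java convertor; the XML element names, the Turtle-style output, and the property naming convention are all assumptions made for the sketch.

```python
# Illustrative sketch (not the paper's Java convertor) of translating a
# detailed model from XML into OWL/Turtle-style triples. Element names,
# prefixes, and the :hasX property convention are assumptions.
import xml.etree.ElementTree as ET

cem_xml = """<cem name="SystolicBP">
  <data type="PhysicalQuantity"/>
  <qualifier name="BodyLocation"/>
</cem>"""

def to_owl(xml_text):
    root = ET.fromstring(xml_text)
    name = root.get("name")
    lines = [f":{name} rdf:type owl:Class ."]
    for child in root:
        # each child becomes a :hasData / :hasQualifier style triple
        value = child.get("type") or child.get("name")
        if value:
            lines.append(f":{name} :has{child.tag.capitalize()} :{value} .")
    return "\n".join(lines)

print(to_owl(cem_xml))
```

Expressing the model as OWL triples is what enables the reasoning and consistency checking the abstract says plain XML cannot support.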

  11. A semantic-web oriented representation of the clinical element model for secondary use of electronic health records data

    PubMed Central

    Tao, Cui; Jiang, Guoqian; Oniki, Thomas A; Freimuth, Robert R; Zhu, Qian; Sharma, Deepak; Pathak, Jyotishman; Huff, Stanley M; Chute, Christopher G

    2013-01-01

    The clinical element model (CEM) is an information model designed for representing clinical information in electronic health records (EHR) systems across organizations. The current representation of CEMs does not support formal semantic definitions and therefore it is not possible to perform reasoning and consistency checking on derived models. This paper introduces our efforts to represent the CEM specification using the Web Ontology Language (OWL). The CEM-OWL representation connects the CEM content with the Semantic Web environment, which provides authoring, reasoning, and querying tools. This work may also facilitate the harmonization of the CEMs with domain knowledge represented in terminology models as well as other clinical information models such as the openEHR archetype model. We have created the CEM-OWL meta ontology based on the CEM specification. A convertor has been implemented in Java to automatically translate detailed CEMs from XML to OWL. A panel evaluation has been conducted, and the results show that the OWL modeling can faithfully represent the CEM specification and represent patient data. PMID:23268487

  12. Gloss in Sanskrit Wordnet

    NASA Astrophysics Data System (ADS)

    Kulkarni, Malhar; Kulkarni, Irawati; Dangarikar, Chaitali; Bhattacharyya, Pushpak

    Glosses and examples are essential components of computational lexical databases like Wordnet. These two components of the lexical database can be used in building domain ontologies, semantic relations, phrase-structure rules, etc., and can help automatic or manual word-sense disambiguation (WSD) tasks. The present paper aims to highlight the importance of the gloss in the process of WSD, based on experiences from building Sanskrit Wordnet. The paper presents a survey of Sanskrit synonymy lexica, the use of Navya-Nyāya terminology in developing glosses, and the kinds of patterns that evolved and are useful for the computational purpose of WSD, with special reference to Sanskrit.

  13. Declarative Business Process Modelling and the Generation of ERP Systems

    NASA Astrophysics Data System (ADS)

    Schultz-Møller, Nicholas Poul; Hølmer, Christian; Hansen, Michael R.

    We present an approach to the construction of Enterprise Resource Planning (ERP) systems based on the Resources, Events and Agents (REA) ontology. This framework deals with processes involving exchange and flow of resources in a declarative, graphically based manner, describing what the major entities are rather than how they engage in computations. We show how to develop a domain-specific language on the basis of REA, and a tool which can automatically generate running web applications. A main contribution is a proof of concept showing that business-domain experts can generate their own applications without worrying about implementation details.

  14. Sealife: a semantic grid browser for the life sciences applied to the study of infectious diseases.

    PubMed

    Schroeder, Michael; Burger, Albert; Kostkova, Patty; Stevens, Robert; Habermann, Bianca; Dieng-Kuntz, Rose

    2006-01-01

    The objective of Sealife is the conception and realisation of a semantic Grid browser for the life sciences, which will link the existing Web to the currently emerging eScience infrastructure. The SeaLife Browser will allow users to automatically link a host of Web servers and Web/Grid services to the Web content they are visiting. This will be accomplished using eScience's growing number of Web/Grid services and its XML-based standards and ontologies. The browser will identify terms in the pages being browsed through the background knowledge held in ontologies. Through the use of semantic hyperlinks, which link identified ontology terms to servers and services, the SeaLife Browser will offer a new dimension of context-based information integration. In this paper, we give an overview of the different components of the browser and their interplay. The SeaLife Browser will be demonstrated within three application scenarios in evidence-based medicine, literature & patent mining, and molecular biology, all relating to the study of infectious diseases. The three applications vertically integrate the molecule/cell, tissue/organ and patient/population levels by covering the analysis of high-throughput screening data for endocytosis (the molecular entry pathway into the cell), the expression of proteins in the spatial context of tissues and organs, and a high-level library on infectious diseases designed for clinicians and their patients. For more information see http://www.biote.ctu-dresden.de/sealife.

  15. An ontological system for interoperable spatial generalisation in biodiversity monitoring

    NASA Astrophysics Data System (ADS)

    Nieland, Simon; Moran, Niklas; Kleinschmit, Birgit; Förster, Michael

    2015-11-01

Semantic heterogeneity remains a barrier to data comparability and standardisation of results in different fields of spatial research. Because of its thematic complexity, differing acquisition methods and national nomenclatures, interoperability of biodiversity monitoring information is especially difficult. Since data collection methods and interpretation manuals vary broadly, there is a need for automatised, objective methodologies for the generation of comparable data-sets. Ontology-based applications offer vast opportunities in data management and standardisation. This study examines two data-sets of protected heathlands in Germany and Belgium which are based on remote sensing image classification and semantically formalised in an OWL2 ontology. The proposed methodology uses semantic relations of the two data-sets, which are (semi-)automatically derived from remote sensing imagery, to generate objective and comparable information about the status of protected areas by utilising kernel-based spatial reclassification. This automatised method suggests a generalisation approach which is able to generate delineations of Special Areas of Conservation (SAC) of the European Natura 2000 biodiversity network. Furthermore, it is able to transfer generalisation rules between areas surveyed with varying acquisition methods in different countries by taking into account automated inference of the underlying semantics. The generalisation results were compared with the manual delineation of terrestrial monitoring. For the different habitats in the two sites an accuracy above 70% was detected. However, it has to be highlighted that the delineation of the ground-truth data carries a high degree of uncertainty, which is discussed in this study.
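Kernel-based spatial reclassification of the kind mentioned here can be illustrated with a simple majority filter over a grid of class labels. This is a generic sketch; the 3x3 window and the class labels are assumptions, not the study's actual parameters:

```python
from collections import Counter

def majority_filter(grid):
    """Reassign each cell to the majority class within its 3x3 neighbourhood."""
    rows, cols = len(grid), len(grid[0])
    out = [row[:] for row in grid]
    for r in range(rows):
        for c in range(cols):
            # Collect the cell's neighbourhood, clipped at the grid edges
            neigh = [grid[i][j]
                     for i in range(max(0, r - 1), min(rows, r + 2))
                     for j in range(max(0, c - 1), min(cols, c + 2))]
            out[r][c] = Counter(neigh).most_common(1)[0][0]
    return out

grid = [["heath", "heath", "grass"],
        ["heath", "grass", "grass"],
        ["heath", "heath", "grass"]]
print(majority_filter(grid))
```

Applied repeatedly, such a filter smooths isolated misclassified pixels into contiguous habitat delineations.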

  16. Polyol and Amino Acid-Based Biosurfactants, Builders, and Hydrogels

    USDA-ARS?s Scientific Manuscript database

    This chapter reviews different detergent materials which have been synthesized from natural agricultural commodities. Background information, which gives reasons why the use of biobased materials may be advantageous, is presented. Detergent builders from L-aspartic acid, citric acid and D-sorbitol...

  17. Learning to rank-based gene summary extraction.

    PubMed

    Shang, Yue; Hao, Huihui; Wu, Jiajin; Lin, Hongfei

    2014-01-01

In recent years, the biomedical literature has been growing rapidly. These articles provide a large amount of information about proteins, genes and their interactions. Reading such a huge amount of literature is a tedious task for researchers seeking knowledge about a gene. It is therefore important for biomedical researchers to gain a quick understanding of a query concept by integrating its relevant resources. In the task of gene summary generation, we treat automatic summarization as a ranking problem and apply learning to rank to solve it. This paper uses three features as the basis for sentence selection: gene ontology relevance, topic relevance and TextRank. From there, we obtain the feature weight vector using a learning-to-rank algorithm, predict the scores of candidate summary sentences, and select the top sentences to generate the summary. ROUGE (a toolkit for automatic summarization evaluation) was used to evaluate the summarization results, and the experiments showed that our method outperforms the baseline techniques. According to the experimental results, the combination of the three features improves summary performance. The application of learning to rank can facilitate the further expansion of features for measuring the significance of sentences.
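The sentence-scoring step described above can be sketched as a weighted combination of the three features followed by top-k selection. The feature values and weights below are illustrative; a real system would learn the weight vector with a learning-to-rank algorithm rather than fix it by hand:

```python
def rank_sentences(sentences, weights, top_k=2):
    """Score each (sentence, features) pair by a weighted sum and return the top-k sentences."""
    scored = [(sum(w * f for w, f in zip(weights, feats)), sent)
              for sent, feats in sentences]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [sent for _, sent in scored[:top_k]]

# Feature tuples: (gene ontology relevance, topic relevance, TextRank) -- illustrative values
candidates = [
    ("Gene X regulates apoptosis.", (0.9, 0.8, 0.7)),
    ("The study was funded in 2010.", (0.1, 0.2, 0.1)),
    ("X is overexpressed in tumours.", (0.7, 0.9, 0.6)),
]
weights = (0.5, 0.3, 0.2)  # would be learned from training summaries
print(rank_sentences(candidates, weights))
```

The selected sentences are then concatenated to form the gene summary and scored against a reference with ROUGE.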

  18. High Performance Builder Spotlight: Treasure Homes Inc.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    None

    2011-01-01

    Treasure Homes, Inc., achieved a HERS rating of 46 without PV on its prototype “Gem” home, located on the shores of Lake Michigan in northern Indiana, thanks in part to training received from a Building America partner, the National Association of Home Builders Research Center.

  19. 49 CFR 230.47 - Boiler number.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ..., DEPARTMENT OF TRANSPORTATION STEAM LOCOMOTIVE INSPECTION AND MAINTENANCE STANDARDS Boilers and Appurtenances Steam Gauges § 230.47 Boiler number. (a) Generally. The builder's number of the boiler, if known, shall be stamped on the steam dome or manhole flange. If the builder's number cannot be obtained, an...

  20. Conditioning of sewage sludge by Fenton's reagent combined with skeleton builders.

    PubMed

    Liu, Huan; Yang, Jiakuan; Shi, Yafei; Li, Ye; He, Shu; Yang, Changzhu; Yao, Hong

    2012-06-01

    Physical conditioners, often known as skeleton builders, are commonly used to improve the dewaterability of sewage sludge. This study evaluated a novel joint usage of Fenton's reagent and skeleton builders, referred to as the F-S inorganic composite conditioner, focusing on their efficacies and the optimization of the major operational parameters. The results demonstrate that the F-S composite conditioner for conditioning sewage sludge is a viable alternative to conventional organic polymers, especially when ordinary Portland cement (OPC) and lime are used as the skeleton builders. Experimental investigations confirmed that Fenton reaction required sufficient time (80 min in this study) to degrade organics in the sludge. The optimal condition of this process was at pH=5, Fe(2+)=40 mg g(-1) (dry solids), H(2)O(2)=32 mg g(-1), OPC=300 mg g(-1) and lime=400 mg g(-1), in which the specific resistance to filtration reduction efficiency of 95% was achieved. Copyright © 2012 Elsevier Ltd. All rights reserved.

  1. TrustBuilder2: A Reconfigurable Framework for Trust Negotiation

    NASA Astrophysics Data System (ADS)

    Lee, Adam J.; Winslett, Marianne; Perano, Kenneth J.

To date, research in trust negotiation has focused mainly on the theoretical aspects of the trust negotiation process and the development of proof-of-concept implementations. These theoretical works and proofs of concept have been quite successful from a research perspective, and thus researchers must now begin to address the systems constraints that act as barriers to the deployment of these systems. To this end, we present TrustBuilder2, a fully configurable and extensible framework for prototyping and evaluating trust negotiation systems. TrustBuilder2 leverages a plug-in-based architecture, an extensible data type hierarchy, and a flexible communication protocol to provide a framework within which numerous trust negotiation protocols and system configurations can be quantitatively analyzed. In this paper, we discuss the design and implementation of TrustBuilder2, study its performance, examine the costs associated with flexible authorization systems, and leverage this knowledge to identify potential topics for future research, as well as a novel method for attacking trust negotiation systems.
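A plug-in-based architecture of the kind described can be illustrated with a minimal strategy registry, where negotiation strategies are swapped without changing the callers. The class and strategy names are hypothetical, not TrustBuilder2's API:

```python
class NegotiationStrategy:
    """Base interface every negotiation plug-in must implement."""
    def negotiate(self, request):
        raise NotImplementedError

class EagerStrategy(NegotiationStrategy):
    """Toy strategy that discloses all policies up front."""
    def negotiate(self, request):
        return f"eager: disclose all policies for {request}"

# Registry mapping plug-in names to implementations
REGISTRY = {}

def register(name, cls):
    REGISTRY[name] = cls

register("eager", EagerStrategy)

# Callers select a strategy by name; new plug-ins need no caller changes
strategy = REGISTRY["eager"]()
print(strategy.negotiate("resource-42"))  # eager: disclose all policies for resource-42
```

This decoupling is what lets a framework quantitatively compare many protocol variants under one harness.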

  2. Technology Solutions Case Study: High-Performance Ducts in Hot-Dry Climates

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    M. Hoeschele, A. German, E. Weitzel, R. Chitwood

    2015-08-01

Ducts in conditioned space (DCS) represent a high priority measure for moving the next generation of new homes to the Zero Net Energy performance level. Various strategies exist for incorporating ducts within the conditioned thermal envelope. To support this activity, in 2013 the Pacific Gas & Electric Company initiated a project with Davis Energy Group (lead for the Building America team, Alliance for Residential Building Innovation) to solicit builder involvement in California to participate in field demonstrations of various DCS strategies. Builders were given incentives and design support in exchange for providing site access for construction observation, diagnostic testing, and builder survey feedback. Information from the project was designed to feed into California's 2016 Title 24 process, but also to serve as an initial mechanism to engage builders in more high performance construction strategies. This Building America project complemented information collected in the California project with BEopt simulations of DCS performance in hot/dry climate regions.

  3. Appliance energy efficiency in new home construction

    NASA Astrophysics Data System (ADS)

    1980-11-01

A survey of 224 builders was conducted, to which 160 builders responded. Each respondent completed between one and seven separate questionnaires. Each of the seven questionnaires was designed to collect information about one type of equipment or major appliance: heat pump; heating system; air conditioner; domestic water heater; dishwasher; range; and refrigerator. Analyses of the resulting 406 questionnaires indicated that builders were primarily responsible for brand selection. These choices were made largely without regard for the energy efficiency of the product. A similar apparent lack of consideration of energy efficiency during brand and model selection was found among home buyers and specialized subcontractors.

  4. Intelligent Support System of Steel Technical Preparation in an Arc Furnace: Functional Scheme of Interactive Builder of the Multi Objective Optimization Problem

    NASA Astrophysics Data System (ADS)

    Logunova, O. S.; Sibileva, N. S.

    2017-12-01

The purpose of the study is to increase the efficiency of the steelmaking process in a large-capacity arc furnace through the implementation of a new decision-making system for the composition of charge materials. The authors propose an interactive builder for formulating the optimization problem, taking into account the requirements of the customer, normative documents, and stocks of charge materials in the warehouse. To implement the interactive builder, sets of deterministic and stochastic model components are developed, as well as a list of preferences among criteria and constraints.

  5. Artemis and ACT: viewing, annotating and comparing sequences stored in a relational database.

    PubMed

    Carver, Tim; Berriman, Matthew; Tivey, Adrian; Patel, Chinmay; Böhme, Ulrike; Barrell, Barclay G; Parkhill, Julian; Rajandream, Marie-Adèle

    2008-12-01

    Artemis and Artemis Comparison Tool (ACT) have become mainstream tools for viewing and annotating sequence data, particularly for microbial genomes. Since its first release, Artemis has been continuously developed and supported with additional functionality for editing and analysing sequences based on feedback from an active user community of laboratory biologists and professional annotators. Nevertheless, its utility has been somewhat restricted by its limitation to reading and writing from flat files. Therefore, a new version of Artemis has been developed, which reads from and writes to a relational database schema, and allows users to annotate more complex, often large and fragmented, genome sequences. Artemis and ACT have now been extended to read and write directly to the Generic Model Organism Database (GMOD, http://www.gmod.org) Chado relational database schema. In addition, a Gene Builder tool has been developed to provide structured forms and tables to edit coordinates of gene models and edit functional annotation, based on standard ontologies, controlled vocabularies and free text. Artemis and ACT are freely available (under a GPL licence) for download (for MacOSX, UNIX and Windows) at the Wellcome Trust Sanger Institute web sites: http://www.sanger.ac.uk/Software/Artemis/ http://www.sanger.ac.uk/Software/ACT/

  6. Cafe Variome: general-purpose software for making genotype-phenotype data discoverable in restricted or open access contexts.

    PubMed

    Lancaster, Owen; Beck, Tim; Atlan, David; Swertz, Morris; Thangavelu, Dhiwagaran; Veal, Colin; Dalgleish, Raymond; Brookes, Anthony J

    2015-10-01

Biomedical data sharing is desirable, but problematic. Data "discovery" approaches, which establish the existence rather than the substance of data, precisely connect data owners with data seekers, and thereby promote data sharing. Cafe Variome (http://www.cafevariome.org) was therefore designed to provide a general-purpose, Web-based data discovery tool that can be quickly installed by any genotype-phenotype data owner, or network of data owners, to make safe or sensitive content appropriately discoverable. Data fields or content of any type can be accommodated, from simple ID and label fields through to extensive genotype and phenotype details based on ontologies. The system provides a "shop window" in front of data, with the main interfaces being a simple search box and a powerful "query-builder" that enables very elaborate queries to be formulated. After a successful search, counts of records are reported grouped by "openAccess" (data may be directly accessed), "linkedAccess" (a source link is provided), and "restrictedAccess" (facilitated data requests and subsequent provision of approved records). An administrator interface provides a wide range of options for system configuration, enabling highly customized single-site or federated networks to be established. Current uses include rare disease data discovery, patient matchmaking, and a Beacon Web service. © 2015 WILEY PERIODICALS, INC.

  7. Toward More Transparent and Reproducible Omics Studies Through a Common Metadata Checklist and Data Publications

    PubMed Central

    Özdemir, Vural; Martens, Lennart; Hancock, William; Anderson, Gordon; Anderson, Nathaniel; Aynacioglu, Sukru; Baranova, Ancha; Campagna, Shawn R.; Chen, Rui; Choiniere, John; Dearth, Stephen P.; Feng, Wu-Chun; Ferguson, Lynnette; Fox, Geoffrey; Frishman, Dmitrij; Grossman, Robert; Heath, Allison; Higdon, Roger; Hutz, Mara H.; Janko, Imre; Jiang, Lihua; Joshi, Sanjay; Kel, Alexander; Kemnitz, Joseph W.; Kohane, Isaac S.; Kolker, Natali; Lancet, Doron; Lee, Elaine; Li, Weizhong; Lisitsa, Andrey; Llerena, Adrian; MacNealy-Koch, Courtney; Marshall, Jean-Claude; Masuzzo, Paola; May, Amanda; Mias, George; Monroe, Matthew; Montague, Elizabeth; Mooney, Sean; Nesvizhskii, Alexey; Noronha, Santosh; Omenn, Gilbert; Rajasimha, Harsha; Ramamoorthy, Preveen; Sheehan, Jerry; Smarr, Larry; Smith, Charles V.; Smith, Todd; Snyder, Michael; Rapole, Srikanth; Srivastava, Sanjeeva; Stanberry, Larissa; Stewart, Elizabeth; Toppo, Stefano; Uetz, Peter; Verheggen, Kenneth; Voy, Brynn H.; Warnich, Louise; Wilhelm, Steven W.; Yandl, Gregory

    2014-01-01

    Abstract Biological processes are fundamentally driven by complex interactions between biomolecules. Integrated high-throughput omics studies enable multifaceted views of cells, organisms, or their communities. With the advent of new post-genomics technologies, omics studies are becoming increasingly prevalent; yet the full impact of these studies can only be realized through data harmonization, sharing, meta-analysis, and integrated research. These essential steps require consistent generation, capture, and distribution of metadata. To ensure transparency, facilitate data harmonization, and maximize reproducibility and usability of life sciences studies, we propose a simple common omics metadata checklist. The proposed checklist is built on the rich ontologies and standards already in use by the life sciences community. The checklist will serve as a common denominator to guide experimental design, capture important parameters, and be used as a standard format for stand-alone data publications. The omics metadata checklist and data publications will create efficient linkages between omics data and knowledge-based life sciences innovation and, importantly, allow for appropriate attribution to data generators and infrastructure science builders in the post-genomics era. We ask that the life sciences community test the proposed omics metadata checklist and data publications and provide feedback for their use and improvement. PMID:24456465
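A metadata checklist of the kind proposed can be enforced mechanically by validating each record against the required fields. The field names below are illustrative placeholders, not the published checklist:

```python
# Hypothetical checklist of required metadata fields for an omics data publication
CHECKLIST = ["study_id", "organism", "assay_type", "instrument", "data_url"]

def missing_fields(record):
    """Return the checklist fields that are absent or empty in the record."""
    return [field for field in CHECKLIST if not record.get(field)]

record = {"study_id": "S1", "organism": "H. sapiens", "assay_type": "proteomics"}
print(missing_fields(record))  # ['instrument', 'data_url']
```

Running such a check at submission time is one way a repository could guarantee the consistent metadata capture the authors call for.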

  8. NREL: News - 15 Home Builders Earn Energyvalue Awards

    Science.gov Websites

The EnergyValue Housing Awards honor builders who voluntarily integrate energy and resource efficiency into the design and construction of homes and demonstrate successful approaches to resource-efficient construction. The 2002 EVHA winners are: Tierra Concrete Homes, Sioux City, IA; Artistic Homes, Albuquerque, NM; Chuck Miller Construction, Hidden Springs, ID; Enviro

  9. New Whole-House Solutions Case Study: John Wesley Miller, Tucson, Arizona

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    none,

This builder worked with the National Association of Home Builders Research Center to build two net-zero energy homes with foam-sheathed masonry walls, low-E windows, 2.9 ACH50 air sealing, transfer grilles, ducts in an insulated attic, PV, and solar water heating.

  10. Building Green the Right Way

    ERIC Educational Resources Information Center

    Lewis, C. Deana

    2009-01-01

    As the workforce development arm of the National Association of Home Builders (NAHB), and cluster leader for the Architecture and Construction career cluster, Home Builders Institute (HBI) has a vested interest in keeping America's elementary, middle and high school students excited and knowledgeable about what's new in residential construction.…

  11. Operationalizing Special Operations Aviation in Indonesia

    DTIC Science & Technology

    2006-12-15

special operations forces. Builder: Lockheed. Power Plant: Four Allison T56-A-15 turboprop engines. Thrust: 4,910 shaft horsepower each engine. Length: 98 feet, 9 inches (30.09

  12. Analysis of Navy Flight Scheduling Methods Using FlyAwake

    DTIC Science & Technology

    2009-09-01

Figure 4. FlyAwake Schedule Builder Screenshot. Figure 5. FlyAwake Work Schedule Builder Screenshot. Figure 6. FlyAwake Graphical Output Screenshot. ... disqualifies crewmembers from participating in the following day's flight operations. These rules are subject to operational requirements and deviation

  13. Building America

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brad Oberg

    2010-12-31

Builders generally use a 'spec and purchase' business management system (BMS) when implementing energy efficiency. A BMS is the overall operational and organizational systems and strategies that a builder uses to set up and run its company. This type of BMS treats building performance as a simple technology swap (e.g. a tank water heater to a tankless water heater) and typically compartmentalizes energy efficiency within one or two groups in the organization (e.g. purchasing and construction). While certain tools, such as details, checklists, and scopes of work, can assist builders in managing the quality of the construction of higher performance homes, they do nothing to address the underlying operational strategies and issues related to change management that builders face when they make high performance homes a core part of their mission. To achieve the systems integration necessary for attaining 40% + levels of energy efficiency, while capturing the cost tradeoffs, builders must use a 'systems approach' BMS, rather than a 'spec and purchase' BMS. The following attributes are inherent in a systems approach BMS; they are also generally seen in quality management systems (QMS), such as the National Housing Quality Certification program: Cultural and corporate alignment, Clear intent for quality and performance, Increased collaboration across internal and external teams, Better communication practices and systems, Disciplined approach to quality control, Measurement and verification of performance, Continuous feedback and improvement, and Whole house integrated design and specification.

  14. Toward automated biochemotype annotation for large compound libraries.

    PubMed

    Chen, Xian; Liang, Yizeng; Xu, Jun

    2006-08-01

Combinatorial chemistry allows scientists to probe large synthetically accessible chemical space. However, identifying the sub-space that is selectively associated with a biological target of interest is crucial to drug discovery and the life sciences. This paper describes a process to automatically annotate the biochemotypes of compounds in a library and thus to identify bioactivity-related chemotypes (biochemotypes) from a large library of compounds. The process consists of two steps: (1) predicting all possible bioactivities for each compound in a library, and (2) deriving possible biochemotypes based on those predictions. The Prediction of Activity Spectra for Substances (PASS) program was used in the first step. In the second step, structural similarity and scaffold-hopping technologies are employed to derive biochemotypes from the bioactivity predictions and the corresponding annotated biochemotypes from the MDL Drug Data Report (MDDR) database. A library of about one million (982,889) commercially available compounds (CACL) was tested using this process. This paper demonstrates the feasibility of automatically annotating biochemotypes for large libraries of compounds. Nevertheless, some issues need to be considered in order to improve the process. First, the prediction accuracy of the PASS program has no significant correlation with the number of compounds in a training set. Larger training sets do not necessarily increase the maximal error of prediction (MEP), nor do they increase the hit structural diversity. Smaller training sets do not necessarily decrease MEP, nor do they decrease the hit structural diversity. Second, the success of systematic bioactivity prediction relies on modeling, training data, and the definition of bioactivities (biochemotype ontology). Unfortunately, the biochemotype ontology was not well developed in the PASS program. Consequently, "ill-defined" bioactivities can reduce the quality of predictions. This paper suggests ways in which the systematic bioactivity prediction program should be improved.

  15. An ontology-based approach to patient follow-up assessment for continuous and personalized chronic disease management.

    PubMed

    Zhang, Yi-Fan; Gou, Ling; Zhou, Tian-Shu; Lin, De-Nan; Zheng, Jing; Li, Ye; Li, Jing-Song

    2017-08-01

Chronic diseases are complex and persistent clinical conditions that require close collaboration between patients and health care providers in the implementation of long-term and integrated care programs. However, current solutions focus partially on intensive interventions at hospitals rather than on continuous and personalized chronic disease management. This study aims to fill this gap by providing computerized clinical decision support during follow-up assessments of chronically ill patients at home. We proposed an ontology-based framework to integrate patient data, medical domain knowledge, and patient assessment criteria for chronic disease patient follow-up assessments. A clinical decision support system was developed to implement this framework for automatic selection and adaptation of standard assessment protocols to suit patients' personal conditions. We evaluated our method in a case study of type 2 diabetic patient follow-up assessments. The proposed framework was instantiated using real data from 115,477 follow-up assessment records of 36,162 type 2 diabetic patients. Standard evaluation criteria were automatically selected and adapted to the particularities of each patient. Assessment results were generated as a general classification of the patient's overall condition and a detailed score for each criterion, providing important indicators to the case manager about possibly inappropriate judgments, in addition to raising patient awareness of their disease control outcomes. Using historical data as the gold standard, our system achieved an accuracy of 99.93% and a completeness of 95.00%. This study contributes to improving the accessibility, efficiency and quality of current patient follow-up services. It also provides a generic approach to knowledge sharing and reuse for patient-centered chronic disease management. Copyright © 2017 Elsevier Inc. All rights reserved.
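The protocol selection-and-adaptation step can be sketched as rule-based adjustment of assessment thresholds to patient conditions. The criteria, threshold values, and adaptation rule below are hypothetical illustrations of the approach, not the published system:

```python
# Hypothetical standard follow-up protocol: criterion -> upper limit
STANDARD_PROTOCOL = {"fasting_glucose_max": 7.0,  # mmol/L (illustrative)
                     "hba1c_max": 7.0}            # % (illustrative)

def adapt_protocol(protocol, patient):
    """Adapt standard thresholds to the patient's personal conditions."""
    adapted = dict(protocol)
    if patient.get("age", 0) >= 75:
        # Illustrative rule: relax the glycaemic target for elderly patients
        adapted["hba1c_max"] = 8.0
    return adapted

def assess(patient, measurements):
    """Score each criterion against the patient-adapted protocol."""
    protocol = adapt_protocol(STANDARD_PROTOCOL, patient)
    return {
        "fasting_glucose": measurements["fasting_glucose"] <= protocol["fasting_glucose_max"],
        "hba1c": measurements["hba1c"] <= protocol["hba1c_max"],
    }

result = assess({"age": 78}, {"fasting_glucose": 6.5, "hba1c": 7.6})
print(result)  # {'fasting_glucose': True, 'hba1c': True}
```

In an ontology-based system, such rules would be derived from the formalized domain knowledge rather than hard-coded.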

  16. SDMProjectBuilder: SWAT Setup for Nutrient Fate and Transport

    EPA Science Inventory

    This tutorial reviews some of the screens, icons, and basic functions of the SDMProjectBuilder (SDMPB) and explains how one uses SDMPB output to populate the Soil and Water Assessment Tool (SWAT) input files for nutrient fate and transport modeling in the Salt River Basin. It dem...

  17. Antiterrorism Measures For Historic Properties

    DTIC Science & Technology

    2006-09-01

Figure 17. Seismic application of a steel jacket on an existing concrete column (Morley Builders 1997). Columns - Reinforced ... from a previously unreinforced structure, so future irreversibility of the technique need not disqualify it from consideration by project teams.

  18. Counselors as Esteem Builders.

    ERIC Educational Resources Information Center

    Santa Rita, Emilio, Jr.

    This paper contains an instrument for evaluating college counselors as esteem-builders, using Borba's Analysis of Self-Esteem. The instrument is divided into five components, each of which should be fostered in a student by the counselor: (1) Security, a feeling of strong assuredness; (2) Selfhood, a feeling of individuality; (3) Affiliation, a…

  19. 49 CFR 230.47 - Boiler number.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 49 Transportation 4 2014-10-01 2014-10-01 false Boiler number. 230.47 Section 230.47... Steam Gauges § 230.47 Boiler number. (a) Generally. The builder's number of the boiler, if known, shall be stamped on the steam dome or manhole flange. If the builder's number cannot be obtained, an...

  20. 49 CFR 230.47 - Boiler number.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 49 Transportation 4 2012-10-01 2012-10-01 false Boiler number. 230.47 Section 230.47... Steam Gauges § 230.47 Boiler number. (a) Generally. The builder's number of the boiler, if known, shall be stamped on the steam dome or manhole flange. If the builder's number cannot be obtained, an...

  1. 49 CFR 230.47 - Boiler number.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 49 Transportation 4 2013-10-01 2013-10-01 false Boiler number. 230.47 Section 230.47... Steam Gauges § 230.47 Boiler number. (a) Generally. The builder's number of the boiler, if known, shall be stamped on the steam dome or manhole flange. If the builder's number cannot be obtained, an...

  2. 49 CFR 230.47 - Boiler number.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 49 Transportation 4 2011-10-01 2011-10-01 false Boiler number. 230.47 Section 230.47... Steam Gauges § 230.47 Boiler number. (a) Generally. The builder's number of the boiler, if known, shall be stamped on the steam dome or manhole flange. If the builder's number cannot be obtained, an...

  3. 13 CFR 120.392 - Who may apply?

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 13 Business Credit and Assistance 1 2012-01-01 2012-01-01 false Who may apply? 120.392 Section 120.392 Business Credit and Assistance SMALL BUSINESS ADMINISTRATION BUSINESS LOANS Special Purpose Loans Builders Loan Program § 120.392 Who may apply? A construction contractor or home-builder with a past...

  4. 13 CFR 120.392 - Who may apply?

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 13 Business Credit and Assistance 1 2013-01-01 2013-01-01 false Who may apply? 120.392 Section 120.392 Business Credit and Assistance SMALL BUSINESS ADMINISTRATION BUSINESS LOANS Special Purpose Loans Builders Loan Program § 120.392 Who may apply? A construction contractor or home-builder with a past...

  5. 13 CFR 120.392 - Who may apply?

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 13 Business Credit and Assistance 1 2014-01-01 2014-01-01 false Who may apply? 120.392 Section 120.392 Business Credit and Assistance SMALL BUSINESS ADMINISTRATION BUSINESS LOANS Special Purpose Loans Builders Loan Program § 120.392 Who may apply? A construction contractor or home-builder with a past...

  6. 13 CFR 120.392 - Who may apply?

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 13 Business Credit and Assistance 1 2011-01-01 2011-01-01 false Who may apply? 120.392 Section 120.392 Business Credit and Assistance SMALL BUSINESS ADMINISTRATION BUSINESS LOANS Special Purpose Loans Builders Loan Program § 120.392 Who may apply? A construction contractor or home-builder with a past...

  7. Semantic integration of data on transcriptional regulation

    PubMed Central

    Baitaluk, Michael; Ponomarenko, Julia

    2010-01-01

    Motivation: Experimental and predicted data concerning gene transcriptional regulation are distributed among many heterogeneous sources. However, there are no resources to integrate these data automatically or to provide a ‘one-stop shop’ experience for users seeking information essential for deciphering and modeling gene regulatory networks. Results: IntegromeDB, a semantic graph-based ‘deep-web’ data integration system that automatically captures, integrates and manages publicly available data concerning transcriptional regulation, as well as other relevant biological information, is proposed in this article. The problems associated with data integration are addressed by ontology-driven data mapping, multiple data annotation and heterogeneous data querying, also enabling integration of the user's data. IntegromeDB integrates over 100 experimental and computational data sources relating to genomics, transcriptomics, genetics, and functional and interaction data concerning gene transcriptional regulation in eukaryotes and prokaryotes. Availability: IntegromeDB is accessible through the integrated research environment BiologicalNetworks at http://www.BiologicalNetworks.org Contact: baitaluk@sdsc.edu Supplementary information: Supplementary data are available at Bioinformatics online. PMID:20427517

  8. Semantic integration of data on transcriptional regulation.

    PubMed

    Baitaluk, Michael; Ponomarenko, Julia

    2010-07-01

    Experimental and predicted data concerning gene transcriptional regulation are distributed among many heterogeneous sources. However, there are no resources to integrate these data automatically or to provide a 'one-stop shop' experience for users seeking information essential for deciphering and modeling gene regulatory networks. IntegromeDB, a semantic graph-based 'deep-web' data integration system that automatically captures, integrates and manages publicly available data concerning transcriptional regulation, as well as other relevant biological information, is proposed in this article. The problems associated with data integration are addressed by ontology-driven data mapping, multiple data annotation and heterogeneous data querying, also enabling integration of the user's data. IntegromeDB integrates over 100 experimental and computational data sources relating to genomics, transcriptomics, genetics, and functional and interaction data concerning gene transcriptional regulation in eukaryotes and prokaryotes. IntegromeDB is accessible through the integrated research environment BiologicalNetworks at http://www.BiologicalNetworks.org baitaluk@sdsc.edu Supplementary data are available at Bioinformatics online.

  9. BiobankUniverse: automatic matchmaking between datasets for biobank data discovery and integration

    PubMed Central

    Pang, Chao; Kelpin, Fleur; van Enckevort, David; Eklund, Niina; Silander, Kaisa; Hendriksen, Dennis; de Haan, Mark; Jetten, Jonathan; de Boer, Tommy; Charbon, Bart; Holub, Petr; Hillege, Hans; Swertz, Morris A

    2017-01-01

    Motivation Biobanks are indispensable for large-scale genetic/epidemiological studies, yet it remains difficult for researchers to determine which biobanks contain data matching their research questions. Results To overcome this, we developed a new matching algorithm that identifies pairs of related data elements between biobanks and research variables with high precision and recall. It integrates lexical comparison, Unified Medical Language System ontology tagging and semantic query expansion. The result is BiobankUniverse, a fast matchmaking service for biobanks and researchers. Biobankers upload their data elements and researchers their desired study variables; BiobankUniverse then automatically shortlists matching attributes between them. Users can quickly explore matching potential and search for biobanks/data elements matching their research. They can also curate matches and define personalized data-universes. Availability and implementation BiobankUniverse is available at http://biobankuniverse.com or can be downloaded as part of the open source MOLGENIS suite at http://github.com/molgenis/molgenis. Contact m.a.swertz@rug.nl Supplementary information Supplementary data are available at Bioinformatics online. PMID:29036577
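    A toy sketch of the kind of hybrid matching the abstract describes, combining lexical comparison with ontology-tag overlap; the 50/50 weighting and the UMLS identifier are assumptions for illustration, not BiobankUniverse's actual algorithm:

```python
# Toy matcher: combines lexical similarity with ontology-tag overlap.
# Weights and identifiers are illustrative assumptions.
from difflib import SequenceMatcher

def match_score(var_a, var_b):
    """Score a candidate pair of data elements in [0, 1]."""
    lexical = SequenceMatcher(None, var_a["label"].lower(),
                              var_b["label"].lower()).ratio()
    tags_a, tags_b = set(var_a["tags"]), set(var_b["tags"])
    overlap = len(tags_a & tags_b) / len(tags_a | tags_b) if (tags_a | tags_b) else 0.0
    return 0.5 * lexical + 0.5 * overlap

research_var = {"label": "systolic blood pressure", "tags": ["C0871470"]}  # UMLS CUI, illustrative
biobank_item = {"label": "Systolic BP (mmHg)", "tags": ["C0871470"]}
score = match_score(research_var, biobank_item)
```

    Combining both signals lets abbreviated labels ("Systolic BP") still match their research variables when they share an ontology tag.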

  10. Industry Talks the Talk and Walks the Walk

    ERIC Educational Resources Information Center

    Lewis, C. Deanna

    2008-01-01

    Home Builders Institute (HBI), the workforce development arm of the National Association of Home Builders (NAHB), is dedicated to the advancement and enrichment of education and training programs serving the needs of the building industry. For more than 30 years, HBI has trained skilled workers in residential construction, promoted the industry as…

  11. The Columbia-Willamette Skill Builders Consortium. Final Performance Report.

    ERIC Educational Resources Information Center

    Portland Community Coll., OR.

    The Columbia-Willamette Skill Builders Consortium was formed in early 1988 in response to a growing awareness of the need for improved workplace literacy training and coordinated service delivery in Northwest Oregon. In June 1990, the consortium received a National Workplace Literacy Program grant to develop and demonstrate such training. The…

  12. 13 CFR 120.391 - What is the Builders Loan Program?

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 13 Business Credit and Assistance 1 2010-01-01 2010-01-01 false What is the Builders Loan Program? 120.391 Section 120.391 Business Credit and Assistance SMALL BUSINESS ADMINISTRATION BUSINESS LOANS...)(9) of the Act, SBA may make or guarantee loans to finance small general contractors to construct or...

  13. 13 CFR 120.391 - What is the Builders Loan Program?

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 13 Business Credit and Assistance 1 2014-01-01 2014-01-01 false What is the Builders Loan Program? 120.391 Section 120.391 Business Credit and Assistance SMALL BUSINESS ADMINISTRATION BUSINESS LOANS...)(9) of the Act, SBA may make or guarantee loans to finance small general contractors to construct or...

  14. 13 CFR 120.391 - What is the Builders Loan Program?

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 13 Business Credit and Assistance 1 2012-01-01 2012-01-01 false What is the Builders Loan Program? 120.391 Section 120.391 Business Credit and Assistance SMALL BUSINESS ADMINISTRATION BUSINESS LOANS...)(9) of the Act, SBA may make or guarantee loans to finance small general contractors to construct or...

  15. 13 CFR 120.391 - What is the Builders Loan Program?

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 13 Business Credit and Assistance 1 2013-01-01 2013-01-01 false What is the Builders Loan Program? 120.391 Section 120.391 Business Credit and Assistance SMALL BUSINESS ADMINISTRATION BUSINESS LOANS...)(9) of the Act, SBA may make or guarantee loans to finance small general contractors to construct or...

  16. DOE Office of Scientific and Technical Information (OSTI.GOV)

    none,

    The home that won a Production Builder award in the 2014 Housing Innovation Awards serves as a model for this builder, showcasing high-tech features including an electric car charging station; a compressed natural gas (CNG) car fueling station; a greywater recycling system that filters shower, sink, and clothes washer water for yard irrigation; smart appliances; and an electronic energy management system.

  17. Captivate MenuBuilder: Creating an Online Tutorial for Teaching Software

    ERIC Educational Resources Information Center

    Yelinek, Kathryn; Tarnowski, Lynn; Hannon, Patricia; Oliver, Susan

    2008-01-01

    In this article, the authors, students in an instructional technology graduate course, describe a process to create an online tutorial for teaching software. They created the tutorial for a cyber school's use. Five tutorial modules were linked together through one menu screen using the MenuBuilder feature in the Adobe Captivate program. The…

  18. E-Classical Fairy Tales: Multimedia Builder as a Tool

    ERIC Educational Resources Information Center

    Eteokleous, Nikleia; Ktoridou, Despo; Tsolakidis, Symeon

    2011-01-01

    The study examines pre-service teachers' experiences in delivering a traditional-classical fairy tale using the Multimedia Builder software, in other words an e-fairy tale. A case study approach was employed, collecting qualitative data through classroom observations and focus groups. The results focus on pre-service teachers' reactions, opinions,…

  19. 46 CFR 69.105 - Application for measurement services.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ...) Type of vessel. (b) Vessel's name and official number (if assigned). (c) Builder's name and the vessel hull number assigned by the builder. (d) Place and year built. (e) Date keel was laid. (f) Overall length, breadth, and depth of vessel. (g) Lines plan. (h) Booklet of offsets. (i) Capacity plans for...

  20. 46 CFR 69.105 - Application for measurement services.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ...) Type of vessel. (b) Vessel's name and official number (if assigned). (c) Builder's name and the vessel hull number assigned by the builder. (d) Place and year built. (e) Date keel was laid. (f) Overall length, breadth, and depth of vessel. (g) Lines plan. (h) Booklet of offsets. (i) Capacity plans for...

  1. 46 CFR 69.105 - Application for measurement services.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ...) Type of vessel. (b) Vessel's name and official number (if assigned). (c) Builder's name and the vessel hull number assigned by the builder. (d) Place and year built. (e) Date keel was laid. (f) Overall length, breadth, and depth of vessel. (g) Lines plan. (h) Booklet of offsets. (i) Capacity plans for...

  2. 46 CFR 69.105 - Application for measurement services.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ...) Type of vessel. (b) Vessel's name and official number (if assigned). (c) Builder's name and the vessel hull number assigned by the builder. (d) Place and year built. (e) Date keel was laid. (f) Overall length, breadth, and depth of vessel. (g) Lines plan. (h) Booklet of offsets. (i) Capacity plans for...

  3. 46 CFR 69.105 - Application for measurement services.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ...) Type of vessel. (b) Vessel's name and official number (if assigned). (c) Builder's name and the vessel hull number assigned by the builder. (d) Place and year built. (e) Date keel was laid. (f) Overall length, breadth, and depth of vessel. (g) Lines plan. (h) Booklet of offsets. (i) Capacity plans for...

  4. 13 CFR 120.391 - What is the Builders Loan Program?

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 13 Business Credit and Assistance 1 2011-01-01 2011-01-01 false What is the Builders Loan Program? 120.391 Section 120.391 Business Credit and Assistance SMALL BUSINESS ADMINISTRATION BUSINESS LOANS...)(9) of the Act, SBA may make or guarantee loans to finance small general contractors to construct or...

  5. Building America Best Practices Series Volume 11. Builders Challenge Guide to 40% Whole-House Energy Savings in the Marine Climate

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baechler, Michael C.; Gilbride, Theresa L.; Hefty, Marye G.

    2010-09-01

    This best practices guide is the eleventh in a series of guides for builders produced by the U.S. Department of Energy’s Building America Program. This guide book is a resource to help builders design and construct homes that are among the most energy-efficient available, while addressing issues such as building durability, indoor air quality, and occupant health, safety, and comfort. With the measures described in this guide, builders in the marine climate (portions of Washington, Oregon, and California) can achieve homes that have whole house energy savings of 40% over the Building America benchmark (a home built to mid-1990s building practices, roughly equivalent to the 1993 Model Energy Code) with no added overall costs for consumers. These best practices are based on the results of research and demonstration projects conducted by Building America’s research teams. The guide includes information for managers, designers, marketers, site supervisors, and subcontractors, as well as case studies of builders who are successfully building homes that cut energy use by 40% in the marine climate. This document is available on the web at www.buildingamerica.gov. This report was originally cleared 06-29-2010. This version is Rev 1, cleared in Nov 2010. The only change is that the reference to the Energy Star Windows criteria shown on pg 8.25 was updated to match the criteria - Version 5.0, 04/07/2009, effective 01/04/2010.

  6. Heating, Ventilation, and Air Conditioning Design Strategy for a Hot-Humid Production Builder

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kerrigan, P.

    2014-03-01

    BSC worked directly with the David Weekley Homes - Houston division to redesign three floor plans in order to locate the HVAC system in conditioned space. The purpose of this project is to develop a cost effective design for moving the HVAC system into conditioned space. In addition, BSC conducted energy analysis to calculate the most economical strategy for increasing the energy performance of future production houses. This is in preparation for the upcoming code changes in 2015. The builder wishes to develop an upgrade package that will allow for a seamless transition to the new code mandate. The following research questions were addressed by this research project: 1. What is the most cost effective, best performing and most easily replicable method of locating ducts inside conditioned space for a hot-humid production home builder that constructs one and two story single family detached residences? 2. What is a cost effective and practical method of achieving 50% source energy savings vs. the 2006 International Energy Conservation Code for a hot-humid production builder? 3. How accurate are the pre-construction whole house cost estimates compared to confirmed post construction actual cost? BSC and the builder developed a duct design strategy that employs a system of dropped ceilings and attic coffers for moving the ductwork from the vented attic to conditioned space. The furnace has been moved to either a mechanical closet in the conditioned living space or a coffered space in the attic.

  7. Carbonate Production by Benthic Communities on Shallow Coralgal Reefs of Abrolhos Bank, Brazil.

    PubMed

    Reis, Vanessa Moura Dos; Karez, Cláudia Santiago; Mariath, Rodrigo; de Moraes, Fernando Coreixas; de Carvalho, Rodrigo Tomazetto; Brasileiro, Poliana Silva; Bahia, Ricardo da Gama; Lotufo, Tito Monteiro da Cruz; Ramalho, Laís Vieira; de Moura, Rodrigo Leão; Francini-Filho, Ronaldo Bastos; Pereira-Filho, Guilherme Henrique; Thompson, Fabiano Lopes; Bastos, Alex Cardoso; Salgado, Leonardo Tavares; Amado-Filho, Gilberto Menezes

    2016-01-01

    The abundance of reef builders, non-builders and the calcium carbonate produced by communities established in Calcification Accretion Units (CAUs) were determined in three Abrolhos Bank shallow reefs during the period from 2012 to 2014. In addition, the seawater temperature, the irradiance, and the amount and composition of the sediments were determined. The inner and outer reef arcs were compared. CAUs located on the inner reef shelf were under the influence of terrigenous sediments. On the outer reefs, the sediments were composed primarily of marine biogenic carbonates. The mean carbonate production in shallow reefs of Abrolhos was 579 ± 98 g m-2 y-1. The builder community was dominated by crustose coralline algae, while the non-builder community was dominated by turf. A marine heat wave was detected during the summer of 2013-2014, and the number of consecutive days with a temperature above or below the summer mean was positively correlated with the turf cover increase. The mean carbonate production of the shallow reefs of Abrolhos Bank was greater than the estimated carbonate production measured for artificial structures on several other shallow reefs of the world. The calcimass was higher than the non-calcareous mass, suggesting that the Abrolhos reefs are still in a positive carbonate production balance. Given that marine heat waves produce an increase of turf cover on the shallow reefs of the Abrolhos, a decrease in the cover represented by reef builders and shifting carbonate production are expected in the near future.

  8. HVAC Design Strategy for a Hot-Humid Production Builder, Houston, Texas (Fact Sheet)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    None, None

    BSC worked directly with the David Weekley Homes - Houston division to redesign three floor plans in order to locate the HVAC system in conditioned space. The purpose of this project is to develop a cost effective design for moving the HVAC system into conditioned space. In addition, BSC conducted energy analysis to calculate the most economical strategy for increasing the energy performance of future production houses. This is in preparation for the upcoming code changes in 2015. The builder wishes to develop an upgrade package that will allow for a seamless transition to the new code mandate. The following research questions were addressed by this research project: 1. What is the most cost effective, best performing and most easily replicable method of locating ducts inside conditioned space for a hot-humid production home builder that constructs one and two story single family detached residences? 2. What is a cost effective and practical method of achieving 50% source energy savings vs. the 2006 International Energy Conservation Code for a hot-humid production builder? 3. How accurate are the pre-construction whole house cost estimates compared to confirmed post construction actual cost? BSC and the builder developed a duct design strategy that employs a system of dropped ceilings and attic coffers for moving the ductwork from the vented attic to conditioned space. The furnace has been moved to either a mechanical closet in the conditioned living space or a coffered space in the attic.

  9. Carbonate Production by Benthic Communities on Shallow Coralgal Reefs of Abrolhos Bank, Brazil

    PubMed Central

    dos Reis, Vanessa Moura; Karez, Cláudia Santiago; Mariath, Rodrigo; de Moraes, Fernando Coreixas; de Carvalho, Rodrigo Tomazetto; Brasileiro, Poliana Silva; Bahia, Ricardo da Gama; Lotufo, Tito Monteiro da Cruz; Ramalho, Laís Vieira; de Moura, Rodrigo Leão; Francini-Filho, Ronaldo Bastos; Pereira-Filho, Guilherme Henrique; Thompson, Fabiano Lopes; Bastos, Alex Cardoso; Salgado, Leonardo Tavares; Amado-Filho, Gilberto Menezes

    2016-01-01

    The abundance of reef builders, non-builders and the calcium carbonate produced by communities established in Calcification Accretion Units (CAUs) were determined in three Abrolhos Bank shallow reefs during the period from 2012 to 2014. In addition, the seawater temperature, the irradiance, and the amount and composition of the sediments were determined. The inner and outer reef arcs were compared. CAUs located on the inner reef shelf were under the influence of terrigenous sediments. On the outer reefs, the sediments were composed primarily of marine biogenic carbonates. The mean carbonate production in shallow reefs of Abrolhos was 579 ± 98 g m-2 y-1. The builder community was dominated by crustose coralline algae, while the non-builder community was dominated by turf. A marine heat wave was detected during the summer of 2013–2014, and the number of consecutive days with a temperature above or below the summer mean was positively correlated with the turf cover increase. The mean carbonate production of the shallow reefs of Abrolhos Bank was greater than the estimated carbonate production measured for artificial structures on several other shallow reefs of the world. The calcimass was higher than the non-calcareous mass, suggesting that the Abrolhos reefs are still in a positive carbonate production balance. Given that marine heat waves produce an increase of turf cover on the shallow reefs of the Abrolhos, a decrease in the cover represented by reef builders and shifting carbonate production are expected in the near future. PMID:27119151

  10. A Software Engineering Approach based on WebML and BPMN to the Mediation Scenario of the SWS Challenge

    NASA Astrophysics Data System (ADS)

    Brambilla, Marco; Ceri, Stefano; Valle, Emanuele Della; Facca, Federico M.; Tziviskou, Christina

    Although Semantic Web Services are expected to produce a revolution in the development of Web-based systems, very few enterprise-wide design experiences are available; one of the main reasons is the lack of sound Software Engineering methods and tools for the deployment of Semantic Web applications. In this chapter, we present an approach to software development for the Semantic Web based on classical Software Engineering methods (i.e., formal business process development, computer-aided and component-based software design, and automatic code generation) and on semantic methods and tools (i.e., ontology engineering, semantic service annotation and discovery).

  11. Modeling and formal representation of geospatial knowledge for the Geospatial Semantic Web

    NASA Astrophysics Data System (ADS)

    Huang, Hong; Gong, Jianya

    2008-12-01

    GML can only achieve geospatial interoperation at the syntactic level. In most cases, however, differences in spatial cognition must be resolved first, so ontology was introduced to describe geospatial information and services. But it is difficult and inappropriate to expect users to find, match and compose services themselves, especially when complicated business logics are involved. Currently, with the gradual introduction of Semantic Web technology (e.g., OWL, SWRL), the focus of geospatial information interoperation has shifted from the syntactic level to the semantic and even the automatic, intelligent level. In this way, the Geospatial Semantic Web (GSM) can be put forward as an augmentation to the Semantic Web that additionally includes geospatial abstractions as well as related reasoning, representation and query mechanisms. To advance the implementation of GSM, we first attempt to construct a mechanism for the modeling and formal representation of geospatial knowledge, which are also the two most foundational phases in knowledge engineering (KE). Our attitude in this paper is quite pragmatic: we argue that geospatial context is a formal model of the discriminating environmental characteristics of geospatial knowledge, and that the derivation, understanding and use of geospatial knowledge are situated in geospatial context. Therefore, first, we put forward a primitive hierarchy of geospatial knowledge referencing first-order logic, formal ontologies, rules and GML. Second, a metamodel of geospatial context is proposed, and we use the modeling methods and representation languages of formal ontologies to process geospatial context. Third, we extend the Web Processing Service (WPS) to be compatible with local DLLs for geoprocessing and to possess inference capability based on OWL.

  12. GIDL: a rule based expert system for GenBank Intelligent Data Loading into the Molecular Biodiversity database

    PubMed Central

    2012-01-01

    Background In the scientific biodiversity community, there is an increasingly perceived need to build a bridge between molecular and traditional biodiversity studies. We believe that information technology could have a preeminent role in integrating the information generated by these studies with the large amount of molecular data found in public bioinformatics databases. This work is primarily aimed at building a bioinformatic infrastructure for the integration of public and private biodiversity data through the development of GIDL, an Intelligent Data Loader coupled with the Molecular Biodiversity Database. The system presented here organizes in an ontological way and locally stores the sequence and annotation data contained in the GenBank primary database. Methods The GIDL architecture consists of a relational database and an intelligent data loader software. The relational database schema is designed to manage biodiversity information (the Molecular Biodiversity Database) and is organized in four areas: MolecularData, Experiment, Collection and Taxonomy. The MolecularData area is inspired by an established standard in Generic Model Organism Databases, the Chado relational schema. The peculiarity of Chado, and also its strength, is the adoption of an ontological schema which makes use of the Sequence Ontology. The Intelligent Data Loader (IDL) component of GIDL is an Extract, Transform and Load software able to parse data, to discover hidden information in the GenBank entries and to populate the Molecular Biodiversity Database. The IDL is composed of three main modules: the Parser, able to parse GenBank flat files; the Reasoner, which automatically builds CLIPS facts mapping the biological knowledge expressed by the Sequence Ontology; and the DBFiller, which translates the CLIPS facts into ordered SQL statements used to populate the database. In GIDL, Semantic Web technologies have been adopted due to their advantages in data representation, integration and processing. Results and conclusions Entries coming from the Virus (814,122), Plant (1,365,360) and Invertebrate (959,065) divisions of GenBank rel.180 have been loaded into the Molecular Biodiversity Database by GIDL. Our system, combining the Sequence Ontology and the Chado schema, allows more powerful query expressiveness compared with the most commonly used sequence retrieval systems such as Entrez or SRS. PMID:22536971
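    The Parser/Reasoner/DBFiller pipeline described above can be sketched as a minimal Extract-Transform-Load flow. The flat-file format and table layout below are invented for illustration; only the Sequence Ontology identifiers for gene and CDS are real terms:

```python
# Minimal ETL sketch of the Parser -> Reasoner -> DBFiller flow.
SEQUENCE_ONTOLOGY = {"gene": "SO:0000704", "CDS": "SO:0000316"}  # real SO terms

def parse(lines):
    """Parser: split 'accession|feature|value' lines into records (Extract)."""
    return [dict(zip(("accession", "feature", "value"), ln.strip().split("|")))
            for ln in lines]

def reason(records):
    """Reasoner: attach an ontology identifier to each parsed feature (Transform)."""
    return [dict(rec, so_id=SEQUENCE_ONTOLOGY.get(rec["feature"], "SO:0000110"))
            for rec in records]  # SO:0000110 = generic sequence_feature

def fill(facts):
    """DBFiller: translate facts into ordered SQL INSERT statements (Load)."""
    return ["INSERT INTO feature (accession, type, so_id, value) "
            f"VALUES ('{f['accession']}', '{f['feature']}', '{f['so_id']}', '{f['value']}')"
            for f in facts]

sql = fill(reason(parse(["AB000001|gene|lacZ", "AB000001|CDS|1..1083"])))
```

    Typing every loaded feature with an ontology identifier, as GIDL does via Chado and the Sequence Ontology, is what enables the richer queries the abstract compares against Entrez and SRS.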

  13. Quality Assurance of Cancer Study Common Data Elements Using A Post-Coordination Approach

    PubMed Central

    Jiang, Guoqian; Solbrig, Harold R.; Prud’hommeaux, Eric; Tao, Cui; Weng, Chunhua; Chute, Christopher G.

    2015-01-01

    Domain-specific common data elements (CDEs) are emerging as an effective approach to standards-based clinical research data storage and retrieval. A limiting factor, however, is the lack of robust automated quality assurance (QA) tools for the CDEs in clinical study domains. The objectives of the present study are to prototype and evaluate a QA tool for the study of cancer CDEs using a post-coordination approach. The study starts by integrating the NCI caDSR CDEs and The Cancer Genome Atlas (TCGA) data dictionaries in a single Resource Description Framework (RDF) data store. We designed a compositional expression pattern based on the Data Element Concept model structure informed by ISO/IEC 11179, and developed a transformation tool that converts the pattern-based compositional expressions into the Web Ontology Language (OWL) syntax. Invoking reasoning and explanation services, we tested the system utilizing the CDEs extracted from two TCGA clinical cancer study domains. The system could automatically identify duplicate CDEs, and detect CDE modeling errors. In conclusion, compositional expressions not only enable reuse of existing ontology codes to define new domain concepts, but also provide an automated mechanism for QA of terminological annotations for CDEs. PMID:26958201
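    The duplicate-detection side of this post-coordination QA can be sketched as follows: two CDEs whose compositional expressions reduce to the same (object class, property, value domain) key are flagged. The CDE records below are hypothetical, and the actual system works on RDF/OWL with a reasoner rather than plain tuples:

```python
# Sketch of duplicate CDE detection via compositional-expression keys.
def normalize(cde):
    """Reduce a CDE to its Data Element Concept key (ISO/IEC 11179 style)."""
    return (cde["object_class"].lower(), cde["property"].lower(),
            cde["value_domain"].lower())

def find_duplicates(cdes):
    """Return pairs of CDE ids whose compositional expressions coincide."""
    seen, duplicates = {}, []
    for cde in cdes:
        key = normalize(cde)
        if key in seen:
            duplicates.append((seen[key], cde["id"]))
        else:
            seen[key] = cde["id"]
    return duplicates

cdes = [  # hypothetical entries, not actual caDSR/TCGA CDEs
    {"id": "CDE-1", "object_class": "Patient", "property": "Age", "value_domain": "Years"},
    {"id": "CDE-2", "object_class": "patient", "property": "age", "value_domain": "years"},
]
dups = find_duplicates(cdes)
```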

  14. From ontology selection and semantic web to an integrated information system for food-borne diseases and food safety.

    PubMed

    Yan, Xianghe; Peng, Yun; Meng, Jianghong; Ruzante, Juliana; Fratamico, Pina M; Huang, Lihan; Juneja, Vijay; Needleman, David S

    2011-01-01

    Several factors have hindered effective use of information and resources related to food safety due to inconsistency among semantically heterogeneous data resources, lack of knowledge on the profiling of food-borne pathogens, and knowledge gaps among research communities, government risk assessors/managers, and end-users of the information. This paper discusses technical aspects of the establishment of a comprehensive food safety information system consisting of the following steps: (a) computational collection and compilation of publicly available information, including published pathogen genomic, proteomic, and metabolomic data; (b) development of ontology libraries on food-borne pathogens and design of automatic algorithms with formal inference and fuzzy and probabilistic reasoning to address the consistency and accuracy of distributed information resources (e.g., PulseNet, FoodNet, OutbreakNet, PubMed, NCBI, EMBL, and other online genetic databases and information); (c) integration of collected pathogen profiling data, Foodrisk.org ( http://www.foodrisk.org ), PMP, Combase, and other relevant information into a user-friendly, searchable, "homogeneous" information system available to scientists in academia, the food industry, and government agencies; and (d) development of a computational model in the semantic web for greater adaptability and robustness.
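    Step (b)'s fuzzy reasoning over inconsistent resources can be illustrated, in greatly simplified form, by fuzzy string matching of pathogen names recorded differently across databases; the canonical list and the 0.6 cutoff are assumptions for this sketch:

```python
# Simplified illustration of fuzzy reconciliation of pathogen names.
from difflib import get_close_matches

CANONICAL = ["Escherichia coli O157:H7", "Listeria monocytogenes",
             "Salmonella enterica"]

def reconcile(raw_name, cutoff=0.6):
    """Return the closest canonical pathogen name, or None if nothing is close."""
    hits = get_close_matches(raw_name, CANONICAL, n=1, cutoff=cutoff)
    return hits[0] if hits else None

match = reconcile("Eschericia coli O157H7")  # misspelled source entry
```

    A real system would add probabilistic weighting and ontology-based synonym expansion on top of this kind of lexical step.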

  15. vhMentor: An Ontology Supported Mobile Agent System for Pervasive Health Care Monitoring.

    PubMed

    Christopoulou, Stella C; Kotsilieris, Theodore; Anagnostopoulos, Ioannis; Anagnostopoulos, Christos-Nikolaos; Mylonas, Phivos

    2017-01-01

    Healthcare provision is a set of activities that demands the collaboration of several stakeholders (e.g. physicians, nurses, managers, patients etc.) who hold distinct expertise and responsibilities. In addition, medical knowledge is diversely located and often shared under no central coordination and supervision authority, while medical data flows remain mostly passive regarding the way data is delivered to both clinicians and patients. In this paper, we propose the implementation of a virtual health Mentor (vhMentor), which stands as a dedicated ontology schema and FIPA-compliant agent system. Agent technology proves to be ideal for developing healthcare applications due to its distributed operation over systems and data sources of high heterogeneity. Agents are able to perform their tasks by acting proactively to help individuals overcome the limitations of accessing medical data and executing manual, error-prone processes. vhMentor further comprises the Jess rules engine in order to implement reasoning logic. Thus, on the one hand, vhMentor is a prototype that fills the gap between healthcare systems and the care provision community, while on the other hand it allows the blending of next-generation distributed services in the healthcare domain.

  16. Word add-in for ontology recognition: semantic enrichment of scientific literature

    PubMed Central

    2010-01-01

    Background In the current era of scientific research, efficient communication of information is paramount. As such, the nature of scholarly and scientific communication is changing; cyberinfrastructure is now absolutely necessary and new media are allowing information and knowledge to be more interactive and immediate. One approach to making knowledge more accessible is the addition of machine-readable semantic data to scholarly articles. Results The Word add-in presented here will assist authors in this effort by automatically recognizing and highlighting words or phrases that are likely information-rich, allowing authors to associate semantic data with those words or phrases, and to embed that data in the document as XML. The add-in and source code are publicly available at http://www.codeplex.com/UCSDBioLit. Conclusions The Word add-in for ontology term recognition makes it possible for an author to add semantic data to a document as it is being written and it encodes these data using XML tags that are effectively a standard in life sciences literature. Allowing authors to mark-up their own work will help increase the amount and quality of machine-readable literature metadata. PMID:20181245
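    A toy version of the recognize-and-tag behavior the add-in provides, using a hypothetical two-term dictionary; the `<term>` element name is illustrative, not the add-in's actual markup, while the two Gene Ontology identifiers are real:

```python
# Toy recognize-and-tag pass: known ontology terms in a sentence are wrapped
# in XML elements carrying their identifiers.
import re
from xml.sax.saxutils import escape

TERMS = {"apoptosis": "GO:0006915", "mitochondrion": "GO:0005739"}

def annotate(sentence):
    """Wrap each recognized term in an XML element with its ontology ID."""
    out = escape(sentence)
    for term, term_id in TERMS.items():
        out = re.sub(rf"\b{term}\b", rf'<term id="{term_id}">\g<0></term>', out)
    return out

marked = annotate("Loss of the mitochondrion can trigger apoptosis.")
```

    Embedding the identifiers as markup, as the add-in does with XML in the document itself, is what makes the resulting literature machine-readable.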

  17. Safe, Seen, and Celebrated with AHA! Peace Builders: Putting Youth in Charge of Change

    ERIC Educational Resources Information Center

    Freed, Jennifer; Lowenstein, Melissa

    2017-01-01

    Since 2013, AHA! (Attitude. Harmony. Achievement) has trained over 300 AHA! Peace Builder youth at six area schools in Santa Barbara, California. These young people have conducted outreach to more than 5,000 additional peers, family members, and community members via Connection Circles, which they led during class, between classes, at AHA! Peace…

  18. Fast data acquisition with the CDF event builder

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sinervo, P.K.; Ragan, K.J.; Booth, A.W.

    1989-02-01

    The CDF (Collider Detector at Fermilab) Event Builder is an intelligent Fastbus device that performs parallel read out of a set of Fastbus slaves on multiple cable segments, formats the data, and writes the reformatted data to a Fastbus slave module. The authors review the properties of this device, and summarize its performance in the CDF data acquisition system.

  19. 77 FR 47861 - Notice of Submission of Proposed Information Collection to OMB Hispanic-Serving Institutions...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-08-10

    ... construction properties as required by homebuyers or determined by the builder use the information collection. The lender reviews the changes and amends the approved exhibits. These changes may affect the value shown on the HUD commitment. HUD requires the builder to use form HUD-92577 to request changes for...

  20. 77 FR 42258 - Notice of Funding Availability (NOFA) of Applications for Section 514 Farm Labor Housing Loans...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-07-18

    ... Home Builders (NAHB) ICC 700-2008 National Green Building Standard TM: http://www.nahb.org . --Bronze...://www1.eere.energy.gov/builders/challenge/ and Participation in local green/energy efficient building..., Green Communities, LEED for Homes or NAHB's National Green Building Standard (ICC-700) 2008, receive at...

  1. DOE Zero Energy Ready Home Case Study: Cobblestone Homes — 2014 Model Home, Midland, MI

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    none,

    2014-09-01

    This builder's first DOE Zero Energy Ready Home won a Custom Builder award in the 2014 Housing Innovation Awards, scored HERS 49 without PV or HERS 44 with 1.4 kW of PV, and served as a prototype and energy efficiency demonstration model while performance testing was conducted.

  2. Children's E-Book Technology: Devices, Books, and Book Builder

    ERIC Educational Resources Information Center

    Shiratuddin, Norshuhada; Landoni, Monica

    2003-01-01

    This article describes a study of children's electronic books (e-books) technology. In particular, the focus is on devices used to access children's e-books, current available e-books and an e-book builder specifically for children. Three small case studies were conducted: two to evaluate how children accept the devices and one to test the ease of…

  3. Market segmentation and analysis of Japan's residential post and beam construction market.

    Treesearch

    Joseph A. Roos; Ivan L. Eastin; Hisaaki Matsuguma

    2005-01-01

    A mail survey of Japanese post and beam builders was conducted to measure their level of ethnocentrism, market orientation, risk aversion, and price consciousness. The data were analyzed utilizing factor and cluster analysis. The results showed that Japanese post and beam builders can be divided into three distinct market segments: open to import...

  4. Advanced Decentralized Water/Energy Network Design for Sustainable Infrastructure presentation at the 2012 National Association of Home Builders (NAHB) International Builders'Show

    EPA Science Inventory

    A university/industry panel will report on the progress and findings of a five-year project funded by the US Environmental Protection Agency. The project's end product will be a Web-based, 3D computer-simulated residential environment with a decision support system to assist...

  5. Builder 3 & 2. Naval Education and Training Command Rate Training Manual and Nonresident Career Course. Revised.

    ERIC Educational Resources Information Center

    Countryman, Gene L.

    This Rate Training Manual (Textbook) and Nonresident Career Course form a correspondence, self-study package to provide information related to tasks assigned to Builders Third and Second Class. Focus is on constructing, maintaining, and repairing wooden, concrete, and masonry structures, concrete pavement, and waterfront and underwater structures;…

  6. MLM Builder: An Integrated Suite for Development and Maintenance of Arden Syntax Medical Logic Modules

    PubMed Central

    Sailors, R. Matthew

    1997-01-01

    The Arden Syntax specification for sharable computerized medical knowledge bases has not been widely utilized in the medical informatics community because of a lack of tools for developing Arden Syntax knowledge bases (Medical Logic Modules). The MLM Builder is a Microsoft Windows-hosted CASE (Computer Aided Software Engineering) tool designed to aid in the development and maintenance of Arden Syntax Medical Logic Modules (MLMs). The MLM Builder consists of the MLM Writer (an MLM generation tool), OSCAR (an anagram of Object-oriented ARden Syntax Compiler), a test database, and the MLManager (an MLM management information system). Working together, these components form a self-contained, unified development environment for the creation, testing, and maintenance of Arden Syntax Medical Logic Modules.

  7. Modeling Code Is Helping Cleveland Develop New Products

    NASA Technical Reports Server (NTRS)

    1998-01-01

    Master Builders, Inc., is a 350-person company in Cleveland, Ohio, that develops and markets specialty chemicals for the construction industry. Developing new products involves creating many potential samples and running numerous tests to characterize the samples' performance. Company engineers enlisted NASA's help to replace cumbersome physical testing with computer modeling of the samples' behavior. Since the NASA Lewis Research Center's Structures Division develops mathematical models and associated computation tools to analyze the deformation and failure of composite materials, its researchers began a two-phase effort to modify Lewis' Integrated Composite Analyzer (ICAN) software for Master Builders' use. Phase I has been completed, and Master Builders is pleased with the results. The company is now working to begin implementation of Phase II.

  8. Automated metadata, provenance cataloging and navigable interfaces: ensuring the usefulness of extreme-scale data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schissel, David; Greenwald, Martin

    The MPO (Metadata, Provenance, Ontology) Project successfully addressed the goal of improving the usefulness and traceability of scientific data by building a system that could capture and display all steps in the process of creating, analyzing and disseminating that data. Throughout history, scientists have generated handwritten logbooks to keep track of data, their hypotheses, assumptions, experimental setup, and computational processes as well as reflections on observations and issues encountered. Over the last several decades, with the growth of personal computers, handheld devices, and the World Wide Web, the handwritten logbook has begun to be replaced by electronic logbooks. This transition has brought increased capabilities such as support for multi-media, hypertext, and fast searching. However, content creation and metadata (a set of data that describes and gives information about other data) capture has for the most part remained a manual activity, just as it was with handwritten logbooks. This has led to a fragmentation of data, processing, and annotation that has only accelerated as scientific workflows continue to increase in complexity. From a scientific perspective, it is very important to be able to understand the lineage of any piece of data: who, what, when, how, and why. This is typically referred to as data provenance. The fragmentation discussed previously often means that data provenance is lost. As scientific workflows move to powerful computers and become more complex, the ability to track all of the steps involved in creating a piece of data becomes even more difficult. It was the goal of the MPO (Metadata, Provenance, Ontology) Project to create a system (the MPO System) that allows for automatic provenance and metadata capturing in such a way as to allow easy searching and browsing.
    This goal needed to be accomplished in a general way so that the system could be used across a broad range of scientific domains, yet allow the addition of vocabulary (ontology) that is domain specific, as is required for intelligent searching and browsing in the scientific context. Through the creation and deployment of the MPO system, the goals of the project were achieved. An enhanced metadata, provenance, and ontology storage system was created. This was combined with innovative methodologies for navigating and exploring these data using a web browser for both experimental and simulation-based scientific research. In addition, a system to allow scientists to instrument their existing workflows for automatic metadata and provenance capture is part of the MPO system. In that way, a scientist can continue to use their existing methodology yet easily document their work. Workflows and data provenance can be displayed either graphically or in an electronic notebook format and support advanced search features, including search via the ontology. The MPO system was successfully used in both Climate and Magnetic Fusion Energy Research. The software for the MPO system is located at https://github.com/MPO-Group/MPO and is open source, distributed under the Revised BSD License. A demonstration site of the MPO system is open to the public and is available at https://mpo.psfc.mit.edu/. A Docker container release of the command line client is available for public download using the command docker pull jcwright/mpo-cli at https://hub.docker.com/r/jcwright/mpo-cli.
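
    The instrumentation idea described above, i.e. wrapping each workflow step so that its inputs, outputs, and timing are recorded automatically, can be sketched as follows. This is an illustrative stand-in, not the MPO client API; the ProvenanceLog class and its record fields are hypothetical.

```python
import datetime
import hashlib
import json

class ProvenanceLog:
    """Minimal provenance recorder: logs name, inputs, timing, and an
    output digest for each workflow step (hypothetical, not MPO's API)."""
    def __init__(self, workflow_name, user):
        self.record = {"workflow": workflow_name, "user": user, "steps": []}

    def step(self, name, inputs, command):
        """Run one step's command and record its lineage metadata."""
        started = datetime.datetime.now(datetime.timezone.utc).isoformat()
        output = command(*inputs)
        self.record["steps"].append({
            "name": name,
            "inputs": [repr(i) for i in inputs],
            "output_digest": hashlib.sha256(repr(output).encode()).hexdigest()[:12],
            "started": started,
        })
        return output

# The scientist keeps their existing workflow, only routing calls through the log.
log = ProvenanceLog("calibration", user="scientist1")
raw = log.step("load", inputs=(3,), command=lambda n: list(range(n)))
scaled = log.step("scale", inputs=(raw,), command=lambda xs: [2 * x for x in xs])
print(json.dumps([s["name"] for s in log.record["steps"]]))  # → ["load", "scale"]
```

    The captured record can then be searched or rendered graphically, which is the role the MPO server plays in the real system.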

  9. A semantic proteomics dashboard (SemPoD) for data management in translational research.

    PubMed

    Jayapandian, Catherine P; Zhao, Meng; Ewing, Rob M; Zhang, Guo-Qiang; Sahoo, Satya S

    2012-01-01

    One of the primary challenges in translational research data management is breaking down the barriers between the multiple data silos and the integration of 'omics data with clinical information to complete the cycle from the bench to the bedside. The role of contextual metadata, also called provenance information, is a key factor in effective data integration, reproducibility of results, correct attribution of original source, and answering research queries involving "What", "Where", "When", "Which", "Who", "How", and "Why" (also known as the W7 model). But at present there is limited or no effective approach to managing and leveraging provenance information for integrating data across studies or projects. Hence, there is an urgent need for a paradigm shift in creating a "provenance-aware" informatics platform to address this challenge. We introduce an ontology-driven, intuitive Semantic Proteomics Dashboard (SemPoD) that uses provenance together with domain information (semantic provenance) to enable researchers to query, compare, and correlate different types of data across multiple projects, and allow integration with legacy data to support their ongoing research. The SemPoD platform, currently in use at the Case Center for Proteomics and Bioinformatics (CPB), consists of three components: (a) Ontology-driven Visual Query Composer, (b) Result Explorer, and (c) Query Manager. Currently, SemPoD allows provenance-aware querying of 1153 mass-spectrometry experiments from 20 different projects. SemPoD uses the systems molecular biology provenance ontology (SysPro) to support a dynamic query composition interface, which automatically updates the components of the query interface based on previous user selections and efficiently prunes the result set using a "smart filtering" approach.
    The SysPro ontology re-uses terms from the PROV ontology (PROV-O) being developed by the World Wide Web Consortium (W3C) provenance working group, the minimum information required for reporting a molecular interaction experiment (MIMIx), and the minimum information about a proteomics experiment (MIAPE) guidelines. SemPoD was evaluated both in terms of user feedback and in terms of system scalability. SemPoD is an intuitive and powerful provenance ontology-driven data access and query platform that uses the MIAPE and MIMIx metadata guidelines to create an integrated view over large-scale systems molecular biology datasets. SemPoD leverages the SysPro ontology to create an intuitive dashboard for biologists to compose queries, explore the results, and use a query manager for storing queries for later use. SemPoD can be deployed over many existing database applications storing 'omics data, including, as illustrated here, the LabKey data-management system. The initial user feedback evaluating the usability and functionality of SemPoD has been very positive, and it is being considered for wider deployment beyond the proteomics domain and in other 'omics' centers.
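
    The "smart filtering" behaviour described above, where the query composer prunes each facet's remaining options based on the user's earlier selections, can be illustrated with a toy experiment catalogue. The metadata fields and values here are invented for the example and are not SemPoD's actual schema.

```python
# Toy metadata catalogue: one row of facet values per experiment (invented).
EXPERIMENTS = [
    {"project": "P1", "organism": "human", "instrument": "LTQ"},
    {"project": "P1", "organism": "mouse", "instrument": "Orbitrap"},
    {"project": "P2", "organism": "human", "instrument": "Orbitrap"},
]

def remaining_options(selections, facet):
    """Values still available for `facet`, given the selections made so far."""
    rows = [e for e in EXPERIMENTS
            if all(e[k] == v for k, v in selections.items())]
    return sorted({e[facet] for e in rows})

# With nothing selected, every organism is offered; choosing project P2
# prunes the organism facet to what P2's experiments actually contain.
print(remaining_options({}, "organism"))                 # → ['human', 'mouse']
print(remaining_options({"project": "P2"}, "organism"))  # → ['human']
```

    In the real dashboard the catalogue would be queried from the RDF store rather than held in memory, but the pruning logic is the same.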

  10. Anatomical entity mention recognition at literature scale

    PubMed Central

    Pyysalo, Sampo; Ananiadou, Sophia

    2014-01-01

    Motivation: Anatomical entities ranging from subcellular structures to organ systems are central to biomedical science, and mentions of these entities are essential to understanding the scientific literature. Despite extensive efforts to automatically analyze various aspects of biomedical text, there have been only a few studies focusing on anatomical entities, and no dedicated methods for learning to automatically recognize anatomical entity mentions in free-form text have been introduced. Results: We present AnatomyTagger, a machine learning-based system for anatomical entity mention recognition. The system incorporates a broad array of approaches proposed to benefit tagging, including the use of Unified Medical Language System (UMLS)- and Open Biomedical Ontologies (OBO)-based lexical resources, word representations induced from unlabeled text, statistical truecasing, and non-local features. We train and evaluate the system on a newly introduced corpus that substantially extends previously available resources, and apply the resulting tagger to automatically annotate the entire open-access scientific literature. The resulting analyses have been applied to extend services provided by the Europe PubMed Central literature database. Availability and implementation: All tools and resources introduced in this work are available from http://nactem.ac.uk/anatomytagger. Contact: sophia.ananiadou@manchester.ac.uk Supplementary Information: Supplementary data are available at Bioinformatics online. PMID:24162468
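
    AnatomyTagger itself is a machine-learning tagger; the UMLS/OBO lexical-resource component it builds on can be approximated by a greedy longest-match dictionary lookup, which is a common baseline for this task. The lexicon below is a tiny invented stand-in, not the UMLS.

```python
# Toy lexicon standing in for UMLS/OBO-derived anatomical terms.
LEXICON = {"liver", "left ventricle", "hippocampus"}
MAX_LEN = 2  # longest lexicon entry, in tokens

def tag_mentions(text):
    """Greedy longest-match lookup of anatomical entity mentions."""
    tokens = text.lower().replace(".", "").split()
    mentions, i = [], 0
    while i < len(tokens):
        # Try the longest candidate span first, shrinking until a match.
        for span in range(min(MAX_LEN, len(tokens) - i), 0, -1):
            candidate = " ".join(tokens[i:i + span])
            if candidate in LEXICON:
                mentions.append(candidate)
                i += span
                break
        else:
            i += 1
    return mentions

print(tag_mentions("Lesions were seen in the left ventricle and liver."))
# → ['left ventricle', 'liver']
```

    A learned tagger like AnatomyTagger adds context features, truecasing, and word representations on top of such lexical evidence, which is what lets it find mentions the dictionary misses.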

  11. Correction of gynecomastia in body builders and patients with good physique.

    PubMed

    Blau, Mordcai; Hazani, Ron

    2015-02-01

    Temporary gynecomastia in the form of breast buds is a common finding in young male subjects. In adults, permanent gynecomastia is an aesthetic impairment that may result in interest in surgical correction. Gynecomastia in body builders creates an even greater distress for patients seeking surgical treatment because of the demands of professional competition. The authors present their experience with gynecomastia in body builders as the largest study of such a group in the literature. Between the years 1980 and 2013, 1574 body builders were treated surgically for gynecomastia. Of those, 1073 were followed up for a period of 1 to 5 years. Ages ranged from 18 to 51 years. Subtotal excision in the form of subcutaneous mastectomy with removal of at least 95 percent of the glandular tissue was used in virtually all cases. In cases where body fat was extremely low, liposuction was performed in fewer than 2 percent of the cases. Aesthetically pleasing results were achieved in 98 percent of the cases based on the authors' patient satisfaction survey. The overall rate of hematomas was 9 percent in the first 15 years of the series and 3 percent in the final 15 years. There were no infections, contour deformities, or recurrences. This study demonstrates the importance of direct excision of the glandular tissue over any other surgical technique when correcting gynecomastia deformities in body builders. The novice surgeon is advised to proceed with cases that are less challenging, primarily with patients that require excision of small to medium glandular tissue. Therapeutic, IV.

  12. SciFlo: Semantically-Enabled Grid Workflow for Collaborative Science

    NASA Astrophysics Data System (ADS)

    Yunck, T.; Wilson, B. D.; Raskin, R.; Manipon, G.

    2005-12-01

    SciFlo is a system for Scientific Knowledge Creation on the Grid using a Semantically-Enabled Dataflow Execution Environment. SciFlo leverages Simple Object Access Protocol (SOAP) Web Services and the Grid Computing standards (WS-* standards and the Globus Alliance toolkits), and enables scientists to do multi-instrument Earth Science by assembling reusable SOAP Services, native executables, local command-line scripts, and python codes into a distributed computing flow (a graph of operators). SciFlo's XML dataflow documents can be a mixture of concrete operators (fully bound operations) and abstract template operators (late binding via semantic lookup). All data objects and operators can be both simply typed (simple and complex types in XML schema) and semantically typed using controlled vocabularies (linked to OWL ontologies such as SWEET). By exploiting ontology-enhanced search and inference, one can discover (and automatically invoke) Web Services and operators that have been semantically labeled as performing the desired transformation, and adapt a particular invocation to the proper interface (number, types, and meaning of inputs and outputs). The SciFlo client & server engines optimize the execution of such distributed data flows and allow the user to transparently find and use datasets and operators without worrying about the actual location of the Grid resources. The scientist injects a distributed computation into the Grid by simply filling out an HTML form or directly authoring the underlying XML dataflow document, and results are returned directly to the scientist's desktop. A Visual Programming tool is also being developed, but it is not required. Once an analysis has been specified for a granule or day of data, it can be easily repeated with different control parameters and over months or years of data. SciFlo uses and preserves semantics, and also generates and infers new semantic annotations. 
Specifically, the SciFlo engine uses semantic metadata to understand (infer) what it is doing and potentially improve the data flow; preserves semantics by saving links to the semantics of (metadata describing) the input datasets, related datasets, and the data transformations (algorithms) used to generate downstream products; generates new metadata by allowing the user to add semantic annotations to the generated data products (or simply accept automatically generated provenance annotations); and infers new semantic metadata by understanding and applying logic to the semantics of the data and the transformations performed. Much ontology development still needs to be done but, nevertheless, SciFlo documents provide a substrate for using and preserving more semantics as ontologies develop. We will give a live demonstration of the growing SciFlo network using an example dataflow in which atmospheric temperature and water vapor profiles from three Earth Observing System (EOS) instruments are retrieved using SOAP (geo-location query & data access) services, co-registered, and visually & statistically compared on demand (see http://sciflo.jpl.nasa.gov for more information).
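
    A dataflow of named operators executed in dependency order, which is what SciFlo's XML documents describe, reduces to topological execution of a graph. A minimal sketch follows; the operator names, graph, and data are invented for illustration and are far simpler than SciFlo's distributed engine.

```python
from graphlib import TopologicalSorter

def run_dataflow(operators, deps, inputs):
    """Execute operators in dependency order, feeding each its parents' outputs."""
    results = dict(inputs)
    for name in TopologicalSorter(deps).static_order():
        if name in results:
            continue  # a source node whose data was supplied directly
        args = [results[parent] for parent in deps[name]]
        results[name] = operators[name](*args)
    return results

# Toy flow: co-register two instrument datasets, then compute a statistic.
ops = {
    "colocate": lambda a, b: [x for x in a if x in b],
    "stats": lambda pts: sum(pts) / len(pts),
}
deps = {"src_a": [], "src_b": [],
        "colocate": ["src_a", "src_b"], "stats": ["colocate"]}
out = run_dataflow(ops, deps, {"src_a": [1, 2, 3, 4], "src_b": [2, 4, 6]})
print(out["stats"])  # → 3.0
```

    SciFlo adds what this sketch omits: remote SOAP/Web Service operators, semantic typing of the edges, and late binding of abstract operators via ontology lookup.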

  13. Ontology Sparse Vector Learning Algorithm for Ontology Similarity Measuring and Ontology Mapping via ADAL Technology

    NASA Astrophysics Data System (ADS)

    Gao, Wei; Zhu, Linli; Wang, Kaiyun

    2015-12-01

    Ontology, a model of knowledge representation and storage, has had extensive applications in pharmaceutics, social science, chemistry and biology. In the age of “big data”, concepts are often represented as high-dimensional data, and thus sparse learning techniques have been introduced into ontology algorithms. In this paper, based on the alternating direction augmented Lagrangian method, we present an ontology optimization algorithm for ontological sparse vector learning, together with a fast version of the algorithm. The optimal sparse vector is obtained by an iterative procedure, and the ontology function is then obtained from the sparse vector. Four simulation experiments show that our ontological sparse vector learning model achieves a higher precision ratio on plant ontology, humanoid robotics ontology, biology ontology and physics education ontology data for similarity measuring and ontology mapping applications.
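
    The paper's iterative procedure is based on the alternating direction augmented Lagrangian method; the soft-thresholding step at the heart of any l1 sparse-vector learner can be shown with the simpler proximal-gradient (ISTA) iteration on a toy least-squares problem. This sketch illustrates the sparsity mechanism only; it is not the authors' ADAL algorithm, and the data are invented.

```python
def soft_threshold(z, t):
    """Proximal operator of the l1 norm: shrink each coordinate toward zero."""
    return [max(abs(v) - t, 0.0) * ((v > 0) - (v < 0)) for v in z]

def ista(A, y, lam=0.1, step=0.1, iters=500):
    """Minimize 0.5*||Av - y||^2 + lam*||v||_1 by proximal gradient descent."""
    n = len(A[0])
    v = [0.0] * n
    for _ in range(iters):
        resid = [sum(A[i][j] * v[j] for j in range(n)) - y[i]
                 for i in range(len(y))]
        grad = [sum(A[i][j] * resid[i] for i in range(len(y)))
                for j in range(n)]
        v = soft_threshold([v[j] - step * grad[j] for j in range(n)], step * lam)
    return v

# y depends only on the first coordinate; the l1 penalty zeroes the rest
# and shrinks the active coordinate by lam (2.0 -> 1.9).
A = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
y = [2.0, 0.0, 0.0]
v = ista(A, y)
print([round(x, 2) for x in v])  # → [1.9, 0.0, 0.0]
```

    The learned sparse vector then defines the ontology scoring function, with the zeroed coordinates dropping irrelevant concept features.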

  14. 75 FR 14487 - Requested Administrative Waiver of the Coastwise Trade Laws

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-03-25

    ... for the vessel CLOUD NINE. SUMMARY: As authorized by 46 U.S.C. 12121, the Secretary of Transportation... . Interested parties may comment on the effect this action may have on U.S. vessel builders or businesses in... have an unduly adverse effect on a U.S.-vessel builder or a business that uses U.S.-flag vessels in...

  15. Landscape Builder: software for the creation of initial landscapes for LANDIS from FIA data

    Treesearch

    William Dijak

    2013-01-01

    I developed Landscape Builder to create spatially explicit landscapes as starting conditions for LANDIS Pro 7.0 and LANDIS II landscape forest simulation models from classified satellite imagery and Forest Inventory and Analysis (FIA) data collected over multiple years. LANDIS Pro and LANDIS II models project future landscapes by simulating tree growth, tree species...

  16. DOE Zero Energy Ready Home Case Study: Alliance Green Builders, Casa Aguila

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pacific Northwest National Laboratory

    Alliance Green Builders built this 3,129-ft² home in the hills above Ramona, California, to the high-performance criteria of the DOE Zero Energy Ready Home (ZERH) program. The home should perform far better than net zero thanks to a super-efficient building shell, a wind turbine, three sun-tracking solar photovoltaic arrays, and solar thermal water heating.

  17. Chemical Inventory Management at NASA Lewis Research Center

    NASA Technical Reports Server (NTRS)

    Kraft, Shirley S.; Homan, Joseph R.; Bajorek, Michael J.; Dominguez, Manuel B.; Smith, Vanessa L.

    1997-01-01

    The Chemical Management System (CMS) is a client/server application developed with Power Builder and Sybase for the Lewis Research Center (LeRC). Power Builder is a client/server application development tool, and Sybase is a relational database management system. The entire LeRC community can access the CMS from any desktop environment. The multiple functions and benefits of the CMS are addressed.

  18. ScholarshipBuilder Program, Second Annual Report 1989-1990. Report No. 10, Vol. 25.

    ERIC Educational Resources Information Center

    Ruffin, Minnie R.; And Others

    The ScholarshipBuilder Program, an adopt-a-class program, began in November 1988 to encourage selected first-grade students to stay in school until graduation in the year 2000. Scholarships will be provided by Merrill Lynch, Incorporated, for students who go on to college, and a one-time stipend will be provided to students who enter the military…

  19. 75 FR 34528 - Requested Administrative Waiver of the Coastwise Trade Laws

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-06-17

    ... comment on the effect this action may have on U.S. vessel builders or businesses in the U.S. that use U.S... effect on a U.S.-vessel builder or a business that uses U.S.-flag vessels in that business, a waiver will... vessel ERIC K is: Intended Commercial Use of Vessel: ``Charter and teaching trawler skills.'' Geographic...

  20. Software for Building Models of 3D Objects via the Internet

    NASA Technical Reports Server (NTRS)

    Schramer, Tim; Jensen, Jeff

    2003-01-01

    The Virtual EDF Builder (where EDF signifies Electronic Development Fixture) is a computer program that facilitates the use of the Internet for building and displaying digital models of three-dimensional (3D) objects that ordinarily comprise assemblies of solid models created previously by use of computer-aided-design (CAD) programs. The Virtual EDF Builder resides on a Unix-based server computer. It is used in conjunction with a commercially available Web-based plug-in viewer program that runs on a client computer. The Virtual EDF Builder acts as a translator between the viewer program and a database stored on the server. The translation function includes the provision of uniform resource locator (URL) links to other Web-based computer systems and databases. The Virtual EDF Builder can be used in two ways: (1) If the client computer is Unix-based, then it can assemble a model locally; the computational load is transferred from the server to the client computer. (2) Alternatively, the server can be made to build the model, in which case the server bears the computational load and the results are downloaded to the client computer or workstation upon completion.

  1. Solving a real-world problem using an evolving heuristically driven schedule builder.

    PubMed

    Hart, E; Ross, P; Nelson, J

    1998-01-01

    This work addresses the real-life scheduling problem of a Scottish company that must produce daily schedules for the catching and transportation of large numbers of live chickens. The problem is complex and highly constrained. We show that it can be successfully solved by division into two subproblems, solving each with a separate genetic algorithm (GA). We address the problem of whether this produces locally optimal solutions and how to overcome it. We extend the traditional approach of evolving a "permutation + schedule builder" by concentrating on evolving the schedule builder itself. A unique schedule builder is thus built for each daily scheduling problem, individually tailored to the particular features of that problem. The outcome is a robust, fast, and flexible system that can cope with most of the circumstances imaginable at the factory. We also compare the performance of the GA approach with several other evolutionary methods and show that population-based methods are superior to both hill-climbing and simulated annealing in the quality of solutions produced. Population-based methods also have the distinct advantage of producing multiple, equally fit solutions, which is of particular importance when considering the practical aspects of the problem.
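
    The idea of evolving the schedule builder itself, i.e. a genome of dispatch rules rather than a permutation of tasks, can be sketched on a toy two-crew problem. The task durations, rule set, and GA settings below are invented for illustration and bear no relation to the paper's chicken-catching data.

```python
import random

TASKS = [5, 3, 8, 2, 7, 4]           # toy job durations
RULES = {"longest": max, "shortest": min}

def build_schedule(genome):
    """A schedule builder parameterized by a genome of dispatch rules."""
    pending, crews = list(TASKS), [0, 0]
    for rule in genome:
        task = RULES[rule](pending)            # which task to place next
        pending.remove(task)
        crews[crews.index(min(crews))] += task  # give it to the least-loaded crew
    return max(crews)                           # makespan: lower is better

def evolve(pop_size=30, gens=40, seed=1):
    """Evolve rule genomes: keep the better half, mutate one gene per child."""
    rng = random.Random(seed)
    pop = [[rng.choice(list(RULES)) for _ in TASKS] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=build_schedule)
        parents = pop[: pop_size // 2]
        children = []
        for p in parents:
            child = p[:]
            child[rng.randrange(len(child))] = rng.choice(list(RULES))
            children.append(child)
        pop = parents + children
    return min(build_schedule(g) for g in pop)

print(evolve())  # two-crew optimum for these durations is 15 (8+7 vs 5+4+3+2)
```

    Because the genome encodes decision rules, not a fixed ordering, the same evolved builder remains applicable when the day's task list changes, which is the robustness argument the abstract makes.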

  2. Coronary calcification in body builders using anabolic steroids.

    PubMed

    Santora, Lawrence J; Marin, Jairo; Vangrow, Jack; Minegar, Craig; Robinson, Mary; Mora, Janet; Friede, Gerald

    2006-01-01

    The authors measured coronary artery calcification as a means of examining the impact of anabolic steroids on the development of atherosclerotic disease in body builders using anabolic steroids over an extended period of time. Fourteen male professional body builders with no history of cardiovascular disease were evaluated for coronary artery calcium, serum lipids, left ventricular function, and exercise-induced myocardial ischemia. Seven subjects had coronary artery calcium, with a much higher than expected mean score of 98. Six of the 7 calcium scores were >90th percentile. Mean total cholesterol was 192 mg/dL, while mean high-density lipoprotein was 23 mg/dL and the mean ratio of total cholesterol to high-density lipoprotein was 8.3. Left ventricular ejection fraction ranged between 49% and 68%, with a mean of 59%. No subject had evidence of myocardial ischemia. This small group of professional body builders with a long history of steroid abuse had high levels of coronary artery calcium for age. The authors conclude that in this small pilot study there is an association between early coronary artery calcium and long-term steroid abuse. Large-scale studies are warranted to further explore this association.

  3. Citizens Utilities Company's successful residential new construction market transformation program

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Caulfield, T.O.; Shepherd, M.A.

    1998-07-01

    Citizens Utilities Company, Arizona Electric Division (CUC/AED) fielded a Residential New Construction Program (RNC) in the fourth quarter of 1994 that had been designed from conception as a market transformation program. The CUC RNC Program encouraged builders to adopt energy-efficient building practices for new homes by supplying builders with estimates of energy savings, supplying inspection services to assist builders in applying energy-efficient building practices while verifying compliance, and posting and promoting the home as energy efficient during the sales period. Measures generally required to qualify for the program were R-38 ceiling insulation, R-21 wall insulation, polysealing of all infiltration gaps during construction, well-sealed air-conditioning ducts, and an air-conditioner Seasonal Energy Efficiency Ratio (SEER) of 11.0 or greater. In less than two years the program achieved over 17% market penetration without offering rebates to builders. This paper reviews the design of the program, including a discussion of the features felt to be primarily responsible for its success. It reviews the levels of penetration achieved, free-ridership, spillover, and market barriers encountered. Finally, it proposes improvements to the program designed to carry it the next step toward a self-sustaining market transformation program.

  4. Clever generation of rich SPARQL queries from annotated relational schema: application to Semantic Web Service creation for biological databases.

    PubMed

    Wollbrett, Julien; Larmande, Pierre; de Lamotte, Frédéric; Ruiz, Manuel

    2013-04-15

    In recent years, a large amount of "-omics" data have been produced. However, these data are stored in many different species-specific databases that are managed by different institutes and laboratories. Biologists often need to find and assemble data from disparate sources to perform certain analyses. Searching for these data and assembling them is a time-consuming task. The Semantic Web helps to facilitate interoperability across databases. A common approach involves the development of wrapper systems that map a relational database schema onto existing domain ontologies. However, few attempts have been made to automate the creation of such wrappers. We developed a framework, named BioSemantic, for the creation of Semantic Web Services that are applicable to relational biological databases. This framework makes use of both Semantic Web and Web Services technologies and can be divided into two main parts: (i) the generation and semi-automatic annotation of an RDF view; and (ii) the automatic generation of SPARQL queries and their integration into Semantic Web Services backbones. We have used our framework to integrate genomic data from different plant databases. BioSemantic is a framework that was designed to speed integration of relational databases. We present how it can be used to speed the development of Semantic Web Services for existing relational biological databases. Currently, it creates and annotates RDF views that enable the automatic generation of SPARQL queries. Web Services are also created and deployed automatically, and the semantic annotations of our Web Services are added automatically using SAWSDL attributes. BioSemantic is downloadable at http://southgreen.cirad.fr/?q=content/Biosemantic.
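
    The automatic SPARQL generation step, deriving a SELECT query from a relational table whose columns have been annotated with ontology terms, can be sketched as plain string assembly over the annotation map. The table, columns, class, and IRIs below are hypothetical, not BioSemantic's actual mappings.

```python
# Hypothetical annotation of one relational table's columns with ontology IRIs.
TABLE = "gene"
ANNOTATIONS = {
    "name":  "obo:SO_0000704",   # column -> semantic property (invented IRIs)
    "chrom": "obo:SO_0000340",
}

def build_sparql(table, annotations, subject_class="obo:Gene"):
    """Derive a SPARQL SELECT over the RDF view generated for one table."""
    vars_ = ["?" + col for col in annotations]
    patterns = [f"  ?{table} {prop} ?{col} ." for col, prop in annotations.items()]
    return (f"SELECT {' '.join(vars_)}\nWHERE {{\n"
            f"  ?{table} a {subject_class} .\n" + "\n".join(patterns) + "\n}")

print(build_sparql(TABLE, ANNOTATIONS))
```

    The framework would additionally wrap such a query in a Web Service and attach SAWSDL annotations; this sketch covers only the query-templating core.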

  5. Clever generation of rich SPARQL queries from annotated relational schema: application to Semantic Web Service creation for biological databases

    PubMed Central

    2013-01-01

    Background In recent years, a large amount of “-omics” data have been produced. However, these data are stored in many different species-specific databases that are managed by different institutes and laboratories. Biologists often need to find and assemble data from disparate sources to perform certain analyses. Searching for these data and assembling them is a time-consuming task. The Semantic Web helps to facilitate interoperability across databases. A common approach involves the development of wrapper systems that map a relational database schema onto existing domain ontologies. However, few attempts have been made to automate the creation of such wrappers. Results We developed a framework, named BioSemantic, for the creation of Semantic Web Services that are applicable to relational biological databases. This framework makes use of both Semantic Web and Web Services technologies and can be divided into two main parts: (i) the generation and semi-automatic annotation of an RDF view; and (ii) the automatic generation of SPARQL queries and their integration into Semantic Web Services backbones. We have used our framework to integrate genomic data from different plant databases. Conclusions BioSemantic is a framework that was designed to speed integration of relational databases. We present how it can be used to speed the development of Semantic Web Services for existing relational biological databases. Currently, it creates and annotates RDF views that enable the automatic generation of SPARQL queries. Web Services are also created and deployed automatically, and the semantic annotations of our Web Services are added automatically using SAWSDL attributes. BioSemantic is downloadable at http://southgreen.cirad.fr/?q=content/Biosemantic. PMID:23586394

  6. Development and Evaluation of an Adolescents' Depression Ontology for Analyzing Social Data.

    PubMed

    Jung, Hyesil; Park, Hyeoun-Ae; Song, Tae-Min

    2016-01-01

    This study aims to develop and evaluate an ontology for adolescents' depression to be used for collecting and analyzing social data. The ontology was developed according to the 'ontology development 101' methodology. Concepts were extracted from clinical practice guidelines and related literature. The ontology is composed of five sub-ontologies representing risk factors, signs and symptoms, measurement, diagnostic results, and management and care. The ontology was evaluated in four ways: first, we examined the frequency with which ontology concepts appeared in social data; second, the content coverage of the ontology was evaluated by comparing its concepts with concepts extracted from youth depression counseling records; third, the structural and representational layers of the ontology were evaluated by five ontology and psychiatric nursing experts; fourth, the scope of the ontology was examined by answering 59 competency questions. The ontology was improved by adding new concepts and synonyms and revising its structure.
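
    The first evaluation step, counting how often ontology concepts (via their synonyms) appear in social data, can be sketched like this. The concept names and synonym lists below are invented for illustration and are not taken from the ontology itself:

```python
import re
from collections import Counter

def concept_frequencies(posts, synonyms_by_concept):
    """Count occurrences of each ontology concept in a list of text posts,
    crediting a concept whenever any of its synonyms matches as a whole phrase."""
    counts = Counter()
    for post in posts:
        text = post.lower()
        for concept, synonyms in synonyms_by_concept.items():
            for term in synonyms:
                counts[concept] += len(
                    re.findall(r"\b" + re.escape(term.lower()) + r"\b", text))
    return counts

# Invented example concepts from a risk-factor sub-ontology.
freqs = concept_frequencies(
    ["I feel so hopeless and can't sleep", "sleep problems again"],
    {"Hopelessness": ["hopeless"],
     "Insomnia": ["can't sleep", "sleep problems"]},
)
```

    A synonym table like this is exactly what the reported improvement cycle (adding new concepts and synonyms) would grow over time.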

  7. A Method for Evaluating and Standardizing Ontologies

    ERIC Educational Resources Information Center

    Seyed, Ali Patrice

    2012-01-01

    The Open Biomedical Ontology (OBO) Foundry initiative is a collaborative effort for developing interoperable, science-based ontologies. The Basic Formal Ontology (BFO) serves as the upper ontology for the domain-level ontologies of OBO. BFO is an upper ontology of types as conceived by defenders of realism. Among the ontologies developed for OBO…

  8. 5. DETAIL OF BUILDER'S PLATE, WHICH READS '1898, THE SANITARY ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    5. DETAIL OF BUILDER'S PLATE, WHICH READS '1898, THE SANITARY DISTRICT OF CHICAGO, BOARD OF TRUSTEES, WILLIAM BOLDENWECK, JOSEPH C. BRADEN, ZINA R. CARTER, BERNARD A. ECKART, ALEXANDER J. JONES, THOMAS KELLY, JAMES P. MALLETTE, THOMAS SMYTHE, FRANK WINTER; ISHAM RANDOLPH, CHIEF ENGINEER.' - Santa Fe Railroad, Sanitary & Ship Canal Bridge, Spanning Sanitary & Ship Canal east of Harlem Avenue, Chicago, Cook County, IL

  9. Car Builder: Design, Construct and Test Your Own Cars. School Version with Lesson Plans. [CD-ROM].

    ERIC Educational Resources Information Center

    Highsmith, Joni Bitman

    Car Builder is a scientific CD-ROM-based simulation program that lets students design, construct, modify, test, and compare their own cars. Students can design sedans, four-wheel-drive vehicles, vans, sport cars, and hot rods. They may select for aerodynamics, power, and racing ability, or economic and fuel efficiency. It is a program that teaches…

  10. nmsBuilder: Freeware to create subject-specific musculoskeletal models for OpenSim.

    PubMed

    Valente, Giordano; Crimi, Gianluigi; Vanella, Nicola; Schileo, Enrico; Taddei, Fulvia

    2017-12-01

    Musculoskeletal modeling and simulations of movement have been increasingly used in orthopedic and neurological scenarios, with increased attention to subject-specific applications. In general, musculoskeletal modeling applications have been facilitated by the development of dedicated software tools; however, subject-specific studies have also been limited by time-consuming modeling workflows and the high level of expertise required. In addition, no reference tools exist to standardize the process of musculoskeletal model creation and make it more efficient. Here we present a freely available software application, nmsBuilder 2.0, to create musculoskeletal models in the file format of OpenSim, a widely-used open-source platform for musculoskeletal modeling and simulation. nmsBuilder 2.0 is the result of a major refactoring of a previous implementation that took a first step toward an efficient workflow for subject-specific model creation. nmsBuilder includes a graphical user interface that provides access to all functionalities, based on a framework for computer-aided medicine written in C++. The operations implemented can be used in a workflow to create OpenSim musculoskeletal models from 3D surfaces. A first step includes data processing to create supporting objects necessary to create models, e.g. surfaces, anatomical landmarks, reference systems; and a second step includes the creation of OpenSim objects, e.g. bodies, joints, muscles, and the corresponding model. We present a case study using nmsBuilder 2.0: the creation of an MRI-based musculoskeletal model of the lower limb. The model included four rigid bodies, five degrees of freedom and 43 musculotendon actuators, and was created from 3D surfaces of the segmented images of a healthy subject through the modeling workflow implemented in the software application. 
We have presented nmsBuilder 2.0 for the creation of musculoskeletal OpenSim models from image-based data, and made it freely available via nmsbuilder.org. This application provides an efficient workflow for model creation and helps standardize the process. We hope this will help promote personalized applications in musculoskeletal biomechanics, including larger sample size studies, and may also serve as a basis for future developments for specific applications. Copyright © 2017 Elsevier B.V. All rights reserved.

  11. Modeling of phosphorus loads in sugarcane in a low-relief landscape using ontology-based simulation.

    PubMed

    Kwon, Ho-Young; Grunwald, Sabine; Beck, Howard W; Jung, Yunchul; Daroub, Samira H; Lang, Timothy A; Morgan, Kelly T

    2010-01-01

    Water flow and P dynamics in a low-relief landscape manipulated by extensive canal and ditch drainage systems were modeled utilizing an ontology-based simulation model. In the model, soil water flux and the processes between three soil inorganic P pools (labile, active, and stable) and organic P are represented as database objects, and user-defined relationships among objects are used to automatically generate computer code (Java) for running the simulation of discharge and P loads. Our objectives were to develop ontology-based descriptions of soil P dynamics within sugarcane- (Saccharum officinarum L.) grown farm basins of the Everglades Agricultural Area (EAA) and to calibrate and validate such processes with water quality monitoring data collected at one farm basin (1244 ha). In the calibration phase (water year [WY] 99-00), observed discharge totaled 11,114 m3 ha(-1) and dissolved P 0.23 kg P ha(-1); in the validation phase (WY 02-03), discharge was 10,397 m3 ha(-1) and dissolved P 0.11 kg P ha(-1). During WY 99-00 the root mean square error (RMSE) for monthly discharge was 188 m3 ha(-1) and for monthly dissolved P 0.0077 kg P ha(-1), whereas during WY 02-03 the RMSE for monthly discharge was 195 m3 ha(-1) and for monthly dissolved P 0.0022 kg P ha(-1). These results were supported by Nash-Sutcliffe coefficients of 0.69 (calibration) and 0.81 (validation) comparing measured and simulated P loads. The good model performance suggests that our model has promise for simulating P dynamics and may be useful as a management tool to reduce P loads in other similar low-relief areas.
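
    The pool structure described above, labile, active, and stable inorganic P connected by transfers, can be caricatured as first-order exchanges integrated with a forward-Euler step. This is only a structural sketch: the rate constants, time step, and the assumption of purely first-order kinetics are placeholders, not the calibrated model from the study.

```python
def step_p_pools(pools, rates, dt=1.0):
    """One explicit Euler step of first-order transfers between P pools.

    pools: dict with 'labile', 'active', 'stable' amounts (kg P/ha)
    rates: dict of first-order rate constants (1/day) per transfer,
           e.g. 'labile_to_active'. All values below are illustrative only.
    """
    flux_la = rates["labile_to_active"] * pools["labile"]
    flux_al = rates["active_to_labile"] * pools["active"]
    flux_as = rates["active_to_stable"] * pools["active"]
    flux_sa = rates["stable_to_active"] * pools["stable"]
    return {
        "labile": pools["labile"] + dt * (flux_al - flux_la),
        "active": pools["active"] + dt * (flux_la + flux_sa - flux_al - flux_as),
        "stable": pools["stable"] + dt * (flux_as - flux_sa),
    }

new = step_p_pools(
    {"labile": 10.0, "active": 50.0, "stable": 200.0},
    {"labile_to_active": 0.1, "active_to_labile": 0.01,
     "active_to_stable": 0.005, "stable_to_active": 0.001},
)
```

    Because every flux leaves one pool and enters another, total P is conserved by construction, a useful sanity check on any such update.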

  12. Extending TOPS: Ontology-driven Anomaly Detection and Analysis System

    NASA Astrophysics Data System (ADS)

    Votava, P.; Nemani, R. R.; Michaelis, A.

    2010-12-01

    Terrestrial Observation and Prediction System (TOPS) is a flexible modeling software system that integrates ecosystem models with frequent satellite and surface weather observations to produce ecosystem nowcasts (assessments of current conditions) and forecasts useful in natural resources management, public health and disaster management. We have been extending TOPS to include a capability for automated anomaly detection and analysis of both on-line (streaming) and off-line data. In order to best capture the knowledge about data hierarchies, Earth science models and implied dependencies between anomalies and occurrences of observable events such as urbanization, deforestation, or fires, we have developed an ontology to serve as a knowledge base. We can query the knowledge base and answer questions about dataset compatibilities, similarities and dependencies so that we can, for example, automatically analyze similar datasets in order to verify a given anomaly occurrence in multiple data sources. We are further extending the system to go beyond anomaly detection towards reasoning about possible causes of anomalies that are also encoded in the knowledge base as either learned or implied knowledge. This enables us to scale up the analysis by eliminating a large number of anomalies early on during the processing, either through failure to verify them from other sources or by matching them directly with other observable events, without having to perform an extensive and time-consuming exploration and analysis. The knowledge is captured using the OWL ontology language, where connections are defined in a schema that is later extended by including specific instances of datasets and models. The information is stored using a Sesame server and is accessible through both a Java API and web services using the SeRQL and SPARQL query languages. Inference is provided by the OWLIM component integrated with Sesame.
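
    The verification step described above, confirming an anomaly by consulting datasets the knowledge base marks as similar, might look like this in miniature. The similarity table, dataset names, and anomaly reports are invented; TOPS stores these relations in OWL and queries them via SeRQL/SPARQL rather than Python dicts.

```python
def verify_anomaly(dataset, region, similar_to, anomalies_by_dataset):
    """Accept an anomaly only if at least one similar dataset reports an
    anomaly for the same region; otherwise discard it early."""
    for other in similar_to.get(dataset, []):
        if region in anomalies_by_dataset.get(other, set()):
            return True
    return False

# Hypothetical knowledge-base content.
similar_to = {"MODIS_NDVI": ["AVHRR_NDVI", "MODIS_LAI"]}
anomalies = {"MODIS_NDVI": {"amazon"},
             "AVHRR_NDVI": {"amazon"},
             "MODIS_LAI": set()}

confirmed = verify_anomaly("MODIS_NDVI", "amazon", similar_to, anomalies)
```

    Early rejection of unconfirmed anomalies is what lets the pipeline scale: only anomalies corroborated by an independent source proceed to the expensive causal analysis.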

  13. Improved object optimal synthetic description, modeling, learning, and discrimination by GEOGINE computational kernel

    NASA Astrophysics Data System (ADS)

    Fiorini, Rodolfo A.; Dacquino, Gianfranco

    2005-03-01

    GEOGINE (GEOmetrical enGINE), a state-of-the-art OMG (Ontological Model Generator) based on n-D Tensor Invariants for n-Dimensional shape/texture optimal synthetic representation, description and learning, was presented previously elsewhere. Here, improved computational algorithms based on the computational invariant theory of finite groups in Euclidean space and a demo application are presented, and progressive automatic model generation is discussed. GEOGINE can be used as an efficient computational kernel for fast, reliable application development and delivery in advanced biomedical engineering, biometrics, intelligent computing, target recognition, content-based image retrieval, and data mining. An ontology can be regarded as a logical theory accounting for the intended meaning of a formal dictionary, i.e., its ontological commitment to a particular conceptualization of the world object. According to this approach, "n-D Tensor Calculus" can be considered a "Formal Language" to reliably compute optimized "n-Dimensional Tensor Invariants" as specific object "invariant parameter and attribute words" for automated n-Dimensional shape/texture optimal synthetic object description by incremental model generation. The class of those "invariant parameter and attribute words" can be thought of as a specific "Formal Vocabulary" learned from a "Generalized Formal Dictionary" of the "Computational Tensor Invariants" language. Even object chromatic attributes can be effectively and reliably computed from object geometric parameters into robust colour shape invariant characteristics. Any highly sophisticated application needing effective, robust object geometric/colour invariant attribute capture and parameterization, for reliable automated object learning and discrimination, can benefit deeply from GEOGINE's progressive automated model generation computational kernel. 
Main operational advantages over previous, similar approaches are: 1) Progressive Automated Invariant Model Generation, 2) Invariant Minimal Complete Description Set for computational efficiency, 3) Arbitrary Model Precision for robust object description and identification.

  14. EarthCube Data Discovery Hub: Enhancing, Curating and Finding Data across Multiple Geoscience Data Sources.

    NASA Astrophysics Data System (ADS)

    Zaslavsky, I.; Valentine, D.; Richard, S. M.; Gupta, A.; Meier, O.; Peucker-Ehrenbrink, B.; Hudman, G.; Stocks, K. I.; Hsu, L.; Whitenack, T.; Grethe, J. S.; Ozyurt, I. B.

    2017-12-01

    EarthCube Data Discovery Hub (DDH) is an EarthCube Building Block project using technologies developed in CINERGI (Community Inventory of EarthCube Resources for Geoscience Interoperability) to enable geoscience users to explore a growing portfolio of EarthCube-created and other geoscience-related resources. Over 1 million metadata records are available for discovery through the project portal (cinergi.sdsc.edu). These records are retrieved from data facilities, including federal, state and academic sources, or contributed by geoscientists through workshops, surveys, or other channels. CINERGI metadata augmentation pipeline components 1) provide semantic enhancement based on a large ontology of geoscience terms, using text analytics to generate keywords with references to ontology classes, 2) add spatial extents based on place names found in the metadata record, and 3) add organization identifiers to the metadata. The records are indexed and can be searched via a web portal and standard search APIs. The added metadata content improves discoverability and interoperability of the registered resources. Specifically, the addition of ontology-anchored keywords enables faceted browsing and lets users navigate to datasets related by variables measured, equipment used, science domain, processes described, geospatial features studied, and other dataset characteristics that are generated by the pipeline. DDH also lets data curators access and edit the automatically generated metadata records using the CINERGI metadata editor, accept or reject the enhanced metadata content, and consider it in updating their metadata descriptions. 
We consider several complex data discovery workflows, in environmental seismology (quantifying sediment and water fluxes using seismic data), marine biology (determining available temperature, location, weather and bleaching characteristics of coral reefs related to measurements in a given coral reef survey), and river geochemistry (discovering observations relevant to geochemical measurements outside the tidal zone, given specific discharge conditions).

  15. Service composition towards increasing end-user accessibility.

    PubMed

    Kaklanis, Nikolaos; Votis, Konstantinos; Tzovaras, Dimitrios

    2015-01-01

    This paper presents the Cloud4all Service Synthesizer Tool, a framework that enables efficient orchestration of accessibility services, as well as their combination into complex forms, providing more advanced functionalities towards increasing the accessibility of end-users with various types of functional limitations. The supported services are described formally within an ontology, thus enabling semantic service composition. The proposed service composition approach is based on semantic matching between service specifications on the one hand and user needs/preferences and the current context of use on the other. The use of automatic composition of accessibility services can significantly enhance end-users' accessibility, especially in cases where assistive solutions are not available on their device.

  16. WebDB Component Builder - Lessons Learned

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Macedo, C.

    2000-02-15

    Oracle WebDB is the easiest way to produce web-enabled, lightweight, enterprise-centric applications. This concept from Oracle has tantalized our taste for simple web development by using a purely web-based tool that lives nowhere else but in the database. The online wizards, templates, and query builders, which produce PL/SQL behind the curtains, can be used straight ''out of the box'' by both novice and seasoned developers. This presentation will introduce lessons learned by developing and deploying applications built using the WebDB Component Builder in conjunction with custom PL/SQL code to empower a hybrid application. There are two kinds of WebDB components: those that display data to end users via reporting, and those that let end users update data in the database via entry forms. The presentation will also discuss various methods within the Component Builder to enhance the applications pushed to the desktop. The demonstrated example is an application entitled HOME (Helping Other's More Effectively) that was built to manage a yearly United Way Campaign effort. Our task was to build an end-to-end application which could manage approximately 900 non-profit agencies, an average of 4,100 individual contributions, and $1.2 million. Using WebDB, the shell of the application was put together in a matter of a few weeks. However, we did encounter some hurdles that WebDB, in its infancy (v2.0), could not solve for us directly. Together with custom PL/SQL, WebDB's Component Builder became a powerful tool that enabled us to produce a very flexible hybrid application.

  17. Building America Best Practices Series Volume 15: 40% Whole-House Energy Savings in the Hot-Humid Climate

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baechler, Michael C.; Gilbride, Theresa L.; Hefty, Marye G.

    2011-09-01

    This best practices guide is the 15th in a series of guides for builders produced by PNNL for the U.S. Department of Energy’s Building America program. This guide book is a resource to help builders design and construct homes that are among the most energy-efficient available, while addressing issues such as building durability, indoor air quality, and occupant health, safety, and comfort. With the measures described in this guide, builders in the hot-humid climate can build homes that have whole-house energy savings of 40% over the Building America benchmark with no added overall costs for consumers. The best practices described in this document are based on the results of research and demonstration projects conducted by Building America’s research teams. Building America brings together the nation’s leading building scientists with over 300 production builders to develop, test, and apply innovative, energy-efficient construction practices. Building America builders have found they can build homes that meet these aggressive energy-efficiency goals at no net increased costs to the homeowners. Currently, Building America homes achieve energy savings of 40% greater than the Building America benchmark home (a home built to mid-1990s building practices roughly equivalent to the 1993 Model Energy Code). The recommendations in this document meet or exceed the requirements of the 2009 IECC and 2009 IRC and those requirements are highlighted in the text. Requirements of the 2012 IECC and 2012 IRC are also noted in text and tables throughout the guide. This document will be distributed via the DOE Building America website: www.buildingamerica.gov.

  18. Building America Best Practices Series Volume 16: 40% Whole-House Energy Savings in the Mixed-Humid Climate

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baechler, Michael C.; Gilbride, Theresa L.; Hefty, Marye G.

    2011-09-01

    This best practices guide is the 16th in a series of guides for builders produced by PNNL for the U.S. Department of Energy’s Building America program. This guide book is a resource to help builders design and construct homes that are among the most energy-efficient available, while addressing issues such as building durability, indoor air quality, and occupant health, safety, and comfort. With the measures described in this guide, builders in the mixed-humid climate can build homes that have whole-house energy savings of 40% over the Building America benchmark with no added overall costs for consumers. The best practices described in this document are based on the results of research and demonstration projects conducted by Building America’s research teams. Building America brings together the nation’s leading building scientists with over 300 production builders to develop, test, and apply innovative, energy-efficient construction practices. Building America builders have found they can build homes that meet these aggressive energy-efficiency goals at no net increased costs to the homeowners. Currently, Building America homes achieve energy savings of 40% greater than the Building America benchmark home (a home built to mid-1990s building practices roughly equivalent to the 1993 Model Energy Code). The recommendations in this document meet or exceed the requirements of the 2009 IECC and 2009 IRC and those requirements are highlighted in the text. Requirements of the 2012 IECC and 2012 IRC are also noted in text and tables throughout the guide. This document will be distributed via the DOE Building America website: www.buildingamerica.gov.

  19. GeneBuilder: interactive in silico prediction of gene structure.

    PubMed

    Milanesi, L; D'Angelo, D; Rogozin, I B

    1999-01-01

    Prediction of gene structure in newly sequenced DNA becomes very important in large genome sequencing projects. This problem is complicated due to the exon-intron structure of eukaryotic genes and because gene expression is regulated by many different short nucleotide domains. In order to be able to analyse the full gene structure in different organisms, it is necessary to combine information about potential functional signals (promoter region, splice sites, start and stop codons, 3' untranslated region) together with the statistical properties of coding sequences (coding potential), information about homologous proteins, ESTs and repeated elements. We have developed the GeneBuilder system which is based on prediction of functional signals and coding regions by different approaches in combination with similarity searches in protein and EST databases. The potential gene structure models are obtained by using a dynamic programming method. The program permits the use of several parameters for gene structure prediction and refinement. During gene model construction, selecting different exon homology levels against a protein sequence chosen from a list of homologous proteins can improve the accuracy of the gene structure prediction. In the case of low homology, GeneBuilder is still able to predict the gene structure. The GeneBuilder system has been tested by using the standard set (Burset and Guigo, Genomics, 34, 353-367, 1996) and the performances are: 0.89 sensitivity and 0.91 specificity at the nucleotide level. The total correlation coefficient is 0.88. The GeneBuilder system is implemented as a part of WebGene at the URL http://www.itba.mi.cnr.it/webgene and of the TRADAT (TRAnscription Database and Analysis Tools) launcher at the URL http://www.itba.mi.cnr.it/tradat.
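
    The dynamic-programming step, assembling a maximal-scoring gene model from candidate exons, is in essence weighted interval scheduling. The sketch below is a stripped-down illustration: GeneBuilder's actual scoring combines functional signals, coding potential, and homology, whereas this toy version takes a single precomputed score per candidate exon.

```python
import bisect

def best_chain_score(exons):
    """Max total score over a set of non-overlapping candidate exons.

    exons: list of (start, end, score) tuples; adjacent exons may touch
    (next start == previous end), standing in for an intervening intron.
    """
    exons = sorted(exons, key=lambda e: e[1])
    ends = [e[1] for e in exons]
    best = [0.0]  # best[i] = best score using only the first i exons
    for i, (start, _end, score) in enumerate(exons):
        # Last exon (among the first i) ending at or before this start.
        j = bisect.bisect_right(ends, start, 0, i)
        best.append(max(best[i], best[j] + score))
    return best[-1]

score = best_chain_score([(0, 10, 5.0), (5, 15, 6.0), (15, 20, 4.0)])
```

    A full implementation would also record back-pointers to recover the chosen exon chain, i.e. the predicted gene model, rather than just its score.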

  20. The Interaction Network Ontology-supported modeling and mining of complex interactions represented with multiple keywords in biomedical literature.

    PubMed

    Özgür, Arzucan; Hur, Junguk; He, Yongqun

    2016-01-01

    The Interaction Network Ontology (INO) logically represents biological interactions, pathways, and networks. INO has been demonstrated to be valuable in providing a set of structured ontological terms and associated keywords to support literature mining of gene-gene interactions from biomedical literature. However, previous work using INO focused on single keyword matching, while many interactions are represented with two or more interaction keywords used in combination. This paper reports our extension of INO to include combinatory patterns of two or more literature mining keywords co-existing in one sentence to represent specific INO interaction classes. Such keyword combinations and related INO interaction type information could be automatically obtained via SPARQL queries, formatted in Excel format, and used in an INO-supported SciMiner, an in-house literature mining program. We studied the gene interaction sentences from the commonly used benchmark Learning Logic in Language (LLL) dataset and one internally generated vaccine-related dataset to identify and analyze interaction types containing multiple keywords. Patterns obtained from the dependency parse trees of the sentences were used to identify the interaction keywords that are related to each other and collectively represent an interaction type. The INO ontology currently has 575 terms including 202 terms under the interaction branch. The relations between the INO interaction types and associated keywords are represented using the INO annotation relations: 'has literature mining keywords' and 'has keyword dependency pattern'. The keyword dependency patterns were generated via running the Stanford Parser to obtain dependency relation types. Out of the 107 interactions in the LLL dataset represented with two-keyword interaction types, 86 were identified by using the direct dependency relations. 
The LLL dataset contained 34 gene regulation interaction types, each of which is associated with multiple keywords. A hierarchical display of these 34 interaction types and their ancestor terms in INO resulted in the identification of specific gene-gene interaction patterns from the LLL dataset. The phenomenon of having multi-keyword interaction types was also frequently observed in the vaccine dataset. By modeling and representing multiple textual keywords for interaction types, the extended INO enabled the identification of complex biological gene-gene interactions represented with multiple keywords.
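
    In its simplest form, the extension described here, typing an interaction only when a full combination of keywords co-occurs in one sentence, reduces to a subset test. The real method additionally checks dependency-parse relations between the keywords, which this sketch omits, and the patterns below are invented rather than taken from INO:

```python
def match_interaction_types(sentence, keyword_patterns):
    """Return interaction types whose complete keyword combination occurs in the sentence."""
    tokens = set(sentence.lower().replace(".", " ").split())
    return sorted(t for t, keywords in keyword_patterns.items() if keywords <= tokens)

# Invented two-keyword patterns in the spirit of 'has literature mining keywords'.
patterns = {
    "positive regulation": {"activates", "expression"},
    "negative regulation": {"represses", "expression"},
}
types = match_interaction_types("GerE activates the expression of sigK.", patterns)
```

    Requiring all keywords of a pattern to co-occur is what distinguishes this from the earlier single-keyword matching, at the cost of needing dependency patterns to confirm the keywords actually relate to each other.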

  1. ontologyX: a suite of R packages for working with ontological data.

    PubMed

    Greene, Daniel; Richardson, Sylvia; Turro, Ernest

    2017-04-01

    Ontologies are widely used constructs for encoding and analyzing biomedical data, but the absence of simple and consistent tools has made exploratory and systematic analysis of such data unnecessarily difficult. Here we present three packages which aim to simplify such procedures. The ontologyIndex package enables arbitrary ontologies to be read into R, supports representation of ontological objects by native R types, and provides a parsimonious set of performant functions for querying ontologies. ontologySimilarity and ontologyPlot extend ontologyIndex with functionality for straightforward visualization and semantic similarity calculations, including statistical routines. ontologyIndex, ontologyPlot and ontologySimilarity are all available on the Comprehensive R Archive Network at https://cran.r-project.org/web/packages/. Contact: Daniel Greene, dg333@cam.ac.uk. Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press.
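
    For readers unfamiliar with this class of query, the kind of operation such packages perform, for example collecting all transitive ancestors of a term over the is-a graph, amounts to a simple graph traversal. A language-agnostic sketch in Python (the tiny hierarchy below is a made-up fragment in the style of HPO identifiers, not loaded from a real ontology file):

```python
def ancestors(term, parents):
    """All transitive ancestors of a term in an is-a DAG (the term itself excluded)."""
    seen, stack = set(), [term]
    while stack:
        for parent in parents.get(stack.pop(), []):
            if parent not in seen:
                seen.add(parent)
                stack.append(parent)
    return seen

# Made-up three-level is-a hierarchy.
isa = {"HP:0000924": ["HP:0000118"], "HP:0000118": ["HP:0000001"]}
anc = ancestors("HP:0000924", isa)
```

    Semantic-similarity measures of the kind ontologySimilarity computes are typically built on top of exactly this ancestor operation.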

  2. VQone MATLAB toolbox: A graphical experiment builder for image and video quality evaluations: VQone MATLAB toolbox.

    PubMed

    Nuutinen, Mikko; Virtanen, Toni; Rummukainen, Olli; Häkkinen, Jukka

    2016-03-01

    This article presents VQone, a graphical experiment builder, written as a MATLAB toolbox, developed for image and video quality ratings. VQone contains the main elements needed for the subjective image and video quality rating process. This includes building and conducting experiments and data analysis. All functions can be controlled through graphical user interfaces. The experiment builder includes many standardized image and video quality rating methods. Moreover, it enables the creation of new methods or modified versions from standard methods. VQone is distributed free of charge under the terms of the GNU general public license and allows code modifications to be made so that the program's functions can be adjusted according to a user's requirements. VQone is available for download from the project page (http://www.helsinki.fi/psychology/groups/visualcognition/).

  3. Automated prediction of protein function and detection of functional sites from structure.

    PubMed

    Pazos, Florencio; Sternberg, Michael J E

    2004-10-12

    Current structural genomics projects are yielding structures for proteins whose functions are unknown. Accordingly, there is a pressing requirement for computational methods for function prediction. Here we present PHUNCTIONER, an automatic method for structure-based function prediction using automatically extracted functional sites (residues associated with functions). The method relates proteins with the same function through structural alignments and extracts 3D profiles of conserved residues. Functional features to train the method are extracted from the Gene Ontology (GO) database. The method extracts these features from the entire GO hierarchy and hence is applicable across the whole range of function specificity. 3D profiles associated with 121 GO annotations were extracted. We tested the power of the method both for the prediction of function and for the extraction of functional sites. The success of function prediction by our method was compared with that of the standard homology-based method. In the zone of low sequence similarity (approximately 15%), our method assigns the correct GO annotation in 90% of the protein structures considered, approximately 20% higher than inheritance of function from the closest homologue.
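
    The profile idea, recording which residues are conserved across aligned proteins sharing a function, can be caricatured in sequence space as a per-column conservation score. Real PHUNCTIONER profiles come from structural alignments in 3D, not the toy multiple alignment used here:

```python
from collections import Counter

def conservation_profile(aligned_seqs):
    """Per-column fraction of the most common residue across aligned sequences
    (gaps would be counted like any other symbol in this toy version)."""
    n = len(aligned_seqs)
    profile = []
    for column in zip(*aligned_seqs):
        top_count = Counter(column).most_common(1)[0][1]
        profile.append(top_count / n)
    return profile

# Three invented aligned fragments; columns 1 and 4 are fully conserved.
profile = conservation_profile(["HDSG", "HDAG", "HESG"])
```

    Columns with high conservation are the candidate functional-site residues; a structural version scores spatially equivalent positions rather than alignment columns.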

  4. Autonomous mental development with selective attention, object perception, and knowledge representation

    NASA Astrophysics Data System (ADS)

    Ban, Sang-Woo; Lee, Minho

    2008-04-01

    Knowledge-based clustering and autonomous mental development remain high-priority research topics, in which neural network learning techniques are used to achieve optimal performance. In this paper, we present a new framework that can automatically generate a relevance map from sensory data that can represent knowledge regarding objects and infer new knowledge about novel objects. The proposed model is based on an understanding of the visual 'what' pathway in the brain. A stereo saliency map model can selectively decide salient object areas by additionally considering a local symmetry feature. The incremental object perception model builds clusters for the construction of an ontology map in the color and form domains in order to perceive an arbitrary object; this is implemented by the growing fuzzy topology adaptive resonance theory (GFTART) network. Log-polar transformed color and form features for a selected object are used as inputs of the GFTART. The clustered information is relevant for describing specific objects, and the proposed model can automatically infer an unknown object by using the learned information. Experimental results with real data have demonstrated the validity of this approach.

  5. BiobankUniverse: automatic matchmaking between datasets for biobank data discovery and integration.

    PubMed

    Pang, Chao; Kelpin, Fleur; van Enckevort, David; Eklund, Niina; Silander, Kaisa; Hendriksen, Dennis; de Haan, Mark; Jetten, Jonathan; de Boer, Tommy; Charbon, Bart; Holub, Petr; Hillege, Hans; Swertz, Morris A

    2017-11-15

    Biobanks are indispensable for large-scale genetic/epidemiological studies, yet it remains difficult for researchers to determine which biobanks contain data matching their research questions. To overcome this, we developed a new matching algorithm that identifies pairs of related data elements between biobanks and research variables with high precision and recall. It integrates lexical comparison, Unified Medical Language System ontology tagging and semantic query expansion. The result is BiobankUniverse, a fast matchmaking service for biobanks and researchers. Biobankers upload their data elements and researchers their desired study variables, BiobankUniverse automatically shortlists matching attributes between them. Users can quickly explore matching potential and search for biobanks/data elements matching their research. They can also curate matches and define personalized data-universes. BiobankUniverse is available at http://biobankuniverse.com or can be downloaded as part of the open source MOLGENIS suite at http://github.com/molgenis/molgenis. m.a.swertz@rug.nl. Supplementary data are available at Bioinformatics online. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com
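
    The lexical part of such matchmaking, comparing token sets after synonym expansion, can be sketched as follows. The synonym table is invented, and BiobankUniverse additionally uses UMLS ontology tagging and ranks matches by precision/recall-tuned scores, none of which this sketch attempts:

```python
def expand(tokens, synonyms):
    """Add known synonyms of each token (a stand-in for UMLS/ontology-based expansion)."""
    expanded = set(tokens)
    for token in tokens:
        expanded.update(synonyms.get(token, []))
    return expanded

def match_score(research_variable, data_element, synonyms):
    """Jaccard similarity between the expanded token sets of two descriptions."""
    a = expand(set(research_variable.lower().split()), synonyms)
    b = expand(set(data_element.lower().split()), synonyms)
    return len(a & b) / len(a | b)

# Invented synonym table linking a clinical term to its lay phrasing.
synonyms = {"hypertension": ["high", "blood", "pressure"]}
score = match_score("hypertension", "high blood pressure", synonyms)
```

    Without the expansion step the two descriptions above would share no tokens at all, which is exactly the vocabulary gap between biobank data dictionaries and researcher-phrased variables that the algorithm targets.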

  6. Building Learning Modules for Undergraduate Education Using LEAD Technology

    NASA Astrophysics Data System (ADS)

    Clark, R. D.; Yalda, S.

    2006-12-01

    Linked Environments for Atmospheric Discovery (LEAD) aims to make meteorological data, forecast models, and analysis and visualization tools available to anyone who wants to interactively explore the weather as it evolves. LEAD advances through the development and beta-deployment of Integrated Test Beds (ITBs), which are technology build-outs that are the fruition of collaborative IT and meteorological research. As the ITBs mature, opportunities emerge for the integration of this new technological capability into the education arena. The LEAD education and outreach initiative is aimed at bringing new capabilities into the classroom from the middle school level to graduate education and beyond, and at ensuring the congruency of this technology with curricula. One of the principal goals of LEAD is to democratize the availability of advanced weather technologies for research and education. The degree of democratization is tied to the growth of student knowledge and skills, and is correlated with education level (though not for every student in the same way). The average high school student may experience LEAD through an environment that retains a higher level of instructor control compared to the undergraduate and graduate student. This is necessary to accommodate not only differences in knowledge and skills, but also the computer capabilities in the classroom, such that the "teachable moment" is not lost. Undergraduates will have the opportunity to query observation data and model output, explore and discover relationships through concept mapping using an ontology service, select domains of interest based on current weather, and employ an experiment builder within the LEAD portal as an interface to configure and launch the WRF model, monitor the workflow, and visualize results using Unidata's Integrated Data Viewer (IDV), whether on a local server or across the TeraGrid.
Such a robust and comprehensive suite of tools and services can create new paradigms for embedding students in an authentic, contextualized environment where the knowledge domain is an extension, yet integral supplement, to the classroom experience. This presentation describes two different approaches for the use of LEAD in undergraduate education: 1) a use-case for integrating LEAD technology into undergraduate subject material; and 2) making LEAD capability available to a select group of students participating in the National Collegiate Forecasting Contest (NCFC). The use-case (1) is designed to have students explore a particular weather phenomenon (e.g., a frontal boundary, jet streak, or lake effect snow event) through self-guided inquiry, and is intended as a supplement to classroom instruction. Students will use interactive, Web-based, LEAD-to-Learn modules created specifically to build conceptual knowledge of the phenomenon, adjoin germane terminology, explore relationships between concepts and similar phenomena using the LEAD ontology, and guide them through the experiment builder and workflow orchestration process in order to establish a high-resolution WRF run over a region that exhibits the characteristics of the phenomenon they wish to study. The results of the experiment will be stored in the student's MyLEAD workspace, from which they can be retrieved, visualized, and analyzed for atmospheric signatures characteristic of the phenomenon. The learning process is authentic in that students will be exposed to the same process of investigation, and will have available many of the same tools, as researchers. The modules serve to build content knowledge, guide discovery, and provide assessment, while the LEAD portal opens the gateway to real-time observations, model accessibility, and a variety of tools, services, and resources.

  7. XAL Application Framework and Bricks GUI Builder

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pelaia II, Tom

    2007-01-01

    The XAL [1] Application Framework is a framework for rapidly developing document based Java applications with a common look and feel along with many built-in user interface behaviors. The Bricks GUI builder consists of a modern application and framework for rapidly building user interfaces in support of true Model-View-Controller (MVC) compliant Java applications. Bricks and the XAL Application Framework allow developers to rapidly create quality applications.

  8. Object Toolkit Version 4.3 User’s Manual

    DTIC Science & Technology

    2016-12-31

    unlimited. (OPS-17-12855 dtd 19 Jan 2017) Object Toolkit is a finite-element model builder specifically designed for creating representations of spacecraft... Nascap-2k and EPIC, the user is not required to purchase or learn expensive finite element generators to create system models. Second, Object Toolkit

  9. 76 FR 75949 - Requested Administrative Waiver of the Coastwise Trade Laws: Vessel NAGA; Invitation for Public...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-12-05

    ... multi-day charters for up to 6 passengers with particular emphasis on bird watching, natural history and... may have on U.S. vessel builders or businesses in the U.S. that use U.S.-flag vessels. If MARAD... of the waiver will have an unduly adverse effect on a U.S.-vessel builder or a business that uses U.S...

  10. Assessing Security Cooperation: Improving Methods to Maximize Effects

    DTIC Science & Technology

    2013-03-01

    Carl Builder notes, “Institutional and personal interest are not intrinsically bad; but they may be made so if they are always cloaked in altruism and...cooperation professional may be able to lay the groundwork for validating or disqualifying actions based on both countable metrics and anecdotal...Pre-decisional Working Draft), (Washington DC: Multi-Agency, July 31, 2012), 37. 6 Carl H. Builder, The Masks of War: American Military Styles in

  11. CHARMM-GUI Membrane Builder toward realistic biological membrane simulations.

    PubMed

    Wu, Emilia L; Cheng, Xi; Jo, Sunhwan; Rui, Huan; Song, Kevin C; Dávila-Contreras, Eder M; Qi, Yifei; Lee, Jumin; Monje-Galvan, Viviana; Venable, Richard M; Klauda, Jeffery B; Im, Wonpil

    2014-10-15

    CHARMM-GUI Membrane Builder, http://www.charmm-gui.org/input/membrane, is a web-based user interface designed to interactively build all-atom protein/membrane or membrane-only systems for molecular dynamics simulations through an automated optimized process. In this work, we describe the new features and major improvements in Membrane Builder that allow users to robustly build realistic biological membrane systems, including (1) addition of new lipid types, such as phosphoinositides, cardiolipin (CL), sphingolipids, bacterial lipids, and ergosterol, yielding more than 180 lipid types, (2) enhanced building procedure for lipid packing around protein, (3) reliable algorithm to detect lipid tail penetration to ring structures and protein surface, (4) distance-based algorithm for faster initial ion displacement, (5) CHARMM inputs for P21 image transformation, and (6) NAMD equilibration and production inputs. The robustness of these new features is illustrated by building and simulating a membrane model of the polar and septal regions of E. coli membrane, which contains five lipid types: CL lipids with two types of acyl chains and phosphatidylethanolamine lipids with three types of acyl chains. It is our hope that CHARMM-GUI Membrane Builder becomes a useful tool for simulation studies to better understand the structure and dynamics of proteins and lipids in realistic biological membrane environments. Copyright © 2014 Wiley Periodicals, Inc.

  12. Class in construction: London building workers, dirty work and physical cultures.

    PubMed

    Thiel, Darren

    2007-06-01

    Descriptions of manual employment tend to ignore its diversity and overstate the homogenizing effects of technology and industrialization. Based on ethnographic research on a London construction site, building work was found to be shaped by the forms of a pre-industrial work pattern characterized by task autonomy and freedom from managerial control. The builders' identities were largely free from personal identification as working class, and collective identification was fractured by trade status, and ethnic and gender divisions. Yet the shadow of a class-based discursive symbolism, which centered partly on the division of minds/bodies, mental/manual, and clean/dirty work, framed their accounts, identities and cultures. The builders displayed what is frequently termed working-class culture, and it was highly masculine. This physical and bodily-centered culture shielded them from the possible stigmatization of class and provided them with a source of localized capital. 'Physical capital' in conjunction with social capital (the builders' networks of friends and family) had largely guided their position in the stratification system, and values associated with these forms of capital were paramount to their public cultures. This cultural emphasis offered a continuing functionality in the builders' lives, not having broken free from tradition or becoming an object of reflexive choice.

  13. New Whole-House Solutions Case Study: HVAC Design Strategy for a Hot-Humid Production Builder, Houston, Texas

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    None

    Building Science Corporation (BSC) worked directly with the David Weekley Homes - Houston division to develop a cost-effective design for moving the HVAC system into conditioned space. In addition, BSC conducted energy analysis to calculate the most economical strategy for increasing the energy performance of future production houses in preparation for the upcoming code changes in 2015. The following research questions were addressed by this research project: 1. What is the most cost effective, best performing and most easily replicable method of locating ducts inside conditioned space for a hot-humid production home builder that constructs one and two story single family detached residences? 2. What is a cost effective and practical method of achieving 50% source energy savings vs. the 2006 International Energy Conservation Code for a hot-humid production builder? 3. How accurate are the pre-construction whole house cost estimates compared to confirmed post construction actual cost? BSC and the builder developed a duct design strategy that employs a system of dropped ceilings and attic coffers for moving the ductwork from the vented attic to conditioned space. The furnace has been moved to either a mechanical closet in the conditioned living space or a coffered space in the attic.

  14. Validation of bioelectrical impedance analysis to hydrostatic weighing in male body builders.

    PubMed

    Volpe, Stella Lucia; Melanson, Edward L; Kline, Gregory

    2010-03-01

    The purpose of this study was to compare bioelectrical impedance analysis (BIA) to hydrostatic weighing (HW) in male weight lifters and body builders. Twenty-two male body builders and weight lifters, 23 +/- 3 years of age (mean +/- SD), were studied to determine the efficacy of BIA relative to HW in this population. Subjects were measured on two separate occasions, 6 weeks apart, for test-retest reliability purposes. Participants recorded 3-day dietary intakes and average work-out times and regimens between the two testing periods. Subjects were, on average, 75 +/- 8 kg of body weight and 175 +/- 7 cm tall. Validation results were as follows: constant error for HW-BIA = 0.128 +/- 3.7%, r for HW versus BIA = -0.294. Standard error of the estimate for BIA = 2.32% and the total error for BIA = 3.6%. Percent body fat was 7.8 +/- 1% from BIA and 8.5 +/- 2% from HW (P > 0.05). Subjects consumed 3,217 +/- 1,027 kcals; 1,848 +/- 768 kcals from carbohydrates; 604 +/- 300 kcals from protein; and 783 +/- 369 kcals from fat. Although work-outs differed among subjects, within-subject training did not vary. These results suggest that measurement of percent body fat in male body builders and weight trainers is equally accurate using BIA or HW.
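The reported statistics (constant error, standard error of the estimate, total error) are standard cross-validation measures for comparing a predictor against a criterion method; a sketch with invented %fat values, not the study's data:

```python
import math

# Standard cross-validation statistics for comparing a predictor (BIA)
# with a criterion (HW); the %fat values below are illustrative only.
def validation_stats(criterion, predicted):
    n = len(criterion)
    ce = sum(c - p for c, p in zip(criterion, predicted)) / n       # constant error
    te = math.sqrt(sum((p - c) ** 2                                  # total error
                       for c, p in zip(criterion, predicted)) / n)
    mc = sum(criterion) / n
    mp = sum(predicted) / n
    sxy = sum((p - mp) * (c - mc) for c, p in zip(criterion, predicted))
    sxx = sum((p - mp) ** 2 for p in predicted)
    slope = sxy / sxx
    resid = sum((c - (mc + slope * (p - mp))) ** 2                   # regression residuals
                for c, p in zip(criterion, predicted))
    see = math.sqrt(resid / (n - 2))                                 # standard error of estimate
    return ce, see, te

hw  = [8.5, 9.0, 7.5, 8.0, 9.5]   # criterion %fat (invented)
bia = [7.8, 8.4, 7.9, 7.6, 8.8]   # predicted %fat (invented)
ce, see, te = validation_stats(hw, bia)
```

Total error bounds constant error from below, since it folds both bias and scatter into one number; that is why validation studies report all three.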

  15. Ontology Research and Development. Part 2 - A Review of Ontology Mapping and Evolving.

    ERIC Educational Resources Information Center

    Ding, Ying; Foo, Schubert

    2002-01-01

    Reviews ontology research and development, specifically ontology mapping and evolving. Highlights include an overview of ontology mapping projects; maintaining existing ontologies and extending them as appropriate when new information or knowledge is acquired; and ontology's role and the future of the World Wide Web, or Semantic Web. (Contains 55…

  16. NCBO Ontology Recommender 2.0: an enhanced approach for biomedical ontology recommendation.

    PubMed

    Martínez-Romero, Marcos; Jonquet, Clement; O'Connor, Martin J; Graybeal, John; Pazos, Alejandro; Musen, Mark A

    2017-06-07

    Ontologies and controlled terminologies have become increasingly important in biomedical research. Researchers use ontologies to annotate their data with ontology terms, enabling better data integration and interoperability across disparate datasets. However, the number, variety and complexity of current biomedical ontologies make it cumbersome for researchers to determine which ones to reuse for their specific needs. To overcome this problem, in 2010 the National Center for Biomedical Ontology (NCBO) released the Ontology Recommender, which is a service that receives a biomedical text corpus or a list of keywords and suggests ontologies appropriate for referencing the indicated terms. We developed a new version of the NCBO Ontology Recommender. Called Ontology Recommender 2.0, it uses a novel recommendation approach that evaluates the relevance of an ontology to biomedical text data according to four different criteria: (1) the extent to which the ontology covers the input data; (2) the acceptance of the ontology in the biomedical community; (3) the level of detail of the ontology classes that cover the input data; and (4) the specialization of the ontology to the domain of the input data. Our evaluation shows that the enhanced recommender provides higher quality suggestions than the original approach, providing better coverage of the input data, more detailed information about their concepts, increased specialization for the domain of the input data, and greater acceptance and use in the community. In addition, it provides users with more explanatory information, along with suggestions of not only individual ontologies but also groups of ontologies to use together. It also can be customized to fit the needs of different ontology recommendation scenarios. Ontology Recommender 2.0 suggests relevant ontologies for annotating biomedical text data. 
It combines the strengths of its predecessor with a range of adjustments and new features that improve its reliability and usefulness. Ontology Recommender 2.0 recommends over 500 biomedical ontologies from the NCBO BioPortal platform, where it is openly available (both via the user interface at http://bioportal.bioontology.org/recommender , and via a Web service API).
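A recommender of this shape reduces to scoring each candidate ontology on the four criteria and ranking by a weighted aggregate. The weights below are illustrative assumptions for the sketch, not the published Recommender 2.0 values:

```python
# Illustrative weighted aggregation of the four recommendation criteria;
# the weights are assumptions, not the Recommender's published configuration.
def recommend(scores, weights=(0.4, 0.3, 0.15, 0.15)):
    """scores: {ontology: (coverage, acceptance, detail, specialization)},
    each criterion normalized to [0, 1]; returns ontologies best-first."""
    ranked = sorted(scores.items(),
                    key=lambda kv: -sum(w * s for w, s in zip(weights, kv[1])))
    return [name for name, _ in ranked]

order = recommend({
    "ONT_A": (0.9, 0.5, 0.6, 0.4),   # broad coverage, less specialized
    "ONT_B": (0.4, 0.9, 0.9, 0.9),   # narrower but well accepted and detailed
})
```

Changing the weights shifts the trade-off between coverage and specialization, which is how such a service can be customized to different recommendation scenarios.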

  17. The eXtensible ontology development (XOD) principles and tool implementation to support ontology interoperability.

    PubMed

    He, Yongqun; Xiang, Zuoshuang; Zheng, Jie; Lin, Yu; Overton, James A; Ong, Edison

    2018-01-12

    Ontologies are critical to data/metadata and knowledge standardization, sharing, and analysis. With hundreds of biological and biomedical ontologies developed, it has become critical to ensure ontology interoperability and the usage of interoperable ontologies for standardized data representation and integration. The suite of web-based Ontoanimal tools (e.g., Ontofox, Ontorat, and Ontobee) support different aspects of extensible ontology development. By summarizing the common features of Ontoanimal and other similar tools, we identified and proposed an "eXtensible Ontology Development" (XOD) strategy and its associated four principles. These XOD principles reuse existing terms and semantic relations from reliable ontologies, develop and apply well-established ontology design patterns (ODPs), and involve community efforts to support new ontology development, promoting standardized and interoperable data and knowledge representation and integration. The adoption of the XOD strategy, together with robust XOD tool development, will greatly support ontology interoperability and robust ontology applications to support data to be Findable, Accessible, Interoperable and Reusable (i.e., FAIR).

  18. Closing the loop: from paper to protein annotation using supervised Gene Ontology classification.

    PubMed

    Gobeill, Julien; Pasche, Emilie; Vishnyakova, Dina; Ruch, Patrick

    2014-01-01

    Gene function curation of the literature with Gene Ontology (GO) concepts is one particularly time-consuming task in genomics, and the help from bioinformatics is highly requested to keep up with the flow of publications. In 2004, the first BioCreative challenge already designed a task of automatic GO concepts assignment from a full text. At this time, results were judged far from reaching the performances required by real curation workflows. In particular, supervised approaches produced the most disappointing results because of lack of training data. Ten years later, the available curation data have massively grown. In 2013, the BioCreative IV GO task revisited the automatic GO assignment task. For this issue, we investigated the power of our supervised classifier, GOCat. GOCat computes similarities between an input text and already curated instances contained in a knowledge base to infer GO concepts. Subtask A consisted in selecting GO evidence sentences for a relevant gene in a full text. For this, we designed a state-of-the-art supervised statistical approach, using a naïve Bayes classifier and the official training set, and obtained fair results. Subtask B consisted in predicting GO concepts from the previous output. For this, we applied GOCat and reached leading results, up to 65% for hierarchical recall in the top 20 outputted concepts. Contrary to previous competitions, machine learning has this time outperformed standard dictionary-based approaches. Thanks to BioCreative IV, we were able to design a complete workflow for curation: given a gene name and a full text, this system is able to select evidence sentences for curation and to deliver highly relevant GO concepts. Observed performances are sufficient for being used in a real semiautomatic curation workflow. GOCat is available at http://eagl.unige.ch/GOCat/. http://eagl.unige.ch/GOCat4FT/.
© The Author(s) 2014. Published by Oxford University Press.
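GOCat's core idea, inferring GO concepts from similarity to already-curated instances, can be sketched as k-nearest-neighbor voting over a knowledge base; the curated pairs below are invented and this is not the GOCat implementation:

```python
from collections import Counter

# k-NN flavor of similarity-based concept assignment (a sketch, not GOCat):
# curated (text, GO id) pairs vote for the concepts of the most similar texts.
CURATED = [  # toy knowledge base, invented examples
    ("atp synthesis in mitochondria", "GO:0015986"),
    ("mitochondrial atp production", "GO:0015986"),
    ("dna repair after uv damage", "GO:0006281"),
]

def sim(a, b):
    """Jaccard similarity over word tokens."""
    ta, tb = set(a.split()), set(b.split())
    return len(ta & tb) / len(ta | tb)

def infer_go(text, k=2):
    nearest = sorted(CURATED, key=lambda kb: -sim(text, kb[0]))[:k]
    votes = Counter(go for _, go in nearest)
    return votes.most_common(1)[0][0]

concept = infer_go("atp synthesis defect in mitochondria")
```

Because the prediction is read off neighbors in curated data, the approach improves automatically as the knowledge base grows, which matches the abstract's point about curation data having massively grown.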

  19. Semantic biomedical resource discovery: a Natural Language Processing framework.

    PubMed

    Sfakianaki, Pepi; Koumakis, Lefteris; Sfakianakis, Stelios; Iatraki, Galatia; Zacharioudakis, Giorgos; Graf, Norbert; Marias, Kostas; Tsiknakis, Manolis

    2015-09-30

    A plethora of publicly available biomedical resources currently exist and are constantly increasing at a fast rate. In parallel, specialized repositories are being developed, indexing numerous clinical and biomedical tools. The main drawback of such repositories is the difficulty in locating appropriate resources for a clinical or biomedical decision task, especially for non-Information Technology expert users. Moreover, although NLP research in the clinical domain has been active since the 1960s, progress in the development of NLP applications has been slow and lags behind progress in the general NLP domain. The aim of the present study is to investigate the use of semantics for biomedical resources annotation with domain specific ontologies and to exploit Natural Language Processing methods in empowering non-Information Technology expert users to efficiently search for biomedical resources using natural language. A Natural Language Processing engine which can "translate" free text into targeted queries, automatically transforming a clinical research question into a request description that contains only terms of ontologies, has been implemented. The implementation is based on information extraction techniques for text in natural language, guided by integrated ontologies. Furthermore, knowledge from robust text mining methods has been incorporated to map descriptions into suitable domain ontologies in order to ensure that the biomedical resources descriptions are domain oriented and enhance the accuracy of services discovery. The framework is freely available as a web application at ( http://calchas.ics.forth.gr/ ). For our experiments, a range of clinical questions were established based on descriptions of clinical trials from the ClinicalTrials.gov registry as well as recommendations from clinicians.
Domain experts manually identified the available tools in a tools repository which are suitable for addressing the clinical questions at hand, either individually or as a set of tools forming a computational pipeline. The results were compared with those obtained from an automated discovery of candidate biomedical tools. For the evaluation of the results, precision and recall measurements were used. Our results indicate that the proposed framework has high precision and low recall, implying that the system returns essentially more relevant results than irrelevant ones. Adequate biomedical ontologies are already available, and existing NLP tools and biomedical annotation systems are sufficient for the implementation of a biomedical resources discovery framework based on the semantic annotation of resources and the use of NLP techniques. The results of the present study demonstrate the clinical utility of the proposed framework, which aims to bridge the gap between a clinical question in natural language and efficient dynamic biomedical resources discovery.
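The precision and recall measurements used for the evaluation can be computed directly from the retrieved set and the expert gold standard; the tool names below are placeholders:

```python
# Precision/recall as used to evaluate such discovery systems;
# the tool names are invented placeholders.
def precision_recall(retrieved, relevant):
    retrieved, relevant = set(retrieved), set(relevant)
    tp = len(retrieved & relevant)              # true positives
    precision = tp / len(retrieved) if retrieved else 0.0
    recall = tp / len(relevant) if relevant else 0.0
    return precision, recall

p, r = precision_recall(
    retrieved=["toolA", "toolB"],
    relevant=["toolA", "toolB", "toolC", "toolD"],
)
# high precision, low recall: everything returned is relevant,
# but half of the relevant tools are missed
```

This is exactly the "high precision, low recall" profile the abstract reports: few false positives at the cost of missed relevant tools.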

  20. NaviGO: interactive tool for visualization and functional similarity and coherence analysis with gene ontology.

    PubMed

    Wei, Qing; Khan, Ishita K; Ding, Ziyun; Yerneni, Satwica; Kihara, Daisuke

    2017-03-20

    The number of genomics and proteomics experiments is growing rapidly, producing an ever-increasing amount of data that are awaiting functional interpretation. A number of function prediction algorithms have been developed and improved to enable fast and automatic function annotation. With its well-defined structure and manual curation, Gene Ontology (GO) is the most frequently used vocabulary for representing gene functions. To understand the relationships and similarity between GO annotations of genes, it is important to have a convenient pipeline that quantifies and visualizes the GO function analyses in a systematic fashion. NaviGO is a web-based tool for interactive visualization, retrieval, and computation of functional similarity and associations of GO terms and genes. Similarity of GO terms and gene functions is quantified with six different scores, including protein-protein interaction and context-based association scores we have developed in our previous works. Interactive navigation of the GO function space provides intuitive and effective real-time visualization of functional groupings of GO terms and genes, as well as statistical analysis of enriched functions. We developed NaviGO, which visualizes and analyses functional similarity and associations of GO terms and genes. The NaviGO webserver is freely available at: http://kiharalab.org/web/navigo .
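One classic GO-term similarity of the kind such tools aggregate is Resnik's information-content score over the GO hierarchy; a sketch with a toy two-level DAG and invented annotation counts (not one of NaviGO's six published scores):

```python
import math

# Resnik-style information-content (IC) similarity between GO terms:
# the IC of the most informative common ancestor. Toy DAG and counts only.
PARENTS = {"GO:c1": {"GO:p"}, "GO:c2": {"GO:p"}, "GO:p": set()}  # toy hierarchy
ANNOT_FREQ = {"GO:c1": 2, "GO:c2": 3, "GO:p": 5}                 # invented counts

def ancestors(term):
    out = {term}
    for parent in PARENTS[term]:
        out |= ancestors(parent)
    return out

def ic(term, total=10):
    """Rarer terms carry more information: IC = -log p(term)."""
    return -math.log(ANNOT_FREQ[term] / total)

def resnik(t1, t2):
    common = ancestors(t1) & ancestors(t2)
    return max(ic(t) for t in common)

s = resnik("GO:c1", "GO:c2")   # IC of their most informative common ancestor
```

Siblings share only a general parent, so their score is capped by that parent's IC; a term compared with itself scores its own, higher, IC.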

  1. Benchmarking infrastructure for mutation text mining

    PubMed Central

    2014-01-01

    Background Experimental research on the automatic extraction of information about mutations from texts is greatly hindered by the lack of consensus evaluation infrastructure for the testing and benchmarking of mutation text mining systems. Results We propose a community-oriented annotation and benchmarking infrastructure to support development, testing, benchmarking, and comparison of mutation text mining systems. The design is based on semantic standards, where RDF is used to represent annotations, an OWL ontology provides an extensible schema for the data and SPARQL is used to compute various performance metrics, so that in many cases no programming is needed to analyze results from a text mining system. While large benchmark corpora for biological entity and relation extraction are focused mostly on genes, proteins, diseases, and species, our benchmarking infrastructure fills the gap for mutation information. The core infrastructure comprises (1) an ontology for modelling annotations, (2) SPARQL queries for computing performance metrics, and (3) a sizeable collection of manually curated documents that can support mutation grounding and mutation impact extraction experiments. Conclusion We have developed the principal infrastructure for the benchmarking of mutation text mining tasks. The use of RDF and OWL as the representation for corpora ensures extensibility. The infrastructure is suitable for out-of-the-box use in several important scenarios and is ready, in its current state, for initial community adoption. PMID:24568600
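The appeal of representing annotations as triples is that metric computation becomes set algebra over statements, which is what the SPARQL queries exploit; here plain Python tuples stand in for RDF, and the mutation annotations are invented:

```python
# Triple-based metric computation sketch: plain tuples stand in for RDF
# statements, and gold/system annotations are invented examples.
GOLD = {("doc1", "hasMutation", "V600E"), ("doc2", "hasMutation", "G12D")}
SYSTEM = {("doc1", "hasMutation", "V600E"), ("doc1", "hasMutation", "T790M")}

tp = len(GOLD & SYSTEM)              # triples the system got right
precision = tp / len(SYSTEM)
recall = tp / len(GOLD)
f1 = 2 * precision * recall / (precision + recall)
```

In the actual infrastructure the same counts would come from SPARQL aggregate queries over the RDF store, so no custom evaluation code is needed per system.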

  2. a Task-Oriented Disaster Information Correlation Method

    NASA Astrophysics Data System (ADS)

    Linyao, Q.; Zhiqiang, D.; Qing, Z.

    2015-07-01

    With the rapid development of sensor networks and Earth observation technology, a large quantity of disaster-related data is available, such as remotely sensed data, historic data, case data, simulated data, and disaster products. However, current data management and service systems have become increasingly inefficient due to the variety of tasks and the heterogeneity of the data. For emergency task-oriented applications, data searches rely primarily on human experience applied to simple metadata indices, whose high time consumption and low accuracy cannot satisfy the speed and veracity requirements for disaster products. In this paper, a task-oriented correlation method is proposed for efficient disaster data management and intelligent service, with the objectives of 1) putting forward a disaster task ontology and data ontology to unify the different semantics of multi-source information, 2) identifying the semantic mapping from emergency tasks to multiple data sources on the basis of the uniform description in 1), and 3) linking task-related data automatically and calculating the correlation between each data set and a certain task. The method goes beyond traditional static management of disaster data and establishes a basis for intelligent retrieval and active dissemination of disaster information. The case study presented in this paper illustrates the use of the method on an example flood emergency relief task.
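Step 3, linking task-related data and scoring its correlation with a task, can be sketched as coverage of a task's required data types by each dataset's declared types; all ontology terms and dataset names below are invented:

```python
# Task-to-data correlation sketch: a task ontology entry lists required data
# types, and each dataset is scored by the fraction of needs it covers.
TASK_NEEDS = {"flood_relief": {"precipitation", "dem", "population"}}  # invented

DATASETS = {  # dataset -> declared data types (invented)
    "rain_gauge_2015": {"precipitation"},
    "srtm_tiles":      {"dem"},
    "land_cover_map":  {"vegetation"},
}

def correlate(task):
    """Link datasets to a task and score the overlap with its needs."""
    needs = TASK_NEEDS[task]
    return {name: len(types & needs) / len(needs)
            for name, types in DATASETS.items() if types & needs}

scores = correlate("flood_relief")   # only task-relevant datasets are linked
```

Because the linkage is computed from the ontologies rather than hand-picked, new datasets become discoverable for a task as soon as their types are declared.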

  3. Semantic Web repositories for genomics data using the eXframe platform.

    PubMed

    Merrill, Emily; Corlosquet, Stéphane; Ciccarese, Paolo; Clark, Tim; Das, Sudeshna

    2014-01-01

    With the advent of inexpensive assay technologies, there has been an unprecedented growth in genomics data as well as the number of databases in which it is stored. In these databases, sample annotation using ontologies and controlled vocabularies is becoming more common. However, the annotation is rarely available as Linked Data, in a machine-readable format, or for standardized queries using SPARQL. This makes large-scale reuse, or integration with other knowledge bases, very difficult. To address this challenge, we have developed the second generation of our eXframe platform, a reusable framework for creating online repositories of genomics experiments. This second-generation platform now publishes Semantic Web data. To accomplish this, we created an experiment model that covers provenance, citations, external links, assays, biomaterials used in the experiment, and the data collected during the process. The elements of our model are mapped to classes and properties from various established biomedical ontologies. Resource Description Framework (RDF) data is automatically produced using these mappings and indexed in an RDF store with a built-in SPARQL Protocol and RDF Query Language (SPARQL) endpoint. Using the open-source eXframe software, institutions and laboratories can create Semantic Web repositories of their experiments, integrate them with heterogeneous resources, and make them interoperable with the vast Semantic Web of biomedical knowledge.
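Mapping-driven RDF generation of this kind can be sketched as a dictionary of field-to-predicate mappings applied to experiment metadata; plain tuples stand in for RDF triples, and the compact URIs are illustrative, not eXframe's actual model:

```python
# Dict-to-triples sketch of mapping-driven RDF generation; the predicate
# CURIEs are illustrative stand-ins for established ontology terms.
MAPPING = {  # experiment metadata field -> ontology predicate
    "assay":    "obo:OBI_0000070",
    "organism": "obo:OBI_0100026",
}

def to_triples(experiment_uri, metadata):
    """Emit (subject, predicate, object) triples for every mapped field;
    unmapped internal fields are simply not published."""
    return [(experiment_uri, MAPPING[k], v)
            for k, v in metadata.items() if k in MAPPING]

triples = to_triples("ex:exp1", {"assay": "RNA-seq",
                                 "organism": "mouse",
                                 "internal_id": "x9"})  # internal_id is dropped
```

Keeping the ontology mapping in one table is what makes the RDF output automatic: adding a field to the model means adding one mapping entry, not new export code.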

  4. Benchmarking infrastructure for mutation text mining.

    PubMed

    Klein, Artjom; Riazanov, Alexandre; Hindle, Matthew M; Baker, Christopher Jo

    2014-02-25

    Experimental research on the automatic extraction of information about mutations from texts is greatly hindered by the lack of consensus evaluation infrastructure for the testing and benchmarking of mutation text mining systems. We propose a community-oriented annotation and benchmarking infrastructure to support development, testing, benchmarking, and comparison of mutation text mining systems. The design is based on semantic standards, where RDF is used to represent annotations, an OWL ontology provides an extensible schema for the data and SPARQL is used to compute various performance metrics, so that in many cases no programming is needed to analyze results from a text mining system. While large benchmark corpora for biological entity and relation extraction are focused mostly on genes, proteins, diseases, and species, our benchmarking infrastructure fills the gap for mutation information. The core infrastructure comprises (1) an ontology for modelling annotations, (2) SPARQL queries for computing performance metrics, and (3) a sizeable collection of manually curated documents that can support mutation grounding and mutation impact extraction experiments. We have developed the principal infrastructure for the benchmarking of mutation text mining tasks. The use of RDF and OWL as the representation for corpora ensures extensibility. The infrastructure is suitable for out-of-the-box use in several important scenarios and is ready, in its current state, for initial community adoption.

  5. Integrating concept ontology and multitask learning to achieve more effective classifier training for multilevel image annotation.

    PubMed

    Fan, Jianping; Gao, Yuli; Luo, Hangzai

    2008-03-01

    In this paper, we have developed a new scheme for achieving multilevel annotations of large-scale images automatically. To achieve a more complete representation of the various visual properties of the images, both global visual features and local visual features are extracted for image content representation. To tackle the problem of huge intraconcept visual diversity, multiple types of kernels are integrated to characterize the diverse visual similarity relationships between the images more precisely, and a multiple kernel learning algorithm is developed for SVM image classifier training. To address the problem of huge interconcept visual similarity, a novel multitask learning algorithm is developed to learn the correlated classifiers for the sibling image concepts under the same parent concept and significantly enhance their discrimination and adaptation power. To tackle the problem of huge intraconcept visual diversity for the image concepts at the higher levels of the concept ontology, a novel hierarchical boosting algorithm is developed to learn their ensemble classifiers hierarchically. In order to assist users in selecting more effective hypotheses for image classifier training, we have developed a novel hyperbolic framework for large-scale image visualization and interactive hypothesis assessment. Our experiments on large-scale image collections have obtained very positive results.
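Integrating multiple kernel types reduces, at its simplest, to a weighted sum of base kernels, which is the combination a multiple kernel learning algorithm would tune; here the weights are fixed by hand rather than learned as in the paper:

```python
import math

# Weighted combination of base kernels: the elementary operation behind
# multiple kernel learning. Weights and gamma are fixed illustrative values.
def rbf(x, y, gamma=0.5):
    """Gaussian (RBF) kernel on real-valued feature vectors."""
    return math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(x, y)))

def linear(x, y):
    return sum(a * b for a, b in zip(x, y))

def combined(x, y, w=(0.7, 0.3)):
    """A valid kernel: any non-negative weighted sum of kernels is a kernel."""
    return w[0] * rbf(x, y) + w[1] * linear(x, y)

k = combined([1.0, 0.0], [1.0, 0.0])
```

An SVM trained on the combined kernel sees a single similarity measure, so the learner itself is unchanged; only the kernel matrix is blended.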

  6. Simple and efficient machine learning frameworks for identifying protein-protein interaction relevant articles and experimental methods used to study the interactions.

    PubMed

    Agarwal, Shashank; Liu, Feifan; Yu, Hong

    2011-10-03

    Protein-protein interaction (PPI) is an important biomedical phenomenon. Automatically detecting PPI-relevant articles and identifying methods that are used to study PPI are important text mining tasks. In this study, we have explored domain independent features to develop two open source machine learning frameworks. One performs binary classification to determine whether the given article is PPI relevant or not, named "Simple Classifier", and the other one maps the PPI relevant articles with corresponding interaction method nodes in a standardized PSI-MI (Proteomics Standards Initiative-Molecular Interactions) ontology, named "OntoNorm". We evaluated our system in the context of BioCreative challenge competition using the standardized data set. Our systems are amongst the top systems reported by the organizers, attaining 60.8% F1-score for identifying relevant documents, and 52.3% F1-score for mapping articles to interaction method ontology. Our results show that domain-independent machine learning frameworks can perform competitively well at the tasks of detecting PPI relevant articles and identifying the methods that were used to study the interaction in such articles. Simple Classifier is available at http://sourceforge.net/p/simpleclassify/home/ and OntoNorm at http://sourceforge.net/p/ontonorm/home/.

  7. Knowledge representation and management: towards an integration of a semantic web in daily health practice.

    PubMed

    Griffon, N; Charlet, J; Darmoni, Sj

    2013-01-01

    To summarize the best papers in the field of Knowledge Representation and Management (KRM). A synopsis of the four selected articles for the IMIA Yearbook 2013 KRM section is provided, as well as highlights of current KRM trends, in particular, of the semantic web in daily health practice. The manual selection was performed in three stages: first a set of 3,106 articles, then a second set of 86 articles followed by a third set of 15 articles, and finally the last set of four chosen articles. Among the four selected articles (see Table 1), one focuses on knowledge engineering to prevent adverse drug events; the objective of the second is to propose mappings between clinical archetypes and SNOMED CT in the context of clinical practice; the third presents an ontology to create a question-answering system; the fourth describes a biomonitoring network based on semantic web technologies. These four articles clearly indicate that the health semantic web has become a part of daily practice of health professionals since 2012. In the review of the second set of 86 articles, the same topics included in the previous IMIA yearbook remain active research fields: Knowledge extraction, automatic indexing, information retrieval, natural language processing, management of health terminologies and ontologies.

  8. Distributed and Collaborative Software Analysis

    NASA Astrophysics Data System (ADS)

    Ghezzi, Giacomo; Gall, Harald C.

Throughout the years software engineers have come up with a myriad of specialized tools and techniques that focus on a certain type of software analysis, such as source code analysis, co-change analysis or bug prediction. However, easy and straightforward synergies between these analyses and tools rarely exist because of their stand-alone nature, their platform dependence, their different input and output formats and the variety of data to analyze. As a consequence, distributed and collaborative software analysis scenarios, and in particular interoperability, are severely limited. We describe a distributed and collaborative software analysis platform that allows for a seamless interoperability of software analysis tools across platform, geographical and organizational boundaries. We realize software analysis tools as services that can be accessed and composed over the Internet. These distributed analysis services shall be widely accessible in our incrementally augmented Software Analysis Broker, where organizations and tool providers can register and share their tools. To allow (semi-)automatic use and composition of these tools, they are classified and mapped into a software analysis taxonomy and adhere to specific meta-models and ontologies for their category of analysis.

  9. Inferring ontology graph structures using OWL reasoning.

    PubMed

    Rodríguez-García, Miguel Ángel; Hoehndorf, Robert

    2018-01-05

    Ontologies are representations of a conceptualization of a domain. Traditionally, ontologies in biology were represented as directed acyclic graphs (DAG) which represent the backbone taxonomy and additional relations between classes. These graphs are widely exploited for data analysis in the form of ontology enrichment or computation of semantic similarity. More recently, ontologies are developed in a formal language such as the Web Ontology Language (OWL) and consist of a set of axioms through which classes are defined or constrained. While the taxonomy of an ontology can be inferred directly from the axioms of an ontology as one of the standard OWL reasoning tasks, creating general graph structures from OWL ontologies that exploit the ontologies' semantic content remains a challenge. We developed a method to transform ontologies into graphs using an automated reasoner while taking into account all relations between classes. Searching for (existential) patterns in the deductive closure of ontologies, we can identify relations between classes that are implied but not asserted and generate graph structures that encode for a large part of the ontologies' semantic content. We demonstrate the advantages of our method by applying it to inference of protein-protein interactions through semantic similarity over the Gene Ontology and demonstrate that performance is increased when graph structures are inferred using deductive inference according to our method. Our software and experiment results are available at http://github.com/bio-ontology-research-group/Onto2Graph . Onto2Graph is a method to generate graph structures from OWL ontologies using automated reasoning. The resulting graphs can be used for improved ontology visualization and ontology-based data analysis.
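The core transformation scans axioms for existential patterns of the form "A SubClassOf r some B" and emits labelled edges. The sketch below illustrates that idea on a hypothetical toy encoding of axioms as tuples; Onto2Graph itself works on OWL via an automated reasoner and the deductive closure, which this flat scan does not reproduce.

```python
# Sketch of the Onto2Graph idea: turn subclass axioms with existential
# restrictions ("A SubClassOf part_of some B") into labelled graph edges.
# The tuple encoding of axioms below is an invented simplification of OWL.

axioms = [
    ("Nucleus", "subClassOf", ("part_of", "some", "Cell")),
    ("Nucleus", "subClassOf", "Organelle"),            # plain taxonomy axiom
    ("Mitochondrion", "subClassOf", ("part_of", "some", "Cell")),
]

def to_graph(axioms):
    edges = set()
    for sub, _, sup in axioms:
        if isinstance(sup, tuple) and sup[1] == "some":
            prop, _, filler = sup
            edges.add((sub, prop, filler))             # relational edge
        else:
            edges.add((sub, "is_a", sup))              # taxonomy edge
    return edges

edges = to_graph(axioms)
```

Running the same pattern search over the deductive closure, as the paper does, additionally surfaces edges that are implied but never asserted.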

  10. Development and Evaluation of an Ontology for Guiding Appropriate Antibiotic Prescribing

    PubMed Central

    Furuya, E. Yoko; Kuperman, Gilad J.; Cimino, James J.; Bakken, Suzanne

    2011-01-01

    Objectives To develop and apply formal ontology creation methods to the domain of antimicrobial prescribing and to formally evaluate the resulting ontology through intrinsic and extrinsic evaluation studies. Methods We extended existing ontology development methods to create the ontology and implemented the ontology using Protégé-OWL. Correctness of the ontology was assessed using a set of ontology design principles and domain expert review via the laddering technique. We created three artifacts to support the extrinsic evaluation (set of prescribing rules, alerts and an ontology-driven alert module, and a patient database) and evaluated the usefulness of the ontology for performing knowledge management tasks to maintain the ontology and for generating alerts to guide antibiotic prescribing. Results The ontology includes 199 classes, 10 properties, and 1,636 description logic restrictions. Twenty-three Semantic Web Rule Language rules were written to generate three prescribing alerts: 1) antibiotic-microorganism mismatch alert; 2) medication-allergy alert; and 3) non-recommended empiric antibiotic therapy alert. The evaluation studies confirmed the correctness of the ontology, usefulness of the ontology for representing and maintaining antimicrobial treatment knowledge rules, and usefulness of the ontology for generating alerts to provide feedback to clinicians during antibiotic prescribing. Conclusions This study contributes to the understanding of ontology development and evaluation methods and addresses one knowledge gap related to using ontologies as a clinical decision support system component—a need for formal ontology evaluation methods to measure their quality from the perspective of their intrinsic characteristics and their usefulness for specific tasks. PMID:22019377

  11. Utilizing a structural meta-ontology for family-based quality assurance of the BioPortal ontologies.

    PubMed

    Ochs, Christopher; He, Zhe; Zheng, Ling; Geller, James; Perl, Yehoshua; Hripcsak, George; Musen, Mark A

    2016-06-01

    An Abstraction Network is a compact summary of an ontology's structure and content. In previous research, we showed that Abstraction Networks support quality assurance (QA) of biomedical ontologies. The development of an Abstraction Network and its associated QA methodologies, however, is a labor-intensive process that previously was applicable only to one ontology at a time. To improve the efficiency of the Abstraction-Network-based QA methodology, we introduced a QA framework that uses uniform Abstraction Network derivation techniques and QA methodologies that are applicable to whole families of structurally similar ontologies. For the family-based framework to be successful, it is necessary to develop a method for classifying ontologies into structurally similar families. We now describe a structural meta-ontology that classifies ontologies according to certain structural features that are commonly used in the modeling of ontologies (e.g., object properties) and that are important for Abstraction Network derivation. Each class of the structural meta-ontology represents a family of ontologies with identical structural features, indicating which types of Abstraction Networks and QA methodologies are potentially applicable to all of the ontologies in the family. We derive a collection of 81 families, corresponding to classes of the structural meta-ontology, that enable a flexible, streamlined family-based QA methodology, offering multiple choices for classifying an ontology. The structure of 373 ontologies from the NCBO BioPortal is analyzed and each ontology is classified into multiple families modeled by the structural meta-ontology. Copyright © 2016 Elsevier Inc. All rights reserved.
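The family classification groups ontologies that share identical structural features. A minimal sketch of that grouping, assuming invented feature names and toy ontologies (the paper's meta-ontology uses its own feature set and yields 81 families):

```python
from collections import defaultdict

# Sketch: ontologies with identical structural-feature values fall into the
# same family. Feature names and the example ontologies are assumptions.

FEATURES = ("object_properties", "attribute_relationships", "multiple_roots")

ontologies = {
    "OntoA": {"object_properties": True,  "attribute_relationships": False, "multiple_roots": False},
    "OntoB": {"object_properties": True,  "attribute_relationships": False, "multiple_roots": False},
    "OntoC": {"object_properties": False, "attribute_relationships": True,  "multiple_roots": True},
}

def families(ontologies):
    groups = defaultdict(list)
    for name, feats in ontologies.items():
        key = tuple(feats[f] for f in FEATURES)   # identical key = same family
        groups[key].append(name)
    return groups

fams = families(ontologies)
```

Because the family key determines which Abstraction Network derivations apply, every ontology in a family can reuse the same QA machinery.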

  12. Development and evaluation of an ontology for guiding appropriate antibiotic prescribing.

    PubMed

    Bright, Tiffani J; Yoko Furuya, E; Kuperman, Gilad J; Cimino, James J; Bakken, Suzanne

    2012-02-01

    To develop and apply formal ontology creation methods to the domain of antimicrobial prescribing and to formally evaluate the resulting ontology through intrinsic and extrinsic evaluation studies. We extended existing ontology development methods to create the ontology and implemented the ontology using Protégé-OWL. Correctness of the ontology was assessed using a set of ontology design principles and domain expert review via the laddering technique. We created three artifacts to support the extrinsic evaluation (set of prescribing rules, alerts and an ontology-driven alert module, and a patient database) and evaluated the usefulness of the ontology for performing knowledge management tasks to maintain the ontology and for generating alerts to guide antibiotic prescribing. The ontology includes 199 classes, 10 properties, and 1636 description logic restrictions. Twenty-three Semantic Web Rule Language rules were written to generate three prescribing alerts: (1) antibiotic-microorganism mismatch alert; (2) medication-allergy alert; and (3) non-recommended empiric antibiotic therapy alert. The evaluation studies confirmed the correctness of the ontology, usefulness of the ontology for representing and maintaining antimicrobial treatment knowledge rules, and usefulness of the ontology for generating alerts to provide feedback to clinicians during antibiotic prescribing. This study contributes to the understanding of ontology development and evaluation methods and addresses one knowledge gap related to using ontologies as a clinical decision support system component-a need for formal ontology evaluation methods to measure their quality from the perspective of their intrinsic characteristics and their usefulness for specific tasks. Copyright © 2011 Elsevier Inc. All rights reserved.
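The first of the three alerts, the antibiotic-microorganism mismatch, fires when the prescribed drug does not cover the organism cultured from the patient. The sketch below captures that rule's logic in plain Python; the ontology encodes it in OWL with SWRL rules, and the coverage table and examples here are invented, not clinical guidance.

```python
# Sketch of the antibiotic-microorganism mismatch rule. The coverage table
# is a made-up illustration standing in for the ontology's DL restrictions.

covers = {
    "vancomycin": {"Staphylococcus aureus", "Enterococcus faecalis"},
    "ceftriaxone": {"Escherichia coli", "Klebsiella pneumoniae"},
}

def mismatch_alert(prescription, organism):
    """Fire an alert when the prescribed drug's spectrum does not
    include the cultured organism."""
    return organism not in covers.get(prescription, set())

alert = mismatch_alert("ceftriaxone", "Staphylococcus aureus")
```

Keeping the coverage knowledge in an ontology rather than in code is what lets the knowledge management tasks (adding drugs, organisms, or rules) proceed without touching the alerting module.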

  13. Assessing the practice of biomedical ontology evaluation: Gaps and opportunities.

    PubMed

    Amith, Muhammad; He, Zhe; Bian, Jiang; Lossio-Ventura, Juan Antonio; Tao, Cui

    2018-04-01

    With the proliferation of heterogeneous health care data in the last three decades, biomedical ontologies and controlled biomedical terminologies play a more and more important role in knowledge representation and management, data integration, natural language processing, as well as decision support for health information systems and biomedical research. Biomedical ontologies and controlled terminologies are intended to assure interoperability. Nevertheless, the quality of biomedical ontologies has hindered their applicability and subsequent adoption in real-world applications. Ontology evaluation is an integral part of ontology development and maintenance. In the biomedicine domain, ontology evaluation is often conducted by third parties as a quality assurance (or auditing) effort that focuses on identifying modeling errors and inconsistencies. In this work, we first organized four categorical schemes of ontology evaluation methods in the existing literature to create an integrated taxonomy. Further, to understand the ontology evaluation practice in the biomedicine domain, we reviewed a sample of 200 ontologies from the National Center for Biomedical Ontology (NCBO) BioPortal-the largest repository for biomedical ontologies-and observed that only 15 of these ontologies have documented evaluation in their corresponding inception papers. We then surveyed the recent quality assurance approaches for biomedical ontologies and their use. We also mapped these quality assurance approaches to the ontology evaluation criteria. It is our anticipation that ontology evaluation and quality assurance approaches will be more widely adopted in the development life cycle of biomedical ontologies. Copyright © 2018 Elsevier Inc. All rights reserved.

  14. A novel algorithm for fully automated mapping of geospatial ontologies

    NASA Astrophysics Data System (ADS)

    Chaabane, Sana; Jaziri, Wassim

    2018-01-01

Geospatial information is collected from different sources, making spatial ontologies built for the same geographic domain heterogeneous; different and heterogeneous conceptualizations may therefore coexist. Ontology integration helps create a common repository for geospatial ontologies and allows removing the heterogeneities between the existing ontologies. Ontology mapping, a process used in ontology integration, consists in finding correspondences between the source ontologies. This paper deals with the mapping process for geospatial ontologies, applying an automated algorithm to find correspondences between concepts according to the definitions of matching relationships. The proposed algorithm, called the "geographic ontologies mapping algorithm", defines three types of mapping: semantic, topological and spatial.
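One common ingredient of the semantic mapping type is lexical similarity between concept names. As an illustrative sketch only, the code below pairs concepts from two source ontologies whose names exceed a similarity threshold, using the standard library's `difflib`; the concept lists, the 0.8 threshold, and the choice of string similarity are assumptions, not the paper's algorithm.

```python
from difflib import SequenceMatcher

# Sketch: lexical matching as a stand-in for the semantic mapping step.
# Threshold and concept lists are invented for illustration.

def semantic_mappings(onto1, onto2, threshold=0.8):
    pairs = []
    for c1 in onto1:
        for c2 in onto2:
            score = SequenceMatcher(None, c1.lower(), c2.lower()).ratio()
            if score >= threshold:
                pairs.append((c1, c2))
    return pairs

onto1 = ["River", "Lake", "Urban Area"]
onto2 = ["river", "lakes", "Road"]
mappings = semantic_mappings(onto1, onto2)
```

Topological and spatial mappings would add geometry-aware checks (containment, adjacency, distance) on top of such candidate pairs.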

  15. GOMMA: a component-based infrastructure for managing and analyzing life science ontologies and their evolution

    PubMed Central

    2011-01-01

    Background Ontologies are increasingly used to structure and semantically describe entities of domains, such as genes and proteins in life sciences. Their increasing size and the high frequency of updates resulting in a large set of ontology versions necessitates efficient management and analysis of this data. Results We present GOMMA, a generic infrastructure for managing and analyzing life science ontologies and their evolution. GOMMA utilizes a generic repository to uniformly and efficiently manage ontology versions and different kinds of mappings. Furthermore, it provides components for ontology matching, and determining evolutionary ontology changes. These components are used by analysis tools, such as the Ontology Evolution Explorer (OnEX) and the detection of unstable ontology regions. We introduce the component-based infrastructure and show analysis results for selected components and life science applications. GOMMA is available at http://dbs.uni-leipzig.de/GOMMA. Conclusions GOMMA provides a comprehensive and scalable infrastructure to manage large life science ontologies and analyze their evolution. Key functions include a generic storage of ontology versions and mappings, support for ontology matching and determining ontology changes. The supported features for analyzing ontology changes are helpful to assess their impact on ontology-dependent applications such as for term enrichment. GOMMA complements OnEX by providing functionalities to manage various versions of mappings between two ontologies and allows combining different match approaches. PMID:21914205
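One of the evolution analyses GOMMA supports is determining what changed between two ontology versions. A minimal set-based sketch of such a diff, with toy version contents (GOMMA's actual change model also distinguishes relationship and attribute changes):

```python
# Sketch: diffing two ontology versions by concept identifier. The version
# contents are toy data; real change analysis also covers relationships.

v1 = {"GO:0001", "GO:0002", "GO:0003"}
v2 = {"GO:0002", "GO:0003", "GO:0004", "GO:0005"}

added = v2 - v1                                     # concepts new in v2
removed = v1 - v2                                   # concepts dropped from v1
change_rate = (len(added) + len(removed)) / len(v1 | v2)
```

Aggregating such per-version diffs over time is what lets tools like OnEX flag unstable ontology regions whose instability may affect term-enrichment results.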

  16. Interestingness measures and strategies for mining multi-ontology multi-level association rules from gene ontology annotations for the discovery of new GO relationships.

    PubMed

    Manda, Prashanti; McCarthy, Fiona; Bridges, Susan M

    2013-10-01

    The Gene Ontology (GO), a set of three sub-ontologies, is one of the most popular bio-ontologies used for describing gene product characteristics. GO annotation data containing terms from multiple sub-ontologies and at different levels in the ontologies is an important source of implicit relationships between terms from the three sub-ontologies. Data mining techniques such as association rule mining that are tailored to mine from multiple ontologies at multiple levels of abstraction are required for effective knowledge discovery from GO annotation data. We present a data mining approach, Multi-ontology data mining at All Levels (MOAL) that uses the structure and relationships of the GO to mine multi-ontology multi-level association rules. We introduce two interestingness measures: Multi-ontology Support (MOSupport) and Multi-ontology Confidence (MOConfidence) customized to evaluate multi-ontology multi-level association rules. We also describe a variety of post-processing strategies for pruning uninteresting rules. We use publicly available GO annotation data to demonstrate our methods with respect to two applications (1) the discovery of co-annotation suggestions and (2) the discovery of new cross-ontology relationships. Copyright © 2013 The Authors. Published by Elsevier Inc. All rights reserved.
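To make the support/confidence idea concrete, the sketch below computes flat support and confidence for a cross-ontology rule "process term => function term" over gene-product annotation sets. The annotations are invented, and the paper's MOSupport and MOConfidence additionally account for the ontology hierarchy, which this simplification omits.

```python
# Sketch: flat support/confidence for a cross-ontology association rule.
# Annotation sets are toy data; MOSupport/MOConfidence are hierarchy-aware.

annotations = {
    "geneP1": {"GO:BP:0006915", "GO:MF:0004197"},
    "geneP2": {"GO:BP:0006915", "GO:MF:0004197"},
    "geneP3": {"GO:BP:0006915"},
    "geneP4": {"GO:MF:0016301"},
}

def support_confidence(antecedent, consequent, annotations):
    n = len(annotations)
    both = sum(1 for terms in annotations.values()
               if antecedent in terms and consequent in terms)
    ante = sum(1 for terms in annotations.values() if antecedent in terms)
    support = both / n                       # rule frequency over all products
    confidence = both / ante if ante else 0.0  # conditional frequency
    return support, confidence

support, confidence = support_confidence(
    "GO:BP:0006915", "GO:MF:0004197", annotations)
```

Hierarchy-aware variants would also count a gene product annotated to a descendant term as supporting its ancestors, which is why customized measures are needed for multi-level rules.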

  17. 40% Whole-House Energy Savings in the Hot-Humid Climate

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    none,

    This guide book is a resource to help builders design and construct highly energy-efficient homes, while addressing building durability, indoor air quality, and occupant health, safety, and comfort. With the measures described in this guide, builders in the hot-humid climate can build homes that achieve whole house energy savings of 40% over the Building America benchmark (the 1993 Model Energy Code) with no added overall costs for consumers.

  18. 40% Whole-House Energy Savings in the Mixed-Humid Climate

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baechler, Michael C.; Gilbride, T. L.; Hefty, M. G.

    2011-09-01

    This guide book is a resource to help builders design and construct highly energy-efficient homes, while addressing building durability, indoor air quality, and occupant health, safety, and comfort. With the measures described in this guide, builders in the mixed-humid climate can build homes that achieve whole house energy savings of 40% over the Building America benchmark (the 1993 Model Energy Code) with no added overall costs for consumers.

  19. Adaptive Long-Term Monitoring at Environmental Restoration Sites (ER-0629)

    DTIC Science & Technology

    2009-05-01

Excerpted fragments: "Figures: Figure 2-1, General Flowchart of Software Application; Figure 2-2, Overview of the Genetic Algorithm Approach; Figure 2-3, Example of a…" "…and Model Builder) are highlighted on Figure 2-1, which is a general flowchart illustrating the application of the software. The software is applied…" "…monitoring event (e.g., contaminant mass based on interpolation); that modeling is provided by Model Builder."

  20. The Air Land Sea Bulletin. Issue No. 2008-2, May 2008

    DTIC Science & Technology

    2008-05-01

Excerpted fragments: "…during the meeting. We saw this as a key event because in Iraq the act of breaking bread together is a significant rapport builder and the first…" "…meaning for both parties and were incredible rapport builders; more than that, they created brothers in arms." "BUILDING RELATIONSHIPS: Once you…possible." "Primary disqualifiers for selection as an advisor are medical profiles/issues, lack of security clearances or inability to gain a clearance…"

  1. OpenSesame: an open-source, graphical experiment builder for the social sciences.

    PubMed

    Mathôt, Sebastiaan; Schreij, Daniel; Theeuwes, Jan

    2012-06-01

    In the present article, we introduce OpenSesame, a graphical experiment builder for the social sciences. OpenSesame is free, open-source, and cross-platform. It features a comprehensive and intuitive graphical user interface and supports Python scripting for complex tasks. Additional functionality, such as support for eyetrackers, input devices, and video playback, is available through plug-ins. OpenSesame can be used in combination with existing software for creating experiments.

  2. Preparation of Entangled Polymer Melts of Various Architecture for Coarse-Grained Models

    DTIC Science & Technology

    2011-09-01

…Simulator (LAMMPS). This report presents a theory overview and a manual for how to use the method. Subject terms: ammunition, coarse-grained model, polymer builder, LAMMPS. …the Large-scale Atomic/Molecular Massively Parallel Simulator (LAMMPS). Gel is an in-house written C program for building coarse-grained polymers, and LAMMPS is…

  3. Best Practices Case Study: Pulte Homes and Communities of Del Webb, Las Vegas Division

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    None

    2009-10-01

    Case study of Pulte Homes Las Vegas Division, who certified nearly 1,200 homes to the DOE Builders Challenge between 2008 and 2012. All of the homes by Las Vegas’ biggest builder achieved HERS scores of lower than 70, and many floor plans got down to the mid 50s, with ducts located in sealed attics insulated along the roof line, advanced framing, and extra attention to air sealing.

  4. The design and implementation of signal decomposition system of CL multi-wavelet transform based on DSP builder

    NASA Astrophysics Data System (ADS)

    Huang, Yan; Wang, Zhihui

    2015-12-01

With the development of FPGAs, DSP Builder is widely applied to design system-level algorithms. The CL multi-wavelet algorithm is more advanced and effective than scalar wavelets in signal decomposition. Thus, a system for CL multi-wavelet transforms based on DSP Builder is designed for the first time in this paper. The system mainly contains three parts: a pre-filtering subsystem, a one-level decomposition subsystem and a two-level decomposition subsystem. It can be converted into the hardware description language VHDL by the Signal Compiler block, which can be used in Quartus II. Analysis of the energy indicator shows that this system outperforms the Daubechies wavelet in signal decomposition. Furthermore, it has proved to be suitable for the implementation of signal fusion based on SoPC hardware, and it will become a solid foundation in this new field.

  5. Energy Value Housing Award Guide: How to Build and Profit with Energy Efficiency in New Home Construction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sikora, J. L.

    2001-06-01

As concern over the environment grows, builders have the potential to fulfill a market niche by building homes that use fewer resources and have lower environmental impact than conventional construction. Builders can increase their marketability and customer satisfaction and, at the same time, reduce the environmental impact of their homes. However, it takes dedication to build environmentally sound homes along with a solid marketing approach to ensure that customers recognize the added value of energy and resource efficiency. This guide is intended for builders seeking suggestions on how to improve energy and resource efficiency in their new homes. It is a compilation of ideas and concepts for designing, building, and marketing energy- and resource-efficient homes based on the experience of recipients of the national Energy Value Housing Award (EVHA).

  6. Heating, Ventilation, and Air Conditioning Design Strategy for a Hot-Humid Production Builder

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kerrigan, P.

    2014-03-01

Building Science Corporation (BSC) worked directly with the David Weekley Homes - Houston division to develop a cost-effective design for moving the HVAC system into conditioned space. In addition, BSC conducted energy analysis to calculate the most economical strategy for increasing the energy performance of future production houses in preparation for the upcoming code changes in 2015. This research project addressed the following questions: 1. What is the most cost effective, best performing and most easily replicable method of locating ducts inside conditioned space for a hot-humid production home builder that constructs one and two story single family detached residences? 2. What is a cost effective and practical method of achieving 50% source energy savings vs. the 2006 International Energy Conservation Code for a hot-humid production builder? 3. How accurate are the pre-construction whole house cost estimates compared to confirmed post construction actual cost?

  7. Development and application of CATIA-GDML geometry builder

    NASA Astrophysics Data System (ADS)

    Belogurov, S.; Berchun, Yu; Chernogorov, A.; Malzacher, P.; Ovcharenko, E.; Schetinin, V.

    2014-06-01

    Due to conceptual difference between geometry descriptions in Computer-Aided Design (CAD) systems and particle transport Monte Carlo (MC) codes direct conversion of detector geometry in either direction is not feasible. The paper presents an update on functionality and application practice of the CATIA-GDML geometry builder first introduced at CHEP2010. This set of CATIAv5 tools has been developed for building a MC optimized GEANT4/ROOT compatible geometry based on the existing CAD model. The model can be exported via Geometry Description Markup Language (GDML). The builder allows also import and visualization of GEANT4/ROOT geometries in CATIA. The structure of a GDML file, including replicated volumes, volume assemblies and variables, is mapped into a part specification tree. A dedicated file template, a wide range of primitives, tools for measurement and implicit calculation of parameters, different types of multiple volume instantiation, mirroring, positioning and quality check have been implemented. Several use cases are discussed.
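The builder maps a GDML file's structure, including volumes and their placements, into a part specification tree. The sketch below parses a heavily simplified GDML-like fragment into a volume tree using the standard library's XML parser; real GDML additionally carries solids, materials, positions, and units, and the volume names here are invented.

```python
import xml.etree.ElementTree as ET

# Sketch: mapping a much-simplified GDML fragment into a volume tree,
# analogous to the builder's mapping into a CATIA specification tree.

GDML = """
<gdml>
  <structure>
    <volume name="Detector">
      <physvol><volumeref ref="Tracker"/></physvol>
      <physvol><volumeref ref="Calorimeter"/></physvol>
    </volume>
    <volume name="Tracker"/>
    <volume name="Calorimeter"/>
  </structure>
</gdml>
"""

def volume_tree(gdml_text):
    root = ET.fromstring(gdml_text)
    tree = {}
    for vol in root.iter("volume"):
        # A volume's children are the volumes it places via <physvol>.
        children = [ref.get("ref") for ref in vol.iter("volumeref")]
        tree[vol.get("name")] = children
    return tree

tree = volume_tree(GDML)
```

Replicated volumes, assemblies, and parameter variables, which the builder also supports, would add further node types to such a tree.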

  8. Achieving Challenge Home in Affordable Housing in the Hot-Humid Climate

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Beal, D.; McIlvaine, J.; Winter, B.

    2014-08-01

The Building America Partnership for Improved Residential Construction (BA-PIRC), one of the Building America research team leads, has partnered with two builders as they work through the Challenge Home certification process in one test home each. The builder partners participating in this cost-shared research are Southeast Volusia County Habitat for Humanity near Daytona, Florida and Manatee County Habitat for Humanity near Tampa, Florida. Both are affiliates of Habitat for Humanity International, a non-profit affordable housing organization. This research serves to identify viable technical pathways to meeting the CH criteria for other builders in the region. A further objective of this research is to identify gaps and barriers in the marketplace related to product availability, labor force capability, code issues, cost effectiveness, and business case issues that hinder or prevent broader adoption on a production scale.

  9. Achieving Challenge Home in Affordable Housing in the Hot-Humid Climate

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Beal, D.; McIlvaine, J.; Winter, B.

    2014-08-01

The Building America Partnership for Improved Residential Construction (BA-PIRC), one of the Building America research team leads, has partnered with two builders as they work through the Challenge Home certification process (now Zero Energy Ready Home) in one test home each. The builder partners participating in this cost-shared research are Southeast Volusia County Habitat for Humanity near Daytona, Florida and Manatee County Habitat for Humanity near Tampa, Florida. Both are affiliates of Habitat for Humanity International, a non-profit affordable housing organization. This research serves to identify viable technical pathways to meeting the CH criteria for other builders in the region. A further objective of this research is to identify gaps and barriers in the marketplace related to product availability, labor force capability, code issues, cost effectiveness, and business case issues that hinder or prevent broader adoption on a production scale.

  10. Modulated evaluation metrics for drug-based ontologies.

    PubMed

    Amith, Muhammad; Tao, Cui

    2017-04-24

    Research for ontology evaluation is scarce. If biomedical ontological datasets and knowledgebases are to be widely used, there needs to be quality control and evaluation for the content and structure of the ontology. This paper introduces how to effectively utilize a semiotic-inspired approach to ontology evaluation, specifically towards drug-related ontologies hosted on the National Center for Biomedical Ontology BioPortal. Using the semiotic-based evaluation framework for drug-based ontologies, we adjusted the quality metrics based on the semiotic features of drug ontologies. Then, we compared the quality scores before and after tailoring. The scores revealed a more precise measurement and a closer distribution compared to the before-tailoring. The results of this study reveal that a tailored semiotic evaluation produced a more meaningful and accurate assessment of drug-based ontologies, lending to the possible usefulness of semiotics in ontology evaluation.

  11. An ontological case base engineering methodology for diabetes management.

    PubMed

    El-Sappagh, Shaker H; El-Masri, Samir; Elmogy, Mohammed; Riad, A M; Saddik, Basema

    2014-08-01

Ontology engineering covers issues related to ontology development and use. In a Case-Based Reasoning (CBR) system, an ontology plays two main roles: the first as case base and the second as domain ontology. However, the ontology engineering literature does not provide adequate guidance on how to build, evaluate, and maintain ontologies. This paper proposes an ontology engineering methodology to generate case bases in the medical domain. It mainly focuses on representing cases in the form of an ontology to support semantic case retrieval and enhance all knowledge-intensive CBR processes. A case study on a diabetes diagnosis case base is provided to evaluate the proposed methodology.

  12. Community-based Ontology Development, Annotation and Discussion with MediaWiki extension Ontokiwi and Ontokiwi-based Ontobedia

    PubMed Central

    Ong, Edison; He, Yongqun

    2016-01-01

    Hundreds of biological and biomedical ontologies have been developed to support data standardization, integration and analysis. Although ontologies are typically developed for community usage, community efforts in ontology development are limited. To support ontology visualization, distribution, and community-based annotation and development, we have developed Ontokiwi, an ontology extension to the MediaWiki software. Ontokiwi displays hierarchical classes and ontological axioms. Ontology classes and axioms can be edited and added using Ontokiwi forms or the MediaWiki source editor. Ontokiwi also inherits MediaWiki features such as Wikitext editing and version control. Based on the Ontokiwi/MediaWiki software package, we have developed Ontobedia, which aims to support community-based development and annotation of biological and biomedical ontologies. As demonstrations, we have loaded the Ontology of Adverse Events (OAE) and the Cell Line Ontology (CLO) into Ontobedia. Our studies showed that Ontobedia achieves the expected Ontokiwi features. PMID:27570653

  13. Applications of Ontology Design Patterns in Biomedical Ontologies

    PubMed Central

    Mortensen, Jonathan M.; Horridge, Matthew; Musen, Mark A.; Noy, Natalya F.

    2012-01-01

    Ontology design patterns (ODPs) are a proposed solution to facilitate ontology development and to help users avoid some of the most frequent modeling mistakes. ODPs originate from similar approaches in software engineering, where software design patterns have become a critical aspect of software development. There is little empirical evidence for ODP prevalence or effectiveness thus far. In this work, we determine the use and applicability of ODPs in a case study of biomedical ontologies. We encoded ontology design patterns from two ODP catalogs and then searched for these patterns in a set of eight ontologies. We found five of the 69 patterns, occurring in two of the eight ontologies. While ontology design patterns provide a vehicle for formally capturing recurring models and best practices in ontology design, we show that their use in a case study of widely used biomedical ontologies is, at present, limited. PMID:23304337

  14. An empirical analysis of ontology reuse in BioPortal.

    PubMed

    Ochs, Christopher; Perl, Yehoshua; Geller, James; Arabandi, Sivaram; Tudorache, Tania; Musen, Mark A

    2017-07-01

    Biomedical ontologies often reuse content (i.e., classes and properties) from other ontologies. Content reuse enables a consistent representation of a domain, and reusing content can save an ontology author significant time and effort. Prior studies have investigated the existence of reused terms among the ontologies in the NCBO BioPortal, but no study has yet investigated how the ontologies in BioPortal utilize reused content in the modeling of their own content. In this study we investigate how 355 ontologies hosted in the NCBO BioPortal reuse content from other ontologies for the purpose of creating new ontology content. We identified 197 ontologies that reuse content. Among these, 108 utilize reused classes in the modeling of their own classes and 116 utilize reused properties in class restrictions. Current utilization of reuse and quality issues related to reuse are discussed.
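
    The reuse analysis described above can be approximated, at its simplest, by inspecting class IRIs: a class whose IRI lies outside the host ontology's namespace counts as reused content. Below is a minimal Python sketch under that simplification; the `example.org` namespace and the specific OBO term IRIs are hypothetical placeholders, and a real analysis (as in the study) must also account for owl:imports and for property reuse inside class restrictions.

```python
def namespace(iri: str) -> str:
    """Return the IRI's namespace: everything up to and including
    the last '#' or '/' separator."""
    for sep in ("#", "/"):
        if sep in iri:
            return iri.rsplit(sep, 1)[0] + sep
    return iri

def reused_classes(host_ns: str, class_iris: set) -> set:
    """Classes whose IRI falls outside the host namespace count as reused."""
    return {iri for iri in class_iris if not iri.startswith(host_ns)}

# Hypothetical ontology at example.org that reuses two OBO Foundry terms.
classes = {
    "http://example.org/onto/Disease",
    "http://purl.obolibrary.org/obo/OAE_0000001",
    "http://purl.obolibrary.org/obo/CLO_0000031",
}
for iri in sorted(reused_classes("http://example.org/onto/", classes)):
    print(iri)
```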

  15. The ontology life cycle: Integrated tools for editing, publishing, peer review, and evolution of ontologies

    PubMed Central

    Noy, Natalya; Tudorache, Tania; Nyulas, Csongor; Musen, Mark

    2010-01-01

    Ontologies have become a critical component of many applications in biomedical informatics. However, the landscape of ontology tools today is largely fragmented, with independent tools for ontology editing, publishing, and peer review: users develop an ontology in an ontology editor such as Protégé; publish it on a Web server or in an ontology library such as BioPortal in order to share it with the community; and collect feedback from users through the tools provided by the library, mailing lists, and bug trackers. In this paper, we present a set of tools that bring ontology editing and publishing closer together in an integrated platform for the entire ontology life cycle. This integration streamlines the workflow for collaborative development and increases integration between the ontologies themselves through the reuse of terms. PMID:21347039

  16. Effects of an ontology display with history representation on organizational memory information systems.

    PubMed

    Hwang, Wonil; Salvendy, Gavriel

    2005-06-10

    Ontologies, as a possible element of organizational memory information systems, appear to support organizational learning. Ontology tools can be used to share knowledge among the members of an organization. However, the current ontology-viewing user interfaces of ontology tools do not fully support organizational learning, because most of them lack a proper history representation in their display. In this study, a conceptual model was developed that emphasizes the role of ontology in the organizational learning cycle and explores the integration of history representation into the ontology display. Based on the experimental results from a split-plot design with 30 participants, two conclusions were drawn: first, appropriately selected history representations in the ontology display help users identify changes in the ontologies; and second, compatibility between the type of ontology display and the type of history representation matters more than either element on its own.

  17. Supporting the analysis of ontology evolution processes through the combination of static and dynamic scaling functions in OQuaRE.

    PubMed

    Duque-Ramos, Astrid; Quesada-Martínez, Manuel; Iniesta-Moreno, Miguela; Fernández-Breis, Jesualdo Tomás; Stevens, Robert

    2016-10-17

    The biomedical community has now developed a significant number of ontologies. The curation of biomedical ontologies is a complex task, and biomedical ontologies evolve rapidly, so new versions are regularly and frequently published in ontology repositories. This results in a high number of ontology versions over a short time span. Given this level of activity, ontology designers need support for effectively managing the evolution of biomedical ontologies, as the different changes may affect the engineering and quality of the ontology. Methods are therefore needed that contribute to analyzing the effects of changes and the evolution of ontologies. In this paper we approach this issue from the ontology quality perspective. In previous work we developed an ontology evaluation framework based on quantitative metrics, called OQuaRE. Here, OQuaRE is used as a core component in a method that enables the analysis of the different versions of biomedical ontologies using the quality dimensions included in OQuaRE. Moreover, we describe and use two scales for evaluating the changes between the versions of a given ontology: the first is the static scale used in OQuaRE; the second is a new, dynamic scale based on the observed values of the quality metrics across a corpus consisting of all versions of a given ontology (its life cycle). We explain how OQuaRE can be adapted to the study of ontology evolution, illustrated with the ontology of bioinformatics operations, types of data, formats, and topics (EDAM). The two scales included in OQuaRE provide complementary information about the evolution of ontologies. Applying the static scale, the original OQuaRE scale, to the versions of the EDAM ontology reveals a design based on good ontological engineering principles. Applying the dynamic scale enables a more detailed analysis of the evolution of the ontology, measured through differences between versions. Statistics of change based on the OQuaRE quality scores make it possible to identify key versions where changes in the engineering of the ontology triggered a change from the OQuaRE quality perspective. In the case of EDAM, this study allowed us to identify the fifth version of the ontology as having the largest impact on its quality metrics when comparing consecutive pairs of versions.
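
    The contrast between the two scales can be pictured with a small sketch. The cut-offs and metric values below are hypothetical, not OQuaRE's actual thresholds: a static scale maps each metric value onto a fixed 1-5 score, while a dynamic scale rescales each version against the range observed over the whole life cycle, making small version-to-version differences visible.

```python
def static_score(value, thresholds=(0.2, 0.4, 0.6, 0.8)):
    """Map a metric value onto a fixed 1-5 scale (illustrative cut-offs)."""
    return 1 + sum(value > t for t in thresholds)

def dynamic_scores(values):
    """Rescale each version's metric to [0, 1] against the observed
    min/max over all versions of the ontology (its life cycle)."""
    lo, hi = min(values), max(values)
    if hi == lo:
        return [1.0] * len(values)
    return [(v - lo) / (hi - lo) for v in values]

versions = [0.30, 0.35, 0.70, 0.72, 0.71]  # hypothetical metric, one value per version
print([static_score(v) for v in versions])              # coarse: later versions look identical
print([round(s, 2) for s in dynamic_scores(versions)])  # version-to-version drift is visible
```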

  18. Ontology Mappings to Improve Learning Resource Search

    ERIC Educational Resources Information Center

    Gasevic, Dragan; Hatala, Marek

    2006-01-01

    This paper proposes an ontology mapping-based framework that allows searching for learning resources using multiple ontologies. Present applications of ontologies in e-learning use various ontologies (e.g., domain, curriculum, context), but they do not address how to make e-learning systems based on different ontologies interoperate. The…

  19. Webulous and the Webulous Google Add-On--a web service and application for ontology building from templates.

    PubMed

    Jupp, Simon; Burdett, Tony; Welter, Danielle; Sarntivijai, Sirarat; Parkinson, Helen; Malone, James

    2016-01-01

    Authoring bio-ontologies is a task that has traditionally been undertaken by skilled experts trained in understanding complex languages such as the Web Ontology Language (OWL), using tools designed for such experts. As requests for new terms are made, the need for expert ontologists becomes a bottleneck in the development process. Furthermore, rigorously enforcing ontology design patterns in large, collaboratively developed ontologies is difficult with existing ontology authoring software. We present Webulous, an application suite that supports ontology creation by design patterns. Webulous provides infrastructure to specify templates for populating ontology design patterns, which are transformed into OWL assertions in a target ontology. Webulous provides programmatic access to the template server, and a client application has been developed for Google Sheets that allows templates to be loaded, populated, and resubmitted to the Webulous server for processing. The development and delivery of ontologies to the community requires software support that goes beyond the ontology editor. Building ontologies by design patterns and providing simple mechanisms for the addition of new content help reduce the overall cost and effort required to develop an ontology. The Webulous system provides support for this process and is used in the development of several ontologies at the European Bioinformatics Institute.
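
    The template mechanism can be pictured as a simple expansion step: each spreadsheet row supplies values for the placeholders of a design-pattern template, and every filled template becomes one block of OWL assertions. The Python sketch below uses string formatting and Manchester-syntax-like output as stand-ins; the template syntax, the `EX:` prefix, and the row contents are all hypothetical, not Webulous's actual format.

```python
# One design-pattern template; {id}, {label}, {parent} are its slots.
TEMPLATE = (
    "Class: {id}\n"
    "  Annotations: rdfs:label \"{label}\"\n"
    "  SubClassOf: {parent}"
)

# Spreadsheet-like rows, one pattern instance each (hypothetical content).
rows = [
    {"id": "EX:0001", "label": "alpha cell line", "parent": "EX:0000"},
    {"id": "EX:0002", "label": "beta cell line", "parent": "EX:0000"},
]

def expand(template, table):
    """Fill the template once per row, yielding one OWL class frame per row."""
    return [template.format(**row) for row in table]

for frame in expand(TEMPLATE, rows):
    print(frame, end="\n\n")
```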

  20. Ontobee: A linked ontology data server to support ontology term dereferencing, linkage, query and integration

    PubMed Central

    Ong, Edison; Xiang, Zuoshuang; Zhao, Bin; Liu, Yue; Lin, Yu; Zheng, Jie; Mungall, Chris; Courtot, Mélanie; Ruttenberg, Alan; He, Yongqun

    2017-01-01

    Linked Data (LD) aims to achieve interconnected data by representing entities using Uniform Resource Identifiers (URIs) and sharing information using the Resource Description Framework (RDF) and HTTP. Ontologies, which logically represent entities and relations in specific domains, are the basis of LD. Ontobee (http://www.ontobee.org/) is a linked ontology data server that stores ontology information using RDF triple store technology and supports query, visualization, and linkage of ontology terms. Ontobee is also the default linked data server for publishing and browsing biomedical ontologies in the Open Biological and Biomedical Ontology (OBO) Foundry (http://obofoundry.org) library. Ontobee currently hosts more than 180 ontologies (including 131 OBO Foundry Library ontologies) with over four million terms. Ontobee provides a user-friendly web interface for querying and visualizing the details and hierarchy of a specific ontology term. Using eXtensible Stylesheet Language Transformation (XSLT) technology, Ontobee is able to dereference a single ontology term URI and then output RDF/eXtensible Markup Language (XML) for computer processing, or display the HTML information in a web browser for human users. Statistics and detailed information are generated and displayed for each ontology listed in Ontobee. In addition, a SPARQL web interface is provided for custom advanced SPARQL queries over one or multiple ontologies. PMID:27733503
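
    The dereferencing behaviour described above rests on ordinary HTTP content negotiation: the same term URI yields RDF/XML to a program and HTML to a browser, depending on the request's Accept header. The sketch below simulates that decision locally (no network access) and is a simplification, since real servers also rank media types by their quality weights (`;q=`).

```python
def negotiate(accept_header):
    """Choose a representation for a term URI from the request's Accept
    header (simplified: quality weights are stripped, not ranked)."""
    supported = ("application/rdf+xml", "text/html")
    for part in accept_header.split(","):
        media = part.split(";")[0].strip()  # drop any ";q=..." parameters
        if media in supported:
            return media
    return "text/html"  # browsers and unknown clients get the HTML page

print(negotiate("application/rdf+xml"))              # a semantic-web client
print(negotiate("text/html,application/xhtml+xml"))  # a web browser
```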
