Science.gov

Sample records for rule-based semantic integration

  1. Annotation of SBML models through rule-based semantic integration

    PubMed Central

    2010-01-01

Background: The creation of accurate quantitative Systems Biology Markup Language (SBML) models is a time-intensive, manual process often complicated by the many data sources and formats required to annotate even a small and well-scoped model. Ideally, the retrieval and integration of biological knowledge for model annotation should be performed quickly, precisely, and with a minimum of manual effort. Results: Here we present rule-based mediation, a method of semantic data integration applied to systems biology model annotation. The heterogeneous data sources are first syntactically converted into ontologies, which are then aligned to a small domain ontology by applying a rule base. We demonstrate proof-of-principle of this application of rule-based mediation using off-the-shelf semantic web technology through two use cases for SBML model annotation. Existing tools and technology provide a framework around which the system is built, reducing development time and increasing usability. Conclusions: Integrating resources in this way accommodates multiple formats with different semantics, and provides richly-modelled biological knowledge suitable for annotation of SBML models. This initial work establishes the feasibility of rule-based mediation as part of an automated SBML model annotation system. Availability: Detailed information on the project files as well as further information on and comparisons with similar projects is available from the project page at http://cisban-silico.cs.ncl.ac.uk/RBM/. PMID:20626923
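The mediation step this abstract describes (lift each source into a shared vocabulary, then align via a rule base) can be sketched in miniature. Everything below is illustrative: the record fields, identifiers and mapping rules are invented, not taken from the project.

```python
# Toy sketch of rule-based mediation: records from two heterogeneous
# sources are lifted into a shared triple vocabulary by mapping rules.
# All field names, identifiers and rule bodies are hypothetical.

def protein_rule(rec):
    if "uniprot_id" in rec:
        return {(rec["uniprot_id"], "is_a", "Protein"),
                (rec["uniprot_id"], "has_name", rec["name"])}
    return set()

def molecule_rule(rec):
    if "chebi_id" in rec:
        return {(rec["chebi_id"], "is_a", "SmallMolecule"),
                (rec["chebi_id"], "has_name", rec["name"])}
    return set()

def mediate(records, rules):
    """Apply every mapping rule to every source record; union the triples."""
    triples = set()
    for rec in records:
        for rule in rules:
            triples |= rule(rec)
    return triples

records = [{"uniprot_id": "P12345", "name": "AGT"},
           {"chebi_id": "CHEBI:15377", "name": "water"}]
graph = mediate(records, [protein_rule, molecule_rule])
```

In the paper the target is a proper domain ontology rather than a bare triple set, but the shape of the computation is the same: source-specific rules emit statements in one shared schema.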

  2. Rule-based semantic web services matching strategy

    NASA Astrophysics Data System (ADS)

    Fan, Hong; Wang, Zhihua

    2011-12-01

With the development of Web services technology, the number of available services grows rapidly, and it becomes a challenging task to efficiently discover the services that exactly match a user's requirements in a large-scale service library. Many semantic Web service discovery approaches in the recent literature focus only on keyword-based or shallow semantics-based matching. This paper studies a rule-based reasoning approach to service matching against a large-scale service library. First, formal descriptions of semantic Web services and of service matching are presented. Matches are divided into four levels, Exact, Plugin, Subsume and Fail, each with a formal definition. Service matching is then treated as a rule-based reasoning problem: a set of matching rules is given, the set of related services is retrieved from a service ontology base through rule-based reasoning, and matching levels are determined from the relationships between the service's I/O and the user's requested I/O. Finally, experiments on two service sets show that the proposed matching strategy enables effective service discovery and achieves higher discovery efficiency than the traditional global traversal strategy.
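The four match levels named above are conventionally defined by subsumption between the requested and advertised concepts. A minimal sketch, assuming a toy subsumption hierarchy with invented concept names (the exact direction attached to "Plugin" vs. "Subsume" varies across the matchmaking literature; this follows a common convention):

```python
# Toy concept hierarchy (child -> parent); all concept names are invented.
HIERARCHY = {"Sedan": "Car", "SUV": "Car", "Car": "Vehicle", "Vehicle": None}

def ancestors(concept):
    """Walk the subsumption chain upwards from a concept."""
    result = []
    while HIERARCHY.get(concept) is not None:
        concept = HIERARCHY[concept]
        result.append(concept)
    return result

def match_level(requested, advertised):
    """Degree of match between requested and advertised output concepts."""
    if requested == advertised:
        return "Exact"
    if advertised in ancestors(requested):    # advertised subsumes the request
        return "Plugin"
    if requested in ancestors(advertised):    # request subsumes the advertisement
        return "Subsume"
    return "Fail"
```

A matchmaker would apply this test pairwise over the I/O concepts of each candidate service and rank services by the weakest level obtained.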

  3. Semantic classification of diseases in discharge summaries using a context-aware rule-based classifier.

    PubMed

    Solt, Illés; Tikk, Domonkos; Gál, Viktor; Kardkovács, Zsolt T

    2009-01-01

OBJECTIVE: Automated, disease-specific classification of textual clinical discharge summaries is of great importance in the life sciences, as it supports medical studies by providing statistically relevant data for analysis. This can be further facilitated if, when labeling discharge summaries, semantic labels are also extracted from the text, such as whether a given disease is present, absent or questionable in a patient, or is unmentioned in the document. The authors present a classification technique that successfully solves this semantic classification task. DESIGN: The authors introduce a context-aware rule-based semantic classification technique for clinical discharge summaries. Classification is performed in successive steps: first, misleading parts are removed from the text; the text is then partitioned into positive, negative, and uncertain context segments; finally, a sequence of binary classifiers assigns the appropriate semantic labels. MEASUREMENT: For evaluation the authors used the documents of the i2b2 Obesity Challenge and adopted its evaluation measures, F(1)-macro and F(1)-micro. RESULTS: On the two subtasks of the Obesity Challenge (textual and intuitive classification) the system performed very well, achieving an F(1)-macro of 0.80 for the textual task and 0.67 for the intuitive task, and placing second in the textual and first in the intuitive subtask of the challenge. CONCLUSIONS: The authors show that a simple rule-based classifier can tackle the semantic classification task more successfully than machine learning techniques when the training data are limited and some semantic labels are very sparse. PMID:19390101
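The context-partitioning idea (segment the text, then decide present/absent/questionable/unmentioned per disease) can be sketched with NegEx-style trigger lists. The trigger words below are a tiny illustrative subset, not the paper's rule set:

```python
import re

# Illustrative context triggers; real systems use much larger curated lists.
NEGATION = re.compile(r"\b(no|not|denies|without|negative for)\b", re.I)
UNCERTAIN = re.compile(r"\b(possible|probable|questionable|rule out)\b", re.I)

def label_disease(text, disease):
    """Assign Present/Absent/Questionable/Unmentioned for one disease."""
    label = "Unmentioned"
    for segment in re.split(r"[.;\n]", text):   # crude segmentation
        if disease.lower() not in segment.lower():
            continue
        if NEGATION.search(segment):
            label = "Absent"
        elif UNCERTAIN.search(segment):
            label = "Questionable"
        else:
            return "Present"          # an affirmed mention dominates
    return label
```

Running one such classifier per target disease yields exactly the four-way semantic labeling the abstract describes, albeit far more crudely.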

  4. Annotation of rule-based models with formal semantics to enable creation, analysis, reuse and visualization

    PubMed Central

    Misirli, Goksel; Cavaliere, Matteo; Waites, William; Pocock, Matthew; Madsen, Curtis; Gilfellon, Owen; Honorato-Zimmer, Ricardo; Zuliani, Paolo; Danos, Vincent; Wipat, Anil

    2016-01-01

Motivation: Biological systems are complex and challenging to model, and therefore model reuse is highly desirable. To promote model reuse, models should include both information about the specifics of simulations and the underlying biology in the form of metadata. The availability of computationally tractable metadata is especially important for the effective automated interpretation and processing of models. Metadata are typically represented as machine-readable annotations which enhance programmatic access to information about models. Rule-based languages have emerged as a modelling framework to represent the complexity of biological systems. Annotation approaches have been widely used for reaction-based formalisms such as SBML. However, rule-based languages still lack a rich annotation framework to add semantic information, such as machine-readable descriptions, to the components of a model. Results: We present an annotation framework and guidelines for annotating rule-based models, encoded in the commonly used Kappa and BioNetGen languages. We adapt widely adopted annotation approaches to rule-based models. We initially propose a syntax to store machine-readable annotations and describe a mapping between rule-based modelling entities, such as agents and rules, and their annotations. We then describe an ontology to both annotate these models and capture the information contained therein, and demonstrate annotating these models using examples. Finally, we present a proof-of-concept tool for extracting annotations from a model so that they can be queried and analyzed in a uniform way. The uniform representation of the annotations can be used to facilitate the creation, analysis, reuse and visualization of rule-based models. Although the examples use specific implementations, the proposed techniques can be applied to rule-based models in general. Availability and implementation: The annotation ontology for rule-based models can be found at http

  5. An HL7-CDA wrapper for facilitating semantic interoperability to rule-based Clinical Decision Support Systems.

    PubMed

    Sáez, Carlos; Bresó, Adrián; Vicente, Javier; Robles, Montserrat; García-Gómez, Juan Miguel

    2013-03-01

The success of Clinical Decision Support Systems (CDSS) greatly depends on their capability of being integrated into Health Information Systems (HIS). Several proposals have been published to date to permit a CDSS to gather patient data from an HIS. Some base the CDSS data input on the HL7 reference model; however, they are tailored to specific CDSS or clinical-guideline technologies, or do not focus on standardizing the knowledge the CDSS produces. We propose a solution for facilitating semantic interoperability for rule-based CDSS, focusing on standardized input and output documents conforming to an HL7-CDA wrapper. We define the HL7-CDA restrictions in an HL7-CDA implementation guide. Patient data and rule inference results are mapped to and from the CDSS by means of a binding method based on an XML binding file. As an independent clinical document, the results of a CDSS can have clinical and legal validity. The proposed solution is being applied in a CDSS that provides patient-specific recommendations for the care management of outpatients with diabetes mellitus. PMID:23199936
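The binding step (mapping document entries to CDSS input variables via a binding file) can be sketched against a heavily simplified stand-in document. A real HL7-CDA instance uses HL7 namespaces, templateIds and coded values; the XML, display names and variable names below are all invented:

```python
import xml.etree.ElementTree as ET

# Hypothetical, heavily simplified stand-in for a wrapped CDA document.
CDA = """<ClinicalDocument>
  <observation><code displayName="HbA1c"/><value>7.2</value></observation>
  <observation><code displayName="BMI"/><value>31.5</value></observation>
</ClinicalDocument>"""

# Binding-file analogue: document display names -> CDSS input variables.
BINDING = {"HbA1c": "hba1c", "BMI": "bmi"}

def bind_inputs(xml_text, binding):
    """Extract the CDSS input vector from the wrapped document."""
    root = ET.fromstring(xml_text)
    inputs = {}
    for obs in root.iter("observation"):
        name = obs.find("code").get("displayName")
        if name in binding:
            inputs[binding[name]] = float(obs.find("value").text)
    return inputs

inputs = bind_inputs(CDA, BINDING)
```

The reverse direction (serializing rule inference results back into a conformant output document) would walk the same binding table the other way.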

  6. Rule-Based and Information-Integration Category Learning in Normal Aging

    ERIC Educational Resources Information Center

    Maddox, W. Todd; Pacheco, Jennifer; Reeves, Maia; Zhu, Bo; Schnyer, David M.

    2010-01-01

    The basal ganglia and prefrontal cortex play critical roles in category learning. Both regions evidence age-related structural and functional declines. The current study examined rule-based and information-integration category learning in a group of older and younger adults. Rule-based learning is thought to involve explicit, frontally mediated…

  7. Rule-based and information-integration category learning in normal aging.

    PubMed

    Maddox, W Todd; Pacheco, Jennifer; Reeves, Maia; Zhu, Bo; Schnyer, David M

    2010-08-01

The basal ganglia and prefrontal cortex play critical roles in category learning. Both regions evidence age-related structural and functional declines. The current study examined rule-based and information-integration category learning in a group of older and younger adults. Rule-based learning is thought to involve explicit, frontally mediated processes, whereas information-integration learning is thought to involve implicit, striatally mediated processes. As a group, older adults showed rule-based and information-integration deficits. A series of models were applied that provided insights into the type of strategy used to solve the task. Interestingly, when the analyses focused only on participants who used the task-appropriate strategy in the final block of trials, the age-related rule-based deficit disappeared whereas the information-integration deficit remained. For this group of individuals, the final-block information-integration deficit was due to less consistent application of the task-appropriate strategy by older adults, who over the course of learning shifted from an explicit hypothesis-testing strategy to the task-appropriate strategy later in learning. In addition, use of the task-appropriate strategy was associated with less interference and better inhibitory control for both rule-based and information-integration learning, whereas use of the task-appropriate strategy was associated with greater working memory and better new verbal learning only for the rule-based task. These results suggest that normal aging impacts both forms of category learning and that there are some important similarities and differences in the explanatory locus of these deficits. The data also support a two-component model of information-integration category learning that includes a striatal component that mediates procedural-based learning, and a prefrontal cortical component that mediates the transition from hypothesis-testing to procedural-based strategies.

  8. Prefrontal Contributions to Rule-Based and Information-Integration Category Learning

    ERIC Educational Resources Information Center

    Schnyer, David M.; Maddox, W. Todd; Ell, Shawn; Davis, Sarah; Pacheco, Jenni; Verfaellie, Mieke

    2009-01-01

    Previous research revealed that the basal ganglia play a critical role in category learning [Ell, S. W., Marchant, N. L., & Ivry, R. B. (2006). "Focal putamen lesions impair learning in rule-based, but not information-integration categorization tasks." "Neuropsychologia", 44(10), 1737-1751; Maddox, W. T. & Filoteo, J. V. (2007). "Modeling visual…

  9. A Rule-Based Expert System as an Integrated Resource in an Outpatient Clinic Information System

    PubMed Central

    Wilton, Richard

    1990-01-01

    A rule-based expert system can be integrated in a useful way into a microcomputer-based clinical information system by using symmetric data-communication methods and intuitive user-interface design. To users of the computer system, the expert system appears as one of several distributed information resources, among which are database management systems and a gateway to a mainframe computing system. Transparent access to the expert system is based on the use of both commercial and public-domain data-communication standards.

  10. Integration of object-oriented knowledge representation with the CLIPS rule based system

    NASA Technical Reports Server (NTRS)

    Logie, David S.; Kamil, Hasan

    1990-01-01

The paper describes a portion of the work aimed at developing an integrated, knowledge-based environment for the development of engineering-oriented applications. An Object Representation Language (ORL) was implemented in C++ and is used to build and modify an object-oriented knowledge base. The ORL was designed so that it can easily be integrated with other representation schemes that can effectively reason over the object base. Specifically, the integration of the ORL with the rule-based C Language Integrated Production System (CLIPS), developed at the NASA Johnson Space Center, is discussed. The object-oriented knowledge representation provides a natural means of representing problem data as a collection of related objects. Objects are composed of descriptive properties and interrelationships. The object-oriented model promotes efficient handling of the problem data by allowing knowledge to be encapsulated in objects. Data are inherited through the object network via the relationship links. Together, the two schemes complement each other: the object-oriented approach efficiently handles problem data, while the rule-based knowledge is used to simulate the reasoning process. Alone, the object-based knowledge is little more than an object-oriented data storage scheme; the CLIPS inference engine, however, adds the mechanism to directly and automatically reason with that knowledge. In this hybrid scheme, the expert system dynamically queries for data and can modify the object base, with complete access to all the functionality of the ORL from rules.
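The hybrid scheme, an object network with property inheritance through relationship links, plus rules that query and modify the object base, can be sketched as follows. The classes, properties and the rule are illustrative only; they are not ORL or CLIPS syntax:

```python
# Minimal sketch of the hybrid object-base + rules scheme.
class Obj:
    def __init__(self, name, parent=None, **props):
        self.name, self.parent, self.props = name, parent, dict(props)

    def get(self, key):
        """Look up a property, inheriting through the relationship link."""
        if key in self.props:
            return self.props[key]
        return self.parent.get(key) if self.parent else None

def run_rules(objects, rules):
    """Naive forward chaining: fire rules until nothing changes."""
    changed = True
    while changed:
        changed = False
        for obj in objects:
            for cond, action in rules:
                if cond(obj):
                    action(obj)
                    changed = True
    return objects

steel = Obj("steel", density=7850)            # kg/m^3, illustrative
beam = Obj("beam1", parent=steel, length=2.0)

# Rule: any object whose (possibly inherited) density exceeds 5000
# and that is not yet tagged gets tagged as heavy.
rules = [(lambda o: (o.get("density") or 0) > 5000 and o.get("heavy") is None,
          lambda o: o.props.update(heavy=True))]

run_rules([steel, beam], rules)
```

Note how the rule's condition reads an inherited property (`beam` has no `density` of its own) while its action writes back into the object base, the same division of labor the abstract describes between the ORL and the CLIPS inference engine.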

  11. Does Cognitive Development Predict Semantic Integration?

    ERIC Educational Resources Information Center

    Johnson, Janet W.; Scholnick, Ellin Kofsky

    1979-01-01

    Investigates the influence of logical skills (inclusion and seriation) on the degree and kind of semantic integration performed on remembered material among 47 third- and fourth-grade boys and girls and college students. (JMB)

  12. Project Integration Architecture: Formulation of Semantic Parameters

    NASA Technical Reports Server (NTRS)

    Jones, William Henry

    2005-01-01

    One of several key elements of the Project Integration Architecture (PIA) is the intention to formulate parameter objects which convey meaningful semantic information. In so doing, it is expected that a level of automation can be achieved in the consumption of information content by PIA-consuming clients outside the programmatic boundary of a presenting PIA-wrapped application. This paper discusses the steps that have been recently taken in formulating such semantically-meaningful parameters.

  13. Integration Proposal for Description Logic and Attributive Logic - Towards Semantic Web Rules

    NASA Astrophysics Data System (ADS)

    Nalepa, Grzegorz J.; Furmańska, Weronika T.

The current challenge of the Semantic Web is the development of an expressive yet effective rule language. This paper presents an integration proposal for Description Logics (DL) and Attributive Logics (ALSV). These two formalisms stem from the fields of Knowledge Representation and Artificial Intelligence, but they are based on different design goals and therefore provide different description and reasoning capabilities. ALSV is the foundation of XTT2, an expressive language for rule-based systems. DL provides the formal foundation of expressive ontology languages such as OWL 2. An important research direction is the development of rule languages that can be integrated with ontologies. The contribution of the paper is a possible transition from ALSV to DL. This opens up the possibility of using XTT2, a well-founded language for modelling rule-based systems, to improve the design of Semantic Web rules.

  14. Category Number Impacts Rule-Based but Not Information-Integration Category Learning: Further Evidence for Dissociable Category-Learning Systems

    ERIC Educational Resources Information Center

    Maddox, W. Todd; Filoteo, J. Vincent; Hejl, Kelli D.; Ing, A. David

    2004-01-01

Category number effects on rule-based and information-integration category learning were investigated. Category number affected accuracy and the distribution of best-fitting models in the rule-based task but had no effect on accuracy and little effect on the distribution of best-fitting models in the information-integration task. In the 2 category…

  15. Semantics of interdisciplinary data and information integration

    NASA Astrophysics Data System (ADS)

    McGuinness, D. L.; Fox, P.; Raskin, R.; Sinha, A. K.

    2009-05-01

We have developed an application of semantic web methods and technologies to address the integration of interdisciplinary earth-science datasets. The specific use case addresses seeking and using atmospheric chemistry and volcano geochemistry datasets. We have developed an integration framework based on semantic descriptions (ontologies) of the linking relations between the application domains. In doing this, we have extensively leveraged existing ontology frameworks such as SWEET, VSTO and GEON, and extended them where needed. We present the components of this application, including the ontologies, the registration of datasets with ontologies at several levels of granularity, the data sources, and application results from the use case. We will also present the current and near-future capabilities we are developing. This work arises from the Semantically-Enabled Science Data Integration (SESDI) project, a NASA/ESTO/ACCESS-funded project performed in part by Rensselaer Polytechnic Institute, the High Altitude Observatory at the National Center for Atmospheric Research (NCAR), McGuinness Associates, NASA/JPL and Virginia Polytechnic University.

  16. Semantic Web meets Integrative Biology: a survey.

    PubMed

    Chen, Huajun; Yu, Tong; Chen, Jake Y

    2013-01-01

Integrative Biology (IB) uses experimental or computational quantitative technologies to characterize biological systems at the molecular, cellular, tissue and population levels. IB typically involves integrating data, knowledge and capabilities across disciplinary boundaries in order to solve complex problems. We identify a series of bioinformatics problems posed by interdisciplinary integration: (i) data integration that interconnects structured data across related biomedical domains; (ii) ontology integration that brings jargon, terminologies and taxonomies from various disciplines into a unified network of ontologies; (iii) knowledge integration that integrates disparate knowledge elements from multiple sources; (iv) service integration that builds applications out of services provided by different vendors. We argue that IB can benefit significantly from the integration solutions enabled by Semantic Web (SW) technologies. The SW enables scientists to share content beyond the boundaries of applications and websites, resulting in a web of data that is meaningful and understandable to any computer. In this review, we provide insight into how SW technologies can be used to build open, standardized and interoperable solutions for interdisciplinary integration on a global basis. We present a rich set of case studies in systems biology, integrative neuroscience, biopharmaceutics and translational medicine to highlight the technical features and benefits of SW applications in IB. PMID:22492191

  17. Semantic integration of data on transcriptional regulation

    PubMed Central

    Baitaluk, Michael; Ponomarenko, Julia

    2010-01-01

    Motivation: Experimental and predicted data concerning gene transcriptional regulation are distributed among many heterogeneous sources. However, there are no resources to integrate these data automatically or to provide a ‘one-stop shop’ experience for users seeking information essential for deciphering and modeling gene regulatory networks. Results: IntegromeDB, a semantic graph-based ‘deep-web’ data integration system that automatically captures, integrates and manages publicly available data concerning transcriptional regulation, as well as other relevant biological information, is proposed in this article. The problems associated with data integration are addressed by ontology-driven data mapping, multiple data annotation and heterogeneous data querying, also enabling integration of the user's data. IntegromeDB integrates over 100 experimental and computational data sources relating to genomics, transcriptomics, genetics, and functional and interaction data concerning gene transcriptional regulation in eukaryotes and prokaryotes. Availability: IntegromeDB is accessible through the integrated research environment BiologicalNetworks at http://www.BiologicalNetworks.org Contact: baitaluk@sdsc.edu Supplementary information: Supplementary data are available at Bioinformatics online. PMID:20427517

  18. Integrated Estimation of Seismic Physical Vulnerability of Tehran Using Rule Based Granular Computing

    NASA Astrophysics Data System (ADS)

    Sheikhian, H.; Delavar, M. R.; Stein, A.

    2015-08-01

Tehran, the capital of Iran, is surrounded by the North Tehran fault, the Mosha fault and the Rey fault. This exposes the city to potentially huge earthquakes followed by dramatic human loss and physical damage, in particular as it contains a large number of non-standard constructions and aged buildings. Estimating the likely consequences of an earthquake facilitates mitigation of these losses. Mitigation of earthquake fatalities may be achieved by promoting awareness of earthquake vulnerability and implementing seismic vulnerability reduction measures. In this research, granular computing is applied, using generality and absolute support for rule extraction and coverage and entropy for rule prioritization. The extracted rules are combined to form a granule tree that shows their order and relations. In this way the seismic physical vulnerability is assessed, integrating the effects of the three major known faults. The parameters considered in the physical seismic vulnerability assessment are slope, seismic intensity, and the height and age of the buildings. Experts were asked to predict seismic vulnerability for 100 randomly selected samples among more than 3000 statistical units in Tehran. The integrated experts' points of view serve as input into granular computing. Non-redundant covering rules preserve consistency in the model, which resulted in 84% accuracy in the seismic vulnerability assessment, based on validation of the predicted test data against the expected vulnerability degree. The study concludes that granular computing is a useful method for assessing the effects of earthquakes in an earthquake-prone area.
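The rule-quality measures named in the abstract have standard set-theoretic definitions in granular computing: for a rule "if cond then decision", generality is |m(cond)|/|U|, absolute support is |m(cond and decision)|/|m(cond)|, and coverage is |m(cond and decision)|/|m(decision)|. A sketch over four invented statistical units (attribute names and values are illustrative, not the Tehran data):

```python
# Illustrative rule-quality measures for a rule "cond => vulnerability".
records = [
    {"slope": "steep", "age": "old", "vulnerability": "high"},
    {"slope": "steep", "age": "new", "vulnerability": "high"},
    {"slope": "flat",  "age": "old", "vulnerability": "high"},
    {"slope": "flat",  "age": "new", "vulnerability": "low"},
]

def rule_measures(records, cond, decision):
    phi = [r for r in records if all(r[a] == v for a, v in cond.items())]
    both = [r for r in phi if r["vulnerability"] == decision]
    psi = [r for r in records if r["vulnerability"] == decision]
    generality = len(phi) / len(records)          # how widely the rule applies
    support = len(both) / len(phi) if phi else 0  # how reliable it is
    coverage = len(both) / len(psi) if psi else 0 # how much of the class it explains
    return generality, support, coverage

g, s, c = rule_measures(records, {"slope": "steep"}, "high")
```

Ranking candidate rules by measures like these, and keeping a non-redundant covering subset, is the core of the rule-extraction step the abstract describes.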

  19. Rule-based integration of RNA-Seq analyses tools for identification of novel transcripts.

    PubMed

    Inamdar, Harshal; Datta, Avik; Manjari, K Sunitha; Joshi, Rajendra

    2014-10-01

Recent evidence suggests that a substantially larger amount of the genome is transcribed than was anticipated, giving rise to a large number of unknown or novel transcripts. Identification of novel transcripts can provide key insights into important cellular functions as well as the molecular mechanisms underlying complex diseases like cancer. RNA-Seq has emerged as a powerful tool to detect novel transcripts that previous profiling techniques failed to identify. A number of tools enable identification of novel transcripts at different levels. Read mappers such as TopHat, MapSplice, and SOAPsplice predict novel junctions, which are indicators of novel transcripts. Cufflinks assembles novel transcripts based on alignment information, and Oases performs de novo construction of transcripts. A common limitation of all these tools is the prediction of a sizable number of spurious or false-positive (FP) novel transcripts. An approach is proposed that integrates information from all the above sources and simultaneously scrutinizes FPs to correctly identify authentic novel transcripts with high confidence. PMID:25245144
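One simple form of the integration idea is consensus voting: keep a putative novel junction only if enough independent tools report it. The junction calls below are invented, and the real pipeline's rules are richer than a bare vote count, but the filtering shape is the same:

```python
from collections import Counter

def consensus_junctions(tool_calls, min_tools=2):
    """Keep junctions reported by at least `min_tools` tools."""
    votes = Counter()
    for calls in tool_calls.values():
        votes.update(set(calls))          # one vote per tool per junction
    return {j for j, n in votes.items() if n >= min_tools}

# Hypothetical junction calls as (chromosome, start, end) tuples.
tool_calls = {
    "TopHat":     [("chr1", 100, 500), ("chr2", 30, 90)],
    "MapSplice":  [("chr1", 100, 500)],
    "SOAPsplice": [("chr1", 100, 500), ("chr3", 10, 80)],
}
kept = consensus_junctions(tool_calls)
```

Junctions seen by only one tool (here chr2 and chr3) are the likeliest false positives and are dropped; raising `min_tools` trades sensitivity for precision.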

  20. Semantic search integration to climate data

    SciTech Connect

    Devarakonda, Ranjeet; Palanisamy, Giri; Pouchard, Line Catherine; Shrestha, Biva

    2014-01-01

    In this paper we present how research projects at Oak Ridge National Laboratory are using Semantic Search capabilities to help scientists perform their research. We will discuss how the Mercury metadata search system, with the help of the semantic search capability, is being used to find, retrieve, and link climate change data. DOI: 10.1109/CTS.2014.6867639

  1. Category Number Impacts Rule-Based "and" Information-Integration Category Learning: A Reassessment of Evidence for Dissociable Category-Learning Systems

    ERIC Educational Resources Information Center

    Stanton, Roger D.; Nosofsky, Robert M.

    2013-01-01

    Researchers have proposed that an explicit reasoning system is responsible for learning rule-based category structures and that a separate implicit, procedural-learning system is responsible for learning information-integration category structures. As evidence for this multiple-system hypothesis, researchers report a dissociation based on…

  2. Selective Audiovisual Semantic Integration Enabled by Feature-Selective Attention.

    PubMed

    Li, Yuanqing; Long, Jinyi; Huang, Biao; Yu, Tianyou; Wu, Wei; Li, Peijun; Fang, Fang; Sun, Pei

    2016-01-01

    An audiovisual object may contain multiple semantic features, such as the gender and emotional features of the speaker. Feature-selective attention and audiovisual semantic integration are two brain functions involved in the recognition of audiovisual objects. Humans often selectively attend to one or several features while ignoring the other features of an audiovisual object. Meanwhile, the human brain integrates semantic information from the visual and auditory modalities. However, how these two brain functions correlate with each other remains to be elucidated. In this functional magnetic resonance imaging (fMRI) study, we explored the neural mechanism by which feature-selective attention modulates audiovisual semantic integration. During the fMRI experiment, the subjects were presented with visual-only, auditory-only, or audiovisual dynamical facial stimuli and performed several feature-selective attention tasks. Our results revealed that a distribution of areas, including heteromodal areas and brain areas encoding attended features, may be involved in audiovisual semantic integration. Through feature-selective attention, the human brain may selectively integrate audiovisual semantic information from attended features by enhancing functional connectivity and thus regulating information flows from heteromodal areas to brain areas encoding the attended features. PMID:26759193

  3. Selective Audiovisual Semantic Integration Enabled by Feature-Selective Attention

    PubMed Central

    Li, Yuanqing; Long, Jinyi; Huang, Biao; Yu, Tianyou; Wu, Wei; Li, Peijun; Fang, Fang; Sun, Pei

    2016-01-01

    An audiovisual object may contain multiple semantic features, such as the gender and emotional features of the speaker. Feature-selective attention and audiovisual semantic integration are two brain functions involved in the recognition of audiovisual objects. Humans often selectively attend to one or several features while ignoring the other features of an audiovisual object. Meanwhile, the human brain integrates semantic information from the visual and auditory modalities. However, how these two brain functions correlate with each other remains to be elucidated. In this functional magnetic resonance imaging (fMRI) study, we explored the neural mechanism by which feature-selective attention modulates audiovisual semantic integration. During the fMRI experiment, the subjects were presented with visual-only, auditory-only, or audiovisual dynamical facial stimuli and performed several feature-selective attention tasks. Our results revealed that a distribution of areas, including heteromodal areas and brain areas encoding attended features, may be involved in audiovisual semantic integration. Through feature-selective attention, the human brain may selectively integrate audiovisual semantic information from attended features by enhancing functional connectivity and thus regulating information flows from heteromodal areas to brain areas encoding the attended features. PMID:26759193

  4. The Balance-Scale Task Revisited: A Comparison of Statistical Models for Rule-Based and Information-Integration Theories of Proportional Reasoning

    PubMed Central

    Hofman, Abe D.; Visser, Ingmar; Jansen, Brenda R. J.; van der Maas, Han L. J.

    2015-01-01

    We propose and test three statistical models for the analysis of children’s responses to the balance scale task, a seminal task to study proportional reasoning. We use a latent class modelling approach to formulate a rule-based latent class model (RB LCM) following from a rule-based perspective on proportional reasoning and a new statistical model, the Weighted Sum Model, following from an information-integration approach. Moreover, a hybrid LCM using item covariates is proposed, combining aspects of both a rule-based and information-integration perspective. These models are applied to two different datasets, a standard paper-and-pencil test dataset (N = 779), and a dataset collected within an online learning environment that included direct feedback, time-pressure, and a reward system (N = 808). For the paper-and-pencil dataset the RB LCM resulted in the best fit, whereas for the online dataset the hybrid LCM provided the best fit. The standard paper-and-pencil dataset yielded more evidence for distinct solution rules than the online data set in which quantitative item characteristics are more prominent in determining responses. These results shed new light on the discussion on sequential rule-based and information-integration perspectives of cognitive development. PMID:26505905
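The two perspectives contrasted above can be made concrete at the level of response strategies. The paper's contribution is statistical (latent class models over response patterns); the sketch below only illustrates the two strategies those models are meant to distinguish, using the classic Rule I (attend to weight only) versus the weighted-sum (torque) rule:

```python
# A balance-scale item: weights (wl, wr) at distances (dl, dr) from the fulcrum.

def rule_one(wl, dl, wr, dr):
    """Rule-based strategy: compare weights only, ignore distance."""
    if wl == wr:
        return "balance"
    return "left" if wl > wr else "right"

def weighted_sum(wl, dl, wr, dr):
    """Information-integration strategy: compare torques w * d."""
    tl, tr = wl * dl, wr * dr
    if tl == tr:
        return "balance"
    return "left" if tl > tr else "right"

# A "conflict" item: more weight on the left, but more torque on the right.
item = (3, 2, 2, 4)
```

Conflict items like this one are what make the model comparison possible: the two strategies predict different responses, so a child's response pattern across such items is evidence for one account or the other.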

  5. Chemical Entity Semantic Specification: Knowledge representation for efficient semantic cheminformatics and facile data integration

    PubMed Central

    2011-01-01

Background: Over the past several centuries, chemistry has permeated virtually every facet of human lifestyle, enriching fields as diverse as medicine, agriculture, manufacturing, warfare, and electronics, among numerous others. Unfortunately, application-specific, incompatible chemical information formats and representation strategies have emerged as a result of such diverse adoption of chemistry. Although a number of efforts have been dedicated to unifying the computational representation of chemical information, disparities between the various chemical databases still persist and stand in the way of cross-domain, interdisciplinary investigations. Through a common syntax and formal semantics, Semantic Web technology offers the ability to accurately represent, integrate, reason about and query across diverse chemical information. Results: Here we specify and implement the Chemical Entity Semantic Specification (CHESS) for the representation of polyatomic chemical entities, their substructures, bonds, atoms, and reactions using Semantic Web technologies. CHESS provides means to capture aspects of their corresponding chemical descriptors, connectivity, functional composition, and geometric structure while specifying mechanisms for data provenance. We demonstrate that using our readily extensible specification, it is possible to efficiently integrate multiple disparate chemical data sources, while retaining appropriate correspondence of chemical descriptors, with very little additional effort. We demonstrate the impact of some of our representational decisions on the performance of chemically-aware knowledgebase searching and rudimentary reaction candidate selection. Finally, we provide access to the tools necessary to carry out chemical entity encoding in CHESS, along with a sample knowledgebase. Conclusions: By harnessing the power of Semantic Web technologies with CHESS, it is possible to provide a means of facile cross-domain chemical knowledge integration with full

  6. Enriched Video Semantic Metadata: Authorization, Integration, and Presentation.

    ERIC Educational Resources Information Center

    Mu, Xiangming; Marchionini, Gary

    2003-01-01

    Presents an enriched video metadata framework including video authorization using the Video Annotation and Summarization Tool (VAST), a video metadata authorization system that integrates both semantic and visual metadata; metadata integration; and user-level applications. Results demonstrated that the enriched metadata were seamlessly…

  7. Specification and Enforcement of Semantic Integrity Constraints in Microsoft Access

    ERIC Educational Resources Information Center

    Dadashzadeh, Mohammad

    2007-01-01

    Semantic integrity constraints are business-specific rules that limit the permissible values in a database. For example, a university rule dictating that an "incomplete" grade cannot be changed to an A constrains the possible states of the database. To maintain database integrity, business rules should be identified in the course of database…

  8. Ontology alignment architecture for semantic sensor Web integration.

    PubMed

    Fernandez, Susel; Marsa-Maestre, Ivan; Velasco, Juan R; Alarcos, Bernardo

    2013-01-01

    Sensor networks have become very popular for data acquisition and processing in multiple applications across fields such as industry, medicine, home automation, and environmental monitoring. Today, with the proliferation of small communication devices carrying sensors that collect environmental data, semantic Web technologies are becoming closely tied to sensor networks. The linking of Semantic Web technologies with sensor networks has been called the Semantic Sensor Web, and one of its main features is the use of ontologies. One of the key challenges of using ontologies in sensor networks is to provide mechanisms to integrate and exchange knowledge from heterogeneous sources (that is, to deal with semantic heterogeneity). Ontology alignment is the process of bringing ontologies into mutual agreement through the automatic discovery of mappings between related concepts. This paper presents a system for ontology alignment in the Semantic Sensor Web which uses fuzzy logic techniques to combine similarity measures between entities of different ontologies. The proposed approach focuses on two key elements: terminological similarity, which takes into account the linguistic and semantic information of the context of the entity names, and structural similarity, based on both the internal and relational structure of the concepts. This work has been validated using sensor network ontologies and the Ontology Alignment Evaluation Initiative (OAEI) tests. The results show that the proposed techniques outperform previous approaches in terms of precision and recall. PMID:24051523
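
    The fuzzy combination of similarity measures described above can be sketched in a few lines. The concrete measures, weights, and threshold below are illustrative stand-ins, not the paper's actual implementation:

    ```python
    from difflib import SequenceMatcher

    def terminological_similarity(name_a: str, name_b: str) -> float:
        # String-based stand-in for the linguistic similarity of entity names.
        return SequenceMatcher(None, name_a.lower(), name_b.lower()).ratio()

    def structural_similarity(related_a: set, related_b: set) -> float:
        # Jaccard overlap of related concepts, a simple proxy for structural similarity.
        union = related_a | related_b
        return len(related_a & related_b) / len(union) if union else 0.0

    def combined_similarity(term: float, struct: float,
                            w_term: float = 0.6, w_struct: float = 0.4) -> float:
        # Weighted aggregation of the two measures (weights are illustrative).
        return w_term * term + w_struct * struct

    # Two hypothetical sensor-ontology concepts with overlapping context
    a = {"name": "TemperatureSensor", "related": {"Sensor", "Temperature", "Observation"}}
    b = {"name": "TempSensor", "related": {"Sensor", "Temperature", "Measurement"}}
    score = combined_similarity(
        terminological_similarity(a["name"], b["name"]),
        structural_similarity(a["related"], b["related"]),
    )
    matched = score >= 0.5  # alignment threshold (illustrative)
    ```

    In the paper the aggregation itself uses fuzzy logic; here a fixed weighted mean stands in for that step.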

  9. Ontology Alignment Architecture for Semantic Sensor Web Integration

    PubMed Central

    Fernandez, Susel; Marsa-Maestre, Ivan; Velasco, Juan R.; Alarcos, Bernardo

    2013-01-01

    Sensor networks have become very popular for data acquisition and processing in multiple applications across fields such as industry, medicine, home automation, and environmental monitoring. Today, with the proliferation of small communication devices carrying sensors that collect environmental data, semantic Web technologies are becoming closely tied to sensor networks. The linking of Semantic Web technologies with sensor networks has been called the Semantic Sensor Web, and one of its main features is the use of ontologies. One of the key challenges of using ontologies in sensor networks is to provide mechanisms to integrate and exchange knowledge from heterogeneous sources (that is, to deal with semantic heterogeneity). Ontology alignment is the process of bringing ontologies into mutual agreement through the automatic discovery of mappings between related concepts. This paper presents a system for ontology alignment in the Semantic Sensor Web which uses fuzzy logic techniques to combine similarity measures between entities of different ontologies. The proposed approach focuses on two key elements: terminological similarity, which takes into account the linguistic and semantic information of the context of the entity names, and structural similarity, based on both the internal and relational structure of the concepts. This work has been validated using sensor network ontologies and the Ontology Alignment Evaluation Initiative (OAEI) tests. The results show that the proposed techniques outperform previous approaches in terms of precision and recall. PMID:24051523

  10. A flexible integration framework for a Semantic Geospatial Web application

    NASA Astrophysics Data System (ADS)

    Yuan, Ying; Mei, Kun; Bian, Fuling

    2008-10-01

    With the growth of World Wide Web technologies, access to and use of geospatial information has changed radically over the past decade. Previously, the data processed by a GIS, as well as its methods, resided locally and contained information that was sufficiently unambiguous within the respective information community. Now, both data and methods may be retrieved and combined from anywhere in the world, escaping their local contexts. The last few years have seen growing interest in the field of the semantic geospatial web. With the development of semantic web technologies, it has become possible to address the heterogeneity and interoperation problems in the GIS community. Semantic geospatial web applications can support a wide variety of tasks, including data integration, interoperability, knowledge reuse, spatial reasoning, and many others. This paper proposes a flexible framework called GeoSWF (short for Geospatial Semantic Web Framework), which supports the semantic integration of distributed and heterogeneous geospatial information resources as well as semantic query and spatial-relationship reasoning. We design the architecture of GeoSWF by extending the MVC pattern. GeoSWF uses the geo-2007.owl proposed by W3C as the reference ontology for geospatial information, and designs different application ontologies according to the situation of the heterogeneous geospatial information resources. A Geospatial Ontology Creating Algorithm (GOCA) is designed to convert geospatial information into ontology instances represented in RDF/OWL. On top of these ontology instances, GeoSWF carries out semantic reasoning using the rule set stored in the knowledge base to generate new system queries. Query results are ranked by the Euclidean distance of each ontology instance. Finally, the paper presents conclusions and future work.
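
    The distance-based ranking step described above can be sketched as follows; the instance URIs and coordinates are invented for illustration:

    ```python
    import math

    def euclidean(p, q):
        # Straight-line distance between two coordinate pairs.
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

    def rank_results(query_point, instances):
        # Order retrieved ontology instances by distance to the query location,
        # nearest first.
        return sorted(instances, key=lambda inst: euclidean(query_point, inst["coord"]))

    instances = [
        {"uri": "ex:River_A", "coord": (114.35, 30.60)},
        {"uri": "ex:River_B", "coord": (114.20, 30.55)},
        {"uri": "ex:Lake_C", "coord": (114.36, 30.58)},
    ]
    ranked = rank_results((114.34, 30.59), instances)
    ```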

  11. Information Integration from Semantically Heterogeneous Biological Data Sources.

    PubMed

    Caragea, Doina; Bao, Jie; Pathak, Jyotishman; Silvescu, Adrian; Andorf, Carson; Dobbs, Drena; Honavar, Vasant

    2005-08-26

    We present the first prototype of INDUS (Intelligent Data Understanding System), a federated, query-centric system for information integration and knowledge acquisition from distributed, semantically heterogeneous data sources that can be viewed (conceptually) as tables. INDUS employs ontologies and inter-ontology mappings to enable a user to view a collection of such data sources (regardless of location, internal structure and query interfaces) as though they were a collection of tables structured according to an ontology supplied by the user. This allows INDUS to answer user queries against distributed, semantically heterogeneous data sources without the need for a centralized data warehouse or a common global ontology. PMID:20802821
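
    The table-view idea can be illustrated with a minimal sketch: each source keeps its own schema, and an inter-ontology mapping renames attributes into the user's ontology before rows are combined. The schemas and mappings here are hypothetical, not INDUS's actual interfaces:

    ```python
    # Each source exposes rows under its own local schema.
    SOURCE_A = [{"prot_id": "P1", "func": "kinase"}]
    SOURCE_B = [{"protein": "P2", "annotation": "ligase"}]

    # Inter-ontology mappings: local attribute name -> user-ontology term.
    MAPPINGS = {
        "A": {"prot_id": "protein_id", "func": "function"},
        "B": {"protein": "protein_id", "annotation": "function"},
    }

    def as_user_view(rows, mapping):
        # Rename each row's attributes into the user's ontology.
        return [{mapping[k]: v for k, v in row.items()} for row in rows]

    # The user sees one uniform "table", regardless of source structure.
    unified = as_user_view(SOURCE_A, MAPPINGS["A"]) + as_user_view(SOURCE_B, MAPPINGS["B"])
    ```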

  12. Mining integrated semantic networks for drug repositioning opportunities

    PubMed Central

    Mullen, Joseph; Tipney, Hannah

    2016-01-01

    Current research and development approaches to drug discovery have become less fruitful and more costly. One alternative paradigm is that of drug repositioning. Many marketed examples of repositioned drugs have been identified through serendipitous or rational observations, highlighting the need for more systematic methodologies to tackle the problem. Systems level approaches have the potential to enable the development of novel methods to understand the action of therapeutic compounds, but require an integrative approach to biological data. Integrated networks can facilitate systems level analyses by combining multiple sources of evidence to provide a rich description of drugs, their targets and their interactions. Classically, such networks can be mined manually, whereby a skilled person identifies portions of the graph (semantic subgraphs) that are indicative of relationships between drugs and highlight possible repositioning opportunities. However, this approach is not scalable. Automated approaches are required to systematically mine integrated networks for these subgraphs and bring them to the attention of the user. We introduce a formal framework for the definition of integrated networks and their associated semantic subgraphs for drug interaction analysis and describe DReSMin, an algorithm for mining semantically-rich networks for occurrences of a given semantic subgraph. This algorithm allows instances of complex semantic subgraphs that contain data about putative drug repositioning opportunities to be identified in a computationally tractable fashion, scaling close to linearly with network data. We demonstrate the utility of our approach by mining an integrated drug interaction network built from 11 sources. This work identified and ranked 9,643,061 putative drug-target interactions, showing a strong correlation between highly scored associations and those supported by literature. 
We discuss the 20 top ranked associations in more detail, of which

  13. Mining integrated semantic networks for drug repositioning opportunities.

    PubMed

    Mullen, Joseph; Cockell, Simon J; Tipney, Hannah; Woollard, Peter M; Wipat, Anil

    2016-01-01

    Current research and development approaches to drug discovery have become less fruitful and more costly. One alternative paradigm is that of drug repositioning. Many marketed examples of repositioned drugs have been identified through serendipitous or rational observations, highlighting the need for more systematic methodologies to tackle the problem. Systems level approaches have the potential to enable the development of novel methods to understand the action of therapeutic compounds, but require an integrative approach to biological data. Integrated networks can facilitate systems level analyses by combining multiple sources of evidence to provide a rich description of drugs, their targets and their interactions. Classically, such networks can be mined manually, whereby a skilled person identifies portions of the graph (semantic subgraphs) that are indicative of relationships between drugs and highlight possible repositioning opportunities. However, this approach is not scalable. Automated approaches are required to systematically mine integrated networks for these subgraphs and bring them to the attention of the user. We introduce a formal framework for the definition of integrated networks and their associated semantic subgraphs for drug interaction analysis and describe DReSMin, an algorithm for mining semantically-rich networks for occurrences of a given semantic subgraph. This algorithm allows instances of complex semantic subgraphs that contain data about putative drug repositioning opportunities to be identified in a computationally tractable fashion, scaling close to linearly with network data. We demonstrate the utility of our approach by mining an integrated drug interaction network built from 11 sources. This work identified and ranked 9,643,061 putative drug-target interactions, showing a strong correlation between highly scored associations and those supported by literature. 
We discuss the 20 top ranked associations in more detail, of which

  14. Semantic Web-based integration of cancer pathways and allele frequency data.

    PubMed

    Holford, Matthew E; Rajeevan, Haseena; Zhao, Hongyu; Kidd, Kenneth K; Cheung, Kei-Hoi

    2009-01-01

    We demonstrate the use of Semantic Web technology to integrate the ALFRED allele frequency database and the Starpath pathway resource. The linking of population-specific genotype data with cancer-related pathway data is potentially useful given the growing interest in personalized medicine and the exploitation of pathway knowledge for cancer drug discovery. We model our data using the Web Ontology Language (OWL), drawing upon ideas from existing standard formats BioPAX for pathway data and PML for allele frequency data. We store our data within an Oracle database, using Oracle Semantic Technologies. We then query the data using Oracle's rule-based inference engine and SPARQL-like RDF query language. The ability to perform queries across the domains of population genetics and pathways offers the potential to answer a number of cancer-related research questions. Among the possibilities is the ability to identify genetic variants which are associated with cancer pathways and whose frequency varies significantly between ethnic groups. This sort of information could be useful for designing clinical studies and for providing background data in personalized medicine. It could also assist with the interpretation of genetic analysis results such as those from genome-wide association studies. PMID:19458791
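
    A cross-domain query of the kind described reduces to a join over RDF triple patterns. As a minimal sketch, with invented URIs and a hand-rolled pattern matcher standing in for the SPARQL-like engine:

    ```python
    # Toy RDF-style triples (subject, predicate, object); URIs are abbreviated
    # and invented, spanning the allele-frequency and pathway domains.
    triples = [
        ("alfred:rs1042522", "ex:variantOf",   "gene:TP53"),
        ("alfred:rs1042522", "ex:frequencyIn", "pop:EastAsian"),
        ("gene:TP53",        "ex:memberOf",    "pathway:p53_signalling"),
        ("gene:BRCA1",       "ex:memberOf",    "pathway:HR_repair"),
    ]

    def match(pattern):
        # Return triples matching a single (s, p, o) pattern; None is a wildcard.
        s, p, o = pattern
        return [t for t in triples
                if (s is None or t[0] == s)
                and (p is None or t[1] == p)
                and (o is None or t[2] == o)]

    # "Which variants fall in genes that belong to a pathway?" as a two-pattern join
    hits = [(v, g, pw)
            for v, _, g in match((None, "ex:variantOf", None))
            for _, _, pw in match((g, "ex:memberOf", None))]
    ```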

  15. Semantic Web-Based Integration of Cancer Pathways and Allele Frequency Data

    PubMed Central

    Holford, Matthew E.; Rajeevan, Haseena; Zhao, Hongyu; Kidd, Kenneth K.; Cheung, Kei-Hoi

    2009-01-01

    We demonstrate the use of Semantic Web technology to integrate the ALFRED allele frequency database and the Starpath pathway resource. The linking of population-specific genotype data with cancer-related pathway data is potentially useful given the growing interest in personalized medicine and the exploitation of pathway knowledge for cancer drug discovery. We model our data using the Web Ontology Language (OWL), drawing upon ideas from existing standard formats BioPAX for pathway data and PML for allele frequency data. We store our data within an Oracle database, using Oracle Semantic Technologies. We then query the data using Oracle’s rule-based inference engine and SPARQL-like RDF query language. The ability to perform queries across the domains of population genetics and pathways offers the potential to answer a number of cancer-related research questions. Among the possibilities is the ability to identify genetic variants which are associated with cancer pathways and whose frequency varies significantly between ethnic groups. This sort of information could be useful for designing clinical studies and for providing background data in personalized medicine. It could also assist with the interpretation of genetic analysis results such as those from genome-wide association studies. PMID:19458791

  16. Project Integration Architecture: Formulation of Dimensionality in Semantic Parameters Outline

    NASA Technical Reports Server (NTRS)

    Jones, William Henry

    2005-01-01

    One of several key elements of the Project Integration Architecture (PIA) is the formulation of parameter objects which convey meaningful semantic information. The infusion of measurement dimensionality into such objects is an important part of that effort since it promises to automate the conversion of units between cooperating applications and, thereby, eliminate the mistakes that have occasionally beset other systems of information transport. This paper discusses the conceptualization of dimensionality developed as a result of that effort.

  17. Towards A Topological Framework for Integrating Semantic Information Sources

    SciTech Connect

    Joslyn, Cliff A.; Hogan, Emilie A.; Robinson, Michael

    2014-09-07

    In this position paper we argue for the role that topological modeling principles can play in providing a framework for sensor integration. While this methodology has been used successfully with standard (quantitative) sensors, we are developing it in new directions to make it appropriate specifically for semantic information sources, including keyterms, ontology terms, and other general Boolean, categorical, ordinal, and partially-ordered data types. We illustrate the basics of the methodology in an extended use case/example, and discuss the path forward.

  18. Predicting Protein Function via Semantic Integration of Multiple Networks.

    PubMed

    Yu, Guoxian; Fu, Guangyuan; Wang, Jun; Zhu, Hailong

    2016-01-01

    Determining the biological functions of proteins is one of the key challenges of the post-genomic era. The rapidly accumulating large volumes of proteomic and genomic data drive the development of computational models for automatically predicting protein function at large scale. Recent approaches focus on integrating multiple heterogeneous data sources, and they often achieve better results than methods that use a single data source alone. In this paper, we investigate how to integrate multiple biological data sources with biological knowledge, i.e., the Gene Ontology (GO), for protein function prediction. We propose a method, called SimNet, to Semantically integrate multiple functional association Networks derived from heterogeneous data sources. SimNet first utilizes GO annotations of proteins to capture the semantic similarity between proteins and introduces a semantic kernel based on this similarity. Next, SimNet constructs a composite network, obtained as a weighted summation of the individual networks, and aligns the composite network with the kernel to obtain the weights assigned to individual networks. Then, it applies a network-based classifier to the composite network to predict protein function. Experimental results on heterogeneous proteomic data sources for Yeast, Human, Mouse, and Fly show that SimNet not only achieves better (or comparable) results than other related competitive approaches, but also takes much less time. The Matlab codes of SimNet are available at https://sites.google.com/site/guoxian85/simnet. PMID:26800544
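
    The composite-network step is a weighted summation of aligned adjacency matrices. A minimal sketch follows; the weights here are fixed by hand, whereas in SimNet they come from aligning the composite with the semantic kernel:

    ```python
    def weighted_composite(networks, weights):
        # Element-wise weighted sum of individual association networks
        # defined over the same ordered set of proteins.
        n = len(networks[0])
        comp = [[0.0] * n for _ in range(n)]
        for w, net in zip(weights, networks):
            for i in range(n):
                for j in range(n):
                    comp[i][j] += w * net[i][j]
        return comp

    # Two toy association networks over the same two proteins
    networks = [
        [[0.0, 1.0], [1.0, 0.0]],
        [[0.0, 0.5], [0.5, 0.0]],
    ]
    composite = weighted_composite(networks, [0.7, 0.3])
    ```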

  19. A computational modeling of semantic knowledge in reading comprehension: Integrating the landscape model with latent semantic analysis.

    PubMed

    Yeari, Menahem; van den Broek, Paul

    2016-09-01

    It is a well-accepted view that the prior semantic (general) knowledge that readers possess plays a central role in reading comprehension. Nevertheless, computational models of reading comprehension have not integrated the simulation of semantic knowledge and online comprehension processes under a unified mathematical algorithm. The present article introduces a computational model that integrates the landscape model of comprehension processes with latent semantic analysis representation of semantic knowledge. In three sets of simulations of previous behavioral findings, the integrated model successfully simulated the activation and attenuation of predictive and bridging inferences during reading, as well as centrality estimations and recall of textual information after reading. Analyses of the computational results revealed new theoretical insights regarding the underlying mechanisms of the various comprehension phenomena. PMID:27383752
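
    In LSA, the semantic relatedness that feeds such a model is typically the cosine between concept vectors. The three-dimensional vectors below are invented for illustration; real LSA spaces have hundreds of dimensions:

    ```python
    import math

    def cosine(u, v):
        # LSA-style relatedness: cosine of the angle between two concept vectors.
        dot = sum(a * b for a, b in zip(u, v))
        norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
        return dot / norm if norm else 0.0

    # Hypothetical vectors for two related concepts
    doctor = [0.8, 0.1, 0.6]
    nurse = [0.7, 0.2, 0.5]
    sim = cosine(doctor, nurse)  # high relatedness yields stronger activation links
    ```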

  20. Mediator infrastructure for information integration and semantic data integration environment for biomedical research.

    PubMed

    Grethe, Jeffrey S; Ross, Edward; Little, David; Sanders, Brian; Gupta, Amarnath; Astakhov, Vadim

    2009-01-01

    This paper presents current progress in the development of a semantic data integration environment that is part of the Biomedical Informatics Research Network (BIRN; http://www.nbirn.net) project. BIRN is sponsored by the National Center for Research Resources (NCRR), a component of the National Institutes of Health (NIH). A goal is the development of a cyberinfrastructure for biomedical research that supports advanced data acquisition, data storage, data management, data integration, data mining, data visualization, and other computing and information processing services over the Internet. Each participating institution maintains storage of its experimental or computationally derived data. A mediator-based data integration system performs semantic integration over the databases to enable researchers to perform analyses based on larger and broader datasets than would be available from any single institution's data. This paper describes a recent revision of the system architecture, implementation, and capabilities of the semantically based data integration environment for BIRN. PMID:19623485

  1. Semantic Integrative Digital Pathology: Insights into Microsemiological Semantics and Image Analysis Scalability.

    PubMed

    Racoceanu, Daniel; Capron, Frédérique

    2016-01-01

    Being able to provide a traceable and dynamic second opinion has become an ethical priority for patients and health care professionals in modern computer-aided medicine. In this perspective, a semantic cognitive virtual microscopy approach, the MICO project, was recently initiated, focusing on cognitive digital pathology. This approach supports the elaboration of pathology-compliant daily protocols dedicated to breast cancer grading, in particular mitotic counts and nuclear atypia. A proof of concept has thus been elaborated, and an extension of these approaches is now underway in a collaborative digital pathology framework, the FlexMIm project. As important milestones on the way to routine digital pathology, a series of pioneering international benchmarking initiatives have been launched for mitosis detection (MITOS), nuclear atypia grading (MITOS-ATYPIA) and glandular structure detection (GlaS), some of the fundamental grading components in diagnosis and prognosis. These initiatives make it possible to envisage a consolidated validation reference database for digital pathology in the very near future. This reference database will need coordinated efforts from all major teams working in this area worldwide, and it will certainly represent a critical bottleneck for the acceptance of all future imaging modules in clinical practice. In line with recent advances in molecular imaging and genetics, keeping the microscopic modality at the core of future digital systems in pathology is fundamental to ensure the acceptance of these new technologies, as well as for a deeper systemic, structured comprehension of the pathologies. After all, at the scale of routine whole-slide imaging (WSI; ∼0.22 µm/pixel), the microscopic image represents a structured 'genomic cluster', enabling a naturally structured support for integrative digital pathology approaches. In order to accelerate and structure the integration of this heterogeneous information, a major effort is and will continue to

  2. Semantic Web integration of Cheminformatics resources with the SADI framework

    PubMed Central

    2011-01-01

    Background The diversity and the largely independent nature of chemical research efforts over the past half century are, most likely, the major contributors to the current poor state of chemical computational resource and database interoperability. While open software for chemical format interconversion and database entry cross-linking have partially addressed database interoperability, computational resource integration is hindered by the great diversity of software interfaces, languages, access methods, and platforms, among others. This has, in turn, translated into limited reproducibility of computational experiments and the need for application-specific computational workflow construction and semi-automated enactment by human experts, especially where emerging interdisciplinary fields, such as systems chemistry, are pursued. Fortunately, the advent of the Semantic Web, and the very recent introduction of RESTful Semantic Web Services (SWS), may present an opportunity to integrate all of the existing computational and database resources in chemistry into a machine-understandable, unified system that draws on the entirety of the Semantic Web. Results We have created a prototype of Semantic Automated Discovery and Integration (SADI) framework SWS that exposes the QSAR descriptor functionality of the Chemistry Development Kit. Since each of these services has formal ontology-defined input and output classes, and each service consumes and produces RDF graphs, clients can automatically reason about the services and available reference information necessary to complete a given overall computational task specified through a simple SPARQL query. We demonstrate this capability by carrying out QSAR analysis backed by a simple formal ontology to determine whether a given molecule is drug-like. Further, we discuss parameter-based control over the execution of SADI SWS. 
Finally, we demonstrate the value of computational resource envelopment as SADI services through

  3. Semantic integration and syntactic planning in language production.

    PubMed

    Solomon, Eric S; Pearlmutter, Neal J

    2004-08-01

    Five experiments, using a subject-verb agreement error elicitation procedure, investigated syntactic planning processes in production. The experiments examined the influence of semantic integration--the degree to which phrases are tightly linked at the conceptual level--and contrasted two accounts of planning: serial stack-based systems and parallel activation-based systems. Serial stack-based systems rely on memory-shifting processes to coordinate ongoing planning. Memory-shifting should be easier for more integrated phrases, resulting in fewer errors. Parallel, activation-based systems, on the other hand, maintain multiple representations simultaneously in memory. More integrated phrases will be more likely to be processed together, resulting in increased interference and more errors. Participants completed stimuli like The drawing of/with the flower(s), which varied local noun number (flower(s)) and the relationship between the head (drawing) and local noun. In some constructions, the nouns were tightly integrated (e.g., of), whereas in others the relationship was looser (e.g., with, specifying accompaniment). In addition to the well-established local noun mismatch effect (more errors for plural than for singular local nouns), all experiments revealed larger mismatch error effects following tightly integrated stimuli. These results are compatible with parallel activation-based accounts and cannot be explained by serial, memory-shift-based accounts. The experiments and three meta-analyses also ruled out alternative accounts based on plausibility, argumenthood, conceptual number, clause packaging, or hierarchical feature-passing, reinforcing the general finding that error rates increase with degree of semantic integration. PMID:15193971

  4. Simulation of operating rules and discretional decisions using a fuzzy rule-based system integrated into a water resources management model

    NASA Astrophysics Data System (ADS)

    Macian-Sorribes, Hector; Pulido-Velazquez, Manuel

    2013-04-01

    Water resources systems are mostly operated using a set of pre-defined rules that usually reflect historical and institutional practice rather than an optimal allocation in terms of water use or economic benefit. These operating policies are commonly expressed as hedging rules, pack rules or zone-based operations, and simulation models can be used to test their performance under a wide range of hydrological and/or socio-economic hypotheses. Despite the high degree of acceptance and testing that these models have achieved, the actual operation of water resources systems rarely follows the pre-defined rules at all times, with consequent uncertainty about system performance. Real-world reservoir operation is very complex: it is affected by input uncertainty (imprecision in forecast inflows, seepage and evaporation losses, etc.) and filtered by the reservoir operator's experience and natural risk aversion, while considering the different physical and legal/institutional constraints in order to meet the different demands and system requirements. The aim of this work is to present a fuzzy logic approach to derive and assess the historical operation of a system. This framework uses a fuzzy rule-based system to reproduce pre-defined rules and also to match as closely as possible the actual decisions made by managers. Once built, the fuzzy rule-based system can be integrated into a water resources management model, making it possible to assess system performance at the basin scale. The case study of the Mijares basin (eastern Spain) is used to illustrate the method. A reservoir operating curve regulates the two main reservoir releases (operated conjunctively) with the purpose of guaranteeing a high reliability of supply to the traditional irrigation districts with higher priority (more senior demands that funded the reservoir construction). 
A fuzzy rule-based system has been created to reproduce the operating curve's performance, defining the system state (total
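
    A fuzzy rule base of this kind can be sketched with triangular membership functions over the storage state and weighted-average defuzzification. The membership breakpoints and release magnitudes below are illustrative, not the Mijares system's actual values:

    ```python
    def tri(x, a, b, c):
        # Triangular membership function with support [a, c] and peak at b.
        if x <= a or x >= c:
            return 0.0
        return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

    def release(storage_fraction):
        # Rules of the form: IF storage is low/medium/high
        # THEN release is small/medium/large.
        low = tri(storage_fraction, -0.5, 0.0, 0.5)
        med = tri(storage_fraction, 0.0, 0.5, 1.0)
        high = tri(storage_fraction, 0.5, 1.0, 1.5)
        # (consequent release in hm3/month, rule firing strength)
        rules = [(10.0, low), (25.0, med), (40.0, high)]
        total = sum(w for _, w in rules)
        # Weighted-average defuzzification of the fired rules.
        return sum(v * w for v, w in rules) / total if total else 0.0
    ```

    With storage at half capacity only the "medium" rule fires fully; intermediate states blend adjacent rules, which is what lets the system approximate an operator's discretional decisions rather than a crisp operating curve.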

  5. A Semantic Image Annotation Model to Enable Integrative Translational Research

    PubMed Central

    Rubin, Daniel L.; Mongkolwat, Pattanasak; Channin, David S.

    2009-01-01

    Integrating and relating images with clinical and molecular data is a crucial activity in translational research, but challenging because the information in images is not explicit in standard computer-accessible formats. We have developed an ontology-based representation of the semantic contents of radiology images called AIM (Annotation and Image Markup). AIM specifies the quantitative and qualitative content that researchers extract from images. The AIM ontology enables semantic image annotation and markup, specifying the entities and relations necessary to describe images. AIM annotations, represented as instances in the ontology, enable key use cases for images in translational research such as disease status assessment, query, and inter-observer variation analysis. AIM will enable ontology-based query and mining of images, and integration of images with data in other ontology-annotated bioinformatics databases. Our ultimate goal is to enable researchers to link images with related scientific data so they can learn the biological and physiological significance of the image content. PMID:21347180

  6. Automated integration of external databases: a knowledge-based approach to enhancing rule-based expert systems.

    PubMed Central

    Berman, L.; Cullen, M. R.; Miller, P. L.

    1992-01-01

    Expert system applications in the biomedical domain have long been hampered by the difficulty inherent in maintaining and extending large knowledge bases. We have developed a knowledge-based method for automatically augmenting such knowledge bases. The method consists of automatically integrating data contained in commercially available, external, on-line databases with data contained in an expert system's knowledge base. We have built a prototype system, named DBX, using this technique to augment an expert system's knowledge base as a decision support aid and as a bibliographic retrieval tool. In this paper, we describe this prototype system in detail, illustrate its use and discuss the lessons we have learned in its implementation. PMID:1482872

  7. Semantic Integration for Marine Science Interoperability Using Web Technologies

    NASA Astrophysics Data System (ADS)

    Rueda, C.; Bermudez, L.; Graybeal, J.; Isenor, A. W.

    2008-12-01

    The Marine Metadata Interoperability Project, MMI (http://marinemetadata.org), promotes the exchange, integration, and use of marine data through enhanced data publishing, discovery, documentation, and accessibility. A key effort is the definition of an Architectural Framework and Operational Concept for Semantic Interoperability (http://marinemetadata.org/sfc), which is complemented by the development of tools that realize critical use cases in semantic interoperability. In this presentation, we describe a set of such Semantic Web tools for important interoperability tasks, ranging from the creation of controlled vocabularies and the mapping of terms across multiple ontologies, to the online registration, storage, and search services needed to work with the ontologies (http://mmisw.org). This set of services uses Web standards and technologies, including the Resource Description Framework (RDF), the Web Ontology Language (OWL), Web services, and toolkits for Rich Internet Application development. We will describe the following components: MMI Ontology Registry: The MMI Ontology Registry and Repository provides registry and storage services for ontologies. Entries in the registry are associated with projects defined by the registered users. Sophisticated search functions, for example over metadata items and vocabulary terms, are also provided. Client applications can submit search requests using the W3C SPARQL Query Language for RDF. Voc2RDF: This component converts an ASCII comma-delimited set of terms and definitions into an RDF file. Voc2RDF facilitates the creation of controlled vocabularies through a simple form-based user interface. Created vocabularies and their descriptive metadata can be submitted to the MMI Ontology Registry for versioning and community access. VINE: The Vocabulary Integration Environment component allows the user to map vocabulary terms across multiple ontologies. Various relationships can be established, for example…
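
    As the abstract describes, Voc2RDF turns a comma-delimited list of terms and definitions into an RDF file. A minimal Python sketch of that kind of conversion follows; the example.org base URI and the choice of rdfs:label and skos:definition properties are illustrative assumptions, not the actual Voc2RDF output format.

```python
import csv
import io

def voc2ntriples(csv_text, base="http://example.org/vocab/"):
    """Convert 'term,definition' CSV rows into N-Triples-style lines.

    The base URI and the rdfs:label / skos:definition properties are
    illustrative choices, not the actual Voc2RDF output format.
    """
    lines = []
    for term, definition in csv.reader(io.StringIO(csv_text)):
        uri = "<%s%s>" % (base, term.strip().replace(" ", "_"))
        lines.append('%s <http://www.w3.org/2000/01/rdf-schema#label> "%s" .'
                     % (uri, term.strip()))
        lines.append('%s <http://www.w3.org/2004/02/skos/core#definition> "%s" .'
                     % (uri, definition.strip()))
    return "\n".join(lines)

vocab = ("salinity,Amount of dissolved salt in water\n"
         "temperature,Degree of heat in water")
print(voc2ntriples(vocab))  # two terms -> four triples
```

    Each input row yields one labelled term URI plus its definition, which is the general shape a form-based tool like Voc2RDF needs to produce before submitting a vocabulary to a registry.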

  8. Electrophysiological Evidence for Incremental Lexical-Semantic Integration in Auditory Compound Comprehension

    ERIC Educational Resources Information Center

    Koester, Dirk; Holle, Henning; Gunter, Thomas C.

    2009-01-01

    The present study investigated the time-course of semantic integration in auditory compound word processing. Compounding is a productive mechanism of word formation that is used frequently in many languages. Specifically, we examined whether semantic integration is incremental or is delayed until the head, the last constituent in German, is…

  9. Integrated semantics service platform for the Internet of Things: a case study of a smart office.

    PubMed

    Ryu, Minwoo; Kim, Jaeho; Yun, Jaeseok

    2015-01-01

    The Internet of Things (IoT) allows machines and devices in the world to connect with each other and generate a huge amount of data, which has a great potential to provide useful knowledge across service domains. Combining the context of IoT with semantic technologies, we can build integrated semantic systems to support semantic interoperability. In this paper, we propose an integrated semantic service platform (ISSP) to support ontological models in various IoT-based service domains of a smart city. In particular, we address three main problems for providing integrated semantic services together with IoT systems: semantic discovery, dynamic semantic representation, and semantic data repository for IoT resources. To show the feasibility of the ISSP, we develop a prototype service for a smart office using the ISSP, which can provide a preset, personalized office environment by interpreting user text input via a smartphone. We also discuss a scenario to show how the ISSP-based method would help build a smart city, where services in each service domain can discover and exploit IoT resources that are wanted across domains. We expect that our method could eventually contribute to providing people in a smart city with more integrated, comprehensive services based on semantic interoperability. PMID:25608216

  10. Integrated Semantics Service Platform for the Internet of Things: A Case Study of a Smart Office

    PubMed Central

    Ryu, Minwoo; Kim, Jaeho; Yun, Jaeseok

    2015-01-01

    The Internet of Things (IoT) allows machines and devices in the world to connect with each other and generate a huge amount of data, which has a great potential to provide useful knowledge across service domains. Combining the context of IoT with semantic technologies, we can build integrated semantic systems to support semantic interoperability. In this paper, we propose an integrated semantic service platform (ISSP) to support ontological models in various IoT-based service domains of a smart city. In particular, we address three main problems for providing integrated semantic services together with IoT systems: semantic discovery, dynamic semantic representation, and semantic data repository for IoT resources. To show the feasibility of the ISSP, we develop a prototype service for a smart office using the ISSP, which can provide a preset, personalized office environment by interpreting user text input via a smartphone. We also discuss a scenario to show how the ISSP-based method would help build a smart city, where services in each service domain can discover and exploit IoT resources that are wanted across domains. We expect that our method could eventually contribute to providing people in a smart city with more integrated, comprehensive services based on semantic interoperability. PMID:25608216

  11. A Semantic Web Management Model for Integrative Biomedical Informatics

    PubMed Central

    Deus, Helena F.; Stanislaus, Romesh; Veiga, Diogo F.; Behrens, Carmen; Wistuba, Ignacio I.; Minna, John D.; Garner, Harold R.; Swisher, Stephen G.; Roth, Jack A.; Correa, Arlene M.; Broom, Bradley; Coombes, Kevin; Chang, Allen; Vogel, Lynn H.; Almeida, Jonas S.

    2008-01-01

    Background Data, data everywhere. The diversity and magnitude of the data generated in the Life Sciences defies automated articulation among complementary efforts. The additional need in this field for managing property and access permissions compounds the difficulty very significantly. This is particularly the case when the integration involves multiple domains and disciplines, even more so when it includes clinical and high throughput molecular data. Methodology/Principal Findings The emergence of Semantic Web technologies brings the promise of meaningful interoperation between data and analysis resources. In this report we identify a core model for biomedical Knowledge Engineering applications and demonstrate how this new technology can be used to weave a management model where multiple intertwined data structures can be hosted and managed by multiple authorities in a distributed management infrastructure. Specifically, the demonstration is performed by linking data sources associated with the Lung Cancer SPORE awarded to The University of Texas MD Anderson Cancer Center at Houston and the Southwestern Medical Center at Dallas. A software prototype, available as open source at www.s3db.org, was developed and its proposed design has been made publicly available as an open source instrument for shared, distributed data management. Conclusions/Significance Semantic Web technologies have the potential to address the need for distributed and evolvable representations that are critical for systems biology and translational biomedical research. As this technology is incorporated into application development we can expect that both general-purpose productivity software and domain-specific software installed on our personal computers will become increasingly integrated with the relevant remote resources. In this scenario, the acquisition of a new dataset should automatically trigger the delegation of its analysis. PMID:18698353

  12. Distributed semantic networks and CLIPS

    NASA Technical Reports Server (NTRS)

    Snyder, James; Rodriguez, Tony

    1991-01-01

    Semantic networks of frames are commonly used as a method of reasoning in many problems. In most of these applications the semantic network exists as a single entity in a single process environment. Advances in workstation hardware provide support for more sophisticated applications involving multiple processes, interacting in a distributed environment. In these applications the semantic network may well be distributed over several concurrently executing tasks. This paper describes the design and implementation of a frame based, distributed semantic network in which frames are accessed both through C Language Integrated Production System (CLIPS) expert systems and procedural C++ language programs. The application area is a knowledge based, cooperative decision making model utilizing both rule based and procedural experts.

  13. Altered semantic integration in autism beyond language: a cross-modal event-related potentials study.

    PubMed

    Ribeiro, Tatiane C; Valasek, Claudia A; Minati, Ludovico; Boggio, Paulo S

    2013-05-29

    Autism spectrum disorders (ASDs) are characterized by impaired communication, particularly pragmatic and semantic language, resulting in verbal comprehension deficits. Semantic processing in these conditions has been studied extensively, but mostly limited to linguistic material. Emerging evidence, however, suggests that semantic integration deficits may extend beyond the verbal domain. Here, we explored cross-modal semantic integration using visual targets preceded by musical and linguistic cues. In particular, we recorded event-related potentials to evaluate whether the N400 and late positive potential (LPP) components, two widely studied electrophysiological markers of semantic processing, are differently sensitive to congruence in children with ASD than in typically developing children. Seven ASD patients and seven neurotypical participants matched by age, education and intelligence quotient provided usable data. Neuroelectric activity was recorded in response to visual targets that were related or unrelated to a preceding spoken sentence or musical excerpt. The N400 was sensitive to semantic congruence in the controls but not the patients, whereas the LPP showed a complementary pattern. These results suggest that semantic processing in ASD children is also altered in the context of musical and visual stimuli, and point to a functional decoupling between the generators of the N400 and LPP, which may indicate delayed semantic processing. These novel findings underline the importance of exploring semantic integration across multiple modalities in ASDs and provide motivation for further investigation in large clinical samples. PMID:23629689

  14. Disease Ontology: a backbone for disease semantic integration

    PubMed Central

    Schriml, Lynn Marie; Arze, Cesar; Nadendla, Suvarna; Chang, Yu-Wei Wayne; Mazaitis, Mark; Felix, Victor; Feng, Gang; Kibbe, Warren Alden

    2012-01-01

    The Disease Ontology (DO) database (http://disease-ontology.org) represents a comprehensive knowledge base of 8043 inherited, developmental and acquired human diseases (DO version 3, revision 2510). The DO web browser has been designed for speed, efficiency and robustness through the use of a graph database. Full-text contextual searching functionality using Lucene allows the querying of name, synonym, definition, DOID and cross-reference (xrefs) with complex Boolean search strings. The DO semantically integrates disease and medical vocabularies through extensive cross mapping and integration of MeSH, ICD, NCI's thesaurus, SNOMED CT and OMIM disease-specific terms and identifiers. The DO is utilized for disease annotation by major biomedical databases (e.g. Array Express, NIF, IEDB), as a standard representation of human disease in biomedical ontologies (e.g. IDO, Cell line ontology, NIFSTD ontology, Experimental Factor Ontology, Influenza Ontology), and as an ontological cross mappings resource between DO, MeSH and OMIM (e.g. GeneWiki). The DO project (http://diseaseontology.sf.net) has been incorporated into open source tools (e.g. Gene Answers, FunDO) to connect gene and disease biomedical data through the lens of human disease. The next iteration of the DO web browser will integrate DO's extended relations and logical definition representation along with these biomedical resource cross-mappings. PMID:22080554

  15. Semantic-JSON: a lightweight web service interface for Semantic Web contents integrating multiple life science databases.

    PubMed

    Kobayashi, Norio; Ishii, Manabu; Takahashi, Satoshi; Mochizuki, Yoshiki; Matsushima, Akihiro; Toyoda, Tetsuro

    2011-07-01

    Global cloud frameworks for bioinformatics research databases become huge and heterogeneous; solutions face various diametric challenges comprising cross-integration, retrieval, security and openness. To address this, as of March 2011 organizations including RIKEN published 192 mammalian, plant and protein life sciences databases having 8.2 million data records, integrated as Linked Open or Private Data (LOD/LPD) using SciNetS.org, the Scientists' Networking System. The huge quantity of linked data this database integration framework covers is based on the Semantic Web, where researchers collaborate by managing metadata across public and private databases in a secured data space. This outstripped the data query capacity of existing interface tools like SPARQL. Actual research also requires specialized tools for data analysis using raw original data. To solve these challenges, in December 2009 we developed the lightweight Semantic-JSON interface to access each fragment of linked and raw life sciences data securely under the control of programming languages popularly used by bioinformaticians such as Perl and Ruby. Researchers successfully used the interface across 28 million semantic relationships for biological applications including genome design, sequence processing, inference over phenotype databases, full-text search indexing and human-readable contents like ontology and LOD tree viewers. Semantic-JSON services of SciNetS.org are provided at http://semanticjson.org. PMID:21632604
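
    The Semantic-JSON interface described above serves fragments of linked and raw data as JSON so that scripting languages can traverse them without SPARQL. A rough Python analogue of walking such a fragment is sketched below; the field names ("id", "links", "predicate", "target") and the record identifiers are invented for illustration and do not reflect the actual SciNetS schema.

```python
import json

# A hypothetical linked-data fragment as a JSON document. The structure
# is invented for illustration, not the real Semantic-JSON response.
fragment = json.loads("""
{
  "id": "record:gene001",
  "label": "Example gene",
  "links": [
    {"predicate": "expressedIn",   "target": "record:tissue042"},
    {"predicate": "annotatedWith", "target": "ontology:GO_0008150"}
  ]
}
""")

def targets(fragment, predicate):
    """Follow every link in the fragment carrying the given predicate."""
    return [link["target"] for link in fragment["links"]
            if link["predicate"] == predicate]

print(targets(fragment, "annotatedWith"))
```

    The appeal of this style over a full SPARQL client is exactly what the abstract claims: each fragment is small, self-describing, and digestible with nothing more than a JSON parser.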

  16. Semantic-JSON: a lightweight web service interface for Semantic Web contents integrating multiple life science databases

    PubMed Central

    Kobayashi, Norio; Ishii, Manabu; Takahashi, Satoshi; Mochizuki, Yoshiki; Matsushima, Akihiro; Toyoda, Tetsuro

    2011-01-01

    Global cloud frameworks for bioinformatics research databases become huge and heterogeneous; solutions face various diametric challenges comprising cross-integration, retrieval, security and openness. To address this, as of March 2011 organizations including RIKEN published 192 mammalian, plant and protein life sciences databases having 8.2 million data records, integrated as Linked Open or Private Data (LOD/LPD) using SciNetS.org, the Scientists' Networking System. The huge quantity of linked data this database integration framework covers is based on the Semantic Web, where researchers collaborate by managing metadata across public and private databases in a secured data space. This outstripped the data query capacity of existing interface tools like SPARQL. Actual research also requires specialized tools for data analysis using raw original data. To solve these challenges, in December 2009 we developed the lightweight Semantic-JSON interface to access each fragment of linked and raw life sciences data securely under the control of programming languages popularly used by bioinformaticians such as Perl and Ruby. Researchers successfully used the interface across 28 million semantic relationships for biological applications including genome design, sequence processing, inference over phenotype databases, full-text search indexing and human-readable contents like ontology and LOD tree viewers. Semantic-JSON services of SciNetS.org are provided at http://semanticjson.org. PMID:21632604

  17. Separate Brain Circuits Support Integrative and Semantic Priming in the Human Language System.

    PubMed

    Feng, Gangyi; Chen, Qi; Zhu, Zude; Wang, Suiping

    2016-07-01

    Semantic priming is a crucial phenomenon to study the organization of semantic memory. A novel type of priming effect, integrative priming, has been identified behaviorally, whereby a prime word facilitates recognition of a target word when the 2 concepts can be combined to form a unitary representation. We used both functional and anatomical imaging approaches to investigate the neural substrates supporting such integrative priming, and compare them with those in semantic priming. Similar behavioral priming effects for both semantic (Bread-Cake) and integrative conditions (Cherry-Cake) were observed when compared with an unrelated condition. However, a clearly dissociated brain response was observed between these 2 types of priming. The semantic-priming effect was localized to the posterior superior temporal and middle temporal gyrus. In contrast, the integrative-priming effect localized to the left anterior inferior frontal gyrus and left anterior temporal cortices. Furthermore, fiber tractography showed that the integrative-priming regions were connected via uncinate fasciculus fiber bundle forming an integrative circuit, whereas the semantic-priming regions connected to the posterior frontal cortex via separated pathways. The results point to dissociable neural pathways underlying the 2 distinct types of priming, illuminating the neural circuitry organization of semantic representation and integration. PMID:26209843

  18. Semantic Integration and Age of Acquisition Effects in Code-Blend Comprehension.

    PubMed

    Giezen, Marcel R; Emmorey, Karen

    2016-04-01

    Semantic and lexical decision tasks were used to investigate the mechanisms underlying code-blend facilitation: the finding that hearing bimodal bilinguals comprehend signs in American Sign Language (ASL) and spoken English words more quickly when they are presented together simultaneously than when each is presented alone. More robust facilitation effects were observed for semantic decision than for lexical decision, suggesting that lexical integration of signs and words within a code-blend occurs primarily at the semantic level, rather than at the level of form. Early bilinguals exhibited greater facilitation effects than late bilinguals for English (the dominant language) in the semantic decision task, possibly because early bilinguals are better able to process early visual cues from ASL signs and use these to constrain English word recognition. Comprehension facilitation via semantic integration of words and signs is consistent with co-speech gesture research demonstrating facilitative effects of gesture integration on language comprehension. PMID:26657077

  19. Linked Metadata - lightweight semantics for data integration (Invited)

    NASA Astrophysics Data System (ADS)

    Hendler, J. A.

    2013-12-01

    The "Linked Open Data" cloud (http://linkeddata.org) is currently used to show how the linking of datasets, supported by SPARQL endpoints, is creating a growing set of linked data assets. This linked data space has been growing rapidly, and the last version collected is estimated to have had over 35 billion 'triples.' As impressive as this may sound, there is an inherent flaw in the way the linked data story is conceived. The idea is that all of the data is represented in a linked format (generally RDF) and applications will essentially query this cloud and provide mashup capabilities between the various kinds of data that are found. The view of linking in the cloud is fairly simple -links are provided by either shared URIs or by URIs that are asserted to be owl:sameAs. This view of the linking, which primarily focuses on shared objects and subjects in RDF's subject-predicate-object representation, misses a critical aspect of Semantic Web technology. Given triples such as * A:person1 foaf:knows A:person2 * B:person3 foaf:knows B:person4 * C:person5 foaf:name 'John Doe' this view would not consider them linked (barring other assertions) even though they share a common vocabulary. In fact, we get significant clues that there are commonalities in these data items from the shared namespaces and predicates, even if the traditional 'graph' view of RDF doesn't appear to join on these. Thus, it is the linking of the data descriptions, whether as metadata or other vocabularies, that provides the linking in these cases. This observation is crucial to scientific data integration where the size of the datasets, or even the individual relationships within them, can be quite large. (Note that this is not restricted to scientific data - search engines, social networks, and massive multiuser games also create huge amounts of data.) To convert all the triples into RDF and provide individual links is often unnecessary, and is both time and space intensive. 
Those looking to do on the
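
    Hendler's observation that the three foaf triples above are linked through their shared vocabulary, even though no subject or object URI is shared, can be illustrated with a small pure-Python sketch. The triples are taken from the abstract; the grouping logic is a toy illustration, not an actual Semantic Web tool.

```python
from collections import defaultdict

# The three triples from the abstract, as (subject, predicate, object).
triples = [
    ("A:person1", "foaf:knows", "A:person2"),
    ("B:person3", "foaf:knows", "B:person4"),
    ("C:person5", "foaf:name",  "John Doe"),
]

# Traditional graph-style linking: join on shared subject/object URIs.
# For these triples it finds nothing.
uri_joins = [t for t in triples for t2 in triples
             if t is not t2
             and (t[0] in (t2[0], t2[2]) or t[2] in (t2[0], t2[2]))]

# Vocabulary-level linking: group by the predicate's namespace instead.
by_namespace = defaultdict(list)
for s, p, o in triples:
    by_namespace[p.split(":")[0]].append((s, p, o))

print(len(uri_joins))             # 0: no shared URIs at all
print(len(by_namespace["foaf"]))  # 3: all three share the foaf vocabulary
```

    The namespace grouping surfaces the commonality that the subject/object join misses, which is the crux of the argument for linking data descriptions rather than only data items.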

  20. SemantEco: a semantically powered modular architecture for integrating distributed environmental and ecological data

    USGS Publications Warehouse

    Patton, Evan W.; Seyed, Patrice; Wang, Ping; Fu, Linyun; Dein, F. Joshua; Bristol, R. Sky; McGuinness, Deborah L.

    2014-01-01

    We aim to inform the development of decision support tools for resource managers who need to examine large complex ecosystems and make recommendations in the face of many tradeoffs and conflicting drivers. We take a semantic technology approach, leveraging background ontologies and the growing body of linked open data. In previous work, we designed and implemented a semantically enabled environmental monitoring framework called SemantEco and used it to build a water quality portal named SemantAqua. Our previous system included foundational ontologies to support environmental regulation violations and relevant human health effects. In this work, we discuss SemantEco’s new architecture that supports modular extensions and makes it easier to support additional domains. Our enhanced framework includes foundational ontologies to support modeling of wildlife observation and wildlife health impacts, thereby enabling deeper and broader support for more holistically examining the effects of environmental pollution on ecosystems. We conclude with a discussion of how, through the application of semantic technologies, modular designs will make it easier for resource managers to bring in new sources of data to support more complex use cases.

  1. An Integrated Model in E-Government Based on Semantic Web, Web Service and Intelligent Agent

    NASA Astrophysics Data System (ADS)

    Zhu, Hongtao; Su, Fangli

    An urgent problem in E-government service is to improve service efficiency by breaking down information islands while constructing integrated service systems. Web Service provides a set of standards for the provision of functionality over the Web, but Web Service descriptions are purely syntactic rather than semantic. The Semantic Web provides interoperability from the syntactic level to the semantic one, not only for human users but also for software agents. The Semantic Web and Intelligent Agents are highly complementary, and existing technologies have made their unification quite feasible, which brings a good opportunity to the development of E-government. Based on Semantic Web and Intelligent Agent technologies, an integrated service model of E-government is proposed in this paper.

  2. Integration of Sentence-Level Semantic Information in Parafovea: Evidence from the RSVP-Flanker Paradigm.

    PubMed

    Zhang, Wenjia; Li, Nan; Wang, Xiaoyue; Wang, Suiping

    2015-01-01

    During text reading, the parafoveal word is usually presented between 2° and 5° from the point of fixation. Whether the semantic information of parafoveal words can be processed during sentence reading is a critical and long-standing issue. Recently, studies using the RSVP-flanker paradigm have shown that an incongruent parafoveal word, presented as the right flanker, elicits a more negative N400 compared with a congruent parafoveal word. This suggests that the semantic information of parafoveal words can be extracted and integrated during sentence reading, because the N400 effect is a classical index of semantic integration. However, as most previous studies did not control the word-pair congruency of the parafoveal and foveal words presented in the critical triad, it is still unclear whether such integration happens at the sentence level or just at the word-pair level. The present study addressed this question by manipulating verbs in Chinese sentences to yield either a semantically congruent or semantically incongruent context for the critical noun. In particular, the interval between the critical nouns and verbs was controlled to be 4 or 5 characters. Thus, to detect the incongruence of the parafoveal noun, participants had to integrate it with the global sentential context. The results revealed that the N400 time-locked to the critical triads was more negative in incongruent than in congruent sentences, suggesting that parafoveal semantic information can be integrated at the sentence level during Chinese reading. PMID:26418230

  3. A semantic web framework to integrate cancer omics data with biological knowledge

    PubMed Central

    2012-01-01

    Background The RDF triple provides a simple linguistic means of describing limitless types of information. Triples can be flexibly combined into a unified data source we call a semantic model. Semantic models open new possibilities for the integration of variegated biological data. We use Semantic Web technology to explicate high throughput clinical data in the context of fundamental biological knowledge. We have extended Corvus, a data warehouse which provides a uniform interface to various forms of Omics data, by providing a SPARQL endpoint. With the querying and reasoning tools made possible by the Semantic Web, we were able to explore quantitative semantic models retrieved from Corvus in the light of systematic biological knowledge. Results For this paper, we merged semantic models containing genomic, transcriptomic and epigenomic data from melanoma samples with two semantic models of functional data - one containing Gene Ontology (GO) data, the other, regulatory networks constructed from transcription factor binding information. These two semantic models were created in an ad hoc manner but support a common interface for integration with the quantitative semantic models. Such combined semantic models allow us to pose significant translational medicine questions. Here, we study the interplay between a cell's molecular state and its response to anti-cancer therapy by exploring the resistance of cancer cells to Decitabine, a demethylating agent. Conclusions We were able to generate a testable hypothesis to explain how Decitabine fights cancer - namely, that it targets apoptosis-related gene promoters predominantly in Decitabine-sensitive cell lines, thus conveying its cytotoxic effect by activating the apoptosis pathway. Our research provides a framework whereby similar hypotheses can be developed easily. PMID:22373303
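
    The merging of triple-based semantic models that this abstract describes can be sketched with a toy triple store and a SPARQL-like pattern match. The gene and GO identifiers below are invented for illustration; Corvus itself exposes a real SPARQL endpoint rather than this in-memory stand-in.

```python
# Two toy "semantic models": omics measurements and ontology annotations.
# All identifiers are invented examples, not real Corvus or GO data.
omics = [
    ("gene:BRCA1", "expression", "high"),
    ("gene:TP53",  "expression", "low"),
]
ontology = [
    ("gene:BRCA1", "annotatedWith", "GO:apoptosis"),
    ("gene:TP53",  "annotatedWith", "GO:apoptosis"),
]

# Because both sources speak in triples, merging is just set union.
model = set(omics) | set(ontology)

def match(model, pattern):
    """SPARQL-like triple match: None in the pattern acts as a variable."""
    return [t for t in model
            if all(p is None or p == v for p, v in zip(pattern, t))]

# Which genes carry the apoptosis annotation?
hits = sorted(s for s, _, _ in
              match(model, (None, "annotatedWith", "GO:apoptosis")))
print(hits)  # ['gene:BRCA1', 'gene:TP53']
```

    The union-then-query pattern is the essence of the approach: once heterogeneous sources are expressed as triples, cross-source questions (here, linking expression data to an ontology term) need no per-source glue code.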

  4. ResearchIQ: Design of a Semantically Anchored Integrative Query Tool

    PubMed Central

    Lele, Omkar; Raje, Satyajeet; Yen, Po-Yin; Payne, Philip

    2015-01-01

    An important factor influencing the pace of research activity is the ability of researchers to discover and leverage heterogeneous resources. Usually, researcher profiles, laboratory equipment, data samples, clinical trials, and other research resources are stored in heterogeneous datasets in large organizations. Emergent semantic web technologies provide novel approaches to discover, annotate and consequently link such resources. In this manuscript, we describe the design of Research Integrative Query (ResearchIQ) tool, a semantically anchored resource discovery platform that facilitates semantic discovery of local and publically available data through a single web portal designed for researchers in the biomedical informatics domain within The Ohio State University. PMID:26306248

  5. Life-span differences in semantic integration of pictures and sentences in memory.

    PubMed

    Pezdek, K

    1980-09-01

    This study examined life-span developmental differences in spontaneous integration of semantically relevant material presented in pictures and sentences. 45 third graders, 45 sixth graders, 45 high school students, and 30 adults over 60 were presented a sequence of 24 pictures and sentences, followed by 24 intervening items. Each intervening item corresponded to, but was in the opposite modality from, one of the original items and was either semantically relevant or irrelevant to the corresponding original. In a "same-different" recognition test, data suggested that the sixth-grade and high school subjects semantically integrated original items with relevant intervening items that were in the opposite modality and made subsequent recognition responses on the basis of the integrated memory. Third graders and older adults, however, showed no evidence of spontaneous, cross-modality semantic integration. Further, increasing the temporal delay between presenting the to-be-integrated items, from 5 min to 1 day, decreased overall response sensitivity but did not alter the patterns of integration results. The findings are discussed in terms of age differences in the spontaneous use of strategies for effective memory processing, with the extreme age groups processing more formal characteristics of the stimuli in memory, and the middle 2 groups processing deeper, more semantic information. PMID:7418508

  6. The semantic metadatabase (SEMEDA): ontology based integration of federated molecular biological data sources.

    PubMed

    Köhler, Jacob; Schulze-Kremer, Steffen

    2002-01-01

    A system for "intelligent" semantic integration and querying of federated databases is being implemented using three main components: a component which enables SQL access to integrated databases by database federation (MARGBench), an ontology-based semantic metadatabase (SEMEDA) and an ontology-based query interface (SEMEDA-query). In this publication we explain and demonstrate the principles, architecture and use of SEMEDA. Since SEMEDA is implemented as a three-tiered web application, database providers can enter all relevant semantic and technical information about their databases themselves via a web browser. SEMEDA's collaborative ontology editing feature is not restricted to database integration, and might also be useful for ongoing ontology developments, such as the "Gene Ontology" [2]. SEMEDA can be found at http://www-bm.cs.uni-magdeburg.de/semeda/. We explain how this ontologically structured information can be used for semantic database integration. In addition, requirements for ontologies for molecular biological database integration are discussed and relevant existing ontologies are evaluated. We further discuss how ontologies and structured knowledge sources can be used in SEMEDA and whether they can be merged, supplemented or updated to meet the requirements for semantic database integration. PMID:12542408

  7. Integrative and semantic relations equally alleviate age-related associative memory deficits.

    PubMed

    Badham, Stephen P; Estes, Zachary; Maylor, Elizabeth A

    2012-03-01

    Two experiments compared effects of integrative and semantic relations between pairs of words on lexical and memory processes in old age. Integrative relations occur when two dissimilar and unassociated words are linked together to form a coherent phrase (e.g., horse-doctor). In Experiment 1, older adults completed a lexical-decision task where prime and target words were related either integratively or semantically. The two types of relation both facilitated responses compared to a baseline condition, demonstrating that priming can occur in older adults with minimal preexisting associations between primes and targets. In Experiment 2, young and older adults completed a cued recall task with integrative, semantic, and unrelated word pairs. Both integrative and semantic pairs showed significantly smaller age differences in associative memory compared to unrelated pairs. Integrative relations facilitated older adults' memory to a similar extent as semantic relations despite having few preexisting associations in memory. Integratability of stimuli is therefore a new factor that reduces associative deficits in older adults, most likely by supporting encoding and retrieval mechanisms. PMID:21639644

  8. Integrating Experiential and Distributional Data to Learn Semantic Representations

    ERIC Educational Resources Information Center

    Andrews, Mark; Vigliocco, Gabriella; Vinson, David

    2009-01-01

    The authors identify 2 major types of statistical data from which semantic representations can be learned. These are denoted as "experiential data" and "distributional data". Experiential data are derived by way of experience with the physical world and comprise the sensory-motor data obtained through sense receptors. Distributional data, by…

  9. Towards virtual knowledge broker services for semantic integration of life science literature and data sources.

    PubMed

    Harrow, Ian; Filsell, Wendy; Woollard, Peter; Dix, Ian; Braxenthaler, Michael; Gedye, Richard; Hoole, David; Kidd, Richard; Wilson, Jabe; Rebholz-Schuhmann, Dietrich

    2013-05-01

    Research in the life sciences requires ready access to primary data, derived information and relevant knowledge from a multitude of sources. Integration and interoperability of such resources are crucial for sharing content across research domains relevant to the life sciences. In this article we present a perspective review of data integration, with emphasis on a semantics-driven approach that pushes content into a shared infrastructure, reduces data redundancy and clarifies any inconsistencies. This enables much improved access to life science data from numerous primary sources. The Semantic Enrichment of the Scientific Literature (SESL) pilot project demonstrates the feasibility of using already available open Semantic Web standards and technologies to integrate public and proprietary data resources, which span structured and unstructured content. This has been accomplished through a precompetitive consortium, which provides a cost-effective approach for numerous stakeholders to work together to solve common problems. PMID:23247259

  10. Electrophysiological correlates of cross-linguistic semantic integration in hearing signers: N400 and LPC.

    PubMed

    Zachau, Swantje; Korpilahti, Pirjo; Hämäläinen, Jarmo A; Ervast, Leena; Heinänen, Kaisu; Suominen, Kalervo; Lehtihalmes, Matti; Leppänen, Paavo H T

    2014-07-01

    We explored semantic integration mechanisms in native and non-native hearing users of sign language and non-signing controls. Event-related brain potentials (ERPs) were recorded while participants performed a semantic decision task for priming lexeme pairs. Pairs were presented either within speech or across speech and sign language. Target-related ERP responses were subjected to principal component analyses (PCA), and the neurocognitive basis of semantic integration processes was assessed by analyzing the N400 and the late positive complex (LPC) components in response to spoken (auditory) and signed (visual) antonymic and unrelated targets. Semantic relatedness effects triggered across modalities would indicate a tight interconnection between the signers' two languages, similar to that described for spoken-language bilinguals. Remarkable structural similarity of the N400 and LPC components, with varying group differences between the spoken and signed targets, was found. The LPC was the dominant response. The controls' LPC differed from the LPC of the two signing groups: it was reduced for the auditory unrelated targets and was less frontal for all the visual targets. The visual LPC was more broadly distributed in native than non-native signers and was left-lateralized for the unrelated targets in the native hearing signers only. Semantic priming effects were found for the auditory N400 in all groups, but only native hearing signers revealed a clear N400 effect for the visual targets. Surprisingly, the non-native signers revealed no semantic processing effect for the visual targets in either the N400 or the LPC; instead they appeared to rely more on visual post-lexical analysis stages than native signers. We conclude that native and non-native signers employed different processing strategies to integrate signed and spoken semantic content. It appeared that the signers' semantic processing system was affected by group-specific factors like language

  11. Multimodal Feature Integration in the Angular Gyrus during Episodic and Semantic Retrieval

    PubMed Central

    Bonnici, Heidi M.; Richter, Franziska R.; Yazar, Yasemin

    2016-01-01

    Much evidence from distinct lines of investigation indicates the involvement of angular gyrus (AnG) in the retrieval of both episodic and semantic information, but the region's precise function and whether that function differs across episodic and semantic retrieval have yet to be determined. We used univariate and multivariate fMRI analysis methods to examine the role of AnG in multimodal feature integration during episodic and semantic retrieval. Human participants completed episodic and semantic memory tasks involving unimodal (auditory or visual) and multimodal (audio-visual) stimuli. Univariate analyses revealed the recruitment of functionally distinct AnG subregions during the retrieval of episodic and semantic information. Consistent with a role in multimodal feature integration during episodic retrieval, significantly greater AnG activity was observed during retrieval of integrated multimodal episodic memories compared with unimodal episodic memories. Multivariate classification analyses revealed that individual multimodal episodic memories could be differentiated in AnG, with classification accuracy tracking the vividness of participants' reported recollections, whereas distinct unimodal memories were represented in sensory association areas only. In contrast to episodic retrieval, AnG was engaged to a statistically equivalent degree during retrieval of unimodal and multimodal semantic memories, suggesting a distinct role for AnG during semantic retrieval. Modality-specific sensory association areas exhibited corresponding activity during both episodic and semantic retrieval, which mirrored the functional specialization of these regions during perception. The results offer new insights into the integrative processes subserved by AnG and its contribution to our subjective experience of remembering. SIGNIFICANCE STATEMENT Using univariate and multivariate fMRI analyses, we provide evidence that functionally distinct subregions of angular gyrus (An

  12. Semantic Elaboration through Integration: Hints Both Facilitate and Inform the Process

    ERIC Educational Resources Information Center

    Bauer, Patricia J.; Varga, Nicole L.; King, Jessica E.; Nolen, Ayla M.; White, Elizabeth A.

    2015-01-01

    Semantic knowledge can be extended in a variety of ways, including self-generation of new facts through integration of separate yet related episodes. We sought to promote integration and self-generation by providing "hints" to help 6-year-olds (Experiment 1) and 4-year-olds (Experiment 2) see the relevance of separate episodes to one…

  13. An MEG Study of Temporal Characteristics of Semantic Integration in Japanese Noun Phrases

    NASA Astrophysics Data System (ADS)

    Kiguchi, Hirohisa; Asakura, Nobuhiko

    Many studies of on-line comprehension of semantic violations have shown that the human sentence processor rapidly constructs a higher-order semantic interpretation of the sentence. What remains unclear, however, is the amount of time required to detect semantic anomalies while concatenating two words to form a phrase under very rapid stimulus presentation. We aimed to examine the time course of semantic integration in concatenating two words in phrase structure building, using magnetoencephalography (MEG). In the MEG experiment, subjects decided whether two words (a classifier and its corresponding noun), each presented for 66 ms, formed a semantically correct noun phrase. Half of the stimuli were matched pairs of classifiers and nouns. The other half were mismatched pairs of classifiers and nouns. In the analysis of MEG data, three primary peaks were found at approximately 25 ms (M1), 170 ms (M2) and 250 ms (M3) after the presentation of the target words. Only the M3 latencies were significantly affected by the stimulus conditions. Thus, the present results indicate that semantic integration in concatenating two words starts at approximately 250 ms.

  14. Rule-based simulation models

    NASA Technical Reports Server (NTRS)

    Nieten, Joseph L.; Seraphine, Kathleen M.

    1991-01-01

    Procedural modeling systems, rule-based modeling systems, and a method for converting a procedural model to a rule-based model are described. Simulation models are used to represent real-time engineering systems. A real-time system can be represented by a set of equations or functions connected so that they perform in the same manner as the actual system. Most modeling system languages are based on FORTRAN or some other procedural language; therefore, they must be enhanced with a reaction capability. Rule-based systems are reactive by definition. Once the engineering system has been decomposed into a set of calculations using only basic algebraic unary operations, a knowledge network of calculations and functions can be constructed. The knowledge network required by a rule-based system can be generated by a knowledge acquisition tool or a source-level compiler. The compiler would take an existing model source file, a syntax template, and a symbol table and generate the knowledge network. Thus, existing procedural models can be translated and executed by a rule-based system. Neural models can provide the high-capacity data manipulation required by the most complex real-time models.
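    The knowledge-network idea can be sketched in a few lines. This is a hypothetical miniature, not the authors' tool: each node recomputes reactively whenever one of its inputs changes, which is what makes a rule-based model reactive where a procedural one must be driven explicitly.

```python
# Illustrative knowledge network: calculation nodes wired by dependencies;
# setting an input value "fires" dependent calculations automatically.

class Node:
    def __init__(self, name, fn=None, inputs=()):
        self.name, self.fn, self.inputs = name, fn, list(inputs)
        self.dependents, self.value = [], None
        for inp in self.inputs:
            inp.dependents.append(self)

    def set(self, value):            # external stimulus, e.g. a sensor reading
        self.value = value
        self._propagate()

    def _propagate(self):            # reactive firing: push updates forward
        for dep in self.dependents:
            if all(i.value is not None for i in dep.inputs):
                dep.value = dep.fn(*(i.value for i in dep.inputs))
                dep._propagate()

# thrust = mass * accel; net thrust = thrust - drag
mass, accel, drag = Node("mass"), Node("accel"), Node("drag")
thrust = Node("thrust", lambda m, a: m * a, (mass, accel))
net = Node("net_thrust", lambda t, d: t - d, (thrust, drag))

mass.set(2.0); accel.set(3.0); drag.set(1.0)
print(net.value)   # 5.0
mass.set(4.0)      # a single change ripples through the network
print(net.value)   # 11.0
```

    A source-level compiler, as the abstract describes, would emit such a network automatically from the equations of an existing procedural model.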

  15. Exploration of SWRL Rule Bases through Visualization, Paraphrasing, and Categorization of Rules

    NASA Astrophysics Data System (ADS)

    Hassanpour, Saeed; O'Connor, Martin J.; Das, Amar K.

    Rule bases are increasingly being used as repositories of knowledge content on the Semantic Web. As the size and complexity of these rule bases increases, developers and end users need methods of rule abstraction to facilitate rule management. In this paper, we describe a rule abstraction method for Semantic Web Rule Language (SWRL) rules that is based on lexical analysis and a set of heuristics. Our method results in a tree data structure that we exploit in creating techniques to visualize, paraphrase, and categorize SWRL rules. We evaluate our approach by applying it to several biomedical ontologies that contain SWRL rules, and show how the results reveal rule patterns within the rule base. We have implemented our method as a plug-in tool for Protégé-OWL, the most widely used ontology modeling software for the Semantic Web. Our tool can allow users to rapidly explore content and patterns in SWRL rule bases, enabling their acquisition and management.
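    A toy version of the paraphrasing step might look as follows. The regex-based parsing and the phrasing heuristics are simplified stand-ins for the paper's lexical analysis, and the example rule is hypothetical:

```python
import re

# Toy SWRL-style rule paraphraser: split a rule into body and head atoms,
# then verbalize each atom with a simple heuristic.

ATOM = re.compile(r"(\w+)\(([^)]*)\)")

def parse_rule(rule):
    body, head = rule.split("->")
    to_atoms = lambda s: [(pred, [a.strip() for a in args.split(",")])
                          for pred, args in ATOM.findall(s)]
    return to_atoms(body), to_atoms(head)

def paraphrase(rule):
    body, head = parse_rule(rule)
    def atom_text(pred, args):
        if len(args) == 1:                      # class atom, e.g. Person(?p)
            return f"{args[0]} is a {pred}"
        return f"the {pred} of {args[0]} is {args[1]}"   # property atom
    conds = " and ".join(atom_text(p, a) for p, a in body)
    concl = " and ".join(atom_text(p, a) for p, a in head)
    return f"If {conds}, then {concl}."

rule = "Person(?p) ^ hasAge(?p, ?age) ^ greaterThan(?age, 17) -> Adult(?p)"
print(paraphrase(rule))
```

    The list of parsed atoms is essentially the tree data structure the abstract mentions, and the same structure could feed visualization or categorization instead of paraphrasing.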

  16. The Semantic Automated Discovery and Integration (SADI) Web service Design-Pattern, API and Reference Implementation

    PubMed Central

    2011-01-01

    Background The complexity and inter-related nature of biological data poses a difficult challenge for data and tool integration. There has been a proliferation of interoperability standards and projects over the past decade, none of which has been widely adopted by the bioinformatics community. Recent attempts have focused on the use of semantics to assist integration, and Semantic Web technologies are being welcomed by this community. Description SADI - Semantic Automated Discovery and Integration - is a lightweight set of fully standards-compliant Semantic Web service design patterns that simplify the publication of services of the type commonly found in bioinformatics and other scientific domains. Using Semantic Web technologies at every level of the Web services "stack", SADI services consume and produce instances of OWL Classes following a small number of very straightforward best-practices. In addition, we provide codebases that support these best-practices, and plug-in tools to popular developer and client software that dramatically simplify deployment of services by providers, and the discovery and utilization of those services by their consumers. Conclusions SADI Services are fully compliant with, and utilize only foundational Web standards; are simple to create and maintain for service providers; and can be discovered and utilized in a very intuitive way by biologist end-users. In addition, the SADI design patterns significantly improve the ability of software to automatically discover appropriate services based on user-needs, and automatically chain these into complex analytical workflows. We show that, when resources are exposed through SADI, data compliant with a given ontological model can be automatically gathered, or generated, from these distributed, non-coordinating resources - a behaviour we have not observed in any other Semantic system. 
Finally, we show that, using SADI, data dynamically generated from Web services can be explored in a manner

  17. Response-time optimization of rule-based expert systems

    NASA Astrophysics Data System (ADS)

    Zupan, Blaz; Cheng, Albert M. K.

    1994-03-01

    Real-time rule-based decision systems are embedded AI systems and must make critical decisions within stringent timing constraints. When the response time of the rule-based system is not acceptable, it has to be optimized to meet both timing and integrity constraints. This paper describes a novel approach to reducing the response time of rule-based expert systems. Our optimization method is twofold: the first phase constructs the reduced cycle-free finite state transition system corresponding to the input rule-based system, and the second phase further refines the constructed transition system using the simulated annealing approach. The method makes use of rule-base system decomposition, concurrency, and state-equivalency. The new and optimized system is synthesized from the derived transition system. Compared with the original system, the synthesized system requires fewer rule firings to reach the fixed point, is inherently stable, and has no redundant rules.
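    The second, simulated-annealing phase can be illustrated with a generic sketch. The cost function and the rule ordering used as the search state below are invented stand-ins for the paper's transition-system refinement:

```python
import math, random

# Generic simulated annealing: accept improving moves always, worsening
# moves with probability exp(-delta/temperature), cooling over time.

def anneal(state, cost, neighbor, temp0=1.0, cooling=0.95, steps=500, seed=0):
    rng = random.Random(seed)
    cur, cur_c = state, cost(state)
    best, best_c, t = cur, cur_c, temp0
    for _ in range(steps):
        cand = neighbor(cur, rng)
        c = cost(cand)
        if c < cur_c or rng.random() < math.exp((cur_c - c) / t):
            cur, cur_c = cand, c
            if c < best_c:
                best, best_c = cand, c
        t *= cooling
    return best, best_c

# Toy cost: rules should fire after their dependencies; each inversion
# stands in for extra response time.
deps = {("B", "A"), ("C", "B"), ("D", "C")}   # x depends on y

def cost(order):
    pos = {r: i for i, r in enumerate(order)}
    return sum(1 for x, y in deps if pos[x] < pos[y])

def neighbor(order, rng):
    i, j = rng.sample(range(len(order)), 2)
    order = list(order)
    order[i], order[j] = order[j], order[i]
    return tuple(order)

best, c = anneal(("D", "C", "B", "A"), cost, neighbor)
print(best, c)
```

    In the paper the annealed object is a transition system rather than an ordering, but the accept/cool loop is the same shape.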

  18. Modulation of semantic integration as a function of syntactic expectations: event-related brain potential evidence.

    PubMed

    Isel, Frédéric; Shen, Weilin

    2011-03-30

    This study investigated syntax-semantics interactions during spoken sentence comprehension. We showed that expectations of phrase-structure incongruencies, induced by the experimental instructions although not actually present in the sentences, were able to block the process of semantic integration. Although this process is usually associated with an N400 event-related brain potential component, here we found a P600, an event-related brain potential component thought to reflect syntactic revision. This finding lends support to neurophysiological models of sentence interpretation which postulate that the lexical-semantic integration of a given word can take place only when syntactic analysis has been successfully completed. PMID:21358555

  19. Novel word integration in the mental lexicon: evidence from unmasked and masked semantic priming.

    PubMed

    Tamminen, Jakke; Gaskell, M Gareth

    2013-01-01

    We sought to establish whether novel words can become integrated into existing semantic networks by teaching participants new meaningful words and then using these new words as primes in two semantic priming experiments, in which participants carried out a lexical decision task to familiar words. Importantly, at no point in training did the novel words co-occur with the familiar words that served as targets in the primed lexical decision task, allowing us to evaluate semantic priming in the absence of direct association. We found that familiar words were primed by the newly related novel words, both when the novel word prime was unmasked (experiment 1) and when it was masked (experiment 2), suggesting that the new words had been integrated into semantic memory. Furthermore, this integration was strongest after a 1-week delay and was independent of explicit recall of the novel word meanings: Forgetting of meanings did not attenuate priming. We argue that even after brief training, newly learned words become an integrated part of the adult mental lexicon rather than being episodically represented separately from the lexicon. PMID:23035665

  20. Enhancing Vocabulary Intervention for Kindergarten Students: Strategic Integration of Semantically Related and Embedded Word Review

    ERIC Educational Resources Information Center

    Zipoli, Richard P., Jr.; Coyne, Michael D.; McCoach, D. Betsy

    2011-01-01

    Two approaches to systematic word review were integrated into an 18-week program of extended vocabulary instruction with kindergarten students from three high-need urban schools. Words in the embedded and semantically related review conditions received systematic and distributed review. In the embedded review condition, brief word definitions were…

  1. Semantic Integration Processes at Different Levels of Syntactic Hierarchy during Sentence Comprehension: An ERP Study

    ERIC Educational Resources Information Center

    Zhou, Xiaolin; Jiang, Xiaoming; Ye, Zheng; Zhang, Yaxu; Lou, Kaiyang; Zhan, Weidong

    2010-01-01

    An event-related potential (ERP) study was conducted to investigate the temporal neural dynamics of semantic integration processes at different levels of syntactic hierarchy during Chinese sentence reading. In a hierarchical structure, "subject noun" + "verb" + "numeral" + "classifier" + "object noun," the object noun is constrained by selectional…

  2. An Approach to Formalizing Ontology Driven Semantic Integration: Concepts, Dimensions and Framework

    ERIC Educational Resources Information Center

    Gao, Wenlong

    2012-01-01

    The ontology approach has been accepted as a very promising approach to semantic integration today. However, because of the diversity of focuses and its various connections to other research domains, the core concepts, theoretical and technical approaches, and research areas of this domain still remain unclear. Such ambiguity makes it difficult to…

  3. Semantic integration of gene expression analysis tools and data sources using software connectors

    PubMed Central

    2013-01-01

    Background The study and analysis of gene expression measurements is the primary focus of functional genomics. Once expression data is available, biologists are faced with the task of extracting (new) knowledge associated to the underlying biological phenomenon. Most often, in order to perform this task, biologists execute a number of analysis activities on the available gene expression dataset rather than a single analysis activity. The integration of heterogeneous tools and data sources to create an integrated analysis environment represents a challenging and error-prone task. Semantic integration enables the assignment of unambiguous meanings to data shared among different applications in an integrated environment, allowing the exchange of data in a semantically consistent and meaningful way. This work aims at developing an ontology-based methodology for the semantic integration of gene expression analysis tools and data sources. The proposed methodology relies on software connectors to support not only the access to heterogeneous data sources but also the definition of transformation rules on exchanged data. Results We have studied the different challenges involved in the integration of computer systems and the role software connectors play in this task. We have also studied a number of gene expression technologies, analysis tools and related ontologies in order to devise basic integration scenarios and propose a reference ontology for the gene expression domain. Then, we have defined a number of activities and associated guidelines to prescribe how the development of connectors should be carried out. Finally, we have applied the proposed methodology in the construction of three different integration scenarios involving the use of different tools for the analysis of different types of gene expression data. 
Conclusions The proposed methodology facilitates the development of connectors capable of semantically integrating different gene expression analysis tools
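    The connector idea can be sketched minimally. The field names and transformation rules below are invented for the example; the methodology itself targets real gene expression tools and formats:

```python
# Illustrative "software connector": it wraps access to a heterogeneous
# source and applies declarative transformation rules so that exchanged
# data arrives in the integrated environment's vocabulary.

def make_connector(rules):
    """rules: target_field -> function(source_record) producing the value."""
    def connect(source_record):
        return {target: fn(source_record) for target, fn in rules.items()}
    return connect

# Hypothetical source: a microarray result with vendor-specific fields.
microarray_rules = {
    "gene_symbol": lambda r: r["probe_gene"].upper(),
    "expression_ratio": lambda r: 2 ** r["log2_ratio"],   # undo log2 transform
    "platform": lambda r: "microarray",
}

connector = make_connector(microarray_rules)
record = connector({"probe_gene": "brca1", "log2_ratio": 1.0})
print(record)
```

    A second connector with different rules could map, say, RNA-seq counts into the same target fields, which is what makes downstream analysis tools source-agnostic.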

  4. Integration and Querying of Genomic and Proteomic Semantic Annotations for Biomedical Knowledge Extraction.

    PubMed

    Masseroli, Marco; Canakoglu, Arif; Ceri, Stefano

    2016-01-01

    Understanding complex biological phenomena involves answering complex biomedical questions on multiple types of biomolecular information simultaneously, expressed through multiple genomic and proteomic semantic annotations scattered in many distributed and heterogeneous data sources; such heterogeneity and dispersion hamper the biologists' ability to ask global queries and perform global evaluations. To overcome this problem, we developed a software architecture to create and maintain a Genomic and Proteomic Knowledge Base (GPKB), which integrates several of the most relevant sources of such dispersed information (including Entrez Gene, UniProt, IntAct, Expasy Enzyme, GO, GOA, BioCyc, KEGG, Reactome, and OMIM). Our solution is general, as it uses a flexible, modular, and multilevel global data schema based on abstraction and generalization of integrated data features, and a set of automatic procedures for easing data integration and maintenance, even when the integrated data sources evolve in data content, structure, and number. These procedures also assure consistency, quality, and provenance tracking of all integrated data, and perform the semantic closure of the hierarchical relationships of the integrated biomedical ontologies. At http://www.bioinformatics.deib.polimi.it/GPKB/, a Web interface allows easy graphical composition of queries, even complex ones, on the knowledge base, also supporting semantic query expansion and comprehensive explorative search of the integrated data to better sustain biomedical knowledge extraction. PMID:27045824
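    The "semantic closure of hierarchical relationships" mentioned above amounts to materializing all ancestors of each ontology term, so that a query for a general term also retrieves data annotated with more specific descendants. A minimal sketch, with an invented GO-like fragment and invented gene annotations:

```python
from functools import lru_cache

# Tiny is-a hierarchy (child -> parents); illustrative, not real GO content.
is_a = {
    "apoptotic process": ("cell death",),
    "cell death": ("biological_process",),
    "autophagy": ("biological_process",),
}

@lru_cache(maxsize=None)
def ancestors(term):
    """Transitive closure of is_a for one term."""
    result = set()
    for parent in is_a.get(term, ()):
        result.add(parent)
        result |= ancestors(parent)
    return frozenset(result)

# Genes are annotated with specific terms; queries expand via the closure.
annotations = {"TP53": "apoptotic process", "ATG5": "autophagy"}

def genes_for(query_term):
    return sorted(g for g, t in annotations.items()
                  if t == query_term or query_term in ancestors(t))

print(genes_for("cell death"))           # TP53, via the closure
print(genes_for("biological_process"))   # both genes
```

    Precomputing this closure at integration time, as GPKB's procedures do, trades storage for much cheaper hierarchical queries at run time.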

  5. Integrating semantic information into multiple kernels for protein-protein interaction extraction from biomedical literatures.

    PubMed

    Li, Lishuang; Zhang, Panpan; Zheng, Tianfu; Zhang, Hongying; Jiang, Zhenchao; Huang, Degen

    2014-01-01

    Protein-Protein Interaction (PPI) extraction is an important task in biomedical information extraction. Presently, many machine learning methods for PPI extraction have achieved promising results; however, the performance is still not satisfactory. One reason is that semantic resources have been largely ignored. In this paper, we propose a multiple-kernel learning-based approach to extract PPIs, combining the feature-based kernel, tree kernel and semantic kernel. In particular, we extend the shortest path-enclosed tree kernel (SPT) by a dynamic extension strategy to capture richer syntactic information. Our semantic kernel calculates the protein-protein pair similarity and the context similarity based on two semantic resources: WordNet and Medical Subject Headings (MeSH). We evaluate our method with a Support Vector Machine (SVM) and achieve an F-score of 69.40% and an AUC of 92.00%, which show that our method outperforms most of the state-of-the-art systems by integrating semantic information. PMID:24622773
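    The kernel-combination idea can be illustrated in miniature. The two kernels and the weights below are invented; the paper's feature-based, tree and semantic kernels are far richer and are used inside an SVM:

```python
# Toy multiple-kernel combination. A non-negative weighted sum of valid
# kernels is itself a valid kernel, which is what lets an SVM consume the
# combined similarity directly.

def feature_kernel(x, y):      # overlap of surface features
    return len(set(x["words"]) & set(y["words"]))

def semantic_kernel(x, y):     # crude similarity from shared concept codes
    return len(set(x["concepts"]) & set(y["concepts"]))

def combined_kernel(x, y, w_feat=0.6, w_sem=0.4):
    return w_feat * feature_kernel(x, y) + w_sem * semantic_kernel(x, y)

# Invented sentence representations with MeSH-style concept codes.
a = {"words": {"binds", "protein"}, "concepts": {"D011506"}}
b = {"words": {"binds", "inhibits"}, "concepts": {"D011506"}}
c = {"words": {"weather"}, "concepts": {"D014908"}}

print(combined_kernel(a, b))   # related pair scores higher...
print(combined_kernel(a, c))   # ...than an unrelated pair
```

    The semantic term lets two sentences score as similar even with little lexical overlap, which is the gap the paper's semantic kernel closes.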

  7. Sharing Human-Generated Observations by Integrating HMI and the Semantic Sensor Web

    PubMed Central

    Sigüenza, Álvaro; Díaz-Pardo, David; Bernat, Jesús; Vancea, Vasile; Blanco, José Luis; Conejero, David; Gómez, Luis Hernández

    2012-01-01

    Current “Internet of Things” concepts point to a future where connected objects gather meaningful information about their environment and share it with other objects and people. In particular, objects embedding Human Machine Interaction (HMI), such as mobile devices and, increasingly, connected vehicles, home appliances, urban interactive infrastructures, etc., may not only be conceived as sources of sensor information, but, through interaction with their users, they can also produce highly valuable context-aware human-generated observations. We believe that the great promise offered by combining and sharing all of the different sources of information available can be realized through the integration of HMI and Semantic Sensor Web technologies. This paper presents a technological framework that harmonizes two of the most influential HMI and Sensor Web initiatives: the W3C's Multimodal Architecture and Interfaces (MMI) and the Open Geospatial Consortium (OGC) Sensor Web Enablement (SWE) with its semantic extension, respectively. Although the proposed framework is general enough to be applied in a variety of connected objects integrating HMI, a particular development is presented for a connected car scenario where drivers' observations about the traffic or their environment are shared across the Semantic Sensor Web. For implementation and evaluation purposes an on-board OSGi (Open Services Gateway Initiative) architecture was built, integrating several available HMI, Sensor Web and Semantic Web technologies. A technical performance test and a conceptual validation of the scenario with potential users are reported, with results suggesting the approach is sound. PMID:22778643

  9. Rule-Based Runtime Verification

    NASA Technical Reports Server (NTRS)

    Barringer, Howard; Goldberg, Allen; Havelund, Klaus; Sen, Koushik

    2003-01-01

    We present a rule-based framework for defining and implementing finite trace monitoring logics, including future and past time temporal logic, extended regular expressions, real-time logics, interval logics, forms of quantified temporal logics, and so on. Our logic, EAGLE, is implemented as a Java library and involves novel techniques for rule definition, manipulation and execution. Monitoring is done on a state-by-state basis, without storing the execution trace.
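    The state-by-state monitoring style can be sketched with a toy past-time property. EAGLE itself is a Java library with a much richer rule language; the request/acknowledge property here is illustrative:

```python
# Toy runtime monitor: checks "no 'ack' without a preceding 'request'" plus
# the end-of-trace obligation "every 'request' was acknowledged", keeping
# only summary state rather than storing the execution trace.

class Monitor:
    def __init__(self):
        self.pending = 0        # open requests; the trace itself is not kept
        self.violation = False

    def step(self, event):      # called once per observed state/event
        if event == "request":
            self.pending += 1
        elif event == "ack":
            if self.pending == 0:
                self.violation = True   # ack with no matching request
            else:
                self.pending -= 1

    def end_of_trace(self):
        return not self.violation and self.pending == 0

m = Monitor()
for event in ["request", "work", "ack", "request", "ack"]:
    m.step(event)
print(m.end_of_trace())    # True: well-formed trace

bad = Monitor()
bad.step("ack")
print(bad.end_of_trace())  # False: ack before any request
```

    Constant-size monitor state is the point: the logic is evaluated as events arrive, so arbitrarily long traces can be checked online.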

  10. Clinical evaluation of using semantic searching engine for radiological imaging services in RIS-integrated PACS

    NASA Astrophysics Data System (ADS)

    Ling, Tonghui; Zhang, Kai; Yang, Yuanyuan; Hua, Yanqing; Zhang, Jianguo

    2015-03-01

    We designed a semantic searching engine (SSE) for radiological imaging to search both reports and images in a RIS-integrated PACS environment. In this presentation, we present evaluation results of this SSE, showing how it impacts radiologists' reporting behavior for different kinds of examinations and how it improves the performance of retrieval and usage of historical images in RIS-integrated PACS.

  11. Addressing the Challenges of Multi-Domain Data Integration with the SemantEco Framework

    NASA Astrophysics Data System (ADS)

    Patton, E. W.; Seyed, P.; McGuinness, D. L.

    2013-12-01

    Data integration across multiple domains will continue to be a challenge with the proliferation of big data in the sciences. Data origination issues and how data are manipulated are critical to enable scientists to understand and consume disparate datasets as research becomes more multidisciplinary. We present the SemantEco framework as an exemplar for designing an integrative portal for data discovery, exploration, and interpretation that uses best practice W3C Recommendations. We use the Resource Description Framework (RDF) with extensible ontologies described in the Web Ontology Language (OWL) to provide graph-based data representation. Furthermore, SemantEco ingests data via the software package csv2rdf4lod, which generates data provenance using the W3C provenance recommendation (PROV). Our presentation will discuss benefits and challenges of semantic integration, their effect on runtime performance, and how the SemantEco framework assisted in identifying performance issues and improved query performance across multiple domains by an order of magnitude. SemantEco benefits from a semantic approach that provides an 'open world', which allows data to incrementally change just as it does in the real world. SemantEco modules may load new ontologies and data using the W3C's SPARQL Protocol and RDF Query Language via HTTP. Modules may also provide user interface elements for applications and query capabilities to support new use cases. Modules can associate with domains, which are first-class objects in SemantEco. This enables SemantEco to perform integration and reasoning both within and across domains on module-provided data. The SemantEco framework has been used to construct a web portal for environmental and ecological data. The portal includes water and air quality data from the U.S. Geological Survey (USGS) and Environmental Protection Agency (EPA) and species observation counts for birds and fish from the Avian Knowledge Network and the Santa Barbara Long Term
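The integration pattern this record describes, independent modules contributing triples to one shared graph that is then queried across domains, can be sketched with a toy in-memory triple store. All identifiers, predicates, and values below are invented for illustration; the real portal stores RDF and queries it via SPARQL.

```python
# Toy illustration of cross-domain integration over merged RDF-style triples.
# The predicates, site IDs, and values are hypothetical, not real USGS/EPA
# data or the SemantEco API.

def match(graph, pattern):
    """Return triples matching a single (s, p, o) pattern; None is a wildcard."""
    s, p, o = pattern
    return [t for t in graph
            if (s is None or t[0] == s)
            and (p is None or t[1] == p)
            and (o is None or t[2] == o)]

# Two "modules" contribute triples about the same monitoring site.
water_module = [
    ("site:42", "rdf:type", "eco:MonitoringSite"),
    ("site:42", "water:nitrateMgPerL", 7.1),
]
species_module = [
    ("site:42", "bird:observationCount", 130),
]

# Loading a module is just a graph union under the open-world model.
graph = water_module + species_module

# A cross-domain query: water chemistry and bird counts at the same site.
site = match(graph, (None, "rdf:type", "eco:MonitoringSite"))[0][0]
nitrate = match(graph, (site, "water:nitrateMgPerL", None))[0][2]
birds = match(graph, (site, "bird:observationCount", None))[0][2]
print(site, nitrate, birds)
```

The point of the sketch is that neither module needs to know about the other: the query joins them only through the shared site identifier.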

  12. An architecture for rule based system explanation

    NASA Technical Reports Server (NTRS)

    Fennel, T. R.; Johannes, James D.

    1990-01-01

    A system architecture is presented which incorporates both graphics and text into explanations provided by rule-based expert systems. This architecture facilitates explanation of the knowledge base content, the control strategies employed by the system, and the conclusions made by the system. The suggested approach combines hypermedia and inference engine capabilities. Advantages include: closer integration of the user interface, explanation system, and knowledge base; the ability to embed links to deeper knowledge underlying the compiled knowledge used in the knowledge base; and more direct control of explanation depth and duration by the user. User models are suggested to control the type, amount, and order of information presented.

  13. Applying Semantic Web Services and Wireless Sensor Networks for System Integration

    NASA Astrophysics Data System (ADS)

    Berkenbrock, Gian Ricardo; Hirata, Celso Massaki; de Oliveira Júnior, Frederico Guilherme Álvares; de Oliveira, José Maria Parente

    In environments like factories, buildings, and homes, automation services tend to change often during their lifetime. Changes concern business rules, process optimization, cost reduction, and so on. It is important to provide a smooth and straightforward way to deal with these changes so that they can be handled quickly and at low cost. Some prominent solutions use the flexibility of Wireless Sensor Networks and the meaningful descriptions of Semantic Web Services to provide service integration. In this work, we give an overview of current solutions for machinery integration that combine both technologies, as well as a discussion of some perspectives and open issues in applying Wireless Sensor Networks and Semantic Web Services to the integration of automation services.

  14. Exploring the role of the posterior middle temporal gyrus in semantic cognition: Integration of anterior temporal lobe with executive processes.

    PubMed

    Davey, James; Thompson, Hannah E; Hallam, Glyn; Karapanagiotidis, Theodoros; Murphy, Charlotte; De Caso, Irene; Krieger-Redwood, Katya; Bernhardt, Boris C; Smallwood, Jonathan; Jefferies, Elizabeth

    2016-08-15

    Making sense of the world around us depends upon selectively retrieving information relevant to our current goal or context. However, it is unclear whether selective semantic retrieval relies exclusively on general control mechanisms recruited in demanding non-semantic tasks, or instead on systems specialised for the control of meaning. One hypothesis is that the left posterior middle temporal gyrus (pMTG) is important in the controlled retrieval of semantic (not non-semantic) information; however this view remains controversial since a parallel literature links this site to event and relational semantics. In a functional neuroimaging study, we demonstrated that an area of pMTG implicated in semantic control by a recent meta-analysis was activated in a conjunction of (i) semantic association over size judgements and (ii) action over colour feature matching. Under these circumstances the same region showed functional coupling with the inferior frontal gyrus - another crucial site for semantic control. Structural and functional connectivity analyses demonstrated that this site is at the nexus of networks recruited in automatic semantic processing (the default mode network) and executively demanding tasks (the multiple-demand network). Moreover, in both task and task-free contexts, pMTG exhibited functional properties that were more similar to ventral parts of inferior frontal cortex, implicated in controlled semantic retrieval, than more dorsal inferior frontal sulcus, implicated in domain-general control. Finally, the pMTG region was functionally correlated at rest with other regions implicated in control-demanding semantic tasks, including inferior frontal gyrus and intraparietal sulcus. We suggest that pMTG may play a crucial role within a large-scale network that allows the integration of automatic retrieval in the default mode network with executively-demanding goal-oriented cognition, and that this could support our ability to understand actions and non

  15. Famous face identification in temporal lobe epilepsy: support for a multimodal integration model of semantic memory.

    PubMed

    Drane, Daniel L; Ojemann, Jeffrey G; Phatak, Vaishali; Loring, David W; Gross, Robert E; Hebb, Adam O; Silbergeld, Daniel L; Miller, John W; Voets, Natalie L; Saindane, Amit M; Barsalou, Lawrence; Meador, Kimford J; Ojemann, George A; Tranel, Daniel

    2013-06-01

    This study aims to demonstrate that the left and right anterior temporal lobes (ATLs) perform critical but unique roles in famous face identification, with damage to either leading to differing deficit patterns reflecting decreased access to lexical or semantic concepts but not their degradation. Famous face identification was studied in 22 presurgical and 14 postsurgical temporal lobe epilepsy (TLE) patients and 20 healthy comparison subjects using free recall and multiple choice (MC) paradigms. Right TLE patients exhibited presurgical deficits in famous face recognition, and postsurgical deficits in both famous face recognition and familiarity judgments. However, they did not exhibit any problems with naming before or after surgery. In contrast, left TLE patients demonstrated both pre- and postsurgical deficits in famous face naming but no significant deficits in recognition or familiarity. Double dissociations in performance between groups were alleviated by altering task demands. Postsurgical right TLE patients provided with MC options correctly identified greater than 70% of famous faces they initially rated as unfamiliar. Left TLE patients accurately chose the name for nearly all famous faces they recognized (based on their verbal description) but initially failed to name, although they tended to rapidly lose access to this name. We believe alterations in task demands activate alternative routes to semantic and lexical networks, demonstrating that unique pathways to such stored information exist, and suggesting a different role for each ATL in identifying visually presented famous faces. The right ATL appears to play a fundamental role in accessing semantic information from a visual route, with the left ATL serving to link semantic information to the language system to produce a specific name. 
These findings challenge several assumptions underlying amodal models of semantic memory, and provide support for the integrated multimodal theories of semantic memory

  16. Famous face identification in temporal lobe epilepsy: Support for a multimodal integration model of semantic memory

    PubMed Central

    Drane, Daniel L.; Ojemann, Jeffrey G.; Phatak, Vaishali; Loring, David W.; Gross, Robert E.; Hebb, Adam O.; Silbergeld, Daniel L.; Miller, John W.; Voets, Natalie L.; Saindane, Amit M.; Barsalou, Lawrence; Meador, Kimford J.; Ojemann, George A.; Tranel, Daniel

    2012-01-01

    This study aims to demonstrate that the left and right anterior temporal lobes (ATLs) perform critical but unique roles in famous face identification, with damage to either leading to differing deficit patterns reflecting decreased access to lexical or semantic concepts but not their degradation. Famous face identification was studied in 22 presurgical and 14 postsurgical temporal lobe epilepsy (TLE) patients and 20 healthy comparison subjects using free recall and multiple choice (MC) paradigms. Right TLE patients exhibited presurgical deficits in famous face recognition, and postsurgical deficits in both famous face recognition and familiarity judgments. However, they did not exhibit any problems with naming before or after surgery. In contrast, left TLE patients demonstrated both pre- and postsurgical deficits in famous face naming but no significant deficits in recognition or familiarity. Double dissociations in performance between groups were alleviated by altering task demands. Postsurgical right TLE patients provided with MC options correctly identified greater than 70% of famous faces they initially rated as unfamiliar. Left TLE patients accurately chose the name for nearly all famous faces they recognized (based on their verbal description) but initially failed to name, although they tended to rapidly lose access to this name. We believe alterations in task demands activate alternative routes to semantic and lexical networks, demonstrating that unique pathways to such stored information exist, and suggesting a different role for each ATL in identifying visually presented famous faces. The right ATL appears to play a fundamental role in accessing semantic information from a visual route, with the left ATL serving to link semantic information to the language system to produce a specific name.
These findings challenge several assumptions underlying amodal models of semantic memory, and provide support for the integrated multimodal theories of semantic memory

  17. Construction of an Ortholog Database Using the Semantic Web Technology for Integrative Analysis of Genomic Data

    PubMed Central

    Chiba, Hirokazu; Nishide, Hiroyo; Uchiyama, Ikuo

    2015-01-01

    Recently, various types of biological data, including genomic sequences, have been rapidly accumulating. To discover biological knowledge from such growing heterogeneous data, a flexible framework for data integration is necessary. Ortholog information is a central resource for interlinking corresponding genes among different organisms, and the Semantic Web provides a key technology for the flexible integration of heterogeneous data. We have constructed an ortholog database using the Semantic Web technology, aiming at the integration of numerous genomic data and various types of biological information. To formalize the structure of the ortholog information in the Semantic Web, we have constructed the Ortholog Ontology (OrthO). While the OrthO is a compact ontology for general use, it is designed to be extended to the description of database-specific concepts. On the basis of OrthO, we described the ortholog information from our Microbial Genome Database for Comparative Analysis (MBGD) in the form of Resource Description Framework (RDF) and made it available through the SPARQL endpoint, which accepts arbitrary queries specified by users. In this framework based on the OrthO, the biological data of different organisms can be integrated using the ortholog information as a hub. Besides, the ortholog information from different data sources can be compared with each other using the OrthO as a shared ontology. Here we show some examples demonstrating that the ortholog information described in RDF can be used to link various biological data such as taxonomy information and Gene Ontology. Thus, the ortholog database using the Semantic Web technology can contribute to biological knowledge discovery through integrative data analysis. PMID:25875762
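The hub role of ortholog information described above can be sketched with plain Python sets standing in for RDF triples: a GO annotation known for a gene in one organism becomes reachable from its ortholog in another organism through the shared group. The gene, group, and GO identifiers are made up; MBGD's actual data uses the OrthO vocabulary and is queried via SPARQL.

```python
# Sketch of ortholog groups as an integration hub, in the spirit of
# OrthO/MBGD. All identifiers below are invented for illustration.

ortholog_groups = {
    "og:0001": {"eco:geneA", "bsu:geneB"},  # one group, two organisms
}
go_annotation = {
    "eco:geneA": {"GO:0006096"},  # annotation known only for the E. coli gene
}

def transfer_annotations(gene):
    """Collect GO terms annotated on any ortholog of `gene`."""
    terms = set()
    for members in ortholog_groups.values():
        if gene in members:
            for member in members:
                terms |= go_annotation.get(member, set())
    return terms

# The B. subtilis gene reaches the annotation through the ortholog hub.
print(transfer_annotations("bsu:geneB"))
```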

  18. Construction of an ortholog database using the semantic web technology for integrative analysis of genomic data.

    PubMed

    Chiba, Hirokazu; Nishide, Hiroyo; Uchiyama, Ikuo

    2015-01-01

    Recently, various types of biological data, including genomic sequences, have been rapidly accumulating. To discover biological knowledge from such growing heterogeneous data, a flexible framework for data integration is necessary. Ortholog information is a central resource for interlinking corresponding genes among different organisms, and the Semantic Web provides a key technology for the flexible integration of heterogeneous data. We have constructed an ortholog database using the Semantic Web technology, aiming at the integration of numerous genomic data and various types of biological information. To formalize the structure of the ortholog information in the Semantic Web, we have constructed the Ortholog Ontology (OrthO). While the OrthO is a compact ontology for general use, it is designed to be extended to the description of database-specific concepts. On the basis of OrthO, we described the ortholog information from our Microbial Genome Database for Comparative Analysis (MBGD) in the form of Resource Description Framework (RDF) and made it available through the SPARQL endpoint, which accepts arbitrary queries specified by users. In this framework based on the OrthO, the biological data of different organisms can be integrated using the ortholog information as a hub. Besides, the ortholog information from different data sources can be compared with each other using the OrthO as a shared ontology. Here we show some examples demonstrating that the ortholog information described in RDF can be used to link various biological data such as taxonomy information and Gene Ontology. Thus, the ortholog database using the Semantic Web technology can contribute to biological knowledge discovery through integrative data analysis. PMID:25875762

  19. BIM-GIS Integrated Geospatial Information Model Using Semantic Web and RDF Graphs

    NASA Astrophysics Data System (ADS)

    Hor, A.-H.; Jadidi, A.; Sohn, G.

    2016-06-01

    In recent years, 3D virtual indoor/outdoor urban modelling has become a key spatial information framework for many civil and engineering applications such as evacuation planning, emergency and facility management. Accomplishing such sophisticated decision tasks creates a large demand for building multi-scale and multi-sourced 3D urban models. Currently, Building Information Model (BIM) and Geographical Information Systems (GIS) are broadly used as the modelling sources. However, sharing and exchanging information between the two modelling domains is still a huge challenge: existing syntactic and semantic approaches do not fully support the exchange of rich semantic and geometric information from BIM into GIS or vice versa. This paper proposes a novel approach for integrating BIM and GIS using semantic web technologies and Resource Description Framework (RDF) graphs. The novelty of the proposed solution comes from the benefits of integrating BIM and GIS technologies into one unified model, the so-called Integrated Geospatial Information Model (IGIM). The proposed approach consists of three main modules: construction of the BIM-RDF and GIS-RDF graphs, integration of the two RDF graphs, and querying of information from the IGIM-RDF graph using SPARQL. The IGIM answers queries over both the BIM and GIS RDF graphs, yielding a semantically integrated model with entities representing both BIM classes and GIS feature objects with respect to the target client application. The linkage between BIM-RDF and GIS-RDF is achieved through SPARQL endpoints and defined by a query over a set of datasets and entity classes with complementary properties, relationships and geometries. To validate the proposed approach and its performance, a case study was conducted using the IGIM system design.
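The graph-union idea behind IGIM can be illustrated with a minimal sketch: two lists of triples stand in for the BIM-RDF and GIS-RDF graphs, and one query spans both. Entity names and predicates are invented for illustration; the actual system links the graphs through SPARQL endpoints over real ontologies.

```python
# Minimal sketch of the IGIM idea: union two RDF-style graphs (BIM and GIS)
# and answer a query spanning both. All names here are hypothetical.

bim_rdf = [
    ("bim:Wall_07", "bim:partOf", "bldg:B1"),
    ("bim:Wall_07", "bim:fireRating", "2h"),
]
gis_rdf = [
    ("bldg:B1", "gis:withinZone", "zone:FloodRisk"),
]

igim = bim_rdf + gis_rdf  # the integrated model is a graph union

# Query: fire ratings of walls in buildings that lie in a flood-risk zone.
risky_buildings = {s for (s, p, o) in igim
                   if p == "gis:withinZone" and o == "zone:FloodRisk"}
ratings = [(s, o2)
           for (s, p, o) in igim
           if p == "bim:partOf" and o in risky_buildings
           for (s2, p2, o2) in igim
           if s2 == s and p2 == "bim:fireRating"]
print(ratings)
```

The join crosses the domain boundary through the shared building identifier, which is exactly the role the BIM-RDF/GIS-RDF linkage plays in the paper.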

  20. Toward Open Science at the European Scale: Geospatial Semantic Array Programming for Integrated Environmental Modelling

    NASA Astrophysics Data System (ADS)

    de Rigo, Daniele; Corti, Paolo; Caudullo, Giovanni; McInerney, Daniel; Di Leo, Margherita; San-Miguel-Ayanz, Jesús

    2013-04-01

    of the science-policy interface, INRMM should be able to provide citizens and policy-makers with a clear, accurate understanding of the implications of the technical apparatus on collective environmental decision-making [1]. Complexity, of course, should not serve as an excuse for obscurity [27-29]. Geospatial Semantic Array Programming. Concise array-based mathematical formulation and implementation (with array programming tools, see (b) ) have proved helpful in supporting and mitigating the complexity of WSTMe [40-47] when complemented with generalized modularization and terse array-oriented semantic constraints. This defines the paradigm of Semantic Array Programming (SemAP) [35,36], where semantic transparency also implies free software use (although black-boxes [12] - e.g. legacy code - might easily be semantically interfaced). A new approach for WSTMe has emerged by formalizing unorganized best practices and experience-driven informal patterns. The approach introduces a lightweight (non-intrusive) integration of SemAP and geospatial tools (c) - called Geospatial Semantic Array Programming (GeoSemAP). GeoSemAP (d) exploits the joint semantics provided by SemAP and geospatial tools to split a complex D-TM into logical blocks which are easier to check by means of mathematical array-based and geospatial constraints. Those constraints take the form of precondition, invariant and postcondition semantic checks. This way, even complex WSTMe may be described as the composition of simpler GeoSemAP blocks, each of them structured as (d). GeoSemAP allows intermediate data and information layers to be more easily and formally described semantically, so as to increase fault-tolerance [17], transparency and reproducibility of WSTMe. This might also help to better communicate part of the policy-relevant knowledge, often difficult to transfer from technical WSTMe to the science-policy interface [1,15]. References de Rigo, D., 2013. Behind the horizon of reproducible
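The precondition/invariant/postcondition semantic checks that GeoSemAP wraps around each block can be sketched in a few lines. The check helper and the toy computation below are illustrative only, not the authors' actual constraint language.

```python
# Toy sketch of SemAP-style semantic checks around an array-based block.
# The function and check names are invented for illustration.

def check(condition, message):
    if not condition:
        raise ValueError(f"semantic check failed: {message}")

def normalise_map(values):
    # Preconditions: a non-empty array of non-negative values.
    check(len(values) > 0, "input must be non-empty")
    check(all(v >= 0 for v in values), "input values must be non-negative")
    total = sum(values)
    check(total > 0, "input must contain a positive value")  # invariant
    result = [v / total for v in values]
    # Postcondition: output is a proportion map summing to one.
    check(abs(sum(result) - 1.0) < 1e-9, "output must sum to one")
    return result

print(normalise_map([2, 3, 5]))
```

A block guarded this way fails loudly at its boundary instead of silently propagating semantically invalid layers to the next block, which is the fault-tolerance property the abstract refers to.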

  1. An integrative approach for measuring semantic similarities using gene ontology

    PubMed Central

    2014-01-01

    Background Gene Ontology (GO) provides rich information and a convenient way to study gene functional similarity, which has been successfully used in various applications. However, the existing GO-based similarity measures have limited power because each considers only a subset of the information in GO. An appropriate integration of the existing measures that takes more of this information into account is therefore needed. Results We propose a novel integrative measure called InteGO2 that automatically selects appropriate seed measures and then integrates them using a metaheuristic search method. The experiment results show that InteGO2 significantly improves the performance of gene similarity measurement in human, Arabidopsis and yeast on both the molecular function and biological process GO categories. Conclusions InteGO2 computes gene-to-gene similarities more accurately than the tested existing measures and has high robustness. The supplementary document and software are available at http://mlg.hit.edu.cn:8082/. PMID:25559943
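As a rough illustration of integrating seed measures, the sketch below averages several normalised similarity scores with optional weights. InteGO2 itself selects and combines seed measures via a metaheuristic search, so this shows only the general shape of the idea; the measure names are examples.

```python
# Hedged sketch of combining several GO similarity measures into one score.
# This weighted average is a stand-in, not InteGO2's actual algorithm.

def integrate(scores_by_measure, weights=None):
    """scores_by_measure: {measure_name: score in [0, 1]} for one gene pair."""
    if weights is None:
        weights = {m: 1.0 for m in scores_by_measure}
    total = sum(weights[m] for m in scores_by_measure)
    return sum(weights[m] * s for m, s in scores_by_measure.items()) / total

# Three seed measures scoring the same gene pair.
pair_scores = {"resnik": 0.8, "wang": 0.6, "simGIC": 0.7}
print(round(integrate(pair_scores), 3))
```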

  2. A case study of data integration for aquatic resources using semantic web technologies

    USGS Publications Warehouse

    Gordon, Janice M.; Chkhenkeli, Nina; Govoni, David L.; Lightsom, Frances L.; Ostroff, Andrea; Schweitzer, Peter N.; Thongsavanh, Phethala; Varanka, Dalia E.; Zednik, Stephan

    2015-01-01

    Use cases, information modeling, and linked data techniques are Semantic Web technologies used to develop a prototype system that integrates scientific observations from four independent USGS and cooperator data systems. The techniques were tested with a use case goal of creating a data set for use in exploring potential relationships among freshwater fish populations and environmental factors. The resulting prototype extracts data from the BioData Retrieval System, the Multistate Aquatic Resource Information System, the National Geochemical Survey, and the National Hydrography Dataset. A prototype user interface allows a scientist to select observations from these data systems and combine them into a single data set in RDF format that includes explicitly defined relationships and data definitions. The project was funded by the USGS Community for Data Integration and undertaken by the Community for Data Integration Semantic Web Working Group in order to demonstrate use of Semantic Web technologies by scientists. This allows scientists to simultaneously explore data that are available in multiple, disparate systems beyond those they traditionally have used.

  3. Scaling the walls of discovery: using semantic metadata for integrative problem solving.

    PubMed

    Manning, Maurice; Aggarwal, Amit; Gao, Kevin; Tucker-Kellogg, Greg

    2009-03-01

    Current data integration approaches by bioinformaticians frequently involve extracting data from a wide variety of public and private data repositories, each with a unique vocabulary and schema, via scripts. These separate data sets must then be normalized through the tedious and lengthy process of resolving naming differences and collecting information into a single view. Attempts to consolidate such diverse data using data warehouses or federated queries add significant complexity and have shown limitations in flexibility. The alternative of complete semantic integration of data requires a massive, sustained effort in mapping data types and maintaining ontologies. We focused instead on creating a data architecture that leverages semantic mapping of experimental metadata, to support the rapid prototyping of scientific discovery applications with the twin goals of reducing architectural complexity while still leveraging semantic technologies to provide flexibility, efficiency and more fully characterized data relationships. A metadata ontology was developed to describe our discovery process. A metadata repository was then created by mapping metadata from existing data sources into this ontology, generating RDF triples to describe the entities. Finally an interface to the repository was designed which provided not only search and browse capabilities but complex query templates that aggregate data from both RDF and RDBMS sources. We describe how this approach (i) allows scientists to discover and link relevant data across diverse data sources and (ii) provides a platform for development of integrative informatics applications. PMID:19304872
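The metadata-mapping step described above can be sketched as follows: each source's field names map onto shared ontology terms, and records become RDF-style triples that a single query can span. The field names and ontology terms here are hypothetical, not the authors' actual ontology.

```python
# Sketch of normalising per-source metadata into shared-ontology triples.
# Source names, fields, and ontology terms are invented for illustration.

# Each source uses its own vocabulary for the same concepts.
source_field_map = {
    "lims":  {"cmpd": "onto:compound", "conc_um": "onto:concentration"},
    "assay": {"compound_id": "onto:compound", "dose": "onto:concentration"},
}

def to_triples(source, record_id, record):
    mapping = source_field_map[source]
    return [(record_id, mapping[k], v) for k, v in record.items() if k in mapping]

triples = (to_triples("lims", "exp:1", {"cmpd": "CHEMBL25", "conc_um": 10})
           + to_triples("assay", "exp:2", {"compound_id": "CHEMBL25", "dose": 10}))

# Once normalised to shared predicates, one query spans both sources.
same_compound = {s for (s, p, o) in triples
                 if p == "onto:compound" and o == "CHEMBL25"}
print(sorted(same_compound))
```

Mapping only the metadata, rather than every data type, is what keeps the approach lighter than full semantic integration while still linking experiments across repositories.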

  4. A Semantic Problem Solving Environment for Integrative Parasite Research: Identification of Intervention Targets for Trypanosoma cruzi

    PubMed Central

    Parikh, Priti P.; Minning, Todd A.; Nguyen, Vinh; Lalithsena, Sarasi; Asiaee, Amir H.; Sahoo, Satya S.; Doshi, Prashant; Tarleton, Rick; Sheth, Amit P.

    2012-01-01

    Background Research on the biology of parasites requires a sophisticated and integrated computational platform to query and analyze large volumes of data, representing both unpublished (internal) and public (external) data sources. Effective analysis of an integrated data resource using knowledge discovery tools would significantly aid biologists in conducting their research, for example, through identifying various intervention targets in parasites and in deciding the future direction of ongoing as well as planned projects. A key challenge in achieving this objective is the heterogeneity between the internal lab data, usually stored as flat files, Excel spreadsheets or custom-built databases, and the external databases. Reconciling the different forms of heterogeneity and effectively integrating data from disparate sources is a nontrivial task for biologists and requires a dedicated informatics infrastructure. Thus, we developed an integrated environment using Semantic Web technologies that may provide biologists the tools for managing and analyzing their data, without the need for acquiring in-depth computer science knowledge. Methodology/Principal Findings We developed a semantic problem-solving environment (SPSE) that uses ontologies to integrate internal lab data with external resources in a Parasite Knowledge Base (PKB), which has the ability to query across these resources in a unified manner. The SPSE includes Web Ontology Language (OWL)-based ontologies, experimental data with its provenance information represented using the Resource Description Format (RDF), and a visual querying tool, Cuebee, that features integrated use of Web services. We demonstrate the use and benefit of SPSE using example queries for identifying gene knockout targets of Trypanosoma cruzi for vaccine development. Answers to these queries involve looking up multiple sources of data, linking them together and presenting the results. Conclusion/Significance The SPSE facilitates

  5. A semantic data dictionary method for database schema integration in CIESIN

    NASA Astrophysics Data System (ADS)

    Hinds, N.; Huang, Y.; Ravishankar, C.

    1993-08-01

    CIESIN (Consortium for International Earth Science Information Network) is funded by NASA to investigate the technology necessary to integrate and facilitate the interdisciplinary use of Global Change information. A clear goal of this mission includes providing a link between the various global change data sets, in particular between the physical sciences and the human (social) sciences. The typical scientist using the CIESIN system will want to know how phenomena in an outside field affect his/her work. For example, a medical researcher might ask: how does air quality affect emphysema? This and many similar questions will require sophisticated semantic data integration. The researcher who raised the question may be familiar with medical data sets containing emphysema occurrences. But this same investigator may know little, if anything, about the existence or location of air-quality data. It is easy to envision a system which would allow that investigator to locate and perform a ``join'' on two data sets, one containing emphysema cases and the other containing air-quality levels. No such system exists today. One major obstacle to providing such a system is overcoming heterogeneity, which falls into two broad categories. ``Database system'' heterogeneity involves differences in data models and packages. ``Data semantic'' heterogeneity involves differences in terminology between disciplines, which translate into data semantic issues, and varying levels of data refinement, from raw to summary. Our work investigates a global data dictionary mechanism to facilitate a merged data service. Specifically, we propose using a semantic tree during schema definition to aid in locating and integrating heterogeneous databases.
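A toy version of the proposed semantic data dictionary shows the intended effect: a shared dictionary maps each discipline's column names onto common concepts, enabling the emphysema/air-quality join the abstract envisions. The datasets, column names, and concept terms below are invented for illustration.

```python
# Toy semantic data dictionary: map discipline-specific columns to shared
# concepts, then join two datasets on the common location concept.

data_dictionary = {
    ("medical", "patient_county"): "concept:location",
    ("medical", "emphysema_cases"): "concept:caseCount",
    ("air", "county"): "concept:location",
    ("air", "pm10"): "concept:airQuality",
}

medical = [{"patient_county": "Wayne", "emphysema_cases": 120}]
air = [{"county": "Wayne", "pm10": 48}]

def normalise(dataset, rows):
    return [{data_dictionary[(dataset, k)]: v for k, v in row.items()}
            for row in rows]

# The "join" the abstract envisions, over the shared location concept.
joined = [{**m, **a}
          for m in normalise("medical", medical)
          for a in normalise("air", air)
          if m["concept:location"] == a["concept:location"]]
print(joined)
```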

  6. Entrez Neuron RDFa: a pragmatic semantic web application for data integration in neuroscience research.

    PubMed

    Samwald, Matthias; Lim, Ernest; Masiar, Peter; Marenco, Luis; Chen, Huajun; Morse, Thomas; Mutalik, Pradeep; Shepherd, Gordon; Miller, Perry; Cheung, Kei-Hoi

    2009-01-01

    The amount of biomedical data available in Semantic Web formats has been rapidly growing in recent years. While these formats are machine-friendly, user-friendly web interfaces allowing easy querying of these data are typically lacking. We present "Entrez Neuron", a pilot neuron-centric interface that allows for keyword-based queries against a coherent repository of OWL ontologies. These ontologies describe neuronal structures, physiology, mathematical models and microscopy images. The returned query results are organized hierarchically according to brain architecture. Where possible, the application makes use of entities from the Open Biomedical Ontologies (OBO) and the 'HCLS knowledgebase' developed by the W3C Interest Group for Health Care and Life Science. It makes use of the emerging RDFa standard to embed ontology fragments and semantic annotations within its HTML-based user interface. The application and underlying ontologies demonstrate how Semantic Web technologies can be used for information integration within a curated information repository and between curated information repositories. It also demonstrates how information integration can be accomplished on the client side, through simple copying and pasting of portions of documents that contain RDFa markup. PMID:19745321

  7. Entrez Neuron RDFa: a pragmatic Semantic Web application for data integration in neuroscience research

    PubMed Central

    Samwald, Matthias; Lim, Ernest; Masiar, Peter; Marenco, Luis; Chen, Huajun; Morse, Thomas; Mutalik, Pradeep; Shepherd, Gordon; Miller, Perry; Cheung, Kei-Hoi

    2013-01-01

    The amount of biomedical data available in Semantic Web formats has been rapidly growing in recent years. While these formats are machine-friendly, user-friendly web interfaces allowing easy querying of these data are typically lacking. We present “Entrez Neuron”, a pilot neuron-centric interface that allows for keyword-based queries against a coherent repository of OWL ontologies. These ontologies describe neuronal structures, physiology, mathematical models and microscopy images. The returned query results are organized hierarchically according to brain architecture. Where possible, the application makes use of entities from the Open Biomedical Ontologies (OBO) and the ‘HCLS knowledgebase’ developed by the W3C Interest Group for Health Care and Life Science. It makes use of the emerging RDFa standard to embed ontology fragments and semantic annotations within its HTML-based user interface. The application and underlying ontologies demonstrate how Semantic Web technologies can be used for information integration within a curated information repository and between curated information repositories. It also demonstrates how information integration can be accomplished on the client side, through simple copying and pasting of portions of documents that contain RDFa markup. PMID:19745321

  8. Semantic integration of information about orthologs and diseases: the OGO system.

    PubMed

    Miñarro-Gimenez, Jose Antonio; Egaña Aranguren, Mikel; Martínez Béjar, Rodrigo; Fernández-Breis, Jesualdo Tomás; Madrid, Marisa

    2011-12-01

    Semantic Web technologies like RDF and OWL are currently applied in the life sciences to improve knowledge management by integrating disparate information. Many of the systems that perform such tasks, however, only offer a SPARQL query interface, which is difficult to use for life scientists. We present the OGO system, which consists of a knowledge base that integrates information on orthologous sequences and genetic diseases, providing an easy-to-use, ontology-constraint-driven query interface. Such an interface allows users to define SPARQL queries through a graphical process, therefore not requiring SPARQL expertise. PMID:21864715
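The graphical query process can be sketched as generating SPARQL text from UI-selected constraints, so users never write SPARQL by hand. The predicate and entity names below are invented; OGO's real vocabulary differs.

```python
# Sketch of building a SPARQL query string from constraints picked in a UI,
# in the spirit of OGO's graphical query interface. Names are hypothetical.

def build_sparql(constraints):
    """constraints: list of (predicate, value) pairs chosen by the user."""
    patterns = "\n  ".join(
        f"?gene {pred} {value} ." for pred, value in constraints)
    return f"SELECT ?gene WHERE {{\n  {patterns}\n}}"

query = build_sparql([
    ("ogo:orthologousTo", "gene:BRCA2"),
    ("ogo:associatedWith", "disease:FanconiAnemia"),
])
print(query)
```

Each constraint the user adds graphically becomes one triple pattern; the generated query is then sent to the knowledge base's SPARQL endpoint.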

  9. Semantic integration of verbal information into a visual memory.

    PubMed

    Loftus, E F; Miller, D G; Burns, H J

    1978-01-01

    A total of 1,242 subjects, in five experiments plus a pilot study, saw a series of slides depicting a single auto-pedestrian accident. The purpose of these experiments was to investigate how information supplied after an event influences a witness's memory for that event. Subjects were exposed to either consistent, misleading, or irrelevant information after the accident event. Misleading information produced less accurate responding on both a yes-no and a two-alternative forced-choice recognition test. Further, misleading information had a larger impact if introduced just prior to a final test rather than immediately after the initial event. The effects of misleading information cannot be accounted for by a simple demand-characteristics explanation. Overall, the results suggest that information to which a witness is exposed after an event, whether that information is consistent or misleading, is integrated into the witness's memory of the event. PMID:621467

  10. Delineating the Effect of Semantic Congruency on Episodic Memory: The Role of Integration and Relatedness

    PubMed Central

    Bein, Oded; Livneh, Neta; Reggev, Niv; Gilead, Michael; Goshen-Gottstein, Yonatan; Maril, Anat

    2015-01-01

    A fundamental challenge in the study of learning and memory is to understand the role of existing knowledge in the encoding and retrieval of new episodic information. The importance of prior knowledge in memory is demonstrated in the congruency effect—the robust finding wherein participants display better memory for items that are compatible, rather than incompatible, with their pre-existing semantic knowledge. Despite its robustness, the mechanism underlying this effect is not well understood. In four studies, we provide evidence that demonstrates the privileged explanatory power of the elaboration-integration account over alternative hypotheses. Furthermore, we question the implicit assumption that the congruency effect pertains to the truthfulness/sensibility of a subject-predicate proposition, and show that congruency is a function of semantic relatedness between item and context words. PMID:25695759

  11. Mixing positive and negative valence: Affective-semantic integration of bivalent words.

    PubMed

    Kuhlmann, Michael; Hofmann, Markus J; Briesemeister, Benny B; Jacobs, Arthur M

    2016-01-01

    Single words have affective and aesthetic properties that influence their processing. Here we investigated the processing of a special case of word stimuli that are extremely difficult to evaluate, bivalent noun-noun-compounds (NNCs), i.e. novel words that mix a positive and negative noun, e.g. 'Bombensex' (bomb-sex). In a functional magnetic resonance imaging (fMRI) experiment we compared their processing with easier-to-evaluate non-bivalent NNCs in a valence decision task (VDT). Bivalent NNCs produced longer reaction times and elicited greater activation in the left inferior frontal gyrus (LIFG) than non-bivalent words, especially in contrast to words of negative valence. We attribute this effect to a LIFG-grounded process of semantic integration that requires greater effort for processing converse information, supporting the notion of a valence representation based on associations in semantic networks. PMID:27491491

  12. Mixing positive and negative valence: Affective-semantic integration of bivalent words

    PubMed Central

    Kuhlmann, Michael; Hofmann, Markus J.; Briesemeister, Benny B.; Jacobs, Arthur M.

    2016-01-01

    Single words have affective and aesthetic properties that influence their processing. Here we investigated the processing of a special case of word stimuli that are extremely difficult to evaluate, bivalent noun-noun-compounds (NNCs), i.e. novel words that mix a positive and negative noun, e.g. ‘Bombensex’ (bomb-sex). In a functional magnetic resonance imaging (fMRI) experiment we compared their processing with easier-to-evaluate non-bivalent NNCs in a valence decision task (VDT). Bivalent NNCs produced longer reaction times and elicited greater activation in the left inferior frontal gyrus (LIFG) than non-bivalent words, especially in contrast to words of negative valence. We attribute this effect to a LIFG-grounded process of semantic integration that requires greater effort for processing converse information, supporting the notion of a valence representation based on associations in semantic networks. PMID:27491491

  13. How Distance Affects Semantic Integration in Discourse: Evidence from Event-Related Potentials

    PubMed Central

    Yang, Xiaohong; Chen, Shuang; Chen, Xuhai; Yang, Yufang

    2015-01-01

    Event-related potentials were used to investigate whether semantic integration in discourse is influenced by the number of intervening sentences between the endpoints of integration. Readers read discourses in which the last sentence contained a critical word that was either congruent or incongruent with the information introduced in the first sentence. In the short discourses, the first and last sentences were separated by only one intervening sentence, whereas in the long discourses they were separated by three. We found that the incongruent words elicited an N400 effect for both the short and long discourses. However, a P600 effect was observed only for the long discourses, not for the short ones. These results suggest that although readers can successfully integrate upcoming words into the existing discourse representation, the effort required for this integration is modulated by the number of intervening sentences. Thus, discourse distance, as measured by the number of intervening sentences, should be taken as an important factor in semantic integration in discourse. PMID:26569606

  14. Using the Semantic Web for Rapid Integration of WikiPathways with Other Biological Online Data Resources.

    PubMed

    Waagmeester, Andra; Kutmon, Martina; Riutta, Anders; Miller, Ryan; Willighagen, Egon L; Evelo, Chris T; Pico, Alexander R

    2016-06-01

    The diversity of online resources storing biological data in different formats provides a challenge for bioinformaticians to integrate and analyse their biological data. The semantic web provides a standard to facilitate knowledge integration using statements built as triples describing a relation between two objects. WikiPathways, an online collaborative pathway resource, is now available in the semantic web through a SPARQL endpoint at http://sparql.wikipathways.org. Having biological pathways in the semantic web allows rapid integration with data from other resources that contain information about elements present in pathways using SPARQL queries. In order to convert WikiPathways content into meaningful triples we developed two new vocabularies that capture the graphical representation and the pathway logic, respectively. Each gene, protein, and metabolite in a given pathway is defined with a standard set of identifiers to support linking to several other biological resources in the semantic web. WikiPathways triples were loaded into the Open PHACTS discovery platform and are available through its Web API (https://dev.openphacts.org/docs) to be used in various tools for drug development. We combined various semantic web resources with the newly converted WikiPathways content using a variety of SPARQL query types and third-party resources, such as the Open PHACTS API. The ability to use pathway information to form new links across diverse biological data highlights the utility of integrating WikiPathways in the semantic web. PMID:27336457
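
    The abstract above names a public SPARQL endpoint (http://sparql.wikipathways.org). As a hedged illustration of how such an endpoint is typically consumed, the sketch below builds a query string and flattens a standard SPARQL 1.1 JSON results document; the prefixes, predicate names, and the sample pathway record are illustrative assumptions, not taken from the paper.

```python
import json

# Hypothetical query against the WikiPathways SPARQL endpoint named in the
# abstract; the prefixes and predicates are illustrative assumptions.
QUERY = """
PREFIX wp:      <http://vocabularies.wikipathways.org/wp#>
PREFIX dcterms: <http://purl.org/dc/terms/>
SELECT ?pathway ?title WHERE {
  ?pathway a wp:Pathway ;
           dcterms:title ?title .
} LIMIT 5
"""

def parse_sparql_json(payload: str) -> list:
    """Flatten a SPARQL 1.1 JSON results document into a list of dicts."""
    doc = json.loads(payload)
    variables = doc["head"]["vars"]
    return [
        {v: row[v]["value"] for v in variables if v in row}
        for row in doc["results"]["bindings"]
    ]

# A mocked response in the standard SPARQL JSON results format.
sample = json.dumps({
    "head": {"vars": ["pathway", "title"]},
    "results": {"bindings": [
        {"pathway": {"type": "uri",
                     "value": "http://identifiers.org/wikipathways/WP254"},
         "title": {"type": "literal", "value": "Apoptosis"}},
    ]},
})

rows = parse_sparql_json(sample)
print(rows)  # [{'pathway': 'http://identifiers.org/wikipathways/WP254', 'title': 'Apoptosis'}]
```

    In practice the query string would be sent to the endpoint with an `Accept: application/sparql-results+json` header; the parsing step above is independent of the transport used.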

  15. Using the Semantic Web for Rapid Integration of WikiPathways with Other Biological Online Data Resources

    PubMed Central

    Waagmeester, Andra; Pico, Alexander R.

    2016-01-01

    The diversity of online resources storing biological data in different formats provides a challenge for bioinformaticians to integrate and analyse their biological data. The semantic web provides a standard to facilitate knowledge integration using statements built as triples describing a relation between two objects. WikiPathways, an online collaborative pathway resource, is now available in the semantic web through a SPARQL endpoint at http://sparql.wikipathways.org. Having biological pathways in the semantic web allows rapid integration with data from other resources that contain information about elements present in pathways using SPARQL queries. In order to convert WikiPathways content into meaningful triples we developed two new vocabularies that capture the graphical representation and the pathway logic, respectively. Each gene, protein, and metabolite in a given pathway is defined with a standard set of identifiers to support linking to several other biological resources in the semantic web. WikiPathways triples were loaded into the Open PHACTS discovery platform and are available through its Web API (https://dev.openphacts.org/docs) to be used in various tools for drug development. We combined various semantic web resources with the newly converted WikiPathways content using a variety of SPARQL query types and third-party resources, such as the Open PHACTS API. The ability to use pathway information to form new links across diverse biological data highlights the utility of integrating WikiPathways in the semantic web. PMID:27336457

  16. A ubiquitous sensor network platform for integrating smart devices into the semantic sensor web.

    PubMed

    de Vera, David Díaz Pardo; Izquierdo, Alvaro Sigüenza; Vercher, Jesús Bernat; Hernández Gómez, Luis Alfonso

    2014-01-01

    Ongoing Sensor Web developments make a growing amount of heterogeneous sensor data available to smart devices. This is generating an increasing demand for homogeneous mechanisms to access, publish and share real-world information. This paper discusses, first, an architectural solution based on Next Generation Networks: a pilot Telco Ubiquitous Sensor Network (USN) Platform that embeds several OGC® Sensor Web services. This platform has already been deployed in large scale projects. Second, the USN-Platform is extended to explore a first approach to Semantic Sensor Web principles and technologies, so that smart devices can access Sensor Web data, allowing them also to share richer (semantically interpreted) information. An experimental scenario is presented: a smart car that consumes and produces real-world information which is integrated into the Semantic Sensor Web through a Telco USN-Platform. Performance tests revealed that observation publishing times with our experimental system were well within limits compatible with the adequate operation of smart safety assistance systems in vehicles. On the other hand, response times for complex queries on large repositories may be inappropriate for rapid reaction needs. PMID:24945678

  17. Electrophysiological differentiation of phonological and semantic integration in word and sentence contexts

    PubMed Central

    Diaz, Michele T.; Swaab, Tamara Y.

    2006-01-01

    During auditory language comprehension, listeners need to rapidly extract meaning from the continuous speech-stream. It is a matter of debate when and how contextual information constrains the activation of lexical representations in meaningful contexts. Electrophysiological studies of spoken language comprehension have identified an event-related potential (ERP) that was sensitive to phonological properties of speech, which was termed the phonological mismatch negativity (PMN). With the PMN, early lexical processing could potentially be distinguished from processes of semantic integration in spoken language comprehension. However, the sensitivity of the PMN to phonological processing per se has been questioned, and it has additionally been suggested that the “PMN” is not separable from the N400, an ERP that is sensitive to semantic aspects of the input. Here, we investigated whether or not a separable PMN exists and if it reflects purely phonological aspects of the speech input. In the present experiment, ERPs were recorded from healthy young adults (N =24) while they listened to sentences and word lists, in which we manipulated semantic and phonological expectation and congruency of the final word. ERPs sensitive to phonological processing were elicited only when phonological expectancy was violated in lists of words, but not during normal sentential processing. This suggests a differential role of phonological processing in more or less meaningful contexts and indicates a very early influence of the overall context on lexical processing in sentences. PMID:16952338

  18. A Ubiquitous Sensor Network Platform for Integrating Smart Devices into the Semantic Sensor Web

    PubMed Central

    de Vera, David Díaz Pardo; Izquierdo, Álvaro Sigüenza; Vercher, Jesús Bernat; Gómez, Luis Alfonso Hernández

    2014-01-01

    Ongoing Sensor Web developments make a growing amount of heterogeneous sensor data available to smart devices. This is generating an increasing demand for homogeneous mechanisms to access, publish and share real-world information. This paper discusses, first, an architectural solution based on Next Generation Networks: a pilot Telco Ubiquitous Sensor Network (USN) Platform that embeds several OGC® Sensor Web services. This platform has already been deployed in large scale projects. Second, the USN-Platform is extended to explore a first approach to Semantic Sensor Web principles and technologies, so that smart devices can access Sensor Web data, allowing them also to share richer (semantically interpreted) information. An experimental scenario is presented: a smart car that consumes and produces real-world information which is integrated into the Semantic Sensor Web through a Telco USN-Platform. Performance tests revealed that observation publishing times with our experimental system were well within limits compatible with the adequate operation of smart safety assistance systems in vehicles. On the other hand, response times for complex queries on large repositories may be inappropriate for rapid reaction needs. PMID:24945678

  19. Hands typing what hands do: Action-semantic integration dynamics throughout written verb production.

    PubMed

    García, Adolfo M; Ibáñez, Agustín

    2016-04-01

    Processing action verbs, in general, and manual action verbs, in particular, involves activations in gross and hand-specific motor networks, respectively. While this is well established for receptive language processes, no study has explored action-semantic integration during written production. Moreover, little is known about how such crosstalk unfolds from motor planning to execution. Here we address both issues through our novel "action semantics in typing" paradigm, which allows keystroke operations to be timed during word typing. Specifically, we created a primed-verb-copying task involving manual action verbs, non-manual action verbs, and non-action verbs. Motor planning processes were indexed by first-letter lag (the lapse between target onset and first keystroke), whereas execution dynamics were assessed considering whole-word lag (the lapse between first and last keystroke). Each phase was differently delayed by action verbs. When these were processed for over one second, interference was strong and magnified by effector compatibility during programming, but weak and effector-blind during execution. Instead, when they were processed for less than 900 ms, interference was reduced by effector compatibility during programming and faded during execution. Finally, typing was facilitated by prime-target congruency, irrespective of the verbs' motor content. Thus, action-verb semantics seems to extend beyond its embodied foundations, involving conceptual dynamics not tapped by classical reaction-time measures. These findings are compatible with non-radical models of language embodiment and with predictions of event coding theory. PMID:26803393

  20. Automated revision of CLIPS rule-bases

    NASA Technical Reports Server (NTRS)

    Murphy, Patrick M.; Pazzani, Michael J.

    1994-01-01

    This paper describes CLIPS-R, a theory revision system for the revision of CLIPS rule-bases. CLIPS-R may be used for a variety of knowledge-base revision tasks, such as refining a prototype system, adapting an existing system to slightly different operating conditions, or improving an operational system that makes occasional errors. We present a description of how CLIPS-R revises rule-bases, and an evaluation of the system on three rule-bases.
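
    As a toy illustration of theory revision (not the CLIPS-R algorithm itself, whose details are not given in the abstract), the sketch below scores a propositional rule base against labeled examples and greedily removes any rule whose deletion reduces the error count:

```python
# Toy rule-base revision: each rule is (conditions, conclusion); a rule fires
# when its conditions are a subset of the observed facts.

def apply_rules(rules, facts):
    """Fire every rule whose conditions hold; return the derived conclusions."""
    return {concl for conds, concl in rules if conds <= facts}

def errors(rules, examples):
    """Count examples whose expected conclusions differ from the derived ones."""
    return sum(apply_rules(rules, facts) != expected for facts, expected in examples)

def revise(rules, examples):
    """Greedy single pass: remove any rule whose deletion strictly helps."""
    for rule in list(rules):
        trimmed = [r for r in rules if r is not rule]
        if errors(trimmed, examples) < errors(rules, examples):
            rules = trimmed
    return rules

rules = [
    (frozenset({"fever"}), "infection"),
    (frozenset({"fever", "rash"}), "measles"),
    (frozenset(), "healthy"),  # overly general rule: fires on every example
]
examples = [
    (frozenset({"fever"}), {"infection"}),
    (frozenset({"fever", "rash"}), {"infection", "measles"}),
]
revised = revise(rules, examples)
print(errors(revised, examples))  # 0
```

    Real theory-revision systems also specialize and generalize rule conditions rather than only deleting rules; this sketch shows only the evaluate-and-repair loop.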

  1. Integration and publication of heterogeneous text-mined relationships on the Semantic Web

    PubMed Central

    2011-01-01

    Background Advances in Natural Language Processing (NLP) techniques enable the extraction of fine-grained relationships mentioned in biomedical text. The variability and complexity of natural language in expressing similar relationships cause the extracted relationships to be highly heterogeneous, which makes the construction of knowledge bases difficult and poses a challenge in using them for data mining or question answering. Results We report on the semi-automatic construction of the PHARE relationship ontology (the PHArmacogenomic RElationships Ontology) consisting of 200 curated relations from over 40,000 heterogeneous relationships extracted via text-mining. These heterogeneous relations are then mapped to the PHARE ontology using synonyms, entity descriptions and hierarchies of entities and roles. Once mapped, relationships can be normalized and compared using the structure of the ontology to identify relationships that have similar semantics but different syntax. We compare and contrast the manual procedure with a fully automated approach using WordNet to quantify the degree of integration enabled by iterative curation and refinement of the PHARE ontology. The result of such integration is a repository of normalized biomedical relationships, named PHARE-KB, which can be queried using Semantic Web technologies such as SPARQL and can be visualized in the form of a biological network. Conclusions The PHARE ontology serves as a common semantic framework to integrate more than 40,000 relationships pertinent to pharmacogenomics. The PHARE ontology forms the foundation of a knowledge base named PHARE-KB. Once populated with relationships, PHARE-KB (i) can be visualized in the form of a biological network to guide human tasks such as database curation and (ii) can be queried programmatically to guide bioinformatics applications such as the prediction of molecular interactions. PHARE is available at http://purl.bioontology.org/ontology/PHARE. PMID:21624156
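
    The core normalization step, mapping heterogeneous text-mined relation phrases to a small set of curated relations via synonyms, can be sketched minimally as follows; the synonym table, relation names, and the passive-voice heuristic are invented for illustration and are not PHARE's actual mapping rules.

```python
# Hypothetical synonym table mapping surface relation phrases to canonical
# relations (invented for illustration).
SYNONYMS = {
    "inhibits": "inhibits",
    "blocks": "inhibits",
    "suppresses": "inhibits",
    "metabolizes": "metabolizes",
    "is metabolized by": "metabolizes",  # passive form: arguments get swapped
}

def normalize(subj, relation, obj):
    """Map one extracted triple onto a canonical relation, or None if unmapped."""
    rel = relation.lower().strip()
    canonical = SYNONYMS.get(rel)
    if canonical is None:
        return None                      # unmapped: left for manual curation
    if rel.startswith("is ") and rel.endswith(" by"):
        subj, obj = obj, subj            # passive voice: swap the arguments
    return (subj, canonical, obj)

print(normalize("warfarin", "is metabolized by", "CYP2C9"))
# ('CYP2C9', 'metabolizes', 'warfarin')
```

    Two triples with different syntax ("blocks" vs. "inhibits", active vs. passive voice) thereby normalize to the same canonical form and become comparable in the knowledge base.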

  2. Concurrent validity of a rule-based system.

    PubMed

    Hirsch, M; Chang, B L; Jensen, K

    1993-01-01

    The purpose of this project was to test the concurrent validity of the diagnoses recommended by a rule-based expert system. Concurrent validity was determined first by comparing the expert system's computerized diagnostic recommendations with those of a Clinical Nurse Specialist (CNS Assessor) who assessed the patient, and secondly by comparing the expert system's candidate diagnoses with those of a panel of 10 clinical nurse specialists (CNS Panel). The expert system rule base for generating diagnoses was programmed for some of the most common nursing diagnoses (Metzger & Hiltunen, 1987), including: alteration in comfort, acute pain; impaired physical mobility; sleep pattern disturbance; impairment of skin integrity; and self-care deficit (bathing, feeding, toileting, and dressing). Activity intolerance and potential for infection were also programmed as diagnostic possibilities in the rule base. PMID:8069751

  3. Early Stages of Sensory Processing, but Not Semantic Integration, Are Altered in Dyslexic Adults

    PubMed Central

    Silva, Patrícia B.; Ueki, Karen; Oliveira, Darlene G.; Boggio, Paulo S.; Macedo, Elizeu C.

    2016-01-01

    The aim of this study was to verify which stages of language processing are impaired in individuals with dyslexia. For this, a visual-auditory crossmodal task with semantic judgment was used. The P100 component, related to visual processing and initial integration, and the N400 component, related to semantic processing, were examined. Based on visual-auditory crossmodal studies, dyslexic individuals are understood to show impairments in integrating these two modalities and in processing spoken and musical auditory information. The present study investigated and compared the performance of 32 adult participants (14 with dyslexia) on semantic processing tasks in two conditions with auditory stimuli, sentences and music, each integrated with visual stimuli (pictures). In the accuracy analysis, both the sentence and music blocks showed significant congruency effects, with both groups scoring higher on incongruent items than on congruent ones. There was also a group effect when the priming was music, with the dyslexic group performing worse than the control group, indicating greater impairment in processing when the priming was music. For reaction time, a group effect was found for both music and sentence priming, with the dyslexic group slower than the control group. The N400 and P100 components were analyzed. In items with judgment and music priming, a group effect was observed for P100 amplitude, with higher means in individuals with dyslexia, corroborating the literature indicating that individuals with dyslexia have difficulties in early information processing. A congruency effect was observed in items with music priming, with greater P100 amplitudes in incongruous situations. Analyses of the N400 component showed the congruency effect for amplitude in both types of priming, with the mean amplitude for incongruent

  4. Early Stages of Sensory Processing, but Not Semantic Integration, Are Altered in Dyslexic Adults.

    PubMed

    Silva, Patrícia B; Ueki, Karen; Oliveira, Darlene G; Boggio, Paulo S; Macedo, Elizeu C

    2016-01-01

    The aim of this study was to verify which stages of language processing are impaired in individuals with dyslexia. For this, a visual-auditory crossmodal task with semantic judgment was used. The P100 component, related to visual processing and initial integration, and the N400 component, related to semantic processing, were examined. Based on visual-auditory crossmodal studies, dyslexic individuals are understood to show impairments in integrating these two modalities and in processing spoken and musical auditory information. The present study investigated and compared the performance of 32 adult participants (14 with dyslexia) on semantic processing tasks in two conditions with auditory stimuli, sentences and music, each integrated with visual stimuli (pictures). In the accuracy analysis, both the sentence and music blocks showed significant congruency effects, with both groups scoring higher on incongruent items than on congruent ones. There was also a group effect when the priming was music, with the dyslexic group performing worse than the control group, indicating greater impairment in processing when the priming was music. For reaction time, a group effect was found for both music and sentence priming, with the dyslexic group slower than the control group. The N400 and P100 components were analyzed. In items with judgment and music priming, a group effect was observed for P100 amplitude, with higher means in individuals with dyslexia, corroborating the literature indicating that individuals with dyslexia have difficulties in early information processing. A congruency effect was observed in items with music priming, with greater P100 amplitudes in incongruous situations. Analyses of the N400 component showed the congruency effect for amplitude in both types of priming, with the mean amplitude for incongruent

  5. Semantic Representation and Scale-Up of Integrated Air Traffic Management Data

    NASA Technical Reports Server (NTRS)

    Keller, Richard M.; Ranjan, Shubha; Wei, Mie; Eshow, Michelle

    2016-01-01

    Each day, the global air transportation industry generates a vast amount of heterogeneous data from air carriers, air traffic control providers, and secondary aviation entities handling baggage, ticketing, catering, fuel delivery, and other services. Generally, these data are stored in isolated data systems, separated from each other by significant political, regulatory, economic, and technological divides. These realities aside, integrating aviation data into a single, queryable, big data store could enable insights leading to major efficiency, safety, and cost advantages. In this paper, we describe an implemented system for combining heterogeneous air traffic management data using semantic integration techniques. The system transforms data from its original disparate source formats into a unified semantic representation within an ontology-based triple store. Our initial prototype stores only a small sliver of air traffic data covering one day of operations at a major airport. The paper also describes our analysis of difficulties ahead as we prepare to scale up data storage to accommodate successively larger quantities of data -- eventually covering all US commercial domestic flights over an extended multi-year timeframe. We review several approaches to mitigating scale-up related query performance concerns.
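
    The transformation described above, from disparate source formats into a unified semantic representation in a triple store, can be sketched as follows; the class and property names (`atm:`, `apt:`) and the record fields are invented for illustration and are not the paper's actual ontology.

```python
# Hypothetical flat flight record, as it might arrive from one data source.
record = {"flight": "UAL123", "origin": "KSFO", "dest": "KORD",
          "dep": "2016-07-01T14:05Z"}

def to_triples(rec):
    """Lift a flat record into subject-predicate-object triples for a triple store."""
    subj = f"atm:{rec['flight']}"
    yield (subj, "rdf:type", "atm:Flight")
    yield (subj, "atm:hasOrigin", f"apt:{rec['origin']}")
    yield (subj, "atm:hasDestination", f"apt:{rec['dest']}")
    yield (subj, "atm:departureTime", rec["dep"])

triples = list(to_triples(record))
print(len(triples))  # 4
```

    Once every source is lifted this way, records from different systems share one graph and can be joined by URI rather than by source-specific keys, which is what makes the store queryable across sources (and is also where the paper's scale-up concerns arise).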

  6. Integrating atmospheric and volcanic gas data in support of climate impact studies using semantic technologies

    NASA Astrophysics Data System (ADS)

    Sinha, K.; Fox, P.; Raskin, R.; McGuinness, D.

    2008-05-01

    In support of a NASA-funded scientific application (SESDI; Semantically Enabled Science Data Integration project; http://sesdi.hao.ucar.edu/) that needs to share volcano and climate data to investigate statistical relationships (e.g. height of the tropopause) between volcanism and global climate, we have generated volcano and plate tectonics ontologies and leveraged and re-factored the existing SWEET (Semantic Web for Earth and Environmental Terminology) ontology. To fulfil several goals we have developed a set of packages for integrating the relevant ontologies (which are shared and reused by a broad community of users) to provide access to the key solid-earth (volcano) and atmospheric databases. We present details on how ontologies are used in this science application setting, the methodologies employed to create the ontologies, how they are registered to the underlying data, and the implementation for use by scientists. SESDI is a NASA/ESTO/ACCESS-funded project involving the High Altitude Observatory at the National Center for Atmospheric Research (NCAR), McGuinness Associates Consulting, NASA/JPL and Virginia Polytechnic University.

  7. Francisella tularensis novicida proteomic and transcriptomic data integration and annotation based on semantic web technologies

    PubMed Central

    Anwar, Nadia; Hunt, Ela

    2009-01-01

    Background This paper summarises the lessons and experiences gained from a case study of the application of semantic web technologies to the integration of data from the bacterial species Francisella tularensis novicida (Fn). Fn data sources are disparate and heterogeneous, as multiple laboratories across the world, using multiple technologies, perform experiments to understand the mechanism of virulence. It is hard to integrate these data sources in a flexible manner that allows new experimental data to be added and compared when required. Results Public domain data sources were combined in RDF. Using this connected graph of database cross references, we extended the annotations of an experimental data set by superimposing the annotation graph onto it. Identifiers used in the experimental data were automatically resolved, and the data acquired annotations from the rest of the RDF graph. This happened without the expensive manual annotation that would normally be required to produce these links. This graph of resolved identifiers was then used to combine two experimental data sets, a proteomics experiment and a transcriptomic experiment studying the mechanism of virulence through the comparison of wildtype Fn with an avirulent mutant strain. Conclusion We produced a graph of Fn cross references which enabled the combination of two experimental datasets. Through combination of these data we are able to perform queries that compare the results of the two experiments. We found that data are easily combined in RDF and that experimental results are easily compared when the data are integrated. We conclude that semantic data integration offers a convenient, simple and flexible solution to the integration of published and unpublished experimental data. PMID:19796400
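
    The identifier-resolution step, treating database cross references as an undirected graph and regarding two identifiers as the same entity when they are connected, can be sketched as below; the identifiers and link structure are invented for illustration.

```python
from collections import defaultdict

# Hypothetical cross-reference triples, as might be extracted from public
# databases (identifiers invented for illustration).
xrefs = [
    ("uniprot:Q5NGY5", "xref", "gene:FTN_1451"),
    ("gene:FTN_1451", "xref", "refseq:WP_003039012"),
]

# Build an undirected adjacency map over the cross-reference graph.
adj = defaultdict(set)
for s, _, o in xrefs:
    adj[s].add(o)
    adj[o].add(s)

def same_entity(a, b):
    """True if two identifiers are connected through the cross-reference graph."""
    seen, stack = {a}, [a]
    while stack:
        node = stack.pop()
        if node == b:
            return True
        for nxt in adj[node] - seen:
            seen.add(nxt)
            stack.append(nxt)
    return False

# A proteomics hit and a transcriptomics probe resolve to the same gene:
print(same_entity("uniprot:Q5NGY5", "refseq:WP_003039012"))  # True
```

    With this resolution in hand, rows from the proteomics and transcriptomics data sets can be joined whenever their identifiers fall in the same connected component, without manual curation of the links.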

  8. Semantic Web Technologies for the Integration of Learning Tools and Context-Aware Educational Services

    NASA Astrophysics Data System (ADS)

    Jeremić, Zoran; Jovanović, Jelena; Gašević, Dragan

    One of the main software engineers' competencies, solving software problems, is most effectively acquired through an active examination of learning resources and work on real-world examples in small development teams. This indicates a need to integrate several existing learning tools and systems into a common collaborative learning environment, as well as for advanced educational services that give students just-in-time advice about learning resources and potential collaboration partners. In this paper, we present how we developed and applied a common ontological foundation for the integration of different existing learning tools and systems in a common learning environment called DEPTHS (Design Patterns Teaching Help System). In addition, we present a set of educational services that leverage a semantically rich representation of learning resources and students' interaction data to recommend resources relevant to a student's current learning context.

  9. A Case Study in Integrating Multiple E-commerce Standards via Semantic Web Technology

    NASA Astrophysics Data System (ADS)

    Yu, Yang; Hillman, Donald; Setio, Basuki; Heflin, Jeff

    Internet business-to-business transactions present great challenges in merging information from different sources. In this paper we describe a project to integrate four representative commercial classification systems with the Federal Cataloging System (FCS). The FCS is used by the US Defense Logistics Agency to name, describe and classify all items under inventory control by the DoD. Our approach uses the ECCMA Open Technical Dictionary (eOTD) as a common vocabulary to accommodate all the different classifications. We create a semantic bridging ontology between each classification and the eOTD to describe their logical relationships in OWL DL. The essential idea is that since each classification has formal definitions in a common vocabulary, we can use subsumption to automatically integrate them, thus mitigating the need for pairwise mappings. Furthermore, our system provides an interactive interface to let users choose and browse the results and, more importantly, it can translate catalogs that commit to these classifications using the compiled mapping results.
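
    The subsumption idea, that classes defined in a common vocabulary can be placed under one another automatically instead of being mapped pairwise, can be illustrated with a toy set-based model; real OWL DL subsumption is far richer, and all class and property names below are invented.

```python
# Toy model: a class definition is the set of property terms (from a shared
# eOTD-style vocabulary) that its members must carry. Class C subsumes class D
# when C's requirements are a subset of D's.
definitions = {
    "fcs:Fastener":         {"hasThread"},
    "vendorA:HexBolt":      {"hasThread", "hasHexHead"},
    "vendorB:MachineScrew": {"hasThread", "hasDriveSlot"},
}

def subsumes(general, specific):
    """True if every requirement of the general class holds for the specific one."""
    return definitions[general] <= definitions[specific]

# Both vendor classes fall under the FCS class without any pairwise mapping:
print(subsumes("fcs:Fastener", "vendorA:HexBolt"))          # True
print(subsumes("fcs:Fastener", "vendorB:MachineScrew"))     # True
print(subsumes("vendorA:HexBolt", "vendorB:MachineScrew"))  # False
```

    Adding a fifth classification then costs one mapping to the common vocabulary rather than four new pairwise mappings, which is the scaling argument behind the bridging-ontology design.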

  10. Semantic Web Ontology and Data Integration: a Case Study in Aiding Psychiatric Drug Repurposing.

    PubMed

    Liang, Chen; Sun, Jingchun; Tao, Cui

    2016-01-01

    Despite ongoing progress towards treating mental illness, there remain significant difficulties in selecting probable candidate drugs from existing databases. We describe an ontology-oriented approach that aims to represent the nexus between genes, drugs, phenotypes, symptoms, and diseases from multiple information sources. Along with this approach, we report a case study in which we explored candidate drugs effective for both bipolar disorder and epilepsy. We constructed an ontology that incorporates the knowledge shared between the two diseases and performed a semantic reasoning task on the ontology. The reasoning results suggested 48 candidate drugs that hold promise for further investigation. An evaluation was performed and demonstrated the validity of the proposed ontology. The overarching goal of this research is to build a framework of ontology-based data integration underpinning psychiatric drug repurposing. This approach prioritizes candidate drugs that have potential associations among genes, phenotypes and symptoms, and thus facilitates data integration and drug repurposing for psychiatric disorders. PMID:27570661

  11. SENHANCE: A Semantic Web framework for integrating social and hardware sensors in e-Health.

    PubMed

    Pagkalos, Ioannis; Petrou, Loukas

    2016-09-01

    Self-reported data are very important in Healthcare, especially when combined with data from sensors. Social Networking Sites, such as Facebook, are a promising source of not only self-reported data but also social data, which are otherwise difficult to obtain. Due to their unstructured nature, providing information that is meaningful to health professionals from this source is a daunting task. To this end, we employ Social Network Applications as Social Sensors that gather structured data and use Semantic Web technologies to fuse them with hardware sensor data, effectively integrating both sources. We show that this combination of social and hardware sensor observations creates a novel space that can be used for a variety of feature-rich e-Health applications. We present the design of our prototype framework, SENHANCE, and our findings from its pilot application in the NutriHeAl project, where a Facebook app is integrated with Fitbit digital pedometers for Lifestyle monitoring. PMID:25759065
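
    A minimal sketch of the fusion idea, joining a social-sensor observation with hardware-sensor (pedometer) data on a shared key, is shown below; the field names and the day-based join rule are assumptions for illustration, and the actual framework fuses observations via Semantic Web representations rather than this simple join.

```python
from datetime import date

# Hypothetical observations from a social sensor (Facebook app) and a
# hardware sensor (pedometer); field names invented for illustration.
social = [{"day": date(2016, 9, 1), "mood": "energetic"}]
steps  = [{"day": date(2016, 9, 1), "steps": 11243}]

def fuse(social_obs, sensor_obs):
    """Join observations on day, producing combined records for e-Health queries."""
    by_day = {o["day"]: o for o in sensor_obs}
    return [{**s, **by_day[s["day"]]} for s in social_obs if s["day"] in by_day]

print(fuse(social, steps))
# [{'day': datetime.date(2016, 9, 1), 'mood': 'energetic', 'steps': 11243}]
```

    The fused record is the kind of combined observation a health professional could query, pairing self-reported context with objective activity data.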

  12. Once is Enough: N400 Indexes Semantic Integration of Novel Word Meanings from a Single Exposure in Context

    PubMed Central

    Borovsky, Arielle; Elman, Jeffrey L.; Kutas, Marta

    2012-01-01

    We investigated the impact of contextual constraint on the integration of novel word meanings into semantic memory. Adults read strongly or weakly constraining sentences ending in known or unknown (novel) words as scalp-recorded electrical brain activity was recorded. Word knowledge was assessed via a lexical decision task in which recently seen known and unknown word sentence endings served as primes for semantically related, unrelated, and synonym/identical target words. As expected, N400 amplitudes to target words preceded by known word primes were reduced by prime-target relatedness. Critically, N400 amplitudes to targets preceded by novel primes also varied with prime-target relatedness, but only when they had initially appeared in highly constraining sentences. This demonstrates for the first time that fast-mapped word representations can develop strong associations with semantically related word meanings and reveals a rapid neural process that can integrate information about word meanings into the mental lexicon of young adults. PMID:23125559

  13. A Semantic Big Data Platform for Integrating Heterogeneous Wearable Data in Healthcare.

    PubMed

    Mezghani, Emna; Exposito, Ernesto; Drira, Khalil; Da Silveira, Marcos; Pruski, Cédric

    2015-12-01

    Advances supported by emerging wearable technologies in healthcare promise patients the provision of high-quality care. Wearable computing systems represent one of the main thrust areas for transforming traditional healthcare systems into active systems able to continuously monitor and control patients' health in order to manage their care at an early stage. However, their proliferation creates challenges related to data management and integration. The diversity and variety of wearable data related to healthcare, their huge volume and their distribution make data processing and analytics more difficult. In this paper, we propose a generic semantic big data architecture based on the "Knowledge as a Service" approach to cope with heterogeneity and scalability challenges. Our main contribution focuses on enriching the NIST Big Data model with semantics in order to smartly understand the collected data, and to generate more accurate and valuable information by correlating scattered medical data stemming from multiple wearable devices and/or from other distributed data sources. We have implemented and evaluated a Wearable KaaS platform to smartly manage heterogeneous data coming from wearable devices in order to assist physicians in supervising the patient's health evolution and keep the patient up to date about his/her status. PMID:26490143

  14. Semantic Integration of Cervical Cancer Data Repositories to Facilitate Multicenter Association Studies: The ASSIST Approach

    PubMed Central

    Agorastos, Theodoros; Koutkias, Vassilis; Falelakis, Manolis; Lekka, Irini; Mikos, Themistoklis; Delopoulos, Anastasios; Mitkas, Pericles A.; Tantsis, Antonios; Weyers, Steven; Coorevits, Pascal; Kaufmann, Andreas M.; Kurzeja, Roberto; Maglaveras, Nicos

    2009-01-01

    The current work addresses the unification of Electronic Health Records related to cervical cancer into a single medical knowledge source, in the context of the EU-funded ASSIST research project. The project aims to facilitate the research for cervical precancer and cancer through a system that virtually unifies multiple patient record repositories, physically located in different medical centers/hospitals, thus, increasing flexibility by allowing the formation of study groups “on demand” and by recycling patient records in new studies. To this end, ASSIST uses semantic technologies to translate all medical entities (such as patient examination results, history, habits, genetic profile) and represent them in a common form, encoded in the ASSIST Cervical Cancer Ontology. The current paper presents the knowledge elicitation approach followed, towards the definition and representation of the disease’s medical concepts and rules that constitute the basis for the ASSIST Cervical Cancer Ontology. The proposed approach constitutes a paradigm for semantic integration of heterogeneous clinical data that may be applicable to other biomedical application domains. PMID:19458792

  15. Software Uncertainty in Integrated Environmental Modelling: the role of Semantics and Open Science

    NASA Astrophysics Data System (ADS)

    de Rigo, Daniele

    2013-04-01

    Computational aspects increasingly shape environmental sciences [1]. Actually, transdisciplinary modelling of complex and uncertain environmental systems is challenging computational science (CS) and also the science-policy interface [2-7]. Large spatial-scale problems falling within this category - i.e. wide-scale transdisciplinary modelling for environment (WSTMe) [8-10] - often deal with factors (a) for which deep-uncertainty [2,11-13] may prevent usual statistical analysis of modelled quantities and need different ways for providing policy-making with science-based support. Here, practical recommendations are proposed for tempering a peculiar - not infrequently underestimated - source of uncertainty. Software errors in complex WSTMe may subtly affect the outcomes with possible consequences even on collective environmental decision-making. Semantic transparency in CS [2,8,10,14,15] and free software [16,17] are discussed as possible mitigations (b). Software uncertainty, black-boxes and free software. Integrated natural resources modelling and management (INRMM) [29] frequently exploits chains of nontrivial data-transformation models (D-TM), each of them affected by uncertainties and errors. Those D-TM chains may be packaged as monolithic specialized models, maybe only accessible as black-box executables (if accessible at all) [50]. For end-users, black-boxes merely transform inputs into final outputs, relying on classical peer-reviewed publications for describing the internal mechanism. While software tautologically plays a vital role in CS, it is often neglected in favour of more theoretical aspects. This paradox has been provocatively described as "the invisibility of software in published science. Almost all published papers required some coding, but almost none mention software, let alone include or link to source code" [51]. Recently, this primacy of theory over reality [52-54] has been challenged by new emerging hybrid approaches [55] and by the

  16. Towards an open-source semantic data infrastructure for integrating clinical and scientific data in cognition-guided surgery

    NASA Astrophysics Data System (ADS)

    Fetzer, Andreas; Metzger, Jasmin; Katic, Darko; März, Keno; Wagner, Martin; Philipp, Patrick; Engelhardt, Sandy; Weller, Tobias; Zelzer, Sascha; Franz, Alfred M.; Schoch, Nicolai; Heuveline, Vincent; Maleshkova, Maria; Rettinger, Achim; Speidel, Stefanie; Wolf, Ivo; Kenngott, Hannes; Mehrabi, Arianeb; Müller-Stich, Beat P.; Maier-Hein, Lena; Meinzer, Hans-Peter; Nolden, Marco

    2016-03-01

    In the surgical domain, individual clinical experience, which is derived in large part from past clinical cases, plays an important role in the treatment decision process. Simultaneously the surgeon has to keep track of a large amount of clinical data, emerging from a number of heterogeneous systems during all phases of surgical treatment. This is complemented with the constantly growing knowledge derived from clinical studies and literature. To recall this vast amount of information at the right moment poses a growing challenge that should be supported by adequate technology. While many tools and projects aim at sharing or integrating data from various sources or even provide knowledge-based decision support - to our knowledge - no concept has been proposed that addresses the entire surgical pathway by accessing the entire information in order to provide context-aware cognitive assistance. Therefore a semantic representation and central storage of data and knowledge is a fundamental requirement. We present a semantic data infrastructure for integrating heterogeneous surgical data sources based on a common knowledge representation. A combination of the Extensible Neuroimaging Archive Toolkit (XNAT) with semantic web technologies, standardized interfaces and a common application platform enables applications to access and semantically annotate data, perform semantic reasoning and eventually create individual context-aware surgical assistance. The infrastructure meets the requirements of a cognitive surgical assistant system and has been successfully applied in various use cases. The system is based completely on free technologies and is available to the community as an open-source package.

  17. Once Is Enough: N400 Indexes Semantic Integration of Novel Word Meanings from a Single Exposure in Context

    ERIC Educational Resources Information Center

    Borovsky, Arielle; Elman, Jeffrey L.; Kutas, Marta

    2012-01-01

    We investigated the impact of contextual constraint on the integration of novel word meanings into semantic memory. Adults read strongly or weakly constraining sentences ending in known or unknown (novel) words as scalp-recorded electrical brain activity was recorded. Word knowledge was assessed via a lexical decision task in which recently seen…

  18. Semantic Web Ontology and Data Integration: a Case Study in Aiding Psychiatric Drug Repurposing.

    PubMed

    Liang, Chen; Sun, Jingchun; Tao, Cui

    2015-01-01

    There remain significant difficulties in selecting probable candidate drugs from existing databases. We describe an ontology-oriented approach to represent the nexus between genes, drugs, phenotypes, symptoms, and diseases from multiple information sources. We also report a case study in which we attempted to explore candidate drugs effective for bipolar disorder and epilepsy. We constructed an ontology incorporating knowledge between the two diseases and performed semantic reasoning tasks with the ontology. The results suggested 48 candidate drugs that hold promise for a further breakthrough. The evaluation demonstrated the validity of our approach. Our approach prioritizes candidate drugs that have potential associations among genes, phenotypes and symptoms, and thus facilitates data integration and drug repurposing in psychiatric disorders. PMID:26262350
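
    The association-based prioritization described in this abstract can be sketched with a toy in-memory triple store: a drug becomes a candidate when the genes it targets are associated with both diseases. This is an illustrative sketch only, not the authors' ontology; the drug and gene names below are invented examples.

```python
# Toy triple store: (subject, predicate, object) facts linking drugs,
# genes, and diseases. The facts here are illustrative, not curated data.
TRIPLES = {
    ("lamotrigine", "targets", "SCN1A"),
    ("valproate", "targets", "SCN1A"),
    ("valproate", "targets", "HDAC2"),
    ("SCN1A", "associated_with", "epilepsy"),
    ("SCN1A", "associated_with", "bipolar_disorder"),
    ("HDAC2", "associated_with", "bipolar_disorder"),
}

def objects(subject, predicate):
    """All objects o such that (subject, predicate, o) is a known fact."""
    return {o for s, p, o in TRIPLES if s == subject and p == predicate}

def candidate_drugs(disease_a, disease_b):
    """A drug is a candidate if the genes it targets are, together,
    associated with both disease_a and disease_b."""
    drugs = {s for s, p, _ in TRIPLES if p == "targets"}
    out = set()
    for drug in drugs:
        genes = objects(drug, "targets")
        diseases = set().union(*(objects(g, "associated_with") for g in genes))
        if disease_a in diseases and disease_b in diseases:
            out.add(drug)
    return out

print(sorted(candidate_drugs("bipolar_disorder", "epilepsy")))
# -> ['lamotrigine', 'valproate']
```

    In a real system this query would run over an OWL ontology with a reasoner rather than a hand-rolled set of tuples, but the join structure (drug to gene to disease) is the same.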

  19. HUNTER-GATHERER: Three search techniques integrated for natural language semantics

    SciTech Connect

    Beale, S.; Nirenburg, S.; Mahesh, K.

    1996-12-31

    This work integrates three related AI search techniques - constraint satisfaction, branch-and-bound and solution synthesis - and applies the result to semantic processing in natural language (NL). We summarize the approach as "Hunter-Gatherer": (1) branch-and-bound and constraint satisfaction allow us to "hunt down" non-optimal and impossible solutions and prune them from the search space; (2) solution synthesis methods then "gather" all optimal solutions, avoiding exponential complexity. Each of the three techniques is briefly described, as well as their extensions and combinations used in our system. We focus on the combination of solution synthesis and branch-and-bound methods, which has enabled near-linear-time processing in our applications. Finally, we illustrate how the use of our technique in a large-scale MT project allowed a drastic reduction in search space.
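
    The "hunt" phase above can be sketched as a small branch-and-bound search: assign one sense per word so that the total cost is minimal, pruning any partial assignment whose cost already matches or exceeds the best complete solution. The words, senses, and costs below are invented for illustration and are not from the paper.

```python
# Invented sense inventory: word -> {sense: local cost}. Lower is better.
SENSES = {
    "bank": {"river": 2, "finance": 1},
    "interest": {"hobby": 3, "rate": 1},
    "rate": {"speed": 2, "percentage": 1},
}

def branch_and_bound(words):
    """Find the minimum-cost sense assignment over `words`."""
    best = {"cost": float("inf"), "assign": None}

    def search(i, cost, assign):
        if cost >= best["cost"]:      # bound: prune dominated branches early
            return
        if i == len(words):           # complete assignment: record new best
            best["cost"], best["assign"] = cost, dict(assign)
            return
        word = words[i]
        # Trying cheap senses first tightens the bound sooner.
        for sense, c in sorted(SENSES[word].items(), key=lambda kv: kv[1]):
            assign[word] = sense
            search(i + 1, cost + c, assign)
        del assign[word]

    search(0, 0, {})
    return best["assign"], best["cost"]

assign, cost = branch_and_bound(["bank", "interest", "rate"])
print(assign, cost)  # cheapest sense for each word; total cost 3
```

    The full Hunter-Gatherer system adds constraint propagation before branching and a solution-synthesis pass to enumerate all optima; this sketch shows only the pruning idea.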

  20. The holistic rhizosphere: integrating zones, processes, and semantics in the soil influenced by roots.

    PubMed

    York, Larry M; Carminati, Andrea; Mooney, Sacha J; Ritz, Karl; Bennett, Malcolm J

    2016-06-01

    Despite often being conceptualized as a thin layer of soil around roots, the rhizosphere is actually a dynamic system of interacting processes. Hiltner originally defined the rhizosphere as the soil influenced by plant roots. However, soil physicists, chemists, microbiologists, and plant physiologists have studied the rhizosphere independently, and therefore conceptualized the rhizosphere in different ways and using contrasting terminology. Rather than research-specific conceptions of the rhizosphere, the authors propose a holistic rhizosphere encapsulating the following components: microbial community gradients, macroorganisms, mucigel, volumes of soil structure modification, and depletion or accumulation zones of nutrients, water, root exudates, volatiles, and gases. These rhizosphere components are the result of dynamic processes and understanding the integration of these processes will be necessary for future contributions to rhizosphere science based upon interdisciplinary collaborations. In this review, current knowledge of the rhizosphere is synthesized using this holistic perspective with a focus on integrating traditionally separated rhizosphere studies. The temporal dynamics of rhizosphere activities will also be considered, from annual fine root turnover to diurnal fluctuations of water and nutrient uptake. The latest empirical and computational methods are discussed in the context of rhizosphere integration. Clarification of rhizosphere semantics, a holistic model of the rhizosphere, examples of integration of rhizosphere studies across disciplines, and review of the latest rhizosphere methods will empower rhizosphere scientists from different disciplines to engage in the interdisciplinary collaborations needed to break new ground in truly understanding the rhizosphere and to apply this knowledge for practical guidance. PMID:26980751

  1. Semantic Repositories for eGovernment Initiatives: Integrating Knowledge and Services

    NASA Astrophysics Data System (ADS)

    Palmonari, Matteo; Viscusi, Gianluigi

    In recent years, public sector investments in eGovernment initiatives have depended on making existing governmental ICT systems and infrastructures more reliable. Furthermore, we are witnessing a change in the focus of public sector management, from the disaggregation, competition and performance measurement typical of the New Public Management (NPM) to new models of governance aiming at the reintegration of services under a new perspective on bureaucracy, namely a holistic approach to policy making which exploits the extensive digitalization of administrative operations. In this scenario, major challenges relate to supporting effective access to information both at the front-end level, by means of highly modular and customizable content provision, and at the back-end level, by means of information integration initiatives. Repositories of information about data and services that exploit semantic models and technologies can support these goals by bridging the gap between data-level representations and the human-level knowledge involved in accessing information and searching for services. Moreover, semantic repository technologies can reach a new level of automation for the different tasks involved in interoperability programs, related both to data integration techniques and to service-oriented computing approaches. In this chapter, we discuss the above topics by referring to techniques and experiences where repositories based on conceptual models and ontologies are used at different levels in eGovernment initiatives: at the back-end level, to produce a comprehensive view of the information managed in public administrations' (PA) information systems, and at the front-end level, to support effective service delivery.

  2. When zebras become painted donkeys: Grammatical gender and semantic priming interact during picture integration in a spoken Spanish sentence

    PubMed Central

    Wicha, Nicole Y. Y.; Orozco-Figueroa, Araceli; Reyes, Iliana; Hernandez, Arturo; de Barreto, Lourdes Gavaldón; Bates, Elizabeth A.

    2012-01-01

    This study investigates the contribution of grammatical gender to integrating depicted nouns into sentences during on-line comprehension, and whether semantic congruity and gender agreement interact using two tasks: naming and semantic judgement of pictures. Native Spanish speakers comprehended spoken Spanish sentences with an embedded line drawing, which replaced a noun that either made sense or not with the preceding sentence context and either matched or mismatched the gender of the preceding article. In Experiment 1a (picture naming) slower naming times were found for gender mismatching pictures than matches, as well as for semantically incongruous pictures than congruous ones. In addition, the effects of gender agreement and semantic congruity interacted; specifically, pictures that were both semantically incongruous and gender mismatching were named slowest, but not as slow as if adding independent delays from both violations. Compared with a neutral baseline, with pictures embedded in simple command sentences like “Now please say ____”, both facilitative and inhibitory effects were observed. Experiment 1b replicated these results with low-cloze gender-neutral sentences, more similar in structure and processing demands to the experimental sentences. In Experiment 2, participants judged a picture’s semantic fit within a sentence by button-press; gender agreement and semantic congruity again interacted, with gender agreement having an effect on congruous but not incongruous pictures. Two distinct effects of gender are hypothesised: a “global” predictive effect (observed with and without overt noun production), and a “local” inhibitory effect (observed only with production of gender-discordant nouns). PMID:22773871

  3. Lowering the Barriers to Integrative Aquatic Ecosystem Science: Semantic Provenance, Open Linked Data, and Workflows

    NASA Astrophysics Data System (ADS)

    Harmon, T.; Hofmann, A. F.; Utz, R.; Deelman, E.; Hanson, P. C.; Szekely, P.; Villamizar, S. R.; Knoblock, C.; Guo, Q.; Crichton, D. J.; McCann, M. P.; Gil, Y.

    2011-12-01

    Environmental cyber-observatory (ECO) planning and implementation has been ongoing for more than a decade now, and several major efforts have recently come online or will soon. Some investigators in the relevant research communities will use ECO data, traditionally by developing their own client-side services to acquire data and then manually creating custom tools to integrate and analyze it. However, a significant portion of the aquatic ecosystem science community will need more custom services to manage locally collected data. The latter group represents enormous intellectual capacity when one envisions thousands of ecosystem scientists supplementing ECO baseline data by sharing their own locally intensive observational efforts. This poster summarizes the outcomes of the June 2011 Workshop for Aquatic Ecosystem Sustainability (WAES), which focused on the needs of aquatic ecosystem research on inland waters and oceans. Here we advocate new approaches to support scientists to model, integrate, and analyze data based on: 1) a new breed of software tools in which semantic provenance is automatically created and used by the system, 2) the use of open standards based on RDF and Linked Data Principles to facilitate sharing of data and provenance annotations, 3) the use of workflows to represent explicitly all data preparation, integration, and processing steps in a way that is automatically repeatable. Aquatic ecosystem workflow exemplars are provided and discussed in terms of their potential to broaden data sharing, analysis and synthesis, thereby increasing the impact of aquatic ecosystem research.

  4. Closed-Loop Lifecycle Management of Service and Product in the Internet of Things: Semantic Framework for Knowledge Integration.

    PubMed

    Yoo, Min-Jung; Grozel, Clément; Kiritsis, Dimitris

    2016-01-01

    This paper describes our conceptual framework of closed-loop lifecycle information sharing for product-service in the Internet of Things (IoT). The framework is based on the ontology model of product-service and a type of IoT message standard, Open Messaging Interface (O-MI) and Open Data Format (O-DF), which ensures data communication. (1) BACKGROUND: Based on an existing product lifecycle management (PLM) methodology, we enhanced the ontology model for the purpose of integrating efficiently the product-service ontology model that was newly developed; (2) METHODS: The IoT message transfer layer is vertically integrated into a semantic knowledge framework inside which a Semantic Info-Node Agent (SINA) uses the message format as a common protocol of product-service lifecycle data transfer; (3) RESULTS: The product-service ontology model facilitates information retrieval and knowledge extraction during the product lifecycle, while making more information available for the sake of service business creation. The vertical integration of IoT message transfer, encompassing all semantic layers, helps achieve a more flexible and modular approach to knowledge sharing in an IoT environment; (4) Contribution: A semantic data annotation applied to IoT can contribute to enhancing collected data types, which entails a richer knowledge extraction. The ontology-based PLM model enables as well the horizontal integration of heterogeneous PLM data while breaking traditional vertical information silos; (5) CONCLUSION: The framework was applied to a fictive case study with an electric car service for the purpose of demonstration. For the purpose of demonstrating the feasibility of the approach, the semantic model is implemented in Sesame APIs, which play the role of an Internet-connected Resource Description Framework (RDF) database. PMID:27399717

  5. Closed-Loop Lifecycle Management of Service and Product in the Internet of Things: Semantic Framework for Knowledge Integration

    PubMed Central

    Yoo, Min-Jung; Grozel, Clément; Kiritsis, Dimitris

    2016-01-01

    This paper describes our conceptual framework of closed-loop lifecycle information sharing for product-service in the Internet of Things (IoT). The framework is based on the ontology model of product-service and a type of IoT message standard, Open Messaging Interface (O-MI) and Open Data Format (O-DF), which ensures data communication. (1) Background: Based on an existing product lifecycle management (PLM) methodology, we enhanced the ontology model for the purpose of integrating efficiently the product-service ontology model that was newly developed; (2) Methods: The IoT message transfer layer is vertically integrated into a semantic knowledge framework inside which a Semantic Info-Node Agent (SINA) uses the message format as a common protocol of product-service lifecycle data transfer; (3) Results: The product-service ontology model facilitates information retrieval and knowledge extraction during the product lifecycle, while making more information available for the sake of service business creation. The vertical integration of IoT message transfer, encompassing all semantic layers, helps achieve a more flexible and modular approach to knowledge sharing in an IoT environment; (4) Contribution: A semantic data annotation applied to IoT can contribute to enhancing collected data types, which entails a richer knowledge extraction. The ontology-based PLM model enables as well the horizontal integration of heterogeneous PLM data while breaking traditional vertical information silos; (5) Conclusion: The framework was applied to a fictive case study with an electric car service for the purpose of demonstration. For the purpose of demonstrating the feasibility of the approach, the semantic model is implemented in Sesame APIs, which play the role of an Internet-connected Resource Description Framework (RDF) database. PMID:27399717

  6. Semantic Web Ontology and Data Integration: a Case Study in Aiding Psychiatric Drug Repurposing

    PubMed Central

    Liang, Chen; Sun, Jingchun; Tao, Cui

    2016-01-01

    Despite ongoing progress towards treating mental illness, there remain significant difficulties in selecting probable candidate drugs from existing databases. We describe an ontology-oriented approach that aims to represent the nexus between genes, drugs, phenotypes, symptoms, and diseases from multiple information sources. Along with this approach, we report a case study in which we attempted to explore candidate drugs effective for both bipolar disorder and epilepsy. We constructed an ontology that incorporates knowledge about the two diseases and performed semantic reasoning tasks on the ontology. The reasoning results suggested 48 candidate drugs that hold promise for a further breakthrough. An evaluation was performed and demonstrated the validity of the proposed ontology. The overarching goal of this research is to build a framework of ontology-based data integration underpinning psychiatric drug repurposing. This approach prioritizes candidate drugs that have potential associations among genes, phenotypes and symptoms, and thus facilitates data integration and drug repurposing in psychiatric disorders. PMID:27570661

  7. Using Linked Open Data and Semantic Integration to Search Across Geoscience Repositories

    NASA Astrophysics Data System (ADS)

    Mickle, A.; Raymond, L. M.; Shepherd, A.; Arko, R. A.; Carbotte, S. M.; Chandler, C. L.; Cheatham, M.; Fils, D.; Hitzler, P.; Janowicz, K.; Jones, M.; Krisnadhi, A.; Lehnert, K. A.; Narock, T.; Schildhauer, M.; Wiebe, P. H.

    2014-12-01

    The MBLWHOI Library is a partner in the OceanLink project, an NSF EarthCube Building Block, applying semantic technologies to enable knowledge discovery, sharing and integration. OceanLink is testing ontology design patterns that link together: two data repositories, Rolling Deck to Repository (R2R) and the Biological and Chemical Oceanography Data Management Office (BCO-DMO); the MBLWHOI Library Institutional Repository (IR) Woods Hole Open Access Server (WHOAS); National Science Foundation (NSF) funded awards; and American Geophysical Union (AGU) conference presentations. The Library is collaborating with scientific users, data managers, DSpace engineers, experts in ontology design patterns, and user interface developers to make WHOAS, a DSpace repository, linked open data enabled. The goal is to allow searching across repositories without any of the information providers having to change how they manage their collections. The tools developed for DSpace will be made available to the community of users. There are 257 registered DSpace repositories in the United States and over 1700 worldwide. Outcomes include: integration of DSpace with the OpenRDF Sesame triple store to provide a SPARQL endpoint for the storage and query of RDF representations of DSpace resources; mapping of DSpace resources to the OceanLink ontology; and a DSpace "data" add-on to provide a resolvable linked open data representation of DSpace resources.
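
    A client of a SPARQL endpoint like the Sesame-backed one described above typically sends a SELECT query over HTTP per the SPARQL 1.1 Protocol. The sketch below only builds such a request with the standard library; the endpoint URL, repository name, and query pattern are hypothetical, and no request is actually sent.

```python
from urllib.parse import urlencode

# Hypothetical query: titles of repository items mentioning "ocean".
SPARQL = """
PREFIX dcterms: <http://purl.org/dc/terms/>
SELECT ?item ?title WHERE {
  ?item dcterms:title ?title .
  FILTER(CONTAINS(LCASE(STR(?title)), "ocean"))
} LIMIT 10
"""

def sparql_request_url(endpoint, query):
    """Encode a SPARQL SELECT query as an HTTP GET URL, requesting the
    standard JSON results serialization."""
    params = urlencode({"query": query,
                        "format": "application/sparql-results+json"})
    return f"{endpoint}?{params}"

# Hypothetical Sesame repository endpoint, for illustration only.
url = sparql_request_url(
    "https://example.org/openrdf-sesame/repositories/whoas", SPARQL)
print(url[:60], "...")
```

    The resulting URL could be fetched with any HTTP client; Sesame-style endpoints also accept the query via POST, which is preferable for long queries.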

  8. Software Uncertainty in Integrated Environmental Modelling: the role of Semantics and Open Science

    NASA Astrophysics Data System (ADS)

    de Rigo, Daniele

    2013-04-01

    Computational aspects increasingly shape environmental sciences [1]. Actually, transdisciplinary modelling of complex and uncertain environmental systems is challenging computational science (CS) and also the science-policy interface [2-7]. Large spatial-scale problems falling within this category - i.e. wide-scale transdisciplinary modelling for environment (WSTMe) [8-10] - often deal with factors (a) for which deep-uncertainty [2,11-13] may prevent usual statistical analysis of modelled quantities and need different ways for providing policy-making with science-based support. Here, practical recommendations are proposed for tempering a peculiar - not infrequently underestimated - source of uncertainty. Software errors in complex WSTMe may subtly affect the outcomes with possible consequences even on collective environmental decision-making. Semantic transparency in CS [2,8,10,14,15] and free software [16,17] are discussed as possible mitigations (b). Software uncertainty, black-boxes and free software. Integrated natural resources modelling and management (INRMM) [29] frequently exploits chains of nontrivial data-transformation models (D-TM), each of them affected by uncertainties and errors. Those D-TM chains may be packaged as monolithic specialized models, maybe only accessible as black-box executables (if accessible at all) [50]. For end-users, black-boxes merely transform inputs into final outputs, relying on classical peer-reviewed publications for describing the internal mechanism. While software tautologically plays a vital role in CS, it is often neglected in favour of more theoretical aspects. This paradox has been provocatively described as "the invisibility of software in published science. Almost all published papers required some coding, but almost none mention software, let alone include or link to source code" [51]. Recently, this primacy of theory over reality [52-54] has been challenged by new emerging hybrid approaches [55] and by the

  9. An Embedded Rule-Based Diagnostic Expert System in Ada

    NASA Technical Reports Server (NTRS)

    Jones, Robert E.; Liberman, Eugene M.

    1992-01-01

    Ada is becoming an increasingly popular programming language for large Government-funded software projects. Ada, with its portability, transportability, and maintainability, lends itself well to today's complex programming environment. In addition, expert systems have assumed a growing role in providing human-like reasoning capability for computer systems. The integration of expert system technology with the Ada programming language is discussed, in particular a rule-based expert system using the ART-Ada (Automated Reasoning Tool for Ada) system shell. NASA Lewis was chosen as a beta test site for ART-Ada. The test was conducted by implementing the existing Autonomous Power EXpert System (APEX), a Lisp-based power expert system, in ART-Ada. Three components make up SMART-Ada (Systems fault Management with ART-Ada): the rule-based expert system, a graphical user interface, and communications software. The rules were written in the ART-Ada development environment and converted to Ada source code. The graphics interface was developed with the Transportable Application Environment (TAE) Plus, which generates Ada source code to control graphics images. SMART-Ada communicates with a remote host to obtain either simulated or real data. The Ada source code generated with ART-Ada, TAE Plus, and the communications code was incorporated into an Ada expert system that reads data from a power distribution test bed, applies the rules to determine whether a fault exists, and graphically displays it on the screen. The main objective, to conduct a beta test of the ART-Ada rule-based expert system shell, was achieved. The system is operational. New Ada tools will assist in future successful projects. ART-Ada is one such tool and is a viable alternative to straight Ada code when an application requires a rule-based or knowledge-based approach.
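
    The diagnostic core of a rule-based system like the one described above is forward chaining: rules fire when their conditions are present in working memory, and newly derived facts can trigger further rules. A minimal sketch follows (in Python rather than Ada/ART-Ada, and with invented sensor facts and rules, purely for illustration).

```python
def diagnose(readings, rules):
    """Forward-chain over (condition-set, conclusion) rules until no new
    facts can be derived from the working memory of facts."""
    facts = set(readings)
    changed = True
    while changed:
        changed = False
        for condition, conclusion in rules:
            # A rule fires when all its condition facts are present.
            if condition <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

# Invented power-distribution rules: an intermediate "possible_fault"
# fact enables the more specific diagnoses.
RULES = [
    ({"bus_voltage_low"}, "possible_fault"),
    ({"possible_fault", "breaker_open"}, "fault_breaker_trip"),
    ({"possible_fault", "load_current_high"}, "fault_overload"),
]

facts = diagnose({"bus_voltage_low", "breaker_open"}, RULES)
print("fault_breaker_trip" in facts)  # True
```

    A shell like ART-Ada adds efficient condition matching (so every rule is not rechecked on every cycle) and generates compilable Ada from the rule definitions; the fixpoint loop above is only the conceptual skeleton.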

  10. Parallelism In Rule-Based Systems

    NASA Astrophysics Data System (ADS)

    Sabharwal, Arvind; Iyengar, S. Sitharama; de Saussure, G.; Weisbin, C. R.

    1988-03-01

    Rule-based systems, which have proven to be extremely useful for several Artificial Intelligence and Expert Systems applications, currently face severe limitations due to the slow speed of their execution. To achieve the desired speed-up, this paper addresses the problem of parallelization of production systems and explores the various architectural and algorithmic possibilities. The inherent sources of parallelism in the production system structure are analyzed and the trade-offs, limitations and feasibility of exploitation of these sources of parallelism are presented. Based on this analysis, we propose a dedicated, coarse-grained, n-ary tree multiprocessor architecture for the parallel implementation of rule-based systems and then present algorithms for partitioning of rules in this architecture.
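
    The rule-partitioning idea above, where the rule base is split across processors and each worker matches its partition against shared working memory, can be sketched with threads in place of the proposed tree multiprocessor. The rules and facts below are invented for illustration.

```python
from concurrent.futures import ThreadPoolExecutor

# Invented rule base: (condition-set, conclusion) pairs.
RULES = [({"a"}, "x"), ({"b"}, "y"), ({"a", "b"}, "z"), ({"c"}, "w")]

def match_partition(partition, facts):
    """One worker's match phase: conclusions whose conditions hold."""
    return {concl for cond, concl in partition if cond <= facts}

def parallel_match(rules, facts, workers=2):
    """Partition the rule base round-robin across workers, run the match
    phase of each partition in parallel, and merge the fired conclusions."""
    chunks = [rules[i::workers] for i in range(workers)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        results = list(pool.map(match_partition, chunks, [facts] * workers))
    return set().union(*results)

print(sorted(parallel_match(RULES, {"a", "b"})))  # ['x', 'y', 'z']
```

    Only the match phase is parallelized here; a full production system would also need a conflict-resolution step before applying the merged conclusions back to working memory, which is where the partitioning trade-offs analyzed in the paper arise.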

  11. Integrating semantic web technologies and geospatial catalog services for geospatial information discovery and processing in cyberinfrastructure

    SciTech Connect

    Yue, Peng; Gong, Jianya; Di, Liping; He, Lianlian; Wei, Yaxing

    2011-04-01

    A geospatial catalogue service provides a network-based meta-information repository and interface for advertising and discovering shared geospatial data and services. Descriptive information (i.e., metadata) for geospatial data and services is structured and organized in catalogue services. The approaches currently available for searching and using that information are often inadequate. Semantic Web technologies show promise for better discovery methods by exploiting the underlying semantics. Such development needs special attention from the Cyberinfrastructure perspective, so that the traditional focus on discovery of and access to geospatial data can be expanded to support the increased demand for processing of geospatial information and discovery of knowledge. Semantic descriptions for geospatial data, services, and geoprocessing service chains are structured, organized, and registered through extending elements in the ebXML Registry Information Model (ebRIM) of a geospatial catalogue service, which follows the interface specifications of the Open Geospatial Consortium (OGC) Catalogue Services for the Web (CSW). The process models for geoprocessing service chains, as a type of geospatial knowledge, are captured, registered, and discoverable. Semantics-enhanced discovery for geospatial data, services/service chains, and process models is described. Semantic search middleware that can support virtual data product materialization is developed for the geospatial catalogue service. The creation of such a semantics-enhanced geospatial catalogue service is important in meeting the demands for geospatial information discovery and analysis in Cyberinfrastructure.

  12. Rule-based analysis of pilot decisions

    NASA Technical Reports Server (NTRS)

    Lewis, C. M.

    1985-01-01

    The application of the rule identification technique to the analysis of human performance data is proposed. The relation between the language and identifiable consistencies is discussed. The advantages of production system models for the description of complex human behavior are studied. The use of a Monte Carlo significance testing procedure to assure the validity of the rule identification is examined. An example of the rule-based analysis of Palmer's (1983) data is presented.
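
The Monte Carlo significance test mentioned above can be sketched as a permutation test: shuffle the observed decisions many times and ask how often a candidate rule agrees with the shuffled data as well as it agrees with the real data. The situations, decisions, and candidate rule below are invented.

```python
# Minimal permutation-test sketch for rule identification: is the rule's
# agreement with the pilot's decisions better than chance? Data invented.
import random

random.seed(0)
situations = [0, 1, 1, 0, 1, 0, 1, 1, 0, 1]  # e.g. warning light off/on
decisions = [0, 1, 1, 0, 1, 0, 1, 0, 0, 1]   # pilot's recorded responses

def agreement(xs, ys):
    return sum(x == y for x, y in zip(xs, ys)) / len(xs)

# Candidate rule: "respond iff the light is on" -> compare to situations.
observed = agreement(situations, decisions)

null = []
for _ in range(1000):
    shuffled = decisions[:]
    random.shuffle(shuffled)
    null.append(agreement(situations, shuffled))

p_value = sum(a >= observed for a in null) / len(null)
print(observed, p_value)
```

A small p-value indicates the identified consistency is unlikely to be an artifact of chance, which is the validity check the abstract describes.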

  13. Toward Open Science at the European Scale: Geospatial Semantic Array Programming for Integrated Environmental Modelling

    NASA Astrophysics Data System (ADS)

    de Rigo, Daniele; Corti, Paolo; Caudullo, Giovanni; McInerney, Daniel; Di Leo, Margherita; San-Miguel-Ayanz, Jesús

    2013-04-01

    of the science-policy interface, INRMM should be able to provide citizens and policy-makers with a clear, accurate understanding of the implications of the technical apparatus on collective environmental decision-making [1]. Complexity of course should not be intended as an excuse for obscurity [27-29]. Geospatial Semantic Array Programming. Concise array-based mathematical formulation and implementation (with array programming tools, see (b)) have proved helpful in supporting and mitigating the complexity of WSTMe [40-47] when complemented with generalized modularization and terse array-oriented semantic constraints. This defines the paradigm of Semantic Array Programming (SemAP) [35,36] where semantic transparency also implies free software use (although black-boxes [12] - e.g. legacy code - might easily be semantically interfaced). A new approach for WSTMe has emerged by formalizing unorganized best practices and experience-driven informal patterns. The approach introduces a lightweight (non-intrusive) integration of SemAP and geospatial tools (c) - called Geospatial Semantic Array Programming (GeoSemAP). GeoSemAP (d) exploits the joint semantics provided by SemAP and geospatial tools to split a complex D-TM into logical blocks which are easier to check by means of mathematical array-based and geospatial constraints. Those constraints take the form of precondition, invariant and postcondition semantic checks. This way, even complex WSTMe may be described as the composition of simpler GeoSemAP blocks, each of them structured as (d). GeoSemAP allows intermediate data and information layers to be more easily and formally semantically described so as to increase fault-tolerance [17], transparency and reproducibility of WSTMe. This might also help to better communicate part of the policy-relevant knowledge, often difficult to transfer from technical WSTMe to the science-policy interface [1,15]. References de Rigo, D., 2013. Behind the horizon of reproducible

  14. Semantically Interoperable XML Data.

    PubMed

    Vergara-Niedermayr, Cristobal; Wang, Fusheng; Pan, Tony; Kurc, Tahsin; Saltz, Joel

    2013-09-01

    XML is ubiquitously used as an information exchange platform for web-based applications in healthcare, life sciences, and many other domains. Proliferating XML data are now managed through the latest native XML database technologies. XML data sources conforming to common XML schemas could be shared and integrated with syntactic interoperability. Semantic interoperability can be achieved through semantic annotations of data models using common data elements linked to concepts from ontologies. In this paper, we present a framework and software system to support the development of semantically interoperable XML-based data sources that can be shared through a Grid infrastructure. We also present our work on supporting semantically validated XML data through semantic annotations for XML Schema, semantic validation, and semantic authoring of XML data. We demonstrate the use of the system for a biomedical database of medical image annotations and markups. PMID:25298789
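
The idea of semantic validation beyond schema validation can be sketched with the standard library: element values are checked against ontology concepts bound to each element name. The bindings and the document below are invented examples, not the paper's actual common data elements.

```python
# Hedged sketch of semantic validation: a schema-valid document can still
# fail if an element's value is not a concept bound to that element.
import xml.etree.ElementTree as ET

# Common-data-element bindings: element name -> allowed ontology concepts.
bindings = {
    "laterality": {"Left", "Right", "Bilateral"},
    "modality": {"CT", "MR", "US"},
}

doc = ET.fromstring(
    "<annotation><modality>CT</modality><laterality>Lft</laterality></annotation>"
)

errors = [
    f"{el.tag}: '{el.text}' not in {sorted(bindings[el.tag])}"
    for el in doc
    if el.tag in bindings and el.text not in bindings[el.tag]
]
print(errors)  # flags the 'Lft' value, which is not a bound concept
```

A real system would resolve the allowed concepts from ontology URIs rather than hard-coded sets.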

  16. Automation and integration of components for generalized semantic markup of electronic medical texts.

    PubMed Central

    Dugan, J. M.; Berrios, D. C.; Liu, X.; Kim, D. K.; Kaizer, H.; Fagan, L. M.

    1999-01-01

    Our group has built an information retrieval system based on a complex semantic markup of medical textbooks. We describe the construction of a set of web-based knowledge-acquisition tools that expedites the collection and maintenance of the concepts required for text markup and the search interface required for information retrieval from the marked text. In the text markup system, domain experts (DEs) identify sections of text that contain one or more elements from a finite set of concepts. End users can then query the text using a predefined set of questions, each of which identifies a subset of complementary concepts. The search process matches that subset of concepts to relevant points in the text. The current process requires that the DE invest significant time to generate the required concepts and questions. We propose a new system--called ACQUIRE (Acquisition of Concepts and Queries in an Integrated Retrieval Environment)--that assists a DE in two essential tasks in the text-markup process. First, it helps her to develop, edit, and maintain the concept model: the set of concepts with which she marks the text. Second, ACQUIRE helps her to develop a query model: the set of specific questions that end users can later use to search the marked text. The DE incorporates concepts from the concept model when she creates the questions in the query model. The major benefit of the ACQUIRE system is a reduction in the time and effort required for the text-markup process. We compared the process of concept- and query-model creation using ACQUIRE to the process used in previous work by rebuilding two existing models that we previously constructed manually. We observed a significant decrease in the time required to build and maintain the concept and query models. PMID:10566457

  17. The integration of geophysical and enhanced Moderate Resolution Imaging Spectroradiometer Normalized Difference Vegetation Index data into a rule-based, piecewise regression-tree model to estimate cheatgrass beginning of spring growth

    USGS Publications Warehouse

    Boyte, Stephen P.; Wylie, Bruce K.; Major, Donald J.; Brown, Jesslyn F.

    2015-01-01

    Cheatgrass exhibits spatial and temporal phenological variability across the Great Basin as described by ecological models formed using remote sensing and other spatial data-sets. We developed a rule-based, piecewise regression-tree model trained on 99 points that used three data-sets – latitude, elevation, and start of season time based on remote sensing input data – to estimate cheatgrass beginning of spring growth (BOSG) in the northern Great Basin. The model was then applied to map the location and timing of cheatgrass spring growth for the entire area. The model was strong (R2 = 0.85) and predicted an average cheatgrass BOSG across the study area of 29 March–4 April. Of early cheatgrass BOSG areas, 65% occurred at elevations below 1452 m. The highest proportion of cheatgrass BOSG occurred between mid-April and late May. Predicted cheatgrass BOSG in this study matched well with previous Great Basin cheatgrass green-up studies.

  18. The 3rd DBCLS BioHackathon: improving life science data integration with Semantic Web technologies

    PubMed Central

    2013-01-01

    Background BioHackathon 2010 was the third in a series of meetings hosted by the Database Center for Life Sciences (DBCLS) in Tokyo, Japan. The overall goal of the BioHackathon series is to improve the quality and accessibility of life science research data on the Web by bringing together representatives from public databases, analytical tool providers, and cyber-infrastructure researchers to jointly tackle important challenges in the area of in silico biological research. Results The theme of BioHackathon 2010 was the 'Semantic Web', and all attendees gathered with the shared goal of producing Semantic Web data from their respective resources, and/or consuming or interacting with those data using their tools and interfaces. We discussed topics including guidelines for designing semantic data and the interoperability of resources, and consequently developed tools and clients for analysis and visualization. Conclusion We provide a meeting report from BioHackathon 2010, in which we describe the discussions, decisions, and breakthroughs made as we moved towards compliance with Semantic Web technologies - from source provider, through middleware, to the end-consumer. PMID:23398680

  19. Local and global semantic integration in an argument structure: ERP evidence from Korean.

    PubMed

    Nam, Yunju; Hong, Upyong

    2016-07-01

    The neural responses of Korean speakers were recorded while they read sentences that included local semantic mismatch between adjectives (A) and nouns (N) or/and global semantic mismatch between object nouns (N) and verbs (V), as well as the corresponding control sentences without any semantic anomalies. In Experiment 1, using verb-final declarative sentences (Nsubject [A-N]object V), the local A-N incongruence yielded an N400 effect at the object noun and a combination of N400 and a late negativity effect at the sentence-final verb, whereas the global N-V incongruence yielded a biphasic N400 and P600 ERP pattern at the verb, compared with the ERPs of the same words in the control sentences respectively. In Experiment 2, using verb-initial object relative clause constructions ([Nsubject _V]rel [A-N]object ...) derived from the materials of Experiment 1, the effect of local incongruence changed notably such that not only an N400 but also an additional P600 effect was observed at the object noun, whereas the effect of the global incongruence remained largely the same (N400 and P600). Our theoretical interpretation of these results specifically focused on the reason for the P600 effects observed across different experiment conditions, which turned out to be attributable to (i) coordination of a semantic conflict, (ii) prediction disconfirmation, or (iii) argument structure processing breakdown. PMID:27095512

  20. Neural Correlates of Verbal and Nonverbal Semantic Integration in Children with Autism Spectrum Disorders

    ERIC Educational Resources Information Center

    McCleery, Joseph P.; Ceponiene, Rita; Burner, Karen M.; Townsend, Jeanne; Kinnear, Mikaela; Schreibman, Laura

    2010-01-01

    Background: Autism is a pervasive developmental disorder characterized by deficits in social-emotional, social-communicative, and language skills. Behavioral and neuroimaging studies have found that children with autism spectrum disorders (ASD) evidence abnormalities in semantic processing, with particular difficulties in verbal comprehension.…

  1. Semantic Integration as a Boundary Condition on Inhibitory Processes in Episodic Retrieval

    ERIC Educational Resources Information Center

    Goodmon, Leilani B.; Anderson, Michael C.

    2011-01-01

    Recalling an experience often impairs the later retention of related traces, a phenomenon known as retrieval-induced forgetting (RIF). Research has shown that episodic associations protect competing memories from RIF (Anderson & McCulloch, 1999). We report 4 experiments that examined whether semantic associations also protect against RIF. In all…

  2. Semantics and image content integration for pulmonary nodule interpretation in thoracic computed tomography

    NASA Astrophysics Data System (ADS)

    Raicu, Daniela S.; Varutbangkul, Ekarin; Cisneros, Janie G.; Furst, Jacob D.; Channin, David S.; Armato, Samuel G., III

    2007-03-01

    Useful diagnosis of lung lesions in computed tomography (CT) depends on many factors including the ability of radiologists to detect and correctly interpret the lesions. Computer-aided Diagnosis (CAD) systems can be used to increase the accuracy of radiologists in this task. CAD systems are, however, trained against ground truth and the mechanisms employed by the CAD algorithms may be distinctly different from the visual perception and analysis tasks of the radiologist. In this paper, we present a framework for finding the mappings between human descriptions and characteristics and computed image features. The data in our study were generated from 29 thoracic CT scans collected by the Lung Image Database Consortium (LIDC). Every case was annotated by up to 4 radiologists by marking the contour of nodules and assigning nine semantic terms to each identified nodule; fifty-nine image features were extracted from each segmented nodule. Correlation analysis and stepwise multiple regression were applied to find correlations among semantic characteristics and image features and to generate prediction models for each characteristic based on image features. From our preliminary experimental results, we found high correlations between different semantic terms (margin, texture), and promising mappings from image features to certain semantic terms (texture, lobulation, spiculation, malignancy). While the framework is presented with respect to the interpretation of pulmonary nodules in CT images, it can be easily extended to find mappings for other modalities in other anatomical structures and for other image features.

  3. Semantic Integration and Age of Acquisition Effects in Code-Blend Comprehension

    ERIC Educational Resources Information Center

    Giezen, Marcel R.; Emmorey, Karen

    2016-01-01

    Semantic and lexical decision tasks were used to investigate the mechanisms underlying code-blend facilitation: the finding that hearing bimodal bilinguals comprehend signs in American Sign Language (ASL) and spoken English words more quickly when they are presented together simultaneously than when each is presented alone. More robust…

  4. CI-Miner: A Semantic Methodology to Integrate Scientists, Data and Documents through the Use of Cyber-Infrastructure

    NASA Astrophysics Data System (ADS)

    Pinheiro da Silva, P.; CyberShARE Center of Excellence

    2011-12-01

    Scientists today face the challenge of rethinking the manner in which they document and make available their processes and data in an international cyber-infrastructure of shared resources. Some relevant examples of new scientific practices in the realm of computational and data extraction sciences include: large scale data discovery; data integration; data sharing across distinct scientific domains, systematic management of trust and uncertainty; and comprehensive support for explaining processes and results. This talk introduces CI-Miner - an innovative hands-on, open-source, community-driven methodology to integrate these new scientific practices. It has been developed in collaboration with scientists, with the purpose of capturing, storing and retrieving knowledge about scientific processes and their products, thereby further supporting a new generation of science techniques based on data exploration. CI-Miner uses semantic annotations in the form of W3C Ontology Web Language-based ontologies and Proof Markup Language (PML)-based provenance to represent knowledge. This methodology specializes general-purpose ontologies, projected into workflow-driven ontologies (WDOs) and into semantic abstract workflows (SAWs). Provenance in PML is CI-Miner's integrative component, which allows scientists to retrieve and reason with the knowledge represented in these new semantic documents. It serves additionally as a platform to share such collected knowledge with the scientific community participating in the international cyber-infrastructure. The integrated semantic documents that are tailored for the use of human epistemic agents may also be utilized by machine epistemic agents, since the documents are based on W3C Resource Description Framework (RDF) notation. This talk is grounded upon interdisciplinary lessons learned through the use of CI-Miner in support of government-funded national and international cyber-infrastructure initiatives in the areas of geo

  5. Using constraints to model disjunctions in rule-based reasoning

    SciTech Connect

    Liu, Bing; Jaffar, Joxan

    1996-12-31

    Rule-based systems have long been widely used for building expert systems to perform practical knowledge-intensive tasks. One important issue that has not been addressed satisfactorily is disjunction, and this significantly limits their problem-solving power. In this paper, we show that some important types of disjunction can be modeled with Constraint Satisfaction Problem (CSP) techniques, employing their simple representation schemes and efficient algorithms. A key idea is that disjunctions are represented as constraint variables, relations among disjunctions are represented as constraints, and rule chaining is integrated with constraint solving. In this integration, a constraint variable or a constraint is regarded as a special fact, and rules can be written with constraints and with information about constraints. Chaining of rules may trigger constraint propagation, and constraint propagation may cause firing of rules. A prototype system (called CFR) based on this idea has been implemented.
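
The central idea above (disjunctions as constraint variables, with rule firing and constraint propagation feeding each other) can be sketched in miniature; the diagnosis rules and domains below are invented, not CFR's actual representation.

```python
# Toy sketch: a disjunction becomes a constraint variable with a domain;
# firing rules prunes domains, and a domain collapsing to one value
# asserts a new fact that could in turn fire further rules.

facts = {"engine_wont_start"}
domains = {"cause": {"battery", "starter", "fuel"}}  # disjunctive hypothesis

# Rules: (required facts, (variable, values to prune from its domain)).
rules = [
    ({"engine_wont_start", "lights_work"}, ("cause", {"battery"})),
    ({"engine_wont_start", "fuel_gauge_ok"}, ("cause", {"fuel"})),
]

def propagate():
    changed = True
    while changed:
        changed = False
        for conds, (var, vals) in rules:
            if conds <= facts and vals & domains[var]:
                domains[var] -= vals  # rule firing prunes the disjunction
                changed = True
        # Singleton domain -> assert the surviving disjunct as a fact.
        for var, dom in domains.items():
            if len(dom) == 1 and f"{var}={next(iter(dom))}" not in facts:
                facts.add(f"{var}={next(iter(dom))}")
                changed = True

facts |= {"lights_work", "fuel_gauge_ok"}
propagate()
print(domains["cause"], "cause=starter" in facts)  # {'starter'} True
```

A full CSP integration would also handle constraints among several variables; here one variable suffices to show the chaining-propagation loop.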

  6. SemFunSim: A New Method for Measuring Disease Similarity by Integrating Semantic and Gene Functional Association

    PubMed Central

    Ju, Peng; Peng, Jiajie; Wang, Yadong

    2014-01-01

    Background Measuring similarity between diseases plays an important role in disease-related molecular function research. Functional associations between disease-related genes and semantic associations between diseases are often used to identify pairs of similar diseases from different perspectives. Currently, it is still a challenge to exploit both of them to calculate disease similarity. Therefore, a new method (SemFunSim) that integrates semantic and functional association is proposed to address the issue. Methods SemFunSim is designed as follows. First of all, FunSim (Functional Similarity) is proposed to calculate disease similarity using disease-related gene sets in a weighted network of human gene function. Next, SemSim (Semantic Similarity) is devised to calculate disease similarity using the relationship between two diseases from Disease Ontology. Finally, FunSim and SemSim are integrated to measure disease similarity. Results The high average AUC (area under the receiver operating characteristic curve) (96.37%) shows that SemFunSim achieves a high true positive rate and a low false positive rate. 79 of the top 100 pairs of similar diseases identified by SemFunSim are annotated in the Comparative Toxicogenomics Database (CTD) as being targeted by the same therapeutic compounds, while other methods we compared could identify 35 or fewer such pairs among the top 100. Moreover, when using our method on diseases without annotated compounds in CTD, we could confirm many of our predicted candidate compounds from literature. This indicates that SemFunSim is an effective method for drug repositioning. PMID:24932637
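
The integration step can be illustrated as a weighted combination of the two similarity signals; the scores and the simple averaging scheme below are invented, and the paper's actual FunSim/SemSim integration formula may differ.

```python
# Toy sketch of integrating semantic and functional disease similarity,
# in the spirit of SemFunSim. Scores and the weighting are invented.

sem_sim = {("asthma", "copd"): 0.62}  # ontology-based (e.g. Disease Ontology)
fun_sim = {("asthma", "copd"): 0.81}  # gene-function-network-based

def combined(pair, weight=0.5):
    """Weighted combination of semantic and functional similarity."""
    return weight * sem_sim[pair] + (1 - weight) * fun_sim[pair]

print(round(combined(("asthma", "copd")), 3))  # 0.715
```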

  7. A service-oriented distributed semantic mediator: integrating multiscale biomedical information.

    PubMed

    Mora, Oscar; Engelbrecht, Gerhard; Bisbal, Jesus

    2012-11-01

    Biomedical research continuously generates large amounts of heterogeneous and multimodal data spread over multiple data sources. These data, if appropriately shared and exploited, could dramatically improve the research practice itself, and ultimately the quality of health care delivered. This paper presents DISMED (DIstributed Semantic MEDiator), an open source semantic mediator that provides a unified view of a federated environment of multiscale biomedical data sources. DISMED is a Web-based software application to query and retrieve information distributed over a set of registered data sources, using semantic technologies. It also offers a user-friendly interface specifically designed to simplify the usage of these technologies by non-expert users. Although the architecture of the software mediator is generic and domain independent, in the context of this paper, DISMED has been evaluated for managing biomedical environments and facilitating research with respect to the handling of scientific data distributed in multiple heterogeneous data sources. As part of this contribution, a quantitative evaluation framework has been developed. It consists of a benchmarking scenario and the definition of five realistic use-cases. This framework, created entirely with public datasets, has been used to compare the performance of DISMED against other available mediators. It is also available to the scientific community in order to evaluate progress in the domain of semantic mediation, in a systematic and comparable manner. The results show an average improvement in the execution time by DISMED of 55% compared to the second best alternative in four out of the five use-cases of the experimental evaluation. PMID:22929464

  8. Semantics-Based Composition of Integrated Cardiomyocyte Models Motivated by Real-World Use Cases

    PubMed Central

    Neal, Maxwell L.; Carlson, Brian E.; Thompson, Christopher T.; James, Ryan C.; Kim, Karam G.; Tran, Kenneth; Crampin, Edmund J.; Cook, Daniel L.; Gennari, John H.

    2015-01-01

    Semantics-based model composition is an approach for generating complex biosimulation models from existing components that relies on capturing the biological meaning of model elements in a machine-readable fashion. This approach allows the user to work at the biological rather than computational level of abstraction and helps minimize the amount of manual effort required for model composition. To support this compositional approach, we have developed the SemGen software, and here report on SemGen’s semantics-based merging capabilities using real-world modeling use cases. We successfully reproduced a large, manually-encoded, multi-model merge: the “Pandit-Hinch-Niederer” (PHN) cardiomyocyte excitation-contraction model, previously developed using CellML. We describe our approach for annotating the three component models used in the PHN composition and for merging them at the biological level of abstraction within SemGen. We demonstrate that we were able to reproduce the original PHN model results in a semi-automated, semantics-based fashion and also rapidly generate a second, novel cardiomyocyte model composed using an alternative, independently-developed tension generation component. We discuss the time-saving features of our compositional approach in the context of these merging exercises, the limitations we encountered, and potential solutions for enhancing the approach. PMID:26716837

  9. STAX: a turbo prolog rule-based system for soil taxonomy

    NASA Astrophysics Data System (ADS)

    Fisher, Peter F.; Balachandran, Chandra S.

    This paper and the accompanying listing document a rule-based system which allocates soils according to the scheme of the USDA's Soil Taxonomy. This program classifies only to the first, or order, level of the hierarchical system, but further work is extending it to lower levels in the classification system. The program is written in Borland International's Turbo Prolog, version 1.1, and operates on any IBM PC or compatible. The program mimics the eliminative classification process of Soil Taxonomy, which is implemented as a semantic network, giving a depth-first search through soils and properties.
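
The eliminative keying strategy can be sketched as an ordered rule list in which the first order whose criteria are satisfied wins; the soil properties and simplified criteria below are invented stand-ins for the actual Keys to Soil Taxonomy (and for STAX's Prolog encoding).

```python
# Rough sketch of eliminative classification: Soil Taxonomy keys out
# orders in a fixed sequence, and the first satisfied order is returned.
# Properties and criteria are invented, greatly simplified stand-ins.

ORDER_KEY = [  # tested strictly in order, as in the Keys to Soil Taxonomy
    ("Histosols", lambda s: s["organic_content"] > 0.30),
    ("Spodosols", lambda s: s.get("spodic_horizon", False)),
    ("Aridisols", lambda s: s["moisture_regime"] == "aridic"),
    ("Mollisols", lambda s: s.get("mollic_epipedon", False)),
    ("Entisols", lambda s: True),  # fall-through order
]

def classify(soil):
    for order, criteria in ORDER_KEY:
        if criteria(soil):
            return order

sample = {"organic_content": 0.05, "moisture_regime": "aridic"}
print(classify(sample))  # Aridisols
```

The fixed order of tests is what makes the process eliminative: each failed test rules an order out before the next is tried, which maps naturally onto Prolog's depth-first search.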

  10. An Unsupervised Rule-Based Method to Populate Ontologies from Text

    NASA Astrophysics Data System (ADS)

    Motta, Eduardo; Siqueira, Sean; Andreatta, Alexandre

    An increasing amount of information is available on the web, usually expressed as text. Semantic information is implicit in these texts, since they are mainly intended for human consumption and interpretation. Because unstructured information is not easily handled automatically, an information extraction process has to be used to identify concepts and establish relations among them. Ontologies are an appropriate way to represent structured knowledge bases, enabling sharing, reuse, and inference. In this paper, an information extraction process is used to populate a domain ontology. It targets Brazilian Portuguese texts from a biographical dictionary of music, which require specific tools due to some unique aspects of the language. An unsupervised rule-based method is proposed. Through this process, latent concepts and relations expressed in natural language can be extracted and represented as an ontology, allowing new uses and visualizations of the content, such as semantic browsing and the inference of new knowledge.
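
The rule-based extraction step can be illustrated with a single lexico-syntactic pattern that yields ontology triples; the pattern and sentences below are invented English stand-ins for the paper's Brazilian Portuguese rules.

```python
# Minimal sketch of rule-based ontology population: a surface pattern
# extracts (subject, relation, object) triples from free text.
import re

text = (
    "Pixinguinha was born in Rio de Janeiro. "
    "Tom Jobim was born in Rio de Janeiro."
)

# Rule: "<Name> was born in <Place>." -> (Name, bornIn, Place)
pattern = re.compile(r"([A-Z][\w ]+?) was born in ([A-Z][\w ]+?)\.")
triples = [(subj, "bornIn", obj) for subj, obj in pattern.findall(text)]
print(triples)
```

A real pipeline would feed such triples into an OWL/RDF store as instances of ontology classes and properties, which is what enables the semantic browsing and inference the abstract mentions.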

  11. Ontology Design Patterns: Bridging the Gap Between Local Semantic Use Cases and Large-Scale, Long-Term Data Integration

    NASA Astrophysics Data System (ADS)

    Shepherd, Adam; Arko, Robert; Krisnadhi, Adila; Hitzler, Pascal; Janowicz, Krzysztof; Chandler, Cyndy; Narock, Tom; Cheatham, Michelle; Schildhauer, Mark; Jones, Matt; Raymond, Lisa; Mickle, Audrey; Finin, Tim; Fils, Doug; Carbotte, Suzanne; Lehnert, Kerstin

    2015-04-01

    Integrating datasets for new use cases is one of the common drivers for adopting semantic web technologies. Even though linked data principles enable this type of activity over time, the task of reconciling new ontological commitments for newer use cases can be daunting. This situation was faced by the Biological and Chemical Oceanography Data Management Office (BCO-DMO) as it sought to integrate its existing linked data with other data repositories to address newer scientific use cases as a partner in the GeoLink Project. To achieve a successful integration with other GeoLink partners, BCO-DMO's metadata would need to be described using the new ontologies developed by the GeoLink partners - a situation that could impact semantic inferencing, pre-existing software, and external users of BCO-DMO's linked data. This presentation describes how GeoLink is bridging the gap between local, pre-existing ontologies to achieve scientific metadata integration for all its partners through the use of ontology design patterns. GeoLink, an NSF EarthCube Building Block, brings together experts from the geosciences, computer science, and library science in an effort to improve discovery and reuse of data and knowledge. Its participating repositories include content from field expeditions, laboratory analyses, journal publications, conference presentations, theses/reports, and funding awards that span scientific studies from marine geology to marine ecology and biogeochemistry to paleoclimatology. GeoLink's outcomes include a set of reusable ontology design patterns (ODPs) that describe core geoscience concepts, a network of Linked Data published by participating repositories using those ODPs, and tools to facilitate discovery of related content in multiple repositories.

  12. From ontology selection and semantic web to an integrated information system for food-borne diseases and food safety.

    PubMed

    Yan, Xianghe; Peng, Yun; Meng, Jianghong; Ruzante, Juliana; Fratamico, Pina M; Huang, Lihan; Juneja, Vijay; Needleman, David S

    2011-01-01

    Several factors have hindered effective use of information and resources related to food safety due to inconsistency among semantically heterogeneous data resources, lack of knowledge on profiling of food-borne pathogens, and knowledge gaps among research communities, government risk assessors/managers, and end-users of the information. This paper discusses technical aspects in the establishment of a comprehensive food safety information system consisting of the following steps: (a) computational collection and compilation of publicly available information, including published pathogen genomic, proteomic, and metabolomic data; (b) development of ontology libraries on food-borne pathogens and design of automatic algorithms with formal inference and fuzzy and probabilistic reasoning to address the consistency and accuracy of distributed information resources (e.g., PulseNet, FoodNet, OutbreakNet, PubMed, NCBI, EMBL, and other online genetic databases and information); (c) integration of collected pathogen profiling data, Foodrisk.org ( http://www.foodrisk.org ), PMP, Combase, and other relevant information into a user-friendly, searchable, "homogeneous" information system available to scientists in academia, the food industry, and government agencies; and (d) development of a computational model in the semantic web for greater adaptability and robustness. PMID:21431616

  13. Applications and methods utilizing the Simple Semantic Web Architecture and Protocol (SSWAP) for bioinformatics resource discovery and disparate data and service integration

    PubMed Central

    2010-01-01

    Background Scientific data integration and computational service discovery are challenges for the bioinformatic community. This process is made more difficult by the separate and independent construction of biological databases, which makes the exchange of data between information resources difficult and labor intensive. A recently described semantic web protocol, the Simple Semantic Web Architecture and Protocol (SSWAP; pronounced "swap") offers the ability to describe data and services in a semantically meaningful way. We report how three major information resources (Gramene, SoyBase and the Legume Information System [LIS]) used SSWAP to semantically describe selected data and web services. Methods We selected high-priority Quantitative Trait Locus (QTL), genomic mapping, trait, phenotypic, and sequence data and associated services such as BLAST for publication, data retrieval, and service invocation via semantic web services. Data and services were mapped to concepts and categories as implemented in legacy and de novo community ontologies. We used SSWAP to express these offerings in OWL Web Ontology Language (OWL), Resource Description Framework (RDF) and eXtensible Markup Language (XML) documents, which are appropriate for their semantic discovery and retrieval. We implemented SSWAP services to respond to web queries and return data. These services are registered with the SSWAP Discovery Server and are available for semantic discovery at http://sswap.info. Results A total of ten services delivering QTL information from Gramene were created. From SoyBase, we created six services delivering information about soybean QTLs, and seven services delivering genetic locus information. For LIS we constructed three services, two of which allow the retrieval of DNA and RNA FASTA sequences with the third service providing nucleic acid sequence comparison capability (BLAST). Conclusions The need for semantic integration technologies has preceded available solutions. We

  14. A rule-based software test data generator

    NASA Technical Reports Server (NTRS)

    Deason, William H.; Brown, David B.; Chang, Kai-Hsiung; Cross, James H., II

    1991-01-01

    Rule-based software test data generation is proposed as an alternative to either path/predicate analysis or random data generation. A prototype rule-based test data generator for Ada programs is constructed and compared to a random test data generator. Four Ada procedures are used in the comparison. Approximately 2000 rule-based test cases and 100,000 randomly generated test cases are automatically generated and executed. The success of the two methods is compared using standard coverage metrics. Simple statistical tests showing that even the primitive rule-based test data generation prototype is significantly better than random data generation are performed. This result demonstrates that rule-based test data generation is feasible and shows great promise in assisting test engineers, especially when the rule base is developed further.
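    The rule-based idea can be illustrated with a minimal sketch: each rule maps a parameter's declared range to a small, targeted set of test inputs, in contrast to sampling uniformly at random. The rules below are invented for illustration; the original generator targeted Ada programs with a richer rule base:

```python
import random

# Illustrative rule-based test data generation vs. random generation.

def boundary_rule(lo, hi):
    # Rule: exercise range boundaries and their neighbours.
    return {lo, lo + 1, hi - 1, hi}

def zero_rule(lo, hi):
    # Rule: include zero whenever it lies in the range.
    return {0} if lo <= 0 <= hi else set()

RULES = [boundary_rule, zero_rule]

def rule_based_cases(lo, hi):
    """Union of all rule outputs: few cases, high coverage value."""
    cases = set()
    for rule in RULES:
        cases |= rule(lo, hi)
    return sorted(cases)

def random_cases(lo, hi, n):
    """Baseline: n uniformly random inputs from the same range."""
    return [random.randint(lo, hi) for _ in range(n)]

print(rule_based_cases(-10, 10))  # [-10, -9, 0, 9, 10]
```

The comparison in the abstract (about 2,000 rule-based vs. 100,000 random cases) reflects exactly this asymmetry: a handful of rule-selected values can match or beat a much larger random sample on coverage metrics.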

  15. A Semantic Grid Infrastructure Enabling Integrated Access and Knowledge Discovery from Multilevel Data in Post-Genomic Clinical Trials

    NASA Astrophysics Data System (ADS)

    Tsiknakis, Manolis; Sfakianakis, Stelios; Potamias, George; Zacharioudakis, Giorgos; Kafetzopoulos, Dimitris

    This paper reports on original results of the ACGT integrated project focusing on the design and development of a European Biomedical Grid infrastructure in support of multicentric, post-genomic clinical trials on cancer. The paper presents the needs of users involved in post-genomic CTs in the form of scenarios, which drive the requirements engineering phase of the project. Subsequently, the initial architecture specified by the project is presented and its services are classified and discussed. The Master Ontology on Cancer, being developed by the project, is presented, and our approach to developing the required metadata registries, which provide semantically rich information about available data and computational services, is described. Finally, a short discussion of the work lying ahead is included.

  16. A graphical, rule based robotic interface system

    NASA Technical Reports Server (NTRS)

    Mckee, James W.; Wolfsberger, John

    1988-01-01

    The ability of a human to take control of a robotic system is essential in any use of robots in space in order to handle unforeseen changes in the robot's work environment or scheduled tasks. But in cases in which the work environment is known, a human controlling a robot's every move by remote control is both time-consuming and frustrating. A system is needed in which the user can give the robotic system commands to perform tasks but need not tell the system how. To be useful, this system should be able to plan and perform the tasks faster than a telerobotic system. The interface between the user and the robot system must be natural and meaningful to the user. A high-level user interface program under development at the University of Alabama, Huntsville, is described. A graphical interface is proposed in which the user selects objects to be manipulated by selecting representations of the object on projections of a 3-D model of the work environment. The user may move in the work environment by changing the viewpoint of the projections. The interface uses a rule-based program to transform user selection of items on a graphics display of the robot's work environment into commands for the robot. The program first determines if the desired task is possible given the abilities of the robot and any constraints on the object. If the task is possible, the program determines what movements the robot needs to make to perform the task. The movements are transformed into commands for the robot. The information defining the robot, the work environment, and how objects may be moved is stored in a set of databases accessible to the program and displayable to the user.
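    The two-stage rule step described above (check feasibility, then translate a feasible task into robot commands) can be sketched as follows. The object properties, payload limit, and command names are all invented for illustration:

```python
# Hypothetical sketch: feasibility check, then command generation.

objects = {
    "wrench": {"mass_kg": 0.5, "fixed": False, "pos": (10, 5, 0)},
    "panel":  {"mass_kg": 40.0, "fixed": True,  "pos": (0, 2, 1)},
}

ROBOT_PAYLOAD_KG = 5.0  # assumed capability limit

def plan_move(name, target):
    obj = objects[name]
    # Rule 1: the task must be within the robot's abilities and the
    # object's constraints, or it is rejected outright.
    if obj["fixed"] or obj["mass_kg"] > ROBOT_PAYLOAD_KG:
        return None  # task not possible
    # Rule 2: a feasible task is expanded into a command sequence.
    return [("move_to", obj["pos"]), ("grasp", name),
            ("move_to", target), ("release", name)]

print(plan_move("panel", (0, 0, 0)))      # None: object is fixed
print(plan_move("wrench", (0, 0, 0))[0])  # ('move_to', (10, 5, 0))
```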

  17. Integrated Syntactic/Semantic XML Data Validation with a Reusable Software Component

    ERIC Educational Resources Information Center

    Golikov, Steven

    2013-01-01

    Data integration is a critical component of enterprise system integration, and XML data validation is the foundation for sound data integration of XML-based information systems. Since B2B e-commerce relies on data validation as one of the critical components for enterprise integration, it is imperative for financial industries and e-commerce…

  18. S3QL: A distributed domain specific language for controlled semantic integration of life sciences data

    PubMed Central

    2011-01-01

    Background The value and usefulness of data increases when it is explicitly interlinked with related data. This is the core principle of Linked Data. For life sciences researchers, harnessing the power of Linked Data to improve biological discovery is still challenged by a need to keep pace with rapidly evolving domains and requirements for collaboration and control as well as with the reference semantic web ontologies and standards. Knowledge organization systems (KOSs) can provide an abstraction for publishing biological discoveries as Linked Data without complicating transactions with contextual minutia such as provenance and access control. We have previously described the Simple Sloppy Semantic Database (S3DB) as an efficient model for creating knowledge organization systems using Linked Data best practices with explicit distinction between domain and instantiation and support for a permission control mechanism that automatically migrates between the two. In this report we present a domain specific language, the S3DB query language (S3QL), to operate on its underlying core model and facilitate management of Linked Data. Results Reflecting the data driven nature of our approach, S3QL has been implemented as an application programming interface for S3DB systems hosting biomedical data, and its syntax was subsequently generalized beyond the S3DB core model. This achievement is illustrated with the assembly of an S3QL query to manage entities from the Simple Knowledge Organization System. The illustrative use cases include gastrointestinal clinical trials, genomic characterization of cancer by The Cancer Genome Atlas (TCGA) and molecular epidemiology of infectious diseases. Conclusions S3QL was found to provide a convenient mechanism to represent context for interoperation between public and private datasets hosted at biomedical research institutions and linked data formalisms. PMID:21756325

  19. Semantic Sensor Web

    NASA Astrophysics Data System (ADS)

    Sheth, A.; Henson, C.; Thirunarayan, K.

    2008-12-01

    Sensors are distributed across the globe leading to an avalanche of data about our environment. It is possible today to utilize networks of sensors to detect and identify a multitude of observations, from simple phenomena to complex events and situations. The lack of integration and communication between these networks, however, often isolates important data streams and intensifies the existing problem of too much data and not enough knowledge. With a view to addressing this problem, the Semantic Sensor Web (SSW) [1] proposes that sensor data be annotated with semantic metadata that will both increase interoperability and provide contextual information essential for situational knowledge. Kno.e.sis Center's approach to SSW is an evolutionary one. It adds semantic annotations to the existing standard sensor languages of the Sensor Web Enablement (SWE) defined by OGC. These annotations enhance primarily syntactic XML-based descriptions in OGC's SWE languages with microformats, and W3C's Semantic Web languages, RDF and OWL. In association with semantic annotation and semantic web capabilities including ontologies and rules, SSW supports interoperability, analysis and reasoning over heterogeneous multi-modal sensor data. In this presentation, we will also demonstrate a mashup with support for complex spatio-temporal-thematic queries [2] and semantic analysis that utilize semantic annotations, multiple ontologies and rules. It uses existing services (e.g., GoogleMap) and a semantically enhanced version of SWE's Sensor Observation Service (SOS) over weather and road condition data from various sensors that are part of Ohio's transportation network. Our upcoming plans are to demonstrate end-to-end (heterogeneous sensor to application) semantics support and to study the scalability of SSW from thousands of sensors to about a billion triples.
Keywords: Semantic Sensor Web, Spatiotemporal thematic queries, Sensor Web Enablement, Sensor Observation Service [1] Amit Sheth, Cory Henson, Satya
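    The spatio-temporal-thematic querying described above can be sketched in miniature: once raw observations carry thematic, spatial, and temporal annotations, heterogeneous streams become uniformly queryable. The concept names below are illustrative, not the actual SSW or O&M vocabularies:

```python
# Toy sketch of semantic sensor annotation and querying.

observations = [
    {"value": -2.0, "theme": "weather:AirTemperature",
     "place": "I-70 mile 24", "time": "2008-12-01T06:00Z"},
    {"value": "icy", "theme": "road:SurfaceCondition",
     "place": "I-70 mile 24", "time": "2008-12-01T06:05Z"},
    {"value": 12.0, "theme": "weather:AirTemperature",
     "place": "I-70 mile 80", "time": "2008-12-01T06:00Z"},
]

def query(obs, theme=None, place=None):
    """Select observations matching thematic and spatial constraints;
    a None constraint matches everything."""
    return [o for o in obs
            if (theme is None or o["theme"] == theme)
            and (place is None or o["place"] == place)]

# Thematic + spatial query: everything observed at one road location.
hits = query(observations, place="I-70 mile 24")
print(len(hits))  # 2
```

In the real system the annotations live in RDF and the queries are expressed over ontologies rather than flat keys, but the selection principle is the same.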

  20. Integrated use of spatial and semantic relationships for extracting road networks from floating car data

    NASA Astrophysics Data System (ADS)

    Li, Jun; Qin, Qiming; Xie, Chao; Zhao, Yue

    2012-10-01

    The update frequency of digital road maps influences the quality of road-dependent services. However, digital road maps surveyed by probe vehicles or extracted from remotely sensed images still have a long updating cycle and their cost remains high. With GPS technology and wireless communication technology maturing and their costs decreasing, floating car technology has been used in traffic monitoring and management, and the dynamic positioning data from floating cars have become a new data source for updating road maps. In this paper, we aim to update digital road maps using the floating car data from China's National Commercial Vehicle Monitoring Platform, and present an incremental road network extraction method suitable for the platform's GPS data, whose sampling frequency is low and which cover a large area. Based on both spatial and semantic relationships between a trajectory point and its associated road segment, the method classifies each trajectory point, and then merges every trajectory point into the candidate road network through the adding or modifying process according to its type. The road network is gradually updated until all trajectories have been processed. Finally, this method is applied in the updating process of major roads in North China and the experimental results reveal that it can accurately derive geometric information of roads under various scenes. This paper provides a highly efficient, low-cost approach to updating digital road maps.
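    The classify-then-merge step can be sketched with the spatial part of the test: a trajectory point near an existing segment updates that segment, while a distant point seeds a new candidate road. The threshold and the pure point-to-segment distance are illustrative simplifications of the paper's combined spatial/semantic criteria:

```python
import math

# Rough sketch of incremental road extraction from trajectories.

def point_segment_distance(p, a, b):
    """Euclidean distance from point p to segment a-b."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    if dx == dy == 0:
        return math.hypot(px - ax, py - ay)
    t = ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)
    t = max(0.0, min(1.0, t))  # clamp to the segment
    return math.hypot(px - (ax + t * dx), py - (ay + t * dy))

def classify(point, road_segments, threshold=10.0):
    """'match': point lies near an existing segment (modify its
    geometry); 'new': point starts a new candidate road (add)."""
    d = min((point_segment_distance(point, a, b) for a, b in road_segments),
            default=float("inf"))
    return "match" if d <= threshold else "new"

segments = [((0, 0), (100, 0))]
print(classify((50, 4), segments))   # match
print(classify((50, 40), segments))  # new
```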

  1. Neuro-Semantics and Semantics.

    ERIC Educational Resources Information Center

    Holmes, Stewart W.

    1987-01-01

    Draws distinctions between the terms semantics (dealing with such verbal parameters as dictionaries and "laws" of logic and rhetoric), general semantics (semantics, plus the complex, dynamic, organismal properties of human beings and their physical environment), and neurosemantics (names for relations-based input from the neurosensory system, and…

  2. Rule-based fault-tolerant flight control

    NASA Technical Reports Server (NTRS)

    Handelman, Dave

    1988-01-01

    Fault tolerance has always been a desirable characteristic of aircraft. The ability to withstand unexpected changes in aircraft configuration has a direct impact on the ability to complete a mission effectively and safely. The possible synergistic effects of combining techniques of modern control theory, statistical hypothesis testing, and artificial intelligence in the attempt to provide failure accommodation for aircraft are investigated. This effort has resulted in the definition of a theory for rule based control and a system for development of such a rule based controller. Although presented here in response to the goal of aircraft fault tolerance, the rule based control technique is applicable to a wide range of complex control problems.

  3. Semantics by analogy for illustrative volume visualization

    PubMed Central

    Gerl, Moritz; Rautek, Peter; Isenberg, Tobias; Gröller, Eduard

    2012-01-01

    We present an interactive graphical approach for the explicit specification of semantics for volume visualization. This explicit and graphical specification of semantics for volumetric features allows us to visually assign meaning to both input and output parameters of the visualization mapping. This is in contrast to the implicit way of specifying semantics using transfer functions. In particular, we demonstrate how to realize a dynamic specification of semantics which allows users to flexibly explore a wide range of mappings. Our approach is based on three concepts. First, we use semantic shader augmentation to automatically add rule-based rendering functionality to static visualization mappings in a shader program, while preserving the visual abstraction that the initial shader encodes. With this technique we extend recent developments that define a mapping between data attributes and visual attributes with rules, which are evaluated using fuzzy logic. Second, we let users define the semantics by analogy through brushing on renderings of the data attributes of interest. Third, the rules are specified graphically in an interface that provides visual clues for potential modifications. Together, the presented methods offer a high degree of freedom in the specification and exploration of rule-based mappings and avoid the limitations of a linguistic rule formulation. PMID:23576827
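    A fuzzy rule of the kind the paper evaluates in shaders can be sketched as follows: membership functions grade the antecedents, fuzzy AND takes their minimum, and the rule's firing strength drives a visual attribute. The membership functions and attribute names here are invented for illustration:

```python
# Sketch of a fuzzy rule mapping data attributes to a visual attribute.

def high(x):
    """Fuzzy membership: 'value is high' (ramp from 0.5 to 1.0)."""
    return max(0.0, min(1.0, (x - 0.5) / 0.5))

def large_gradient(g):
    """Fuzzy membership: 'gradient magnitude is large'."""
    return max(0.0, min(1.0, g))

def rule_opacity(density, gradient):
    """IF density IS high AND gradient IS large THEN opacity IS high.
    Fuzzy AND as min; the firing strength becomes the opacity."""
    return min(high(density), large_gradient(gradient))

print(round(rule_opacity(0.9, 0.7), 2))  # 0.7
print(round(rule_opacity(0.4, 0.7), 2))  # 0.0 (density not high)
```

In the actual system such rules are compiled into shader code and the memberships are brushed graphically rather than written by hand.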

  4. A case study: semantic integration of gene-disease associations for type 2 diabetes mellitus from literature and biomedical data resources.

    PubMed

    Rebholz-Schuhmann, Dietrich; Grabmüller, Christoph; Kavaliauskas, Silvestras; Croset, Samuel; Woollard, Peter; Backofen, Rolf; Filsell, Wendy; Clark, Dominic

    2014-07-01

    In the Semantic Enrichment of the Scientific Literature (SESL) project, researchers from academia and from life science and publishing companies collaborated in a pre-competitive way to integrate and share information for type 2 diabetes mellitus (T2DM) in adults. This case study exposes benefits from semantic interoperability after integrating the scientific literature with biomedical data resources, such as UniProt Knowledgebase (UniProtKB) and the Gene Expression Atlas (GXA). We annotated scientific documents in a standardized way, by applying public terminological resources for diseases and proteins, and other text-mining approaches. Eventually, we compared the genetic causes of T2DM across the data resources to demonstrate the benefits from the SESL triple store. Our solution enables publishers to distribute their content with little overhead into remote data infrastructures, such as into any Virtual Knowledge Broker. PMID:24201223

  5. Bounded-time fault-tolerant rule-based systems

    NASA Technical Reports Server (NTRS)

    Browne, James C.; Emerson, Allen; Gouda, Mohamed; Miranker, Daniel; Mok, Aloysius; Rosier, Louis

    1990-01-01

    Two systems concepts are introduced: bounded response-time and self-stabilization in the context of rule-based programs. These concepts are essential for the design of rule-based programs which must be highly fault tolerant and perform in a real time environment. The mechanical analysis of programs for these two properties is discussed. The techniques are used to analyze a NASA application.
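    The bounded response-time property can be illustrated schematically: run a rule program to a fixed point and flag it if it fails to stabilize within a given iteration bound. The rules and the bound below are toy examples, not the paper's analysis technique:

```python
# Schematic check of rule-program stabilization within a bound.

def run_to_fixed_point(facts, rules, max_steps):
    """Apply rules until no rule changes the fact set; report whether
    stabilization occurred within max_steps iterations."""
    facts = set(facts)
    for _ in range(max_steps):
        new = set()
        for condition, conclusion in rules:
            if condition <= facts:  # all antecedents hold
                new.add(conclusion)
        if new <= facts:
            return facts, True   # stabilized: no rule adds anything
        facts |= new
    return facts, False          # bound exceeded

rules = [({"a"}, "b"), ({"b"}, "c")]
facts, stable = run_to_fixed_point({"a"}, rules, max_steps=5)
print(stable, "c" in facts)  # True True
```

The paper's contribution is a mechanical (static) analysis guaranteeing such bounds hold for all inputs; the sketch above only tests one run.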

  6. Developing a semantic web model for medical differential diagnosis recommendation.

    PubMed

    Mohammed, Osama; Benlamri, Rachid

    2014-10-01

    In this paper we describe a novel model for differential diagnosis designed to make recommendations by utilizing semantic web technologies. The model is a response to a number of requirements, ranging from incorporating essential clinical diagnostic semantics to the integration of data mining for the process of identifying candidate diseases that best explain a set of clinical features. We introduce two major components, which we find essential to the construction of an integral differential diagnosis recommendation model: the evidence-based recommender component and the proximity-based recommender component. Both approaches are driven by disease diagnosis ontologies designed specifically to enable the process of generating diagnostic recommendations. These ontologies are the disease symptom ontology and the patient ontology. The evidence-based diagnosis process develops dynamic rules based on standardized clinical pathways. The proximity-based component employs data mining to provide clinicians with diagnosis predictions, as well as generates new diagnosis rules from provided training datasets. This article describes the integration between these two components along with the developed diagnosis ontologies to form a novel medical differential diagnosis recommendation model. This article also provides test cases from the implementation of the overall model, which shows quite promising diagnostic recommendation results. PMID:25178271
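    The proximity-based component can be sketched with a simple similarity ranking: candidate diseases are ordered by the overlap between a patient's clinical features and each disease's known symptom profile. The symptom sets and the Jaccard measure are toy stand-ins for the paper's ontology-driven data mining:

```python
# Hypothetical sketch of proximity-based diagnosis ranking.

disease_symptoms = {
    "influenza":   {"fever", "cough", "fatigue", "headache"},
    "common_cold": {"cough", "sneezing", "sore_throat"},
    "migraine":    {"headache", "nausea", "photophobia"},
}

def rank_candidates(patient_features):
    """Rank diseases by Jaccard similarity to the patient's features."""
    def jaccard(a, b):
        return len(a & b) / len(a | b)
    scores = {d: jaccard(patient_features, s)
              for d, s in disease_symptoms.items()}
    return sorted(scores, key=scores.get, reverse=True)

print(rank_candidates({"fever", "cough", "headache"})[0])  # influenza
```

In the full model this ranking is combined with evidence-based rules derived from standardized clinical pathways before a recommendation is shown to the clinician.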

  7. Overcoming deficiencies of the rule-based medical expert system.

    PubMed

    Hughes, C A; Gose, E E; Roseman, D L

    1990-05-01

    One of the current deficiencies of the rule-based expert system is its static nature. As these systems are applied to medicine, this shortcoming becomes accentuated by: the rapid speed at which new knowledge is generated, the regional differences associated with the expression of many diseases, and the rate at which patient demographics and disease incidence change over time. This research presents a solution to the static nature of the rule-based expert system by proposing a hybrid system. This system consists of an expert system and a statistical analysis system linked to a patient database. The additional feature of a rule base manager which initiates automatic database analysis to refresh the statistical correlation of each rule ensures a dynamic, current, statistically accurate rule base. The philosophical differences between data and knowledge are also addressed as they apply to this type of hybrid system. The system is then used to generate four rule bases from different knowledge sources. These rule bases are then compared. PMID:2401135
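    The rule base manager's refresh step can be sketched as recomputing each rule's statistical support from the patient database, so the rule base tracks changing demographics and disease incidence. The records and the single rule below are toy examples:

```python
# Sketch of refreshing a rule's confidence from a patient database.

patients = [
    {"smoker": True,  "disease": True},
    {"smoker": True,  "disease": True},
    {"smoker": True,  "disease": False},
    {"smoker": False, "disease": False},
]

def refresh_confidence(records, condition, conclusion):
    """Confidence of 'IF condition THEN conclusion' recomputed as
    P(conclusion | condition) over the current database."""
    matching = [r for r in records if condition(r)]
    if not matching:
        return 0.0
    return sum(1 for r in matching if conclusion(r)) / len(matching)

conf = refresh_confidence(patients,
                          condition=lambda r: r["smoker"],
                          conclusion=lambda r: r["disease"])
print(round(conf, 2))  # 0.67
```

Rerunning this analysis as new patient records arrive is what keeps the hybrid system's rule base "dynamic, current, statistically accurate" in the abstract's terms.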

  8. Automated implementation of rule-based expert systems with neural networks for time-critical applications

    NASA Technical Reports Server (NTRS)

    Ramamoorthy, P. A.; Huang, Song; Govind, Girish

    1991-01-01

    In fault diagnosis, control and real-time monitoring, both timing and accuracy are critical for operators or machines to reach proper solutions or appropriate actions. Expert systems are becoming more popular in the manufacturing community for dealing with such problems. In recent years, neural networks have experienced a revival and their applications have spread to many areas of science and engineering. A method of using neural networks to implement rule-based expert systems for time-critical applications is discussed here. This method can convert a given rule-based system into a neural network with fixed weights and thresholds. The rules governing the translation are presented along with some examples. We also present the results of automated machine implementation of such networks from the given rule base. This significantly simplifies the translation process to neural network expert systems from conventional rule-based systems. Results comparing the performance of the proposed approach based on neural networks vs. the classical approach are given. The possibility of very large scale integration (VLSI) realization of such neural network expert systems is also discussed.
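    One standard construction behind this kind of translation (a schematic sketch, not necessarily the paper's exact scheme) turns an AND of n antecedents into a threshold unit with unit weights and threshold n - 0.5, so the unit fires exactly when all antecedents are true:

```python
# Rule-to-threshold-unit translation: IF a1 AND ... AND an THEN c.

def threshold_unit(inputs, weights, threshold):
    """Fire (output 1) iff the weighted input sum exceeds the threshold."""
    return 1 if sum(w * x for w, x in zip(weights, inputs)) > threshold else 0

def rule_to_unit(n_antecedents):
    """Fixed weights all 1.0, threshold n - 0.5: the unit fires only
    when every antecedent input is 1."""
    return [1.0] * n_antecedents, n_antecedents - 0.5

weights, theta = rule_to_unit(3)
print(threshold_unit([1, 1, 1], weights, theta))  # 1 (rule fires)
print(threshold_unit([1, 1, 0], weights, theta))  # 0
```

Because the weights and thresholds are fixed at translation time, the resulting network needs no training and evaluates all rules in parallel, which is the source of the speed-up for time-critical applications.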

  9. Techniques and implementation of the embedded rule-based expert system using Ada

    NASA Technical Reports Server (NTRS)

    Liberman, Eugene M.; Jones, Robert E.

    1991-01-01

    Ada is becoming an increasingly popular programming language for large Government-funded software projects. Ada with its portability, transportability, and maintainability lends itself well to today's complex programming environment. In addition, expert systems have also assumed a growing role in providing human-like reasoning capability and expertise for computer systems. The integration of expert system technology with the Ada programming language, specifically a rule-based expert system using an ART-Ada (Automated Reasoning Tool for Ada) system shell, is discussed. The NASA Lewis Research Center was chosen as a beta test site for ART-Ada. The test was conducted by implementing the existing Autonomous Power EXpert System (APEX), a Lisp-based power expert system, in ART-Ada. Three components, the rule-based expert system, a graphics user interface, and communications software, make up SMART-Ada (Systems fault Management with ART-Ada). The main objective, to conduct a beta test on the ART-Ada rule-based expert system shell, was achieved. The system is operational. New Ada tools will assist in future successful projects. ART-Ada is one such tool and is a viable alternative to straight Ada code when an application requires a rule-based or knowledge-based approach.

  10. The Development of the Ability to Semantically Integrate Information in Speech and Iconic Gesture in Comprehension

    ERIC Educational Resources Information Center

    Sekine, Kazuki; Sowden, Hannah; Kita, Sotaro

    2015-01-01

    We examined whether children's ability to integrate speech and gesture follows the pattern of a broader developmental shift between 3- and 5-year-old children (Ramscar & Gitcho, 2007) regarding the ability to process two pieces of information simultaneously. In Experiment 1, 3-year-olds, 5-year-olds, and adults were presented with either an…

  11. Semantic Domain-Specific Functional Integration for Action-Related vs. Abstract Concepts

    ERIC Educational Resources Information Center

    Ghio, Marta; Tettamanti, Marco

    2010-01-01

    A central topic in cognitive neuroscience concerns the representation of concepts and the specific neural mechanisms that mediate conceptual knowledge. Recently proposed modal theories assert that concepts are grounded on the integration of multimodal, distributed representations. The aim of the present work is to complement the available…

  12. A Semantic Analysis of XML Schema Matching for B2B Systems Integration

    ERIC Educational Resources Information Center

    Kim, Jaewook

    2011-01-01

    One of the most critical steps to integrating heterogeneous e-Business applications using different XML schemas is schema matching, which is known to be costly and error-prone. Many automatic schema matching approaches have been proposed, but the challenge is still daunting because of the complexity of schemas and immaturity of technologies in…

  13. Case Study for Integration of an Oncology Clinical Site in a Semantic Interoperability Solution based on HL7 v3 and SNOMED-CT: Data Transformation Needs.

    PubMed

    Ibrahim, Ahmed; Bucur, Anca; Perez-Rey, David; Alonso, Enrique; de Hoog, Matthy; Dekker, Andre; Marshall, M Scott

    2015-01-01

    This paper describes the data transformation pipeline defined to support the integration of a new clinical site in a standards-based semantic interoperability environment. The available datasets combined structured and free-text patient data in Dutch, collected in the context of radiation therapy in several cancer types. Our approach aims at both efficiency and data quality. We combine custom-developed scripts, standard tools and manual validation by clinical and knowledge experts. We identified key challenges emerging from the several sources of heterogeneity in our case study (systems, language, data structure, clinical domain) and implemented solutions that we will further generalize for the integration of new sites. We conclude that the required effort for data transformation is manageable which supports the feasibility of our semantic interoperability solution. The achieved semantic interoperability will be leveraged for the deployment and evaluation at the clinical site of applications enabling secondary use of care data for research. This work has been funded by the European Commission through the INTEGRATE (FP7-ICT-2009-6-270253) and EURECA (FP7-ICT-2011-288048) projects. PMID:26306242

  14. Geo-Semantic Framework for Integrating Long-Tail Data and Model Resources for Advancing Earth System Science

    NASA Astrophysics Data System (ADS)

    Elag, M.; Kumar, P.

    2014-12-01

    Often, scientists and small research groups collect data that target specific issues and have limited geographic or temporal range. A large number of such collections together constitute a large database that is of immense value to Earth Science studies. Complexities of integrating these data include heterogeneity in dimensions, coordinate systems, scales, variables, providers, users and contexts. They have been defined as long-tail data. Similarly, we use "long-tail models" to characterize a heterogeneous collection of models and/or modules developed for targeted problems by individuals and small groups, which together provide a large valuable collection. Complexities of integrating across these models include differing variable names and units for the same concept, model runs at different time steps and spatial resolution, use of differing naming and reference conventions, etc. The ability to "integrate long-tail models and data" will provide an opportunity for the interoperability and reusability of communities' resources, where not only can models be combined in a workflow, but each model will be able to discover and (re)use data in an application-specific context of space, time and questions. This capability is essential to represent, understand, predict, and manage heterogeneous and interconnected processes and activities by harnessing the complex, heterogeneous, and extensive set of distributed resources. Because of the staggering production rate of long-tail models and data resulting from advances in computational, sensing, and information technologies, an important challenge arises: how can geoinformatics bring together these resources seamlessly, given the inherent complexity among model and data resources that span various domains. We will present a semantic-based framework to support integration of "long-tail" models and data.
This builds on existing technologies including: (i) SEAD (Sustainable Environmental Actionable Data) which supports curation

  15. The Role of Semantics in Open-World, Integrative, Collaborative Science Data Platforms

    NASA Astrophysics Data System (ADS)

    Fox, Peter; Chen, Yanning; Wang, Han; West, Patrick; Erickson, John; Ma, Marshall

    2014-05-01

    As collaborative science spreads into more and more Earth and space science fields, both participants and funders are expressing stronger needs for highly functional data and information capabilities. Characteristics include a) easy to use, b) highly integrated, c) leverage investments, d) accommodate rapid technical change, and e) do not incur undue expense or time to build or maintain - this is not a small set of requirements. Based on our accumulated experience over roughly the last decade and several key technical approaches, we adapt, extend, and integrate several open source applications and frameworks to handle major portions of functionality for these platforms. This includes: an object-type repository, collaboration tools, identity management, all within a portal managing diverse content and applications. In this contribution, we present our methods and results of information models, adaptation, integration and evolution of a networked data science architecture based on several open source technologies (Drupal, VIVO, the Comprehensive Knowledge Archive Network (CKAN), and the Global Handle System (GHS)). In particular we present the Deep Carbon Observatory - a platform for international science collaboration. We present and discuss key functional and non-functional attributes, and discuss the general applicability of the platform.

  16. Integration of the ProActive Suite and the semantic-oriented monitoring tool SemMon

    NASA Astrophysics Data System (ADS)

    Funika, Wlodzimierz; Caromel, Denis; Koperek, Pawel; Kupisz, Mateusz

    In this paper we present our semantic-based approach to the monitoring of distributed applications built with the ProActive Parallel Suite framework. It is based on a semantic description of what is to be monitored, its measurable capabilities, and related operations. We explore the ability to adapt a semantic-oriented monitoring tool, SemMon, to ProActive. The latter provides a stable environment for development of parallel applications, while SemMon is aimed at semantic-oriented performance monitoring support, originally designed for distributed Java applications. We introduce a uniform monitoring environment model which describes the resources provided by ProActive and supports JMX-based notifications. A sample monitoring session is provided as well as plans for further research.

  17. Changes in Knowledge Structures from Building Semantic Net versus Production Rule Representations of Subject Content.

    ERIC Educational Resources Information Center

    Jonassen, David H.

    1993-01-01

    Compares the effects on the knowledge structure of the learners of using two different Mindtools--semantic networks and rule-based expert systems--for representing the content of a course. Results showed that students in the semantic network class possessed more hierarchical knowledge structures than the other group. (Contains 29 references.) (JLB)

  18. The Enterprise Data Trust at Mayo Clinic: a semantically integrated warehouse of biomedical data

    PubMed Central

    Beck, Scott A; Fisk, Thomas B; Mohr, David N

    2010-01-01

    Mayo Clinic's Enterprise Data Trust is a collection of data from patient care, education, research, and administrative transactional systems, organized to support information retrieval, business intelligence, and high-level decision making. Structurally it is a top-down, subject-oriented, integrated, time-variant, and non-volatile collection of data in support of Mayo Clinic's analytic and decision-making processes. It is an interconnected piece of Mayo Clinic's Enterprise Information Management initiative, which also includes Data Governance, Enterprise Data Modeling, the Enterprise Vocabulary System, and Metadata Management. These resources enable unprecedented organization of enterprise information about patient, genomic, and research data. While facile access for cohort definition or aggregate retrieval is supported, a high level of security, retrieval audit, and user authentication ensures privacy, confidentiality, and respect for the trust imparted by our patients for the respectful use of information about their conditions. PMID:20190054

  20. Semantic Bim and GIS Modelling for Energy-Efficient Buildings Integrated in a Healthcare District

    NASA Astrophysics Data System (ADS)

    Sebastian, R.; Böhms, H. M.; Bonsma, P.; van den Helm, P. W.

    2013-09-01

    The subject of energy-efficient buildings (EeB) is among the most urgent research priorities in the European Union (EU). In order to achieve the broadest impact, innovative approaches to EeB need to resolve challenges at the neighbourhood level, instead of only focusing on improvements of individual buildings. For this purpose, the design phase of new building projects as well as building retrofitting projects is the crucial moment for integrating multi-scale EeB solutions. In the EeB design process, clients, architects, technical designers, contractors, and end-users altogether need new methods and tools for designing energy-efficient buildings integrated in their neighbourhoods. Since the scope of designing covers multiple dimensions, the new design methodology relies on the inter-operability between Building Information Modelling (BIM) and Geospatial Information Systems (GIS). Design for EeB optimisation needs to pay attention to the inter-connections between the architectural systems and the MEP/HVAC systems, as well as to the relation of Product Lifecycle Modelling (PLM), Building Management Systems (BMS), BIM and GIS. This paper is descriptive; it presents an actual EU FP7 large-scale collaborative research project titled STREAMER. The research on the inter-operability between BIM and GIS for holistic design of energy-efficient buildings at the neighbourhood scale is supported by real case studies of mixed-use healthcare districts. The new design methodology encompasses all scales and all lifecycle phases of the built environment, as well as the whole lifecycle of the information models, comprising the Building Information Model (BIM), Building Assembly Model (BAM), Building Energy Model (BEM), and Building Operation Optimisation Model (BOOM).

  1. An integrated engineering simulation environment

    SciTech Connect

    Alvarado, F.L.; Lasseter, R.H.; Liu, Y.

    1988-02-01

    This paper presents an implementation of a new concept, the Integrated Engineering Simulation Environment (IESE). At the core of the IESE is an object-oriented database system which uses semantic data models and graphics-oriented manipulations. An on-line rule-based expert system is incorporated to enforce constraints on connections. Examples of application of the IESE to Electromagnetic Transient Simulations are presented. The main result of the paper is to establish the generality of this new approach to engineering software development, and to show that extremely diverse applications (including graphics interfaces) can be accommodated by simple modifications to database schemata, without reprogramming.
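The connection-constraint enforcement described above can be sketched as a small rule base checked on every attempted connection. The component attributes and the two rules below are invented for illustration and are not the IESE's actual schema:

```python
# Hypothetical sketch of on-line rule-based constraint enforcement on
# connections between modeled components. Attribute names ("kind",
# "terminal_voltage") and the rules themselves are illustrative assumptions.

RULES = [
    # Rule 1: two sources may not be connected directly to each other.
    lambda a, b: not (a["kind"] == "source" and b["kind"] == "source"),
    # Rule 2: connected terminals must agree on voltage rating.
    lambda a, b: a["terminal_voltage"] == b["terminal_voltage"],
]

def connection_allowed(a, b):
    """A connection is accepted only if every rule in the base holds."""
    return all(rule(a, b) for rule in RULES)
```

Because the checks live in a rule base rather than in application code, new constraints can be added without reprogramming, which mirrors the paper's claim about modifying schemata instead of code.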

  2. Context-Based Semantic Annotations in CoPEs: An Ontological and Rule-Based Approach

    ERIC Educational Resources Information Center

    Boudebza, Souâad; Berkani, Lamia; Azouaou, Faiçal

    2013-01-01

    Knowledge capitalization is one of many problems facing online communities of practice (CoPs). Knowledge accumulated through participation in the community must be capitalized for future reuse. Most proposals are specific and focus on knowledge modeling, disregarding the reuse of that knowledge. In this paper, we are particularly interested…

  3. A Semantic Rule-Based Framework for Efficient Retrieval of Educational Materials

    ERIC Educational Resources Information Center

    Mahmoudi, Maryam Tayefeh; Taghiyareh, Fattaneh; Badie, Kambiz

    2013-01-01

    Retrieving resources in an appropriate manner has a promising role in increasing the performance of educational support systems. A variety of works have been done to organize materials for educational purposes using tagging techniques. Despite the effectiveness of these techniques within certain domains, organizing resources in a way being…

  4. Semantic Desktop

    NASA Astrophysics Data System (ADS)

    Sauermann, Leo; Kiesel, Malte; Schumacher, Kinga; Bernardi, Ansgar

    This contribution shows what the workplace of the future could look like and where the Semantic Web opens up new possibilities. To this end, approaches from the areas of the Semantic Web, knowledge representation, desktop applications, and visualization are presented which make it possible to reinterpret and reuse a user's existing data. The combination of the Semantic Web with desktop computers brings particular advantages, a paradigm known under the title Semantic Desktop. The possibilities for application integration described here are not limited to the desktop, however, but can equally be used in web applications.

  5. Semantic Mapping.

    ERIC Educational Resources Information Center

    Johnson, Dale D.; And Others

    1986-01-01

    Describes semantic mapping, an effective strategy for vocabulary instruction that involves the categorical structuring of information in graphic form and requires students to relate new words to their own experience and prior knowledge. (HOD)

  6. Optimal Test Design with Rule-Based Item Generation

    ERIC Educational Resources Information Center

    Geerlings, Hanneke; van der Linden, Wim J.; Glas, Cees A. W.

    2013-01-01

    Optimal test-design methods are applied to rule-based item generation. Three different cases of automated test design are presented: (a) test assembly from a pool of pregenerated, calibrated items; (b) test generation on the fly from a pool of calibrated item families; and (c) test generation on the fly directly from calibrated features defining…

  7. Rule-Based Category Learning in Down Syndrome

    ERIC Educational Resources Information Center

    Phillips, B. Allyson; Conners, Frances A.; Merrill, Edward; Klinger, Mark R.

    2014-01-01

    Rule-based category learning was examined in youths with Down syndrome (DS), youths with intellectual disability (ID), and typically developing (TD) youths. Two tasks measured category learning: the Modified Card Sort task (MCST) and the Concept Formation test of the Woodcock-Johnson-III (Woodcock, McGrew, & Mather, 2001). In regression-based…

  8. Rule based fuzzy logic approach for classification of fibromyalgia syndrome.

    PubMed

    Arslan, Evren; Yildiz, Sedat; Albayrak, Yalcin; Koklukaya, Etem

    2016-06-01

    Fibromyalgia syndrome (FMS) is a chronic muscle and skeletal system disease observed generally in women, manifesting itself with widespread pain and impairing the individual's quality of life. FMS diagnosis is made based on the American College of Rheumatology (ACR) criteria. However, recently the employability and sufficiency of the ACR criteria have come under debate. In this context, several evaluation methods, including clinical evaluation methods, were proposed by researchers. Accordingly, the ACR had to update its criteria, announced back in 1990, in 2010 and 2011. The proposed rule-based fuzzy logic method aims to evaluate FMS from a different angle as well. This method contains a rule base derived from the 1990 ACR criteria and the individual experiences of specialists. The study was conducted using data collected from 60 inpatients and 30 healthy volunteers. Several tests and physical examinations were administered to the participants. The fuzzy logic rule base was structured using the parameters of tender point count, chronic widespread pain period, pain severity, fatigue severity and sleep disturbance level, which were deemed important in FMS diagnosis. The fuzzy predictor was generally observed to be 95.56% consistent with at least one of the specialists who were not creators of the fuzzy rule base. Thus, in diagnosis classification, where the severity of FMS was classified as well, consistent findings were obtained from the comparison of the interpretations and experiences of specialists and the fuzzy logic approach. The study proposes a rule base which could eliminate the shortcomings of the 1990 ACR criteria during the FMS evaluation process. Furthermore, the proposed method presents a classification of the severity of the disease, which was not available with the ACR criteria. The study was not limited to disease classification; at the same time the probability of occurrence and severity were classified. In addition, those who were not suffering from FMS were
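The shape of such a rule-based fuzzy classifier can be sketched briefly. The membership breakpoints, the choice of only two inputs, and the three rules below are illustrative assumptions, not the authors' actual rule base:

```python
# Minimal rule-based fuzzy classification sketch in the spirit of the approach
# described. All breakpoints and rules are hypothetical.

def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def classify_fms(tender_points, pain_severity):
    # Fuzzification of the two (of five) inputs used in this sketch.
    tp_high = tri(tender_points, 8, 14, 20)
    tp_low = tri(tender_points, -1, 4, 11)
    pain_high = tri(pain_severity, 5, 8, 11)
    pain_low = tri(pain_severity, -1, 2, 6)

    # Rule base: AND is min, OR is max (Mamdani-style inference).
    severe = min(tp_high, pain_high)          # IF tp high AND pain high
    mild = min(tp_low, pain_low)              # IF tp low  AND pain low
    moderate = max(min(tp_high, pain_low),    # IF exactly one is high
                   min(tp_low, pain_high))

    scores = {"severe": severe, "moderate": moderate, "mild": mild}
    return max(scores, key=scores.get)
```

A real system would add the remaining inputs (pain period, fatigue, sleep disturbance) and defuzzify to a severity score rather than pick the strongest label.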

  9. Application of rule based methods to predicting storm surge

    NASA Astrophysics Data System (ADS)

    Royston, S. J.; Horsburgh, K. J.; Lawry, J.

    2012-04-01

    The accurate forecast of storm surge, the long-wavelength sea level response to meteorological forcing, is imperative for flood warning purposes. There remain regions of the world where operational forecast systems have not been developed, and in these locations it is worthwhile considering numerically simpler, data-driven techniques to provide operational services. In this paper, we investigate the applicability of a class of data-driven methods referred to as rule-based models to the problem of forecasting storm surge. The accuracy of the rule-based model is found to be comparable to several alternative data-driven techniques, all of which result in marginally worse but acceptable forecasts compared with the UK's operational hydrodynamic forecast model, given the reduction in computational effort. Promisingly, the rule-based model is skillful in forecasting total water levels above a given flood warning threshold, with a Brier Skill Score of 0.58 against a climatological forecast (the operational storm surge system has a Brier Skill Score of up to 0.75 for the same data set). The structure of the model can be interrogated as IF-THEN rules, and we find that the model structure in this case is consistent with our understanding of the physical system. Furthermore, the rule-based approach provides probabilistic forecasts of storm surge, which are much more informative to flood warning managers than alternative approaches. The rule-based model therefore provides reasonably skillful forecasts in comparison with the operational forecast model, for a significant reduction in development and run time. It is thus an appropriate data-driven approach that could be employed to forecast storm surge in regions of the world where a fully fledged hydrodynamic forecast system does not exist, provided good observations and a meteorological forecast are available.

  10. Benefits and Costs of Lexical Decomposition and Semantic Integration during the Processing of Transparent and Opaque English Compounds

    ERIC Educational Resources Information Center

    Ji, Hongbo; Gagne, Christina L.; Spalding, Thomas L.

    2011-01-01

    Six lexical decision experiments were conducted to examine the influence of complex structure on the processing speed of English compounds. All experiments revealed that semantically transparent compounds (e.g., "rosebud") were processed more quickly than matched monomorphemic words (e.g., "giraffe"). Opaque compounds (e.g., "hogwash") were also…

  11. Using Eye Tracking to Investigate Semantic and Spatial Representations of Scientific Diagrams during Text-Diagram Integration

    ERIC Educational Resources Information Center

    Jian, Yu-Cin; Wu, Chao-Jung

    2015-01-01

    We investigated strategies used by readers when reading a science article with a diagram and assessed whether semantic and spatial representations were constructed while reading the diagram. Seventy-one undergraduate participants read a scientific article while tracking their eye movements and then completed a reading comprehension test. Our…

  12. Reading Development Electrified: Semantic and Syntactic Integration during Sentence Comprehension in School-Age Children and Young Adults

    ERIC Educational Resources Information Center

    VanDyke, Justine M.

    2011-01-01

    Adults are able to access semantic and syntactic information rapidly as they hear or read in real-time in order to interpret sentences. Young children, on the other hand, tend to rely on syntactically-based parsing routines, adopting the first noun as the agent of a sentence regardless of plausibility, at least during oral comprehension. Little is…

  13. SSWAP: A Simple Semantic Web Architecture and Protocol for Semantic Web Services

    Technology Transfer Automated Retrieval System (TEKTRAN)

    SSWAP (Simple Semantic Web Architecture and Protocol) is an architecture, protocol, and platform for using reasoning to semantically integrate heterogeneous disparate data and services on the web. SSWAP is the driving technology behind the Virtual Plant Information Network, an NSF-funded semantic w...

  14. Spatial rule-based modeling: a method and its application to the human mitotic kinetochore.

    PubMed

    Ibrahim, Bashar; Henze, Richard; Gruenert, Gerd; Egbert, Matthew; Huwald, Jan; Dittrich, Peter

    2013-01-01

    A common problem in the analysis of biological systems is the combinatorial explosion that emerges from the complexity of multi-protein assemblies. Conventional formalisms, like differential equations, Boolean networks and Bayesian networks, are unsuitable for dealing with the combinatorial explosion, because they are designed for a restricted state space with fixed dimensionality. To overcome this problem, the rule-based modeling language, BioNetGen, and the spatial extension, SRSim, have been developed. Here, we describe how to apply rule-based modeling to integrate experimental data from different sources into a single spatial simulation model and how to analyze the output of that model. The starting point for this approach can be a combination of molecular interaction data, reaction network data, proximities, binding and diffusion kinetics and molecular geometries at different levels of detail. We describe the technique and then use it to construct a model of the human mitotic inner and outer kinetochore, including the spindle assembly checkpoint signaling pathway. This allows us to demonstrate the utility of the procedure, show how a novel perspective for understanding such complex systems becomes accessible and elaborate on challenges that arise in the formulation, simulation and analysis of spatial rule-based models. PMID:24709796
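The core idea, that a single rule implicitly defines many species and reactions, can be sketched in plain Python rather than in BioNetGen's own language. Here one hypothetical "phosphorylate any free site" rule, applied to a three-site protein, generates the full 2^3-species network from an unmodified seed:

```python
# Sketch of rule-based network generation: species are never enumerated by
# hand; they emerge from repeatedly applying one rule. The protein, its three
# sites, and the rule are invented for illustration (this is not BNGL syntax).

N_SITES = 3

def apply_rule(species):
    """Rule: any unphosphorylated site may become phosphorylated.
    A species is represented as the frozenset of its phosphorylated sites."""
    return {species | {s} for s in range(N_SITES) if s not in species}

def generate_network(seed):
    """Breadth-first expansion of the implied reaction network."""
    species, reactions, frontier = {seed}, [], [seed]
    while frontier:
        reactant = frontier.pop()
        for product in apply_rule(reactant):
            reactions.append((reactant, product))
            if product not in species:
                species.add(product)
                frontier.append(product)
    return species, reactions

species, reactions = generate_network(frozenset())
```

One rule yields 8 species and 12 reactions here; with more sites or binding partners the network grows combinatorially, which is exactly why tools like SRSim avoid generating it explicitly.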

  16. Live Social Semantics

    NASA Astrophysics Data System (ADS)

    Alani, Harith; Szomszor, Martin; Cattuto, Ciro; van den Broeck, Wouter; Correndo, Gianluca; Barrat, Alain

    Social interactions are one of the key factors in the success of conferences and similar community gatherings. This paper describes a novel application that integrates data from the semantic web, online social networks, and a real-world contact sensing platform. This application was successfully deployed at ESWC09 and actively used by 139 people. Personal profiles of the participants were automatically generated using several Web 2.0 systems and semantic academic data sources, and integrated in real-time with face-to-face contact networks derived from wearable sensors. Integration of all these heterogeneous data layers made it possible to offer conference attendees various services to enhance their social experience, such as visualisation of contact data and a site to explore and connect with other participants. This paper describes the architecture of the application, the services we provided, and the results we achieved in this deployment.

  17. Automated rule-base creation via CLIPS-Induce

    NASA Technical Reports Server (NTRS)

    Murphy, Patrick M.

    1994-01-01

    Many CLIPS rule-bases contain one or more rule groups that perform classification. In this paper we describe CLIPS-Induce, an automated system for the creation of a CLIPS classification rule-base from a set of test cases. CLIPS-Induce consists of two components: a decision tree induction component and a CLIPS production extraction component. ID3, a popular decision tree induction algorithm, is used to induce a decision tree from the test cases. CLIPS production extraction is accomplished through a top-down traversal of the decision tree. Nodes of the tree are used to construct query rules, and branches of the tree are used to construct classification rules. The learned CLIPS productions may easily be incorporated into a larger CLIPS system that performs tasks such as accessing a database or displaying information.
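The extraction step described above can be sketched as a top-down traversal that turns each root-to-leaf path of an induced tree into one IF-THEN classification rule. The tree below is a hypothetical example, and plain Python tuples stand in for CLIPS productions:

```python
# Sketch of branch-to-rule extraction from a decision tree. The weather tree is
# a classic toy example, not CLIPS-Induce's actual data structure or syntax.

tree = ("outlook", {                     # internal node: (attribute, branches)
    "sunny": ("humidity", {"high": "no", "normal": "yes"}),
    "overcast": "yes",                   # leaf: class label
    "rain": ("wind", {"strong": "no", "weak": "yes"}),
})

def extract_rules(node, conditions=()):
    """Top-down traversal; each root-to-leaf path becomes one
    (conditions, class) rule, i.e. IF conditions THEN class."""
    if isinstance(node, str):
        return [(conditions, node)]
    attribute, branches = node
    rules = []
    for value, child in branches.items():
        rules.extend(extract_rules(child, conditions + ((attribute, value),)))
    return rules

rules = extract_rules(tree)
```

In the real system each internal node additionally yields a query rule that asks the user for the attribute's value when it is not yet known.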

  18. Fuzzy logic control synthesis without any rule base.

    PubMed

    Novakovic, B M

    1999-01-01

    A new analytic fuzzy logic control (FLC) system synthesis without any rule base is proposed. For this purpose the following objectives are preferred and reached: 1) the introduction of a new adaptive shape of fuzzy sets and a new adaptive distribution of input fuzzy sets; 2) the determination of an analytic activation function for activating output fuzzy sets, instead of using min-max operators; and 3) the definition of a new analytic function that determines the positions of the centers of output fuzzy sets in each mapping process, instead of defining a rule base. The real capability of the proposed FLC synthesis procedure is demonstrated by the synthesis of an FLC for a robot of RRTR structure. PMID:18252321

  19. Individual differences in the joint effects of semantic priming and word frequency: The role of lexical integrity

    PubMed Central

    Yap, Melvin J.; Tse, Chi-Shing; Balota, David A.

    2009-01-01

    Word frequency and semantic priming effects are among the most robust effects in visual word recognition, and it has been generally assumed that these two variables produce interactive effects in lexical decision performance, with larger priming effects for low-frequency targets. The results from four lexical decision experiments indicate that the joint effects of semantic priming and word frequency are critically dependent upon differences in the vocabulary knowledge of the participants. Specifically, across two Universities, additive effects of the two variables were observed in participants with more vocabulary knowledge, while interactive effects were observed in participants with less vocabulary knowledge. These results are discussed with reference to Borowsky and Besner’s (1993) multistage account and Plaut and Booth’s (2000) single-mechanism model. In general, the findings are also consistent with a flexible lexical processing system that optimizes performance based on processing fluency and task demands. PMID:20161653

  20. INDEX: A Rule-Based Expert System for Computer Network Maintenance

    NASA Astrophysics Data System (ADS)

    Chaganty, Srinivas; Pitchai, Anandhi; Morgan, Thomas W.

    1988-03-01

    Communications is an expert-intensive discipline. The application of expert systems to the maintenance of large and complex networks, mainly as an aid in troubleshooting, can simplify the task of network management. The important steps involved in troubleshooting are fault detection, fault reporting, fault interpretation and fault isolation. At present, network maintenance facilities are capable of detecting and reporting faults to network personnel. Fault interpretation refers to the next step in the process, which involves coming up with reasons for the failure. Fault interpretation can be characterized in two ways. First, it involves such a diversity of facts that it is difficult to predict. Secondly, it embodies a wealth of knowledge in the form of network management personnel. The application of expert systems to these interpretive tasks is an important step towards automation of network maintenance. In this paper, INDEX (Intelligent Network Diagnosis Expediter), a rule-based production system for computer network alarm interpretation, is described. It acts as an intelligent filter for people analyzing network alarms. INDEX analyzes the alarms in the network and identifies the proper maintenance action to be taken. The important feature of this production system is that it is data driven. Working memory is the principal data repository of production systems, and its contents represent the current state of the problem. Control is based upon which productions match the constantly changing working memory elements. Implementation of the prototype is in OPS83. Major issues in rule-based system development, such as rule base organization, implementation and efficiency, are discussed.

  1. Dopaminergic Genetic Polymorphisms Predict Rule-based Category Learning.

    PubMed

    Byrne, Kaileigh A; Davis, Tyler; Worthy, Darrell A

    2016-07-01

    Dopaminergic genes play an important role in cognitive function. DRD2 and DARPP-32 dopamine receptor gene polymorphisms affect striatal dopamine binding potential, and the Val158Met single-nucleotide polymorphism of the COMT gene moderates dopamine availability in the pFC. Our study assesses the role of these gene polymorphisms on performance in two rule-based category learning tasks. Participants completed unidimensional and conjunctive rule-based tasks. In the unidimensional task, a rule along a single stimulus dimension can be used to distinguish category members. In contrast, a conjunctive rule utilizes a combination of two dimensions to distinguish category members. DRD2 C957T TT homozygotes outperformed C allele carriers on both tasks, and DARPP-32 AA homozygotes outperformed G allele carriers on both tasks. However, we found an interaction between COMT and task type where Met allele carriers outperformed Val homozygotes in the conjunctive rule task, but both groups performed equally well in the unidimensional task. Thus, striatal dopamine binding may play a critical role in both types of rule-based tasks, whereas prefrontal dopamine binding is important for learning more complex conjunctive rule tasks. Modeling results suggest that striatal dopaminergic genes influence selective attention processes whereas cortical genes mediate the ability to update complex rule representations. PMID:26918585
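The distinction between the two task types can be illustrated with toy stimuli; the dimension names and thresholds below are invented, not the study's materials. A unidimensional rule consults a single stimulus dimension, while a conjunctive rule requires a combination of two:

```python
# Toy illustration of the two rule-based category-learning task types.
# Stimulus dimensions ("size", "brightness") and the cutoff are hypothetical.

def unidimensional_rule(stimulus):
    """Category membership decided by one dimension alone."""
    return stimulus["size"] > 5

def conjunctive_rule(stimulus):
    """Category membership requires BOTH dimensions to satisfy a criterion,
    the kind of rule on which COMT Met carriers outperformed Val homozygotes."""
    return stimulus["size"] > 5 and stimulus["brightness"] > 5
```

A stimulus that is large but dim is a category member under the first rule yet a non-member under the second, which is what makes the conjunctive task harder to learn and maintain.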

  2. SemanticOrganizer Brings Teams Together

    NASA Technical Reports Server (NTRS)

    Laufenberg, Lawrence

    2003-01-01

    SemanticOrganizer enables researchers in different locations to share, search for, and integrate data. Its customizable semantic links offer fast access to interrelated information. This knowledge management and information integration tool also supports real-time instrument data collection and collaborative image annotation.

  3. Algorithms and semantic infrastructure for mutation impact extraction and grounding

    PubMed Central

    2010-01-01

    Background Mutation impact extraction is a hitherto unaccomplished task in state-of-the-art mutation extraction systems. Protein mutations and their impacts on protein properties are hidden in scientific literature, making them poorly accessible for protein engineers and inaccessible for phenotype-prediction systems that currently depend on manually curated genomic variation databases. Results We present the first rule-based approach for the extraction of mutation impacts on protein properties, categorizing their directionality as positive, negative or neutral. Furthermore, protein and mutation mentions are grounded to their respective UniProtKB IDs, and selected protein properties, namely protein functions, to concepts found in the Gene Ontology. The extracted entities are populated to an OWL-DL Mutation Impact ontology, facilitating complex querying for mutation impacts using SPARQL. We illustrate retrieval of proteins and mutant sequences for a given direction of impact on specific protein properties. Moreover, we provide programmatic access to the data through semantic web services using the SADI (Semantic Automated Discovery and Integration) framework. Conclusion We address the problem of access to legacy mutation data in unstructured form through the creation of novel mutation impact extraction methods, which are evaluated on a corpus of full-text articles on haloalkane dehalogenases tagged by domain experts. Our approaches show state-of-the-art levels of precision and recall for Mutation Grounding, and a respectable level of precision but lower recall for the task of Mutant-Impact relation extraction. The system is deployed using text mining and semantic web technologies with the goal of publishing to a broad spectrum of consumers. PMID:21143808
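As a loose illustration of the rule-based idea (far simpler than the described system, which also grounds mentions to UniProtKB and the Gene Ontology), a few hand-written lexical rules can categorize the directionality of an impact phrase:

```python
import re

# Hypothetical sketch of rule-based impact-direction categorization. The two
# trigger-word lists are invented; a real system would use curated patterns
# plus entity grounding, not bare keyword matching.

POSITIVE = re.compile(r"\b(increas\w+|enhanc\w+|improv\w+)\b", re.IGNORECASE)
NEGATIVE = re.compile(r"\b(decreas\w+|reduc\w+|abolish\w+|impair\w+)\b",
                      re.IGNORECASE)

def impact_direction(sentence):
    """Apply negative rules first, then positive; default to neutral."""
    if NEGATIVE.search(sentence):
        return "negative"
    if POSITIVE.search(sentence):
        return "positive"
    return "neutral"
```

Even this crude sketch shows why the task is rule-friendly: impact language is formulaic, while the hard parts (grounding the mutation and the affected property) need the ontological machinery the paper describes.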

  4. Generative Semantics

    ERIC Educational Resources Information Center

    Bagha, Karim Nazari

    2011-01-01

    Generative semantics is (or perhaps was) a research program within linguistics, initiated by the work of George Lakoff, John R. Ross, Paul Postal and later McCawley. The approach developed out of transformational generative grammar in the mid 1960s, but stood largely in opposition to work by Noam Chomsky and his students. The nature and genesis of…

  5. Fuzzylot: a novel self-organising fuzzy-neural rule-based pilot system for automated vehicles.

    PubMed

    Pasquier, M; Quek, C; Toh, M

    2001-10-01

    This paper presents part of our research work concerned with the realisation of an Intelligent Vehicle and the technologies required for its routing, navigation, and control. An automated driver prototype has been developed using a self-organising fuzzy rule-based system (POPFNN-CRI(S)) to model and subsequently emulate human driving expertise. The ability of fuzzy logic to represent vague information using linguistic variables makes it a powerful tool for developing rule-based control systems when an exact working model is not available, as is the case for any vehicle-driving task. Designing a fuzzy system, however, is a complex endeavour, due to the need to define the variables and their associated fuzzy sets, and to determine a suitable rule base. Many efforts have thus been devoted to automating this process, yielding the development of learning and optimisation techniques. One of them is the family of POP-FNNs, or Pseudo-Outer Product Fuzzy Neural Networks (TVR, AARS(S), AARS(NS), CRI, Yager). These generic self-organising neural networks developed at the Intelligent Systems Laboratory (ISL/NTU) are based on formal fuzzy mathematical theory and are able to objectively extract a fuzzy rule base from training data. In this application, a driving simulator has been developed that integrates a detailed model of the car dynamics, complete with engine characteristics and environmental parameters, and an OpenGL-based 3D-simulation interface coupled with a driving wheel and accelerator/brake pedals. The simulator has been used on various road scenarios to record, from a human pilot, driving data consisting of steering and speed control actions associated with road features. Specifically, the POPFNN-CRI(S) system is used to cluster the data and extract a fuzzy rule base modelling the human driving behaviour. Finally, the effectiveness of the generated rule base has been validated using the simulator in autopilot mode. PMID:11681754

  6. Using Eye Tracking to Investigate Semantic and Spatial Representations of Scientific Diagrams During Text-Diagram Integration

    NASA Astrophysics Data System (ADS)

    Jian, Yu-Cin; Wu, Chao-Jung

    2015-02-01

    We investigated strategies used by readers when reading a science article with a diagram and assessed whether semantic and spatial representations were constructed while reading the diagram. Seventy-one undergraduate participants read a scientific article while tracking their eye movements and then completed a reading comprehension test. Our results showed that the text-diagram referencing strategy was commonly used. However, some readers adopted other reading strategies, such as reading the diagram or text first. We found that all readers who referred to the diagram spent roughly the same amount of time reading and performed equally well. However, some participants who ignored the diagram performed more poorly on questions that tested understanding of basic facts. This result indicates that dual coding theory may explain the phenomenon. Eye movement patterns indicated that at least some readers had extracted semantic information from the scientific terms when first looking at the diagram. Readers who read the scientific terms on the diagram first tended to spend less time looking at the same terms in the text, which they read afterwards. Moreover, clearly presented diagrams can help readers process both semantic and spatial information, thereby facilitating an overall understanding of the article. In addition, although text-first and diagram-first readers spent similar total reading time on the text and diagram parts of the article, respectively, text-first readers made significantly fewer saccades between the text and the diagram than diagram-first readers. This result might be explained as text-directed reading.

  7. A semantic grid infrastructure enabling integrated access and analysis of multilevel biomedical data in support of postgenomic clinical trials on cancer.

    PubMed

    Tsiknakis, Manolis; Brochhausen, Mathias; Nabrzyski, Jarek; Pucacki, Juliusz; Sfakianakis, Stelios G; Potamias, George; Desmedt, Cristine; Kafetzopoulos, Dimitris

    2008-03-01

    This paper reports on original results of the Advancing Clinico-Genomic Trials on Cancer integrated project focusing on the design and development of a European biomedical grid infrastructure in support of multicentric, postgenomic clinical trials (CTs) on cancer. Postgenomic CTs use multilevel clinical and genomic data and advanced computational analysis and visualization tools to test hypotheses in trying to identify the molecular reasons for a disease and the stratification of patients in terms of treatment. This paper presents the needs of users involved in postgenomic CTs in the form of scenarios, which drive the requirements engineering phase of the project. Subsequently, the initial architecture specified by the project is presented, and its services are classified and discussed. A key set of such services are those used for wrapping heterogeneous clinical trial management systems and other public biological databases. Also, the main technological challenge, i.e. the design and development of semantically rich grid services, is discussed. In achieving such an objective, extensive use of ontologies and metadata is required. The Master Ontology on Cancer, developed by the project, is presented, and our approach to developing the required metadata registries, which provide semantically rich information about available data and computational services, is described. Finally, a short discussion of the work lying ahead is included. PMID:18348950

  8. Simulation of large-scale rule-based models

    SciTech Connect

    Hlavacek, William S; Monnie, Michael I; Colvin, Joshua; Faseder, James

    2008-01-01

    Interactions of molecules, such as signaling proteins, with multiple binding sites and/or multiple sites of post-translational covalent modification can be modeled using reaction rules. Rules comprehensively, but implicitly, define the individual chemical species and reactions that molecular interactions can potentially generate. Although rules can be automatically processed to define a biochemical reaction network, the network implied by a set of rules is often too large to generate completely or to simulate using conventional procedures. To address this problem, we present DYNSTOC, a general-purpose tool for simulating rule-based models. DYNSTOC implements a null-event algorithm for simulating chemical reactions in a homogeneous reaction compartment. The simulation method does not require that a reaction network be specified explicitly in advance, but rather takes advantage of the availability of the reaction rules in a rule-based specification of a network to determine if a randomly selected set of molecular components participates in a reaction during a time step. DYNSTOC reads reaction rules written in the BioNetGen language (BNGL), which is useful for modeling protein-protein interactions involved in signal transduction. The method of DYNSTOC is closely related to that of STOCHSIM. DYNSTOC differs from STOCHSIM by allowing for model specification in terms of BNGL, which extends the range of protein complexes that can be considered in a model. DYNSTOC enables the simulation of rule-based models that cannot be simulated by conventional methods. We demonstrate the ability of DYNSTOC to simulate models accounting for multisite phosphorylation and multivalent binding processes that are characterized by large numbers of reactions. DYNSTOC is free for non-commercial use. The C source code, supporting documentation and example input files are available at .
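The null-event idea the abstract describes can be illustrated with a toy sketch: rather than enumerating the full reaction network, each step picks a random molecular component and a random rule; if the rule's pattern does not match, the step is a "null event" and the state is unchanged. The two-rule phosphorylation system, state names, and acceptance probabilities below are invented for illustration, not taken from DYNSTOC.

```python
import random

# Hypothetical two-rule system: unphosphorylated sites ("u") can become
# phosphorylated ("p"), and "p" sites can revert to "u".
RULES = [
    ("u", "p", 0.6),  # (required state, new state, acceptance probability)
    ("p", "u", 0.3),
]

def null_event_step(sites, rules, rng):
    """One step of a null-event simulation: pick a random site and a random
    rule; if the rule's pattern does not match, the step is a null event."""
    site = rng.randrange(len(sites))
    pattern, product, prob = rules[rng.randrange(len(rules))]
    if sites[site] == pattern and rng.random() < prob:
        sites[site] = product  # reaction fires
        return True
    return False  # null event: state unchanged, time still advances

rng = random.Random(42)
sites = ["u"] * 100
fired = sum(null_event_step(sites, RULES, rng) for _ in range(10_000))
print(sites.count("p"), "phosphorylated sites after", fired, "reactions")
```

The point of the rejection step is that no explicit network is ever built: matching is decided on the fly against the rule patterns.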

  9. Classification of Contaminated Sites Using a Fuzzy Rule Based System

    SciTech Connect

    Lemos, F.L. de; Van Velzen, K.; Ross, T.

    2006-07-01

    This paper presents the general framework of a multilevel model being developed to manage contaminated sites. A rule-based system, along with a scoring system for ranking sites for phase 1 ESA, is proposed (Level 1). Level 2, which consists of the consultant's recommendation based on their phase 1 ESA, is reasonably straightforward. Level 3, which consists of classifying sites on which a phase 2 ESA has already been conducted, will involve a multi-objective decision-making tool. Fuzzy set theory, which includes the concept of membership functions, was judged the best way to deal with uncertain and non-random information. (authors)
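A minimal sketch of how a fuzzy rule can feed a site-ranking score: triangular membership functions map raw measurements onto [0, 1] degrees, and a rule combines them with a fuzzy AND (minimum). The contaminant thresholds, the distance-to-water criterion, and the rule itself are invented for the example, not taken from the paper's model.

```python
# Illustrative fuzzy scoring for Level 1 site ranking.
def tri(x, a, b, c):
    """Triangular membership function peaking at b, zero outside (a, c)."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def rank_site(contaminant_ppm, distance_to_water_m):
    high_contam = tri(contaminant_ppm, 50, 200, 400)
    near_water = tri(distance_to_water_m, -1, 0, 500)
    # Rule: IF contamination is high AND site is near water THEN risk is high.
    return min(high_contam, near_water)  # fuzzy AND

print(rank_site(250, 100))   # → 0.75 (fairly high risk)
print(rank_site(20, 2000))   # → 0.0  (negligible risk)
```

Scores like these can then rank candidate sites for a phase 1 ESA without forcing a hard contaminated/clean cutoff.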

  10. Teaching medical diagnosis: a rule-based approach.

    PubMed

    Michalowski, W; Rubin, S; Aggarwal, H

    1993-01-01

    This paper discusses the design of a diagnostic process simulator that teaches medical students to think clinically, achieved by applying a rule-based approach to represent diagnoses and treatments. While the student uses the simulator, the clinical situation changes according to his or her correct and incorrect decisions. New diagnostic options result in the ability to choose further clinical and laboratory tests. The simulator is being implemented on Sun workstations and Macintosh computers using the Prolog programming language. PMID:8139404

  11. Dynamic composition of semantic pathways for medical computational problem solving by means of semantic rules.

    PubMed

    Bratsas, Charalampos; Bamidis, Panagiotis; Kehagias, Dionisis D; Kaimakamis, Evangelos; Maglaveras, Nicos

    2011-03-01

    This paper presents a semantic rule-based system for the composition of successful algorithmic pathways capable of solving medical computational problems (MCPs). A subset of medical algorithms referring to MCP solving concerns well-known medical problems and their computational algorithmic solutions. These solutions result from computations within mathematical models aiming to enhance healthcare quality via support for diagnosis and treatment automation, especially useful for educational purposes. Currently, there is a plethora of computational algorithms on the web, which pertain to MCPs and provide all computational facilities required to solve a medical problem. An inherent requirement for the successful construction of algorithmic pathways for managing real medical cases is the composition of a sequence of computational algorithms. The aim of this paper is to approach the composition of such pathways via the design of appropriate finite-state machines (FSMs), the use of ontologies, and SWRL semantic rules. The goal of semantic rules is to automatically associate different algorithms that are represented as different states of the FSM in order to result in a successful pathway. The rule-based approach is herein implemented on top of Knowledge-Based System for Intelligent Computational Search in Medicine (KnowBaSICS-M), an ontology-based system for MCP semantic management. Preliminary results have shown that the proposed system adequately produces algorithmic pathways in agreement with current international medical guidelines. PMID:21335316
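The pathway-composition idea can be sketched as forward chaining over algorithm states: each computational algorithm declares the quantities it needs and the quantity it yields, and rules allow a transition only when the next algorithm's inputs are already available. The algorithm names, input/output types, and greedy strategy below are hypothetical stand-ins for the paper's FSM/SWRL machinery.

```python
# Each algorithm is a state in the pathway FSM; transition is allowed
# only if its required inputs have been produced. Names are illustrative.
ALGORITHMS = {
    "BMI":            {"needs": {"weight", "height"}, "yields": "bmi"},
    "BSA":            {"needs": {"weight", "height"}, "yields": "bsa"},
    "CreatClearance": {"needs": {"creatinine", "bsa"}, "yields": "crcl"},
}

def compose_pathway(available, goal):
    """Greedy forward chaining: fire any algorithm whose inputs are
    available until the goal quantity is produced."""
    pathway = []
    produced = set(available)
    changed = True
    while goal not in produced and changed:
        changed = False
        for name, spec in ALGORITHMS.items():
            if name not in pathway and spec["needs"] <= produced:
                pathway.append(name)
                produced.add(spec["yields"])
                changed = True
    return pathway if goal in produced else None

print(compose_pathway({"weight", "height", "creatinine"}, "crcl"))
```

A real system would also prune algorithms (here BMI) that do not contribute to the goal; the sketch only shows how rules chain states into a pathway.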

  12. Towards computerizing intensive care sedation guidelines: design of a rule-based architecture for automated execution of clinical guidelines

    PubMed Central

    2010-01-01

    Background Computerized ICUs rely on software services to convey the medical condition of their patients as well as assisting the staff in taking treatment decisions. Such services are useful for following clinical guidelines quickly and accurately. However, the development of services is often time-consuming and error-prone. Consequently, many care-related activities are still conducted based on manually constructed guidelines. These are often ambiguous, which leads to unnecessary variations in treatments and costs. The goal of this paper is to present a semi-automatic verification and translation framework capable of turning manually constructed diagrams into ready-to-use programs. This framework combines the strengths of the manual and service-oriented approaches while decreasing their disadvantages. The aim is to close the gap in communication between the IT and the medical domain. This leads to a less time-consuming and error-prone development phase and a shorter clinical evaluation phase. Methods A framework is proposed that semi-automatically translates a clinical guideline, expressed as an XML-based flow chart, into a Drools Rule Flow by employing semantic technologies such as ontologies and SWRL. An overview of the architecture is given and all the technology choices are thoroughly motivated. Finally, it is shown how this framework can be integrated into a service-oriented architecture (SOA). Results The applicability of the Drools Rule language to express clinical guidelines is evaluated by translating an example guideline, namely the sedation protocol used for the anaesthetization of patients, to a Drools Rule Flow and executing and deploying this Rule-based application as a part of a SOA. The results show that the performance of Drools is comparable to other technologies such as Web Services and increases with the number of decision nodes present in the Rule Flow. Most delays are introduced by loading the Rule Flows. Conclusions The framework is an

  13. A hierarchical fuzzy rule-based approach to aphasia diagnosis.

    PubMed

    Akbarzadeh-T, Mohammad-R; Moshtagh-Khorasani, Majid

    2007-10-01

    Aphasia diagnosis is a particularly challenging medical diagnostic task due to the linguistic uncertainty and vagueness, inconsistencies in the definition of aphasic syndromes, large number of measurements with imprecision, natural diversity and subjectivity in test objects as well as in opinions of experts who diagnose the disease. To efficiently address this diagnostic process, a hierarchical fuzzy rule-based structure is proposed here that considers the effect of different features of aphasia by statistical analysis in its construction. This approach can be efficient for diagnosis of aphasia and possibly other medical diagnostic applications due to its fuzzy and hierarchical reasoning construction. Initially, the symptoms of the disease, each of which consists of different features, are analyzed statistically. The measured statistical parameters from the training set are then used to define membership functions and the fuzzy rules. The resulting two-layered fuzzy rule-based system is then compared with a back-propagating feed-forward neural network for diagnosis of four aphasia types: Anomic, Broca, Global and Wernicke. In order to reduce the number of required inputs, the technique is applied and compared on both comprehensive and spontaneous speech tests. Statistical t-test analysis confirms that the proposed approach uses fewer aphasia features while also presenting a significant improvement in terms of accuracy. PMID:17293167
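The step "statistical parameters from the training set define membership functions" can be sketched directly: estimate a mean and standard deviation for one feature per syndrome, and use a Gaussian-shaped membership function in the first rule layer. The feature name and sample scores below are invented; the paper's actual features and parameters differ.

```python
import math
import statistics

# Hypothetical training data for one feature of one syndrome.
broca_fluency_scores = [2.1, 1.8, 2.4, 2.0, 1.6]
mu = statistics.mean(broca_fluency_scores)
sigma = statistics.stdev(broca_fluency_scores)

def membership(x):
    """Degree to which fluency score x fits the 'Broca-like' fuzzy set,
    using a Gaussian shaped by the training statistics."""
    return math.exp(-((x - mu) ** 2) / (2 * sigma ** 2))

# A first-layer rule might then read:
# IF fluency IS broca-like AND comprehension IS preserved THEN syndrome IS Broca
print(round(membership(mu), 3))              # peak of the set → 1.0
print(round(membership(mu + 3 * sigma), 3))  # far from the mean → near 0
```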

  14. Rule-based expert system for maritime anomaly detection

    NASA Astrophysics Data System (ADS)

    Roy, Jean

    2010-04-01

    Maritime domain operators/analysts have a mandate to be aware of all that is happening within their areas of responsibility. This mandate derives from the needs to defend sovereignty, protect infrastructures, counter terrorism, detect illegal activities, etc., and it has become more challenging in the past decade, as commercial shipping turned into a potential threat. In particular, a huge portion of the data and information made available to the operators/analysts is mundane, from maritime platforms going about normal, legitimate activities, and it is very challenging for them to detect and identify the non-mundane. To achieve such anomaly detection, they must establish numerous relevant situational facts from a variety of sensor data streams. Unfortunately, many of the facts of interest just cannot be observed; the operators/analysts thus use their knowledge of the maritime domain and their reasoning faculties to infer these facts. As they are often overwhelmed by the large amount of data and information, automated reasoning tools could be used to support them by inferring the necessary facts, ultimately providing indications and warning on a small number of anomalous events worthy of their attention. Along this line of thought, this paper describes a proof-of-concept prototype of a rule-based expert system implementing automated rule-based reasoning in support of maritime anomaly detection.
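The core mechanism, inferring unobservable facts from observable ones by repeatedly applying rules, is forward chaining. The toy below is a hedged illustration: the fact names, rule conditions, and anomaly criteria are invented, not taken from the prototype described in the paper.

```python
# Toy forward-chaining inference: rules derive new facts about a vessel
# until an anomaly indicator (if any) is concluded.
RULES = [
    (lambda f: "ais_off" in f and "in_shipping_lane" not in f, "evasive"),
    (lambda f: f.get("speed", 0) < 1 and "anchorage" not in f, "loitering"),
    (lambda f: "evasive" in f and "loitering" in f, "anomalous"),
]

def infer(facts):
    """Apply rules until no new fact can be derived (a fixed point)."""
    facts = dict(facts)
    changed = True
    while changed:
        changed = False
        for condition, conclusion in RULES:
            if conclusion not in facts and condition(facts):
                facts[conclusion] = True
                changed = True
    return facts

vessel = {"ais_off": True, "speed": 0.4}
print("anomalous" in infer(vessel))  # → True
```

Note how "anomalous" is never observed directly: it only becomes derivable once the intermediate facts "evasive" and "loitering" have been inferred, which is the chaining the paper relies on.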

  15. g.infer: A GRASS GIS module for rule-based data-driven classification and workflow control.

    NASA Astrophysics Data System (ADS)

    Löwe, Peter

    2013-04-01

    This poster describes the internal architecture of the new GRASS GIS module g.infer [1] and demonstrates application scenarios. The new module for GRASS GIS Version 6.x and 7.x enables rule-based analysis and workflow management via data-driven inference processes based on the C Language Integrated Production System (CLIPS) [2]. g.infer uses the pyClips module [3] to provide a Python-based environment for CLIPS within the GRASS GIS environment for rule-based knowledge engineering. Application scenarios range from rule-based classification tasks and event-driven workflow control to complex simulations for tasks such as Soil Erosion Monitoring and Disaster Early Warning [4]. References: [1] Löwe P.: Introducing the new GRASS module g.infer for data-driven rule-based applications, Vol.8 2012-08, Geoinformatics FCE CTU, ISSN 1802-2669 [2] http://clipsrules.sourceforge.net/ [3] http://pyclips.sourceforge.net/web/ [4] Löwe P.: A Spatial Decision Support System for Radar-metereology Data in South Africa, Transactions in GIS 2004, (2): 235-244

  16. Rule based design of conceptual models for formative evaluation

    NASA Technical Reports Server (NTRS)

    Moore, Loretta A.; Chang, Kai; Hale, Joseph P.; Bester, Terri; Rix, Thomas; Wang, Yaowen

    1994-01-01

    A Human-Computer Interface (HCI) Prototyping Environment with embedded evaluation capability has been investigated. This environment will be valuable in developing and refining HCI standards and evaluating program/project interface development, especially Space Station Freedom on-board displays for payload operations. This environment, which allows for rapid prototyping and evaluation of graphical interfaces, includes the following four components: (1) a HCI development tool; (2) a low fidelity simulator development tool; (3) a dynamic, interactive interface between the HCI and the simulator; and (4) an embedded evaluator that evaluates the adequacy of a HCI based on a user's performance. The embedded evaluation tool collects data while the user is interacting with the system and evaluates the adequacy of an interface based on a user's performance. This paper describes the design of conceptual models for the embedded evaluation system using a rule-based approach.

  17. Rule-based extrapolation: a continuing challenge for exemplar models.

    PubMed

    Denton, Stephen E; Kruschke, John K; Erickson, Michael A

    2008-08-01

    Erickson and Kruschke (1998, 2002) demonstrated that in rule-plus-exception categorization, people generalize category knowledge by extrapolating in a rule-like fashion, even when they are presented with a novel stimulus that is most similar to a known exception. Although exemplar models have been found to be deficient in explaining rule-based extrapolation, Rodrigues and Murre (2007) offered a variation of an exemplar model that was better able to account for such performance. Here, we present the results of a new rule-plus-exception experiment that yields rule-like extrapolation similar to that of previous experiments, and yet the data are not accounted for by Rodrigues and Murre's augmented exemplar model. Further, a hybrid rule-and-exemplar model is shown to better describe the data. Thus, we maintain that rule-plus-exception categorization continues to be a challenge for exemplar-only models. PMID:18792504

  18. A High-Level Language for Rule-Based Modelling

    PubMed Central

    Pedersen, Michael; Phillips, Andrew; Plotkin, Gordon D.

    2015-01-01

    Rule-based languages such as Kappa excel in their support for handling the combinatorial complexities prevalent in many biological systems, including signalling pathways. But Kappa provides little structure for organising rules, and large models can therefore be hard to read and maintain. This paper introduces a high-level, modular extension of Kappa called LBS-κ. We demonstrate the constructs of the language through examples and three case studies: a chemotaxis switch ring, a MAPK cascade, and an insulin signalling pathway. We then provide a formal definition of LBS-κ through an abstract syntax and a translation to plain Kappa. The translation is implemented in a compiler tool which is available as a web application. We finally demonstrate how to increase the expressivity of LBS-κ through embedded scripts in a general-purpose programming language, a technique which we view as generally applicable to other domain specific languages. PMID:26043208

  19. Grapheme-color synaesthesia benefits rule-based Category learning.

    PubMed

    Watson, Marcus R; Blair, Mark R; Kozik, Pavel; Akins, Kathleen A; Enns, James T

    2012-09-01

    Researchers have long suspected that grapheme-color synaesthesia is useful, but research on its utility has so far focused primarily on episodic memory and perceptual discrimination. Here we ask whether it can be harnessed during rule-based Category learning. Participants learned through trial and error to classify grapheme pairs that were organized into categories on the basis of their associated synaesthetic colors. The performance of synaesthetes was similar to non-synaesthetes viewing graphemes that were physically colored in the same way. Specifically, synaesthetes learned to categorize stimuli effectively, they were able to transfer this learning to novel stimuli, and they falsely recognized grapheme-pair foils, all like non-synaesthetes viewing colored graphemes. These findings demonstrate that synaesthesia can be exploited when learning the kind of material taught in many classroom settings. PMID:22763316

  1. A Rule-Based Industrial Boiler Selection System

    NASA Astrophysics Data System (ADS)

    Tan, C. F.; Khalil, S. N.; Karjanto, J.; Tee, B. T.; Wahidin, L. S.; Chen, W.; Rauterberg, G. W. M.; Sivarao, S.; Lim, T. L.

    2015-09-01

    A boiler is a device used for generating steam for power generation, process use or heating, and hot water for heating purposes. A steam boiler consists of the containing vessel and convection heating surfaces only, whereas a steam generator covers the whole unit, encompassing water wall tubes, super heaters, air heaters and economizers. The selection of the boiler is very important for industry to run its operations successfully. The selection criteria are based on a rule-based expert system and a multi-criteria weighted average method. The developed system consists of a Knowledge Acquisition Module, Boiler Selection Module, User Interface Module and Help Module. The system is capable of selecting a suitable boiler based on the weighted criteria. The main benefit of using the system is reducing the complexity of the decision making involved in selecting the most appropriate boiler for a palm oil process plant.
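The multi-criteria weighted average step can be sketched in a few lines: each candidate boiler is scored on normalized criteria in [0, 1] and the weighted sums are ranked. The criteria, weights, and candidate scores below are invented for illustration, not taken from the paper's knowledge base.

```python
# Hypothetical criteria weights (must sum to 1) and candidate scores.
WEIGHTS = {"capacity": 0.4, "efficiency": 0.35, "cost": 0.25}

CANDIDATES = {
    "fire-tube":  {"capacity": 0.6, "efficiency": 0.7, "cost": 0.9},
    "water-tube": {"capacity": 0.9, "efficiency": 0.8, "cost": 0.5},
}

def weighted_score(scores):
    """Weighted average of a candidate's normalized criterion scores."""
    return sum(WEIGHTS[c] * scores[c] for c in WEIGHTS)

best = max(CANDIDATES, key=lambda name: weighted_score(CANDIDATES[name]))
for name in CANDIDATES:
    print(name, round(weighted_score(CANDIDATES[name]), 3))
print("selected:", best)
```

In the described system, rules would first filter candidates (e.g. by fuel type or pressure class) before the weighted ranking is applied.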

  2. Rule-based circuit optimization for CMOS VLSI

    SciTech Connect

    Lai, F.

    1987-01-01

    A closed-loop design system, iJADE, was developed in Franz LISP. iJADE is a hierarchical CMOS VLSI circuit optimizer. Using a switch-level timing simulator and a timing analyzer, the program pinpoints the critical paths. The path-delay reduction algorithms and a rule-based expert system are then applied to adjust transistor sizes such that the speed of the circuit can be improved while keeping constraints satisfied. iJADE is also capable of detecting and correcting the timing errors of synchronous circuits. The circuit is described in SPICE-like input format, and then partitioned into blocks. Delays are computed on a block-by-block basis hierarchically, using a simple model based on input rise time, block type, and output load.

  3. Approaches to the verification of rule-based expert systems

    NASA Technical Reports Server (NTRS)

    Culbert, Chris; Riley, Gary; Savely, Robert T.

    1987-01-01

    Expert systems are a highly useful spinoff of artificial intelligence research. One major stumbling block to extended use of expert systems is the lack of well-defined verification and validation (V and V) methodologies. Since expert systems are computer programs, the definitions of verification and validation from conventional software are applicable. The primary difficulty with expert systems is the use of development methodologies which do not support effective V and V. If proper techniques are used to document requirements, V and V of rule-based expert systems is possible, and may be easier than with conventional code. For NASA applications, the flight technique panels used in previous programs should provide an excellent way to verify the rules used in expert systems. There are, however, some inherent differences in expert systems that will affect V and V considerations.

  4. A high-level language for rule-based modelling.

    PubMed

    Pedersen, Michael; Phillips, Andrew; Plotkin, Gordon D

    2015-01-01

    Rule-based languages such as Kappa excel in their support for handling the combinatorial complexities prevalent in many biological systems, including signalling pathways. But Kappa provides little structure for organising rules, and large models can therefore be hard to read and maintain. This paper introduces a high-level, modular extension of Kappa called LBS-κ. We demonstrate the constructs of the language through examples and three case studies: a chemotaxis switch ring, a MAPK cascade, and an insulin signalling pathway. We then provide a formal definition of LBS-κ through an abstract syntax and a translation to plain Kappa. The translation is implemented in a compiler tool which is available as a web application. We finally demonstrate how to increase the expressivity of LBS-κ through embedded scripts in a general-purpose programming language, a technique which we view as generally applicable to other domain specific languages. PMID:26043208

  5. Fuzzy-rule-based image reconstruction for positron emission tomography

    NASA Astrophysics Data System (ADS)

    Mondal, Partha P.; Rajan, K.

    2005-09-01

    Positron emission tomography (PET) and single-photon emission computed tomography have revolutionized the field of medicine and biology. Penalized iterative algorithms based on maximum a posteriori (MAP) estimation eliminate noisy artifacts by utilizing available prior information in the reconstruction process but often result in a blurring effect. MAP-based algorithms fail to determine the density class in the reconstructed image and hence penalize the pixels irrespective of the density class. Reconstruction with better edge information is often difficult because prior knowledge is not taken into account. The recently introduced median-root-prior (MRP)-based algorithm preserves the edges, but a steplike streaking effect is observed in the reconstructed image, which is undesirable. A fuzzy approach is proposed for modeling the nature of interpixel interaction in order to build an artifact-free edge-preserving reconstruction. The proposed algorithm consists of two elementary steps: (1) edge detection, in which fuzzy-rule-based derivatives are used for the detection of edges in the nearest neighborhood window (which is equivalent to recognizing nearby density classes), and (2) fuzzy smoothing, in which penalization is performed only for those pixels for which no edge is detected in the nearest neighborhood. Both of these operations are carried out iteratively until the image converges. Analysis shows that the proposed fuzzy-rule-based reconstruction algorithm is capable of producing qualitatively better reconstructed images than those reconstructed by MAP and MRP algorithms. The reconstructed images are sharper, with small features being better resolved owing to the nature of the fuzzy potential function.
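The two-step iteration the abstract describes (edge detection, then smoothing only away from edges) can be illustrated on a 1-D "image". The crisp neighbour-difference threshold and three-point averaging below are simplified stand-ins for the paper's fuzzy-rule-based derivatives and penalization.

```python
# Iterate: detect edges, then smooth only the non-edge pixels, so the
# boundary between the two density classes is preserved.
EDGE_THRESHOLD = 0.5

def step(pixels):
    out = list(pixels)
    for i in range(1, len(pixels) - 1):
        left, mid, right = pixels[i - 1], pixels[i], pixels[i + 1]
        is_edge = abs(right - left) > EDGE_THRESHOLD  # crude edge detector
        if not is_edge:
            out[i] = (left + mid + right) / 3  # smooth off-edge pixels only
    return out

image = [0.0, 0.1, 0.0, 1.0, 0.9, 1.0]  # noisy two-density "image"
for _ in range(5):
    image = step(image)
print([round(p, 2) for p in image])
```

After a few iterations, the noise inside each density class is averaged away while the sharp step between the classes survives, which is the qualitative behaviour claimed for the fuzzy reconstruction.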

  6. Rule-Based Orientation Recognition Of A Moving Object

    NASA Astrophysics Data System (ADS)

    Gove, Robert J.

    1989-03-01

    This paper presents a detailed description and a comparative analysis of the algorithms used to determine the position and orientation of an object in real-time. The exemplary object, a freely moving gold-fish in an aquarium, provides "real-world" motion, with definable characteristics of motion (the fish never swims upside-down) and the complexities of a non-rigid body. For simplicity of implementation, and since a restricted and stationary viewing domain exists (fish-tank), we reduced the problem of obtaining 3D correspondence information to trivial alignment calculations by using two cameras orthogonally viewing the object. We applied symbolic processing techniques to recognize the 3-D orientation of a moving object of known identity in real-time. Assuming motion, each new frame (sensed by the two cameras) provides images of the object's profile which has most likely undergone translation, rotation, scaling and/or bending of the non-rigid object since the previous frame. We developed an expert system which uses heuristics of the object's motion behavior in the form of rules and information obtained via low-level image processing (like numerical inertial axis calculations) to dynamically estimate the object's orientation. An inference engine provides these estimates at frame rates of up to 10 per second (which is essentially real-time). The advantages of the rule-based approach to orientation recognition will be compared with other pattern recognition techniques. Our results of an investigation of statistical pattern recognition, neural networks, and procedural techniques for orientation recognition will be included. We implemented the algorithms in a rapid-prototyping environment, the TI-Explorer, equipped with an Odyssey and custom imaging hardware. A brief overview of the workstation is included to clarify one motivation for our choice of algorithms. These algorithms exploit two facets of the prototype image processing and understanding workstation - both low


  7. Genetic learning in rule-based and neural systems

    NASA Technical Reports Server (NTRS)

    Smith, Robert E.

    1993-01-01

    The design of neural networks and fuzzy systems can involve complex, nonlinear, and ill-conditioned optimization problems. Often, traditional optimization schemes are inadequate or inapplicable for such tasks. Genetic Algorithms (GA's) are a class of optimization procedures whose mechanics are based on those of natural genetics. Mathematical arguments show how GAs bring substantial computational leverage to search problems, without requiring the mathematical characteristics often necessary for traditional optimization schemes (e.g., modality, continuity, availability of derivative information, etc.). GA's have proven effective in a variety of search tasks that arise in neural networks and fuzzy systems. This presentation begins by introducing the mechanism and theoretical underpinnings of GA's. GA's are then related to a class of rule-based machine learning systems called learning classifier systems (LCS's). An LCS implements a low-level production-system that uses a GA as its primary rule discovery mechanism. This presentation illustrates how, despite its rule-based framework, an LCS can be thought of as a competitive neural network. Neural network simulator code for an LCS is presented. In this context, the GA is doing more than optimizing an objective function. It is searching for an ecology of hidden nodes with limited connectivity. The GA attempts to evolve this ecology such that effective neural network performance results. The GA is particularly well adapted to this task, given its naturally-inspired basis. The LCS/neural network analogy extends itself to other, more traditional neural networks. Conclusions to the presentation discuss the implications of using GA's in ecological search problems that arise in neural and fuzzy systems.
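The GA mechanics the presentation introduces (selection, crossover, mutation) fit in a short sketch. The toy problem below is the classic "OneMax" (evolve a bit-string toward all ones); population size, rates, and the truncation-selection scheme are arbitrary illustration choices, not anything from the presentation.

```python
import random

# Minimal genetic algorithm on the OneMax toy problem.
rng = random.Random(0)
N_BITS, POP, GENS = 20, 30, 40

def fitness(bits):
    return sum(bits)  # number of 1-bits; maximum is N_BITS

def crossover(a, b):
    cut = rng.randrange(1, N_BITS)  # single-point crossover
    return a[:cut] + b[cut:]

def mutate(bits, rate=0.02):
    return [1 - b if rng.random() < rate else b for b in bits]

pop = [[rng.randint(0, 1) for _ in range(N_BITS)] for _ in range(POP)]
for _ in range(GENS):
    pop.sort(key=fitness, reverse=True)
    parents = pop[: POP // 2]  # truncation selection keeps the fitter half
    pop = parents + [
        mutate(crossover(rng.choice(parents), rng.choice(parents)))
        for _ in range(POP - len(parents))
    ]

best = max(pop, key=fitness)
print(fitness(best), "of", N_BITS, "bits set")
```

In an LCS, the same loop would act on a population of condition-action rules rather than bit-strings, with rule strength playing the role of fitness.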

  8. Timescale analysis of rule-based biochemical reaction networks

    PubMed Central

    Klinke, David J.; Finley, Stacey D.

    2012-01-01

    The flow of information within a cell is governed by a series of protein-protein interactions that can be described as a reaction network. Mathematical models of biochemical reaction networks can be constructed by repetitively applying specific rules that define how reactants interact and what new species are formed upon reaction. To aid in understanding the underlying biochemistry, timescale analysis is one method developed to prune the size of the reaction network. In this work, we extend the methods associated with timescale analysis to reaction rules instead of the species contained within the network. To illustrate this approach, we applied timescale analysis to a simple receptor-ligand binding model and a rule-based model of Interleukin-12 (IL-12) signaling in naïve CD4+ T cells. The IL-12 signaling pathway includes multiple protein-protein interactions that collectively transmit information; however, the level of mechanistic detail sufficient to capture the observed dynamics has not been justified based upon the available data. The analysis correctly predicted that reactions associated with JAK2 and TYK2 binding to their corresponding receptor exist at a pseudo-equilibrium. In contrast, reactions associated with ligand binding and receptor turnover regulate cellular response to IL-12. An empirical Bayesian approach was used to estimate the uncertainty in the timescales. This approach complements existing rank- and flux-based methods that can be used to interrogate complex reaction networks. Ultimately, timescale analysis of rule-based models is a computational tool that can be used to reveal the biochemical steps that regulate signaling dynamics. PMID:21954150
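The pseudo-equilibrium criterion can be sketched with a standard relaxation-time estimate: for a reversible binding rule, the characteristic timescale is roughly 1/(kon·[partner] + koff), and rules whose timescale is much shorter than the observation window can be treated as equilibrated. The rule names, rate constants, and concentrations below are invented placeholders, not the paper's fitted values.

```python
# Compare each rule's relaxation timescale against the observation window.
rules = {
    "JAK2 + receptor <-> complex": {"kon": 1e7, "koff": 1.0, "conc": 1e-6},
    "IL-12 + receptor <-> bound":  {"kon": 1e5, "koff": 1e-4, "conc": 1e-9},
}
observation_window_s = 3600.0

for name, p in rules.items():
    tau = 1.0 / (p["kon"] * p["conc"] + p["koff"])  # relaxation time (s)
    fast = tau < 0.01 * observation_window_s
    print(f"{name}: tau = {tau:.3g} s, pseudo-equilibrium = {fast}")
```

With these illustrative numbers, the receptor-kinase binding relaxes in a fraction of a second and can be equilibrated away, while the slow ligand-binding rule sets the pathway's dynamics, mirroring the qualitative conclusion of the abstract.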

  9. Semantically aided interpretation and querying of Jefferson Project data using the SemantEco framework

    NASA Astrophysics Data System (ADS)

    Patton, E. W.; Pinheiro, P.; McGuinness, D. L.

    2014-12-01

    We will describe the benefits we realized using semantic technologies to address the often challenging and resource intensive task of ontology alignment in service of data integration. Ontology alignment became relatively simple as we reused our existing semantic data integration framework, SemantEco. We work in the context of the Jefferson Project (JP), an effort to monitor and predict the health of Lake George in NY by deploying a large-scale sensor network in the lake, and analyzing the high-resolution sensor data. SemantEco is an open-source framework for building semantically-aware applications to assist users, particularly non-experts, in exploration and interpretation of integrated scientific data. SemantEco applications are composed of a set of modules that incorporate new datasets, extend the semantic capabilities of the system to integrate and reason about data, and provide facets for extending or controlling semantic queries. Whereas earlier SemantEco work focused on integration of water, air, and species data from government sources, we focus on redeploying it to provide a provenance-aware, semantic query and interpretation interface for JP's sensor data. By employing a minor alignment between SemantEco's ontology and the Human-Aware Sensor Network Ontology used to model the JP's sensor deployments, we were able to bring SemantEco's capabilities to bear on the JP sensor data and metadata. This alignment enabled SemantEco to perform the following tasks: (1) select JP datasets related to water quality; (2) understand how the JP's notion of water quality relates to water quality concepts in previous work; and (3) reuse existing SemantEco interactive data facets, e.g. maps and time series visualizations, and modules, e.g. the regulation module that interprets water quality data through the lens of various federal and state regulations. Semantic technologies, both as the engine driving SemantEco and the means of modeling the JP data, enabled us to rapidly

  10. Preserved Musical Semantic Memory in Semantic Dementia

    PubMed Central

    Weinstein, Jessica; Koenig, Phyllis; Gunawardena, Delani; McMillan, Corey; Bonner, Michael; Grossman, Murray

    2012-01-01

    Objective To understand the scope of semantic impairment in semantic dementia. Design Case study. Setting Academic medical center. Patient A man with semantic dementia, as demonstrated by clinical, neuropsychological, and imaging studies. Main Outcome Measures Music performance and magnetic resonance imaging results. Results Despite profoundly impaired semantic memory for words and objects due to left temporal lobe atrophy, this semiprofessional musician was creative and expressive in demonstrating preserved musical knowledge. Conclusion Long-term representations of words and objects in semantic memory may be dissociated from meaningful knowledge in other domains, such as music. PMID:21320991

  11. A rule-based expert system for generating control displays at the Advanced Photon Source

    NASA Astrophysics Data System (ADS)

    Coulter, Karen J.

    1994-12-01

    The integration of a rule-based expert system for generating screen displays for controlling and monitoring instrumentation under the Experimental Physics and Industrial Control System (EPICS) is presented. The expert system is implemented using CLIPS, an expert system shell from the Software Technology Branch at Lyndon B. Johnson Space Center. The user selects the hardware input and output to be displayed and the expert system constructs a graphical control screen appropriate for the data. Such a system provides a method for implementing a common look and feel for displays created by several different users and reduces the amount of time required to create displays for new hardware configurations. Users are able to modify the displays as needed using the EPICS display editor tool.

  12. A rule-based expert system for generating control displays at the Advanced Photon Source

    SciTech Connect

    Coulter, K.J.

    1993-11-01

    The integration of a rule-based expert system for generating screen displays for controlling and monitoring instrumentation under the Experimental Physics and Industrial Control System (EPICS) is presented. The expert system is implemented using CLIPS, an expert system shell from the Software Technology Branch at Lyndon B. Johnson Space Center. The user selects the hardware input and output to be displayed and the expert system constructs a graphical control screen appropriate for the data. Such a system provides a method for implementing a common look and feel for displays created by several different users and reduces the amount of time required to create displays for new hardware configurations. Users are able to modify the displays as needed using the EPICS display editor tool.

  13. Rule-based deduplication of article records from bibliographic databases

    PubMed Central

    Jiang, Yu; Lin, Can; Meng, Weiyi; Yu, Clement; Cohen, Aaron M.; Smalheiser, Neil R.

    2014-01-01

    We recently designed and deployed a metasearch engine, Metta, that sends queries and retrieves search results from five leading biomedical databases: PubMed, EMBASE, CINAHL, PsycINFO and the Cochrane Central Register of Controlled Trials. Because many articles are indexed in more than one of these databases, it is desirable to deduplicate the retrieved article records. This is not a trivial problem because data fields contain a lot of missing and erroneous entries, and because certain types of information are recorded differently (and inconsistently) in the different databases. The present report describes our rule-based method for deduplicating article records across databases and includes an open-source script module that can be deployed freely. Metta was designed to satisfy the particular needs of people who are writing systematic reviews in evidence-based medicine. These users want the highest possible recall in retrieval, so it is important to err on the side of not deduplicating any records that refer to distinct articles, and it is important to perform deduplication online in real time. Our deduplication module is designed with these constraints in mind. Articles that share the same publication year are compared sequentially on parameters including PubMed ID number, digital object identifier, journal name, article title and author list, using text approximation techniques. In a review of Metta searches carried out by public users, we found that the deduplication module was more effective at identifying duplicates than EndNote without making any erroneous assignments. PMID:24434031
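    The comparison cascade described above can be sketched roughly as follows. The field names, the 0.9 similarity threshold, and `difflib` as the approximate-text matcher are illustrative stand-ins, not Metta's actual implementation:

```python
from difflib import SequenceMatcher

def same_article(rec_a, rec_b, threshold=0.9):
    """Conservative pairwise duplicate test: declare a duplicate only when
    strong identifiers agree or fuzzy title/author similarity is high."""
    # Records from different years are never merged (mirrors the
    # year-blocking described in the abstract).
    if rec_a.get("year") != rec_b.get("year"):
        return False
    # Exact identifiers decide immediately when both sides have them.
    for key in ("pmid", "doi"):
        if rec_a.get(key) and rec_b.get(key):
            return rec_a[key] == rec_b[key]
    # Otherwise fall back to approximate text comparison; err on the side
    # of NOT merging, since systematic-review users need high recall.
    title_sim = SequenceMatcher(
        None, rec_a["title"].lower(), rec_b["title"].lower()).ratio()
    author_sim = SequenceMatcher(
        None, rec_a["authors"].lower(), rec_b["authors"].lower()).ratio()
    return title_sim >= threshold and author_sim >= threshold
```

    Note how the cascade only reaches the fuzzy comparison when no shared strong identifier exists, which keeps the false-merge rate low.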

  14. A Rules-Based Simulation of Bacterial Turbulence

    NASA Astrophysics Data System (ADS)

    Mikel-Stites, Maxwell; Staples, Anne

    2015-11-01

    In sufficiently dense bacterial populations (>40% bacteria by volume), unusual collective swimming behaviors have been consistently observed, resembling von Karman vortex streets. The source of these collective swimming behaviors has yet to be fully determined, and no research to date has established whether the behavior derives predominantly from the properties of the surrounding media or emerges from the ``rules'' governing the behavior of individual bacteria. The goal of this research is to ascertain whether it is possible to design a simulation that replicates the qualitative behavior of densely packed bacterial populations using only behavioral rules to govern the actions of each bacterium, with the physical properties of the media neglected. The results of the simulation will address whether it is possible for the system's overall behavior to be driven exclusively by these rule-based dynamics. To examine this, the behavioral simulation was written in MATLAB on a fixed grid and updated sequentially with the bacterial behavior, including randomized tumbling, gathering and perceptual sub-functions. If the simulation is successful, it will serve as confirmation that it is possible to generate these qualitatively vortex-like behaviors without a specific physical medium (that the phenomenon arises in emergent fashion from behavioral rules), or as evidence that the observed behavior requires some specific set of physical parameters.
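    A rules-only grid update of this kind might look like the following minimal sketch. The 0.3 tumbling probability, the crowd-seeking gathering rule, and the toroidal grid are invented for illustration and are not taken from the simulation described:

```python
import random

def step(positions, grid_size, rng):
    """One rules-only update on a fixed toroidal grid: each bacterium
    either tumbles to a random heading or steps toward the locally
    densest neighbouring cell (a crude gathering rule)."""
    moves = [(0, 1), (0, -1), (1, 0), (-1, 0)]
    occupied = set(positions)
    new_positions = []
    for (x, y) in positions:
        if rng.random() < 0.3:                       # tumbling rule
            dx, dy = rng.choice(moves)
        else:                                        # gathering rule
            dx, dy = max(moves, key=lambda m: sum(
                ((x + m[0] + i) % grid_size,
                 (y + m[1] + j) % grid_size) in occupied
                for i in (-1, 0, 1) for j in (-1, 0, 1)))
        new_positions.append(((x + dx) % grid_size, (y + dy) % grid_size))
    return new_positions
```

    Whether vortex-like structure emerges from such purely behavioral updates is exactly the question the abstract poses; this sketch only shows the shape of the update loop.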

  15. A Novel Rules Based Approach for Estimating Software Birthmark

    PubMed Central

    Binti Alias, Norma; Anwar, Sajid

    2015-01-01

    A software birthmark is a unique quality of software that can be used to detect software theft. Comparing the birthmarks of two programs can tell us whether one is a copy of the other. Software theft and piracy are rapidly growing problems of copying, stealing, and misusing software without the permission stated in the license agreement. The estimation of a birthmark can play a key role in understanding its effectiveness. In this paper, a new technique is presented to evaluate and estimate a software birthmark based on the two most sought-after properties of birthmarks, namely credibility and resilience. For this purpose, soft-computing concepts such as probabilistic and fuzzy computing are taken into account, and fuzzy logic is used to estimate the properties of the birthmark. The proposed fuzzy rule-based technique is validated through a case study, and the results show that the technique is successful in assessing the specified properties of the birthmark, its resilience and credibility. This, in turn, shows how much effort will be required to detect the originality of the software based on its birthmark. PMID:25945363
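    As a rough illustration of the fuzzy-logic idea, the following sketch scores a birthmark from its credibility and resilience using two invented Mamdani-style rules with triangular memberships; the rule base, membership shapes, and thresholds are hypothetical, not the paper's:

```python
def tri(x, a, b, c):
    """Triangular membership function peaking at b, zero outside (a, c)."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def birthmark_quality(credibility, resilience):
    """Tiny illustrative rule base: quality is HIGH when both properties
    are high (rule 1), LOW when either is low (rule 2)."""
    high_c, high_r = tri(credibility, 0.0, 1.0, 2.0), tri(resilience, 0.0, 1.0, 2.0)
    low_c, low_r = tri(credibility, -1.0, 0.0, 1.0), tri(resilience, -1.0, 0.0, 1.0)
    fire_high = min(high_c, high_r)      # rule 1: AND via min t-norm
    fire_low = max(low_c, low_r)         # rule 2: OR via max co-norm
    # Weighted-average defuzzification over output centroids 1.0 and 0.0.
    total = fire_high + fire_low
    return fire_high / total if total else 0.5
```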

  16. Rule-based deduplication of article records from bibliographic databases.

    PubMed

    Jiang, Yu; Lin, Can; Meng, Weiyi; Yu, Clement; Cohen, Aaron M; Smalheiser, Neil R

    2014-01-01

    We recently designed and deployed a metasearch engine, Metta, that sends queries and retrieves search results from five leading biomedical databases: PubMed, EMBASE, CINAHL, PsycINFO and the Cochrane Central Register of Controlled Trials. Because many articles are indexed in more than one of these databases, it is desirable to deduplicate the retrieved article records. This is not a trivial problem because data fields contain a lot of missing and erroneous entries, and because certain types of information are recorded differently (and inconsistently) in the different databases. The present report describes our rule-based method for deduplicating article records across databases and includes an open-source script module that can be deployed freely. Metta was designed to satisfy the particular needs of people who are writing systematic reviews in evidence-based medicine. These users want the highest possible recall in retrieval, so it is important to err on the side of not deduplicating any records that refer to distinct articles, and it is important to perform deduplication online in real time. Our deduplication module is designed with these constraints in mind. Articles that share the same publication year are compared sequentially on parameters including PubMed ID number, digital object identifier, journal name, article title and author list, using text approximation techniques. In a review of Metta searches carried out by public users, we found that the deduplication module was more effective at identifying duplicates than EndNote without making any erroneous assignments. PMID:24434031

  17. A novel rules based approach for estimating software birthmark.

    PubMed

    Nazir, Shah; Shahzad, Sara; Khan, Sher Afzal; Alias, Norma Binti; Anwar, Sajid

    2015-01-01

    A software birthmark is a unique quality of software that can be used to detect software theft. Comparing the birthmarks of two programs can tell us whether one is a copy of the other. Software theft and piracy are rapidly growing problems of copying, stealing, and misusing software without the permission stated in the license agreement. The estimation of a birthmark can play a key role in understanding its effectiveness. In this paper, a new technique is presented to evaluate and estimate a software birthmark based on the two most sought-after properties of birthmarks, namely credibility and resilience. For this purpose, soft-computing concepts such as probabilistic and fuzzy computing are taken into account, and fuzzy logic is used to estimate the properties of the birthmark. The proposed fuzzy rule-based technique is validated through a case study, and the results show that the technique is successful in assessing the specified properties of the birthmark, its resilience and credibility. This, in turn, shows how much effort will be required to detect the originality of the software based on its birthmark. PMID:25945363

  18. Rule-based automatic segmentation for 3-D coronary arteriography

    NASA Astrophysics Data System (ADS)

    Sarwal, Alok; Truitt, Paul; Ozguner, Fusun; Zhang, Qian; Parker, Dennis L.

    1992-03-01

    Coronary arteriography is a technique used for evaluating the state of coronary arteries and assessing the need for bypass surgery and angioplasty. The present clinical application of this technology is based on the use of a contrast medium for manual radiographic visualization. This method is inaccurate due to varying interpretation of the visual results. Coronary arteriography based quantitations are impractical in a clinical setting without the use of automatic techniques applied to the 3-D reconstruction of the arterial tree. Such a system will provide an easily reproducible method for following the temporal changes in coronary morphology. The labeling of the arteries and establishing of the correspondence between multiple views is necessary for all subsequent processing required for 3-D reconstruction. This work represents a rule based expert system utilized for automatic labeling and segmentation of the arterial branches across multiple views. X-ray data of two and three views of human subjects and a pig arterial cast have been used for this research.

  19. Semantic Similarity in Biomedical Ontologies

    PubMed Central

    Pesquita, Catia; Faria, Daniel; Falcão, André O.; Lord, Phillip; Couto, Francisco M.

    2009-01-01

    In recent years, ontologies have become a mainstream topic in biomedical research. When biological entities are described using a common schema, such as an ontology, they can be compared by means of their annotations. This type of comparison is called semantic similarity, since it assesses the degree of relatedness between two entities by the similarity in meaning of their annotations. The application of semantic similarity to biomedical ontologies is recent; nevertheless, several studies have been published in the last few years describing and evaluating diverse approaches. Semantic similarity has become a valuable tool for validating the results drawn from biomedical studies such as gene clustering, gene expression data analysis, prediction and validation of molecular interactions, and disease gene prioritization. We review semantic similarity measures applied to biomedical ontologies and propose their classification according to the strategies they employ: node-based versus edge-based and pairwise versus groupwise. We also present comparative assessment studies and discuss the implications of their results. We survey the existing implementations of semantic similarity measures, and we describe examples of applications to biomedical research. This will clarify how biomedical researchers can benefit from semantic similarity measures and help them choose the approach most suitable for their studies. Biomedical ontologies are evolving toward increased coverage, formality, and integration, and their use for annotation is increasingly becoming a focus of both effort by biomedical experts and application of automated annotation procedures to create corpora of higher quality and completeness than are currently available. Given that semantic similarity measures are directly dependent on these evolutions, we can expect to see them gaining more relevance and even becoming as essential as sequence similarity is today in biomedical research. PMID:19649320
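    A minimal groupwise, node-based measure of the kind surveyed can be sketched as follows: each entity's annotation set is closed under the is-a hierarchy, and the two closed sets are compared by overlap. The Jaccard overlap and the toy hierarchy are illustrative; published measures typically weight terms by information content:

```python
def expand(terms, parents):
    """Close a set of ontology terms under the is-a hierarchy."""
    seen, stack = set(), list(terms)
    while stack:
        t = stack.pop()
        if t not in seen:
            seen.add(t)
            stack.extend(parents.get(t, []))
    return seen

def sim(terms_a, terms_b, parents):
    """Groupwise semantic similarity as Jaccard overlap of the
    ancestor-closed annotation sets of two entities."""
    a, b = expand(terms_a, parents), expand(terms_b, parents)
    return len(a & b) / len(a | b) if a | b else 0.0
```

    Because the sets are ancestor-closed, two entities annotated with sibling terms still share their common ancestors, which is what makes the measure semantic rather than a plain string match.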

  20. On Decision-Making Among Multiple Rule-Bases in Fuzzy Control Systems

    NASA Technical Reports Server (NTRS)

    Tunstel, Edward; Jamshidi, Mo

    1997-01-01

    Intelligent control of complex multi-variable systems can be a challenge for single fuzzy rule-based controllers. This class of problems can often be managed with less difficulty by distributing intelligent decision-making amongst a collection of rule-bases. Such an approach requires that a mechanism be chosen to ensure goal-oriented interaction between the multiple rule-bases. In this paper, a hierarchical rule-based approach is described. Decision-making mechanisms based on generalized concepts from single-rule-based fuzzy control are described. Finally, the effects of different aggregation operators on multi-rule-base decision-making are examined in a navigation control problem for mobile robots.
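    The effect of the aggregation operator on multi-rule-base decisions can be illustrated with a small sketch; the action names and membership values are invented:

```python
def combine(memberships, op):
    """Aggregate one action's output memberships across rule-bases.
    `op` plays the role of the aggregation operator studied above."""
    ops = {
        "min": min,                               # conjunctive (pessimistic)
        "max": max,                               # disjunctive (optimistic)
        "mean": lambda v: sum(v) / len(v),        # averaging compromise
    }
    return ops[op](list(memberships))

def decide(rule_base_outputs, op="mean"):
    """Pick the action whose aggregated membership is largest, given a
    list of per-rule-base {action: membership} dicts."""
    actions = rule_base_outputs[0].keys()
    scores = {a: combine([rb[a] for rb in rule_base_outputs], op)
              for a in actions}
    return max(scores, key=scores.get)
```

    With one rule-base strongly favouring "left" and another strongly opposing it, `max` picks "left" while `min` picks the safer "straight", showing why the choice of operator matters.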

  1. Using hierarchically structured problem-solving knowledge in a rule-based process planning system

    SciTech Connect

    Hummel, K.E.; Brooks, S.L.

    1987-06-01

    A rule-based expert system, XCUT, currently is being developed which will generate process plans for the production of machined parts, given a feature-based part description. Due to the vast and dynamic nature of process planning knowledge, a technique has been used in the development of XCUT that segments problem solving knowledge into multiple rule bases. These rule bases are structured in a hierarchical manner that is reflective of the problem decomposition procedure used to generate a plan. An inference engine, HERB (Hierarchical Expert Rule Bases), has been developed which supports the manipulation of multiple rule bases during the planning process. This paper illustrates the hierarchical nature of problem-solving knowledge in the XCUT system and describes the use of HERB for programming with hierarchically structured rule bases. 6 refs., 21 figs.
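    The hierarchical dispatch idea can be sketched as follows; the feature names, rules, and machining operations are invented for illustration and are not XCUT's or HERB's actual rule content:

```python
# Hypothetical hierarchy: a root rule base routes each part feature to a
# sub-rule-base, which in turn selects a concrete machining operation.
RULE_BASES = {
    "root": [
        (lambda f: f["feature"] == "hole", "hole_planning"),
        (lambda f: f["feature"] == "slot", "slot_planning"),
    ],
    "hole_planning": [
        (lambda f: f["diameter"] < 10, "drill"),
        (lambda f: True, "bore"),
    ],
    "slot_planning": [
        (lambda f: True, "end_mill"),
    ],
}

def plan(feature, rule_base="root"):
    """Fire the first matching rule; if its action names another rule
    base, descend into it, otherwise return the concrete operation."""
    for condition, action in RULE_BASES[rule_base]:
        if condition(feature):
            return plan(feature, action) if action in RULE_BASES else action
    raise ValueError("no rule fired in " + rule_base)
```

    The segmentation into small, feature-specific rule bases mirrors the problem decomposition the abstract describes: each sub-rule-base can be maintained independently of the others.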

  2. Rule-based reasoning for system dynamics in cell systems.

    PubMed

    Jeong, Euna; Nagasaki, Masao; Miyano, Satoru

    2008-01-01

    A system-dynamics-centered ontology, called the Cell System Ontology (CSO), has been developed for representation of diverse biological pathways. Many of the pathway data based on the ontology have been created from databases via data conversion or curated by expert biologists. It is essential to validate the pathway data which may cause unexpected issues such as semantic inconsistency and incompleteness. This paper discusses three criteria for validating the pathway data based on CSO as follows: (1) structurally correct models in terms of Petri nets, (2) biologically correct models to capture biological meaning, and (3) systematically correct models to reflect biological behaviors. Simultaneously, we have investigated how logic-based rules can be used for the ontology to extend its expressiveness and to complement the ontology by reasoning, which aims at qualifying pathway knowledge. Finally, we show how the proposed approach helps exploring dynamic modeling and simulation tasks without prior knowledge. PMID:19425120

  3. Cross border semantic interoperability for clinical research: the EHR4CR semantic resources and services.

    PubMed

    Daniel, Christel; Ouagne, David; Sadou, Eric; Forsberg, Kerstin; Gilchrist, Mark Mc; Zapletal, Eric; Paris, Nicolas; Hussain, Sajjad; Jaulent, Marie-Christine; Md, Dipka Kalra

    2016-01-01

    With the development of platforms enabling the use of routinely collected clinical data in the context of international clinical research, scalable solutions for cross border semantic interoperability need to be developed. Within the context of the IMI EHR4CR project, we first defined the requirements and evaluation criteria of the EHR4CR semantic interoperability platform and then developed the semantic resources and supportive services and tooling to assist hospital sites in standardizing their data for allowing the execution of the project use cases. The experience gained from the evaluation of the EHR4CR platform accessing to semantically equivalent data elements across 11 European participating EHR systems from 5 countries demonstrated how far the mediation model and mapping efforts met the expected requirements of the project. Developers of semantic interoperability platforms are beginning to address a core set of requirements in order to reach the goal of developing cross border semantic integration of data. PMID:27570649

  4. Cross border semantic interoperability for clinical research: the EHR4CR semantic resources and services

    PubMed Central

    Daniel, Christel; Ouagne, David; Sadou, Eric; Forsberg, Kerstin; Gilchrist, Mark Mc; Zapletal, Eric; Paris, Nicolas; Hussain, Sajjad; Jaulent, Marie-Christine; MD, Dipka Kalra

    2016-01-01

    With the development of platforms enabling the use of routinely collected clinical data in the context of international clinical research, scalable solutions for cross border semantic interoperability need to be developed. Within the context of the IMI EHR4CR project, we first defined the requirements and evaluation criteria of the EHR4CR semantic interoperability platform and then developed the semantic resources and supportive services and tooling to assist hospital sites in standardizing their data for allowing the execution of the project use cases. The experience gained from the evaluation of the EHR4CR platform accessing to semantically equivalent data elements across 11 European participating EHR systems from 5 countries demonstrated how far the mediation model and mapping efforts met the expected requirements of the project. Developers of semantic interoperability platforms are beginning to address a core set of requirements in order to reach the goal of developing cross border semantic integration of data. PMID:27570649

  5. A semantic approach to the efficient integration of interactive and automatic target recognition systems for the analysis of complex infrastructure from aerial imagery

    NASA Astrophysics Data System (ADS)

    Bauer, A.; Peinsipp-Byma, E.

    2008-04-01

    The analysis of complex infrastructure from aerial imagery, for instance a detailed analysis of an airfield, requires the interpreter, besides being familiar with the sensor's imaging characteristics, to have a detailed understanding of the infrastructure domain. The required domain knowledge includes knowledge about the processes and functions involved in the operation of the infrastructure, the potential objects used to provide those functions and their spatial and functional interrelations. Since it is not yet possible to provide reliable automatic object recognition (AOR) for the analysis of such complex scenes, we developed systems to support a human interpreter with either interactive approaches, able to assist the interpreter with previously acquired expert knowledge about the domain in question, or AOR methods, capable of detecting, recognizing or analyzing certain classes of objects for certain sensors. We believe that, to achieve an optimal result at the end of an interpretation process in terms of efficiency and effectiveness, it is essential to integrate both interactive and automatic approaches to image interpretation. In this paper we present an approach inspired by the advancing semantic web technology to represent domain knowledge, the capabilities of available AOR modules and the image parameters in an explicit way. This enables us to seamlessly extend an interactive image interpretation environment with AOR modules in a way that we can automatically select suitable AOR methods for the current subtask, focus them on an appropriate area of interest and reintegrate their results into the environment.

  6. The development of co-speech gesture and its semantic integration with speech in 6- to 12-year-old children with autism spectrum disorders.

    PubMed

    So, Wing-Chee; Wong, Miranda Kit-Yi; Lui, Ming; Yip, Virginia

    2015-11-01

    Previous work leaves open the question of whether children with autism spectrum disorders aged 6-12 years have delay in producing gestures compared to their typically developing peers. This study examined gestural production among school-aged children in a naturalistic context and how their gestures are semantically related to the accompanying speech. Delay in gestural production was found in children with autism spectrum disorders through their middle to late childhood. Compared to their typically developing counterparts, children with autism spectrum disorders gestured less often and used fewer types of gestures, in particular markers, which carry culture-specific meaning. Typically developing children's gestural production was related to language and cognitive skills, but among children with autism spectrum disorders, gestural production was more strongly related to the severity of socio-communicative impairment. Gesture impairment also included the failure to integrate speech with gesture: in particular, supplementary gestures are absent in children with autism spectrum disorders. The findings extend our understanding of gestural production in school-aged children with autism spectrum disorders during spontaneous interaction. The results can help guide new therapies for gestural production for children with autism spectrum disorders in middle and late childhood. PMID:25488001

  7. A new type of simplified fuzzy rule-based system

    NASA Astrophysics Data System (ADS)

    Angelov, Plamen; Yager, Ronald

    2012-02-01

    Over the last quarter of a century, two types of fuzzy rule-based (FRB) systems have dominated, namely the Mamdani and Takagi-Sugeno types. They use the same type of scalar fuzzy sets defined per input variable in their antecedent part, which are aggregated at the inference stage by t-norms or co-norms representing logical AND/OR operations. In this paper, we propose a significantly simplified alternative that defines the antecedent part of FRB systems by data Clouds and density distribution. This new type of FRB system goes further in conceptual and computational simplification while preserving the best features (flexibility, modularity, and human intelligibility) of its predecessors. The proposed concept offers an alternative non-parametric form of the rules' antecedents, which fully reflects the real data distribution and does not require any explicit aggregation operations or scalar membership functions to be imposed. Instead, it derives the fuzzy membership of a particular data sample to a Cloud from the density distribution of the data associated with that Cloud. Contrast this with clustering, which is a parametric data-space decomposition/partitioning in which the fuzzy membership to a cluster is measured by the distance to the cluster centre/prototype, ignoring all the data that form that cluster or approximating their distribution. The proposed new approach takes into account fully and exactly the spatial distribution and similarity of all the real data through an innovative and much simplified form of the antecedent part. In this paper, we provide several numerical examples aiming to illustrate the concept.
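    A one-dimensional sketch of density-derived membership, assuming a Cauchy-type kernel over the mean squared distance to the Cloud's samples (one common choice in this line of work; the paper's exact form may differ):

```python
def cloud_membership(x, cloud):
    """Local density of sample x relative to all samples in the Cloud:
    close to 1 when x sits among the Cloud's data, decaying toward 0
    as x moves away. No prototype or scalar membership function is
    imposed; only the data themselves are used."""
    mean_sq_dist = sum((x - p) ** 2 for p in cloud) / len(cloud)
    return 1.0 / (1.0 + mean_sq_dist)
```

    Unlike a distance-to-centre clustering membership, this value depends on every sample in the Cloud, which is the non-parametric property the abstract emphasizes.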

  8. Rule-based Cross-matching of Very Large Catalogs

    NASA Astrophysics Data System (ADS)

    Ogle, P. M.; Mazzarella, J.; Ebert, R.; Fadda, D.; Lo, T.; Terek, S.; Schmitz, M.; NED Team

    2015-09-01

    The NASA Extragalactic Database (NED) has deployed a new rule-based cross-matching algorithm called Match Expert (MatchEx), capable of cross-matching very large catalogs (VLCs) with >10 million objects. MatchEx goes beyond traditional position-based cross-matching algorithms by using other available data together with expert logic to determine which candidate match is the best. Furthermore, the local background density of sources is used to determine and minimize the false-positive match rate and to estimate match completeness. The logical outcome and statistical probability of each match decision is stored in the database and may be used to tune the algorithm and adjust match parameter thresholds. For our first production run, we cross-matched the GALEX All Sky Survey Catalog (GASC), containing nearly 40 million NUV-detected sources, against a directory of 180 million objects in NED. Candidate matches were identified for each GASC source within a 7''.5 radius. These candidates were filtered on position-based matching probability and on other criteria including object type and object name. We estimate a match completeness of 97.6% and a match accuracy of 99.75%. Over the next year, we will be cross-matching over 2 billion catalog sources to NED, including the Spitzer Source List, the 2MASS point-source catalog, AllWISE, and SDSS DR 10. We expect to add new capabilities to filter candidate matches based on photometry, redshifts, and refined object classifications. We will also extend MatchEx to handle more heterogeneous datasets federated from smaller catalogs through NED's literature pipeline.
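    The position-based stage of such a cross-match might be sketched as below; the flat-sky separation formula, the 7.5-arcsecond radius, and the object-type tie-break rule are illustrative assumptions, not MatchEx's actual logic:

```python
import math

def angular_sep_arcsec(ra1, dec1, ra2, dec2):
    """Small-angle separation in arcseconds (flat-sky approximation,
    adequate for the few-arcsecond radii used in cross-matching).
    Coordinates are in decimal degrees."""
    dra = (ra1 - ra2) * math.cos(math.radians((dec1 + dec2) / 2))
    return math.hypot(dra, dec1 - dec2) * 3600.0

def best_match(source, candidates, radius=7.5):
    """Keep candidates inside the search radius, then pick the closest;
    a toy expert rule (prefer galaxies over stars) breaks exact ties."""
    def sep(c):
        return angular_sep_arcsec(source["ra"], source["dec"],
                                  c["ra"], c["dec"])
    inside = [c for c in candidates if sep(c) <= radius]
    if not inside:
        return None
    return min(inside, key=lambda c: (sep(c),
                                      0 if c.get("type") == "galaxy" else 1))
```

    A production system would layer further rules (name matching, background-density-based false-positive estimates) on top of this positional core.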

  9. Rule-based classification models of molecular autofluorescence.

    PubMed

    Su, Bo-Han; Tu, Yi-Shu; Lin, Olivia A; Harn, Yeu-Chern; Shen, Meng-Yu; Tseng, Yufeng J

    2015-02-23

    Fluorescence-based detection has been commonly used in high-throughput screening (HTS) assays. Autofluorescent compounds, which can emit light in the absence of artificial fluorescent markers, often interfere with the detection of fluorophores and result in false positive signals in these assays. This interference presents a major issue in fluorescence-based screening techniques. In an effort to reduce the time and cost that will be spent on prescreening of autofluorescent compounds, in silico autofluorescence prediction models were developed for selected fluorescence-based assays in this study. Five prediction models were developed based on the respective fluorophores used in these HTS assays, which absorb and emit light at specific wavelengths (excitation/emission): Alexa Fluor 350 (A350) (340 nm/450 nm), 7-amino-4-trifluoromethyl-coumarin (AFC) (405 nm/520 nm), Alexa Fluor 488 (A488) (480 nm/540 nm), Rhodamine (547 nm/598 nm), and Texas Red (547 nm/618 nm). The C5.0 rule-based classification algorithm and PubChem 2D chemical structure fingerprints were used to develop prediction models. To optimize the accuracies of these prediction models despite the highly imbalanced ratio of fluorescent versus nonfluorescent compounds presented in the collected data sets, oversampling and undersampling strategies were applied. The average final accuracy achieved for the training set was 97%, and that for the testing set was 92%. In addition, five external data sets were used to further validate the models. Ultimately, 14 representative structural features (or rules) were determined to efficiently predict autofluorescence in data sets containing both fluorescent and nonfluorescent compounds. Several cases were illustrated in this study to demonstrate the applicability of these rules. PMID:25625768
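    The over- and undersampling strategies mentioned can be sketched with naive random resampling to equal class counts; real pipelines often use more elaborate schemes (e.g. SMOTE), so treat this purely as an illustration of the idea:

```python
import random

def rebalance(samples, labels, mode, rng):
    """Equalize class counts for an imbalanced training set: 'over'
    duplicates minority samples up to the majority count, 'under'
    subsamples each class down to the minority count."""
    by_class = {}
    for s, y in zip(samples, labels):
        by_class.setdefault(y, []).append(s)
    sizes = [len(v) for v in by_class.values()]
    target = max(sizes) if mode == "over" else min(sizes)
    out = []
    for y, group in by_class.items():
        if len(group) >= target:
            chosen = rng.sample(group, target)      # subsample majority
        else:                                       # duplicate minority
            chosen = group + [rng.choice(group)
                              for _ in range(target - len(group))]
        out.extend((s, y) for s in chosen)
    return out
```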

  10. A Semantic Approach with Decision Support for Safety Service in Smart Home Management.

    PubMed

    Huang, Xiaoci; Yi, Jianjun; Zhu, Xiaomin; Chen, Shaoli

    2016-01-01

    Research on smart homes (SHs) has increased significantly in recent years because of the convenience provided by having an assisted living environment. Among the SH functions considered in previous studies, however, safety services are seldom discussed. Thus, this study proposes a semantic approach with decision support for safety service in SH management. The focus of this contribution is to explore a context awareness and reasoning approach for risk recognition in SH that enables the proper decision support for flexible safety service provision. The framework of SH based on a wireless sensor network is described from the perspective of neighbourhood management. This approach is based on the integration of semantic knowledge in which a reasoner can make decisions about risk recognition and safety service. We present a management ontology for a SH and relevant monitoring contextual information, which considers its suitability in a pervasive computing environment and is service-oriented. We also propose a rule-based reasoning method to provide decision support through reasoning techniques and context-awareness. A system prototype is developed to evaluate the feasibility, time response and extendibility of the approach. The evaluation of our approach shows that it is more effective in daily risk event recognition. The decisions for service provision are shown to be accurate. PMID:27527170

  11. Rule-based classification of multi-temporal satellite imagery for habitat and agricultural land cover mapping

    NASA Astrophysics Data System (ADS)

    Lucas, Richard; Rowlands, Aled; Brown, Alan; Keyworth, Steve; Bunting, Peter

    Aim To evaluate the use of time-series of Landsat sensor data acquired over an annual cycle for mapping semi-natural habitats and agricultural land cover. Location Berwyn Mountains, North Wales, United Kingdom. Methods Using eCognition Expert, segmentation of the Landsat sensor data was undertaken for actively managed agricultural land based on Integrated Administration and Control System (IACS) land parcel boundaries, whilst a per-pixel level segmentation was undertaken for all remaining areas. Numerical decision rules based on fuzzy logic that coupled knowledge of ecology and the information content of single and multi-date remotely sensed data and derived products (e.g., vegetation indices) were developed to discriminate vegetation types based primarily on inferred differences in phenology, structure, wetness and productivity. Results The rule-based classification gave a good representation of the distribution of habitats and agricultural land. The more extensive, contiguous and homogeneous habitats could be mapped with accuracies exceeding 80%, although accuracies were lower for more complex environments (e.g., upland mosaics) or those with broad definition (e.g., semi-improved grasslands). Main conclusions The application of a rule-based classification to temporal imagery acquired over selected periods within an annual cycle provides a viable approach for mapping and monitoring of habitats and agricultural land in the United Kingdom that could be employed operationally.
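    A toy version of such phenology-driven rules, with invented NDVI thresholds standing in for the paper's fuzzy decision rules over multi-date imagery:

```python
def classify(ndvi_by_season):
    """Crisp stand-in for fuzzy phenology rules (thresholds invented):
    evergreen cover stays green year-round, cropland greens up and is
    then harvested, moorland stays uniformly low."""
    summer, winter = ndvi_by_season["summer"], ndvi_by_season["winter"]
    if summer > 0.6 and winter > 0.5:
        return "evergreen"
    if summer > 0.6 and winter < 0.3:
        return "cropland"
    if summer < 0.4:
        return "moorland"
    return "unclassified"
```

    The point of using multi-temporal imagery is visible even in this toy: summer NDVI alone cannot separate evergreen cover from cropland, but the winter date can.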

  12. A rule-based seizure prediction method for focal neocortical epilepsy

    PubMed Central

    Aarabi, Ardalan; He, Bin

    2012-01-01

    Objective In the present study, we have developed a novel patient-specific rule-based seizure prediction system for focal neocortical epilepsy. Methods Five univariate measures including correlation dimension, correlation entropy, noise level, Lempel-Ziv complexity, and largest Lyapunov exponent as well as one bivariate measure, nonlinear interdependence, were extracted from non-overlapping 10-second segments of intracranial electroencephalogram (iEEG) data recorded using electrodes implanted deep in the brain and/or placed on the cortical surface. The spatio-temporal information was then integrated by using rules established based on patient-specific changes observed in the period prior to a seizure sample for each patient. The system was tested on 316 h of iEEG data containing 49 seizures recorded in eleven patients with medically intractable focal neocortical epilepsy. Results For seizure occurrence periods of 30 and 50 min our method showed an average sensitivity of 79.9% and 90.2% with an average false prediction rate of 0.17 and 0.11/h, respectively. In terms of sensitivity and false prediction rate, the system showed superiority to random and periodical predictors. Conclusions The nonlinear analysis of iEEG in the period prior to seizures revealed patient-specific spatio-temporal changes that were significantly different from those observed within baselines in the majority of the seizures analyzed in this study. Significance The present results suggest that the patient-specific rule-based approach may become a useful approach for predicting seizures prior to onset. PMID:22361267
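
    A patient-specific rule of this kind can be sketched as a k-of-n threshold test applied to successive iEEG feature segments. The feature names, threshold ranges, and counts below are illustrative, not the tuned values from the study.

    ```python
    def predict(segments, thresholds, min_measures=3, consecutive=2):
        """Raise an alarm when at least `min_measures` features leave their
        patient-specific baseline ranges in `consecutive` successive 10-s
        segments. All thresholds here are hypothetical."""
        streak = 0
        for seg in segments:  # each seg: {feature_name: value} for one window
            crossed = sum(1 for feat, (lo, hi) in thresholds.items()
                          if not lo <= seg[feat] <= hi)
            streak = streak + 1 if crossed >= min_measures else 0
            if streak >= consecutive:
                return True  # prediction alarm
        return False
    ```

    Requiring several measures to cross thresholds in consecutive segments is one simple way to integrate spatio-temporal evidence while suppressing single-segment outliers.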

  13. Neural substrates of similarity and rule-based strategies in judgment

    PubMed Central

    von Helversen, Bettina; Karlsson, Linnea; Rasch, Björn; Rieskamp, Jörg

    2014-01-01

    Making accurate judgments is a core human competence and a prerequisite for success in many areas of life. Plenty of evidence exists that people can employ different judgment strategies to solve identical judgment problems. In categorization, it has been demonstrated that similarity-based and rule-based strategies are associated with activity in different brain regions. Building on this research, the present work tests whether solving two identical judgment problems recruits different neural substrates depending on people's judgment strategies. Combining cognitive modeling of judgment strategies at the behavioral level with functional magnetic resonance imaging (fMRI), we compare brain activity when using two archetypal judgment strategies: a similarity-based exemplar strategy and a rule-based heuristic strategy. Using an exemplar-based strategy should recruit areas involved in long-term memory processes to a larger extent than a heuristic strategy. In contrast, using a heuristic strategy should recruit areas involved in the application of rules to a larger extent than an exemplar-based strategy. Largely consistent with our hypotheses, we found that using an exemplar-based strategy led to relatively higher BOLD activity in the anterior prefrontal and inferior parietal cortex, presumably related to retrieval and selective attention processes. In contrast, using a heuristic strategy led to relatively higher activity in areas in the dorsolateral prefrontal and the temporal-parietal cortex associated with cognitive control and information integration. Thus, even when people solve identical judgment problems, different neural substrates can be recruited depending on the judgment strategy involved. PMID:25360099

  14. Research of Expended Production Rule Based on Fuzzy Conceptual Graphs*

    NASA Astrophysics Data System (ADS)

    Liu, Peiqi; Li, Longji; Zhang, Linye; Li, Zengzhi

    In knowledge engineering, fuzzy conceptual graphs and production rules are two important knowledge representation methods. Because confidence information cannot be represented in fuzzy conceptual graphs and fuzzy knowledge cannot be represented in production rules, the expressive power of each is seriously limited. In this paper, an extended production rule, a new knowledge representation method, is presented. In the extended production rule, the antecedent and consequent of a rule are represented by fuzzy conceptual graphs, and the sustaining relation between antecedent and consequent is the confidence. The rule effectively combines fuzzy knowledge with confidence. It not only retains the semantic richness of facts and propositions, but also makes the reasoning results more effective. Based on the extended production rule, an uncertain reasoning algorithm using fuzzy conceptual graphs is designed. Experiments and analysis show that the reasoning results of the extended production rule are more reasonable. The research results are applied in the design of the uncertain inference engine of a fuzzy expert system.
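
    Confidence propagation in such extended production rules can be sketched with a simple forward-chaining loop in which a derived fact's confidence is the rule confidence times the minimum antecedent confidence. The min-conjunction convention is a common choice, assumed here; the paper's exact calculus and conceptual-graph matching are not reproduced.

    ```python
    def forward_chain(facts, rules):
        """facts: {fact: confidence}; rules: [(antecedents, consequent, cf)].
        Derive facts to a fixed point, keeping the highest confidence found."""
        changed = True
        while changed:
            changed = False
            for antecedents, consequent, cf in rules:
                if all(a in facts for a in antecedents):
                    conf = cf * min(facts[a] for a in antecedents)
                    if conf > facts.get(consequent, 0.0):
                        facts[consequent] = conf
                        changed = True
        return facts
    ```

    In the full method each antecedent match would itself be a fuzzy conceptual-graph projection rather than a dictionary lookup.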

  15. Models of Quantitative Estimations: Rule-Based and Exemplar-Based Processes Compared

    ERIC Educational Resources Information Center

    von Helversen, Bettina; Rieskamp, Jorg

    2009-01-01

    The cognitive processes underlying quantitative estimations vary. Past research has identified task-contingent changes between rule-based and exemplar-based processes (P. Juslin, L. Karlsson, & H. Olsson, 2008). B. von Helversen and J. Rieskamp (2008), however, proposed a simple rule-based model--the mapping model--that outperformed the exemplar…

  16. The Semantic eScience Framework

    NASA Astrophysics Data System (ADS)

    McGuinness, Deborah; Fox, Peter; Hendler, James

    2010-05-01

    The goal of this effort is to design and implement a configurable and extensible semantic eScience framework (SESF). Configuration requires research into accommodating different levels of semantic expressivity and user requirements from use cases. Extensibility is being achieved through a modular approach to the semantic encodings (i.e., ontologies) performed in community settings, i.e., an ontology framework into which specific applications, all the way up to communities, can extend the semantics for their needs. We report on how we are accommodating the rapid advances in semantic technologies and tools and the sustainable software path for future technical advances. In addition to a generalization of the current data science interface, we will present plans for an upper-level interface suitable for use by clearinghouses, educational portals, digital libraries, and other disciplines. SESF builds upon previous work in the Virtual Solar-Terrestrial Observatory (VSTO). The VSTO utilizes leading-edge knowledge representation, query and reasoning techniques to support knowledge-enhanced search, data access, integration, and manipulation. It encodes term meanings and their inter-relationships in ontologies and uses these ontologies and associated inference engines to semantically enable the data services. The Semantically-Enabled Science Data Integration (SESDI) project implemented data integration capabilities among three sub-disciplines (solar radiation, volcanic outgassing and atmospheric structure) using extensions to existing modular ontologies and the VSTO data framework, while adding smart faceted search and semantic data registration tools. The Semantic Provenance Capture in Data Ingest Systems (SPCDIS) project has added explanation provenance capabilities to an observational data ingest pipeline for images of the Sun, providing a set of tools to answer diverse end-user questions such as ``Why does this image look bad?''. http://tw.rpi.edu/portal/SESF

  17. The Semantic eScience Framework

    NASA Astrophysics Data System (ADS)

    Fox, P. A.; McGuinness, D. L.

    2009-12-01

    The goal of this effort is to design and implement a configurable and extensible semantic eScience framework (SESF). Configuration requires research into accommodating different levels of semantic expressivity and user requirements from use cases. Extensibility is being achieved through a modular approach to the semantic encodings (i.e., ontologies) performed in community settings, i.e., an ontology framework into which specific applications, all the way up to communities, can extend the semantics for their needs. We report on how we are accommodating the rapid advances in semantic technologies and tools and the sustainable software path for future technical advances. In addition to a generalization of the current data science interface, we will present plans for an upper-level interface suitable for use by clearinghouses, educational portals, digital libraries, and other disciplines. SESF builds upon previous work in the Virtual Solar-Terrestrial Observatory (VSTO). The VSTO utilizes leading-edge knowledge representation, query and reasoning techniques to support knowledge-enhanced search, data access, integration, and manipulation. It encodes term meanings and their inter-relationships in ontologies and uses these ontologies and associated inference engines to semantically enable the data services. The Semantically-Enabled Science Data Integration (SESDI) project implemented data integration capabilities among three sub-disciplines (solar radiation, volcanic outgassing and atmospheric structure) using extensions to existing modular ontologies and the VSTO data framework, while adding smart faceted search and semantic data registration tools. The Semantic Provenance Capture in Data Ingest Systems (SPCDIS) project has added explanation provenance capabilities to an observational data ingest pipeline for images of the Sun, providing a set of tools to answer diverse end-user questions such as ``Why does this image look bad?''.

  18. LORD: a phenotype-genotype semantically integrated biomedical data tool to support rare disease diagnosis coding in health information systems

    PubMed Central

    Choquet, Remy; Maaroufi, Meriem; Fonjallaz, Yannick; de Carrara, Albane; Vandenbussche, Pierre-Yves; Dhombres, Ferdinand; Landais, Paul

    2015-01-01

    Characterizing a rare disease diagnosis for a given patient is often done through experts' networks. It is a complex task that can evolve over time depending on the natural history of the disease and the evolution of scientific knowledge. Most rare diseases have genetic causes, and recent improvements in sequencing techniques contribute to the discovery of many new diseases every year. Diagnosis coding in the rare disease field requires data from multiple knowledge bases to be aggregated in order to offer the clinician a global information space, from possible diagnoses to clinical signs (phenotypes) and known genetic mutations (genotype). Nowadays, the major barrier to the coding activity is the lack of consolidation of such information, which is scattered across different thesauri such as Orphanet, OMIM or HPO. The Linking Open Data for Rare Diseases (LORD) web portal we developed stands as the first attempt to fill this gap by offering an integrated view of 8,400 rare diseases linked to more than 14,500 signs and 3,270 genes. The application provides a browsing feature to navigate through the relationships between diseases, signs and genes, and Application Programming Interfaces to help its integration into health information systems in routine use. PMID:26958175
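
    Navigating such phenotype-genotype links can be sketched over a toy (subject, predicate, object) triple store, ranking candidate diseases by how many observed signs they are linked to. The mini-graph and the `has_sign`/`has_gene` predicate names are illustrative assumptions, not the LORD data model.

    ```python
    from collections import Counter

    def rank_diagnoses(triples, observed_signs):
        """Rank diseases by the number of observed signs linked to them
        via the (hypothetical) has_sign predicate."""
        counts = Counter(s for s, p, o in triples
                         if p == "has_sign" and o in observed_signs)
        return counts.most_common()
    ```

    A real linked-data portal would run the equivalent query over RDF with SPARQL; the counting logic is the same.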

  19. LORD: a phenotype-genotype semantically integrated biomedical data tool to support rare disease diagnosis coding in health information systems.

    PubMed

    Choquet, Remy; Maaroufi, Meriem; Fonjallaz, Yannick; de Carrara, Albane; Vandenbussche, Pierre-Yves; Dhombres, Ferdinand; Landais, Paul

    2015-01-01

    Characterizing a rare disease diagnosis for a given patient is often done through experts' networks. It is a complex task that can evolve over time depending on the natural history of the disease and the evolution of scientific knowledge. Most rare diseases have genetic causes, and recent improvements in sequencing techniques contribute to the discovery of many new diseases every year. Diagnosis coding in the rare disease field requires data from multiple knowledge bases to be aggregated in order to offer the clinician a global information space, from possible diagnoses to clinical signs (phenotypes) and known genetic mutations (genotype). Nowadays, the major barrier to the coding activity is the lack of consolidation of such information, which is scattered across different thesauri such as Orphanet, OMIM or HPO. The Linking Open Data for Rare Diseases (LORD) web portal we developed stands as the first attempt to fill this gap by offering an integrated view of 8,400 rare diseases linked to more than 14,500 signs and 3,270 genes. The application provides a browsing feature to navigate through the relationships between diseases, signs and genes, and Application Programming Interfaces to help its integration into health information systems in routine use. PMID:26958175

  20. Fuzzy rule base design using tabu search algorithm for nonlinear system modeling.

    PubMed

    Bagis, Aytekin

    2008-01-01

    This paper presents an approach to fuzzy rule base design using the tabu search algorithm (TSA) for nonlinear system modeling. The TSA is used to evolve the structure and parameters of the fuzzy rule base. The use of the TSA, in conjunction with a systematic neighbourhood structure for determining the fuzzy rule base parameters, leads to a significant improvement in the performance of the model. To demonstrate the effectiveness of the presented method, several numerical examples from the literature are examined. The results obtained by means of the identified fuzzy rule bases are compared with those of other modeling approaches in the literature. The simulation results indicate that the TSA-based method provides an effective procedure for fuzzy rule base design in the modeling of nonlinear or complex systems. PMID:17945233
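
    The core tabu search loop can be sketched generically: move to the best non-tabu neighbour each step while tracking the best solution seen. In the paper's setting the solution would encode fuzzy membership-function parameters and the cost would be model error; here a toy one-dimensional quadratic stands in.

    ```python
    import random

    def tabu_search(init, neighbours, cost, iters=200, tabu_len=10, seed=0):
        """Generic tabu search: greedily move to the best candidate not on
        the tabu list of recently visited solutions."""
        rng = random.Random(seed)
        current = best = init
        tabu = [init]
        for _ in range(iters):
            candidates = [n for n in neighbours(current, rng) if n not in tabu]
            if not candidates:
                continue
            current = min(candidates, key=cost)  # best non-tabu move
            tabu.append(current)
            if len(tabu) > tabu_len:
                tabu.pop(0)  # forget the oldest tabu entry
            if cost(current) < cost(best):
                best = current
        return best
    ```

    The tabu list is what lets the search escape local minima: it may accept a worsening move because the improving ones were recently visited.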

  1. Semantic-Web Technology: Applications at NASA

    NASA Technical Reports Server (NTRS)

    Ashish, Naveen

    2004-01-01

    We provide a description of work at the National Aeronautics and Space Administration (NASA) on building systems based on semantic-web concepts and technologies. NASA has been one of the early adopters of semantic-web technologies for practical applications. Indeed, there are several ongoing endeavors to build semantics-based systems for use in diverse NASA domains, ranging from collaborative scientific activity, accident and mishap investigation, and enterprise search to scientific information gathering and integration and aviation safety decision support. We provide a brief overview of many applications and ongoing work with the goal of informing the external community of these NASA endeavors.

  2. Disorders of semantic memory.

    PubMed

    McCarthy, R A; Warrington, E K

    1994-10-29

    It is now established that selective disorders of semantic memory may arise after focal cerebral lesions. Debate and dissension remain on three principal issues: category specificity, the status of modality-dependent knowledge, and the stability and sufficiency of stored information. Theories of category specificity have focused on the frequently reported dissociation between living things and man-made objects. However, other dimensions need theoretical integration. Impairments can be both finer-grain and broader in range. A second variable of importance is stimulus modality. Reciprocal interactive dissociations between vision and language and between animals and objects will be described. These indicate that the derivation of semantic information is constrained by input modality: we appear to have evolved separable databases for the visual and the verbal world. Thirdly, an orthogonal distinction has been drawn between degradation disorders, where representations are insufficient for comprehension, and access deficits, in which representations have become unstable. These issues may have their parallel in the acquisition of knowledge by the developing child. PMID:7886158

  3. SEMANTICS AND CRITICAL READING.

    ERIC Educational Resources Information Center

    FLANIGAN, MICHAEL C.

    PROFICIENCY IN CRITICAL READING CAN BE ACCELERATED BY MAKING STUDENTS AWARE OF VARIOUS SEMANTIC DEVICES THAT HELP CLARIFY MEANINGS AND PURPOSES. EXCERPTS FROM THE ARTICLE "TEEN-AGE CORRUPTION" FROM THE NINTH-GRADE SEMANTICS UNIT WRITTEN BY THE PROJECT ENGLISH DEMONSTRATION CENTER AT EUCLID, OHIO, ARE USED TO ILLUSTRATE HOW SEMANTICS RELATE TO…

  4. An ontology-based hierarchical semantic modeling approach to clinical pathway workflows.

    PubMed

    Ye, Yan; Jiang, Zhibin; Diao, Xiaodi; Yang, Dong; Du, Gang

    2009-08-01

    This paper proposes an ontology-based approach to modeling clinical pathway workflows at the semantic level for facilitating computerized clinical pathway implementation and efficient delivery of high-quality healthcare services. A clinical pathway ontology (CPO) is formally defined in the Web Ontology Language (OWL) to provide a common semantic foundation for meaningful representation and exchange of pathway-related knowledge. A CPO-based semantic modeling method is then presented to describe clinical pathways as interconnected hierarchical models comprising a top-level outcome flow and an intervention-workflow level along a care timeline. Furthermore, relevant temporal knowledge can be fully represented by combining temporal entities in CPO with temporal rules based on the Semantic Web Rule Language (SWRL). An illustrative example of a clinical pathway for cesarean section shows the applicability of the proposed methodology in enabling structured semantic descriptions of any real clinical pathway. PMID:19539278
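
    A temporal constraint of the kind SWRL rules express can be sketched as a check over a sequence of timed interventions. The field names and the 24-hour gap are assumptions for illustration, not rules from the CPO.

    ```python
    from datetime import datetime, timedelta

    def temporal_violations(pathway, max_gap=timedelta(hours=24)):
        """Flag consecutive interventions where the successor starts more
        than `max_gap` after the predecessor ends (hypothetical rule)."""
        return [(prev["name"], nxt["name"])
                for prev, nxt in zip(pathway, pathway[1:])
                if nxt["start"] - prev["end"] > max_gap]
    ```

    In the paper this kind of constraint would be stated declaratively as a SWRL rule over CPO temporal entities and evaluated by a reasoner rather than hand-coded.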

  5. Semantic SenseLab: implementing the vision of the Semantic Web in neuroscience

    PubMed Central

    Samwald, Matthias; Chen, Huajun; Ruttenberg, Alan; Lim, Ernest; Marenco, Luis; Miller, Perry; Shepherd, Gordon; Cheung, Kei-Hoi

    2011-01-01

    Summary Objective Integrative neuroscience research needs a scalable informatics framework that enables semantic integration of diverse types of neuroscience data. This paper describes the use of the Web Ontology Language (OWL) and other Semantic Web technologies for the representation and integration of molecular-level data provided by several databases of the SenseLab suite of neuroscience databases. Methods Based on the original database structure, we semi-automatically translated the databases into OWL ontologies with manual addition of semantic enrichment. The SenseLab ontologies are extensively linked to other biomedical Semantic Web resources, including the Subcellular Anatomy Ontology, Brain Architecture Management System, the Gene Ontology, BIRNLex and UniProt. The SenseLab ontologies have also been mapped to the Basic Formal Ontology and Relation Ontology, which helps ease interoperability with many other existing and future biomedical ontologies for the Semantic Web. In addition, approaches to representing contradictory research statements are described. The SenseLab ontologies are designed for use on the Semantic Web, which enables their integration into a growing collection of biomedical information resources. Conclusion We demonstrate that our approach can yield significant potential benefits and that the Semantic Web is rapidly becoming mature enough to realize its anticipated promises. The ontologies are available online at http://neuroweb.med.yale.edu/senselab/ PMID:20006477

  6. Application of a rule-based knowledge system using CLIPS for the taxonomy of selected Opuntia species

    NASA Technical Reports Server (NTRS)

    Heymans, Bart C.; Onema, Joel P.; Kuti, Joseph O.

    1991-01-01

    A rule based knowledge system was developed in CLIPS (C Language Integrated Production System) for identifying Opuntia species in the family Cactaceae, which contains approx. 1500 different species. This botanist expert tool system is capable of identifying selected Opuntia plants from the family level down to the species level when given some basic characteristics of the plants. Many plants are becoming of increasing importance because of their nutrition and human health potential, especially in the treatment of diabetes mellitus. The expert tool system described can be extremely useful in an unequivocal identification of many useful Opuntia species.

  7. Biomedical semantics in the Semantic Web.

    PubMed

    Splendiani, Andrea; Burger, Albert; Paschke, Adrian; Romano, Paolo; Marshall, M Scott

    2011-01-01

    The Semantic Web offers an ideal platform for representing and linking biomedical information, which is a prerequisite for the development and application of analytical tools to address problems in data-intensive areas such as systems biology and translational medicine. As for any new paradigm, the adoption of the Semantic Web offers opportunities and poses questions and challenges to the life sciences scientific community: which technologies in the Semantic Web stack will be more beneficial for the life sciences? Is biomedical information too complex to benefit from simple interlinked representations? What are the implications of adopting a new paradigm for knowledge representation? What are the incentives for the adoption of the Semantic Web, and who are the facilitators? Is there going to be a Semantic Web revolution in the life sciences? We report here a few reflections on these questions, following discussions at the SWAT4LS (Semantic Web Applications and Tools for Life Sciences) workshop series, of which this Journal of Biomedical Semantics special issue presents selected papers from the 2009 edition, held in Amsterdam on November 20th. PMID:21388570

  8. Biomedical semantics in the Semantic Web

    PubMed Central

    2011-01-01

    The Semantic Web offers an ideal platform for representing and linking biomedical information, which is a prerequisite for the development and application of analytical tools to address problems in data-intensive areas such as systems biology and translational medicine. As for any new paradigm, the adoption of the Semantic Web offers opportunities and poses questions and challenges to the life sciences scientific community: which technologies in the Semantic Web stack will be more beneficial for the life sciences? Is biomedical information too complex to benefit from simple interlinked representations? What are the implications of adopting a new paradigm for knowledge representation? What are the incentives for the adoption of the Semantic Web, and who are the facilitators? Is there going to be a Semantic Web revolution in the life sciences? We report here a few reflections on these questions, following discussions at the SWAT4LS (Semantic Web Applications and Tools for Life Sciences) workshop series, of which this Journal of Biomedical Semantics special issue presents selected papers from the 2009 edition, held in Amsterdam on November 20th. PMID:21388570

  9. The semantic anatomical network: Evidence from healthy and brain-damaged patient populations.

    PubMed

    Fang, Yuxing; Han, Zaizhu; Zhong, Suyu; Gong, Gaolang; Song, Luping; Liu, Fangsong; Huang, Ruiwang; Du, Xiaoxia; Sun, Rong; Wang, Qiang; He, Yong; Bi, Yanchao

    2015-09-01

    Semantic processing is central to cognition and is supported by widely distributed gray matter (GM) regions and white matter (WM) tracts. The exact manner in which GM regions are anatomically connected to process semantics remains unknown. We mapped the semantic anatomical network (connectome) by conducting diffusion imaging tractography in 48 healthy participants across 90 GM "nodes," and correlating the integrity of each obtained WM edge and semantic performance across 80 brain-damaged patients. Fifty-three WM edges were obtained whose lower integrity associated with semantic deficits and together with their linked GM nodes constitute a semantic WM network. Graph analyses of this network revealed three structurally segregated modules that point to distinct semantic processing components and identified network hubs and connectors that are central in the communication across the subnetworks. Together, our results provide an anatomical framework of human semantic network, advancing the understanding of the structural substrates supporting semantic processing. PMID:26059098
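
    The hub-and-connector analysis mentioned above typically relies on the participation coefficient, P = 1 - sum_m (k_m / k)^2, which is near zero for nodes whose edges stay within one module and higher for connectors bridging modules. A minimal sketch on a toy network (not the 90-node connectome):

    ```python
    from collections import defaultdict

    def participation(edges, module_of):
        """Per-node participation coefficient P = 1 - sum((k_m / k)^2),
        where k is the node's degree and k_m its edges into module m."""
        per_module = defaultdict(lambda: defaultdict(int))
        degree = defaultdict(int)
        for u, v in edges:
            for a, b in ((u, v), (v, u)):  # count both endpoints
                degree[a] += 1
                per_module[a][module_of[b]] += 1
        return {n: 1.0 - sum((k / degree[n]) ** 2
                             for k in per_module[n].values())
                for n in degree}
    ```

    Nodes scoring high on both degree and participation are the connector hubs central to communication across subnetworks.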

  10. Semantic framework for mapping object-oriented model to semantic web languages.

    PubMed

    Ježek, Petr; Mouček, Roman

    2015-01-01

    The article discusses two main approaches to building semantic structures for electrophysiological metadata: the use of conventional data structures, repositories, and programming languages on the one hand, and the use of formal representations of ontologies, known from knowledge representation, such as description logics or semantic web languages, on the other. Although knowledge engineering offers languages supporting richer semantic means of expression and technologically advanced approaches, conventional data structures and repositories are still popular among developers, administrators and users because of their simplicity, overall intelligibility, and lower demands on technical equipment. The choice of conventional data resources and repositories, however, raises the question of how and where to add semantics that cannot be naturally expressed using them. As one possible solution, this semantics can be added into the structures of the programming language that accesses and processes the underlying data. To support this idea we introduced a software prototype that enables its users to add semantically richer expressions into Java object-oriented code. This approach does not burden users with additional demands on the programming environment since reflective Java annotations were used as an entry point for these expressions. Moreover, the additional semantics need not be written by the programmer directly in the code, but can be collected from non-programmers using a graphical user interface. A mapping that allows the transformation of the semantically enriched Java code into the Semantic Web language OWL was proposed and implemented in a library named the Semantic Framework. This approach was validated by the integration of the Semantic Framework into the EEG/ERP Portal and by the subsequent registration of the EEG/ERP Portal in the Neuroscience Information Framework. PMID:25762923

  11. Semantic framework for mapping object-oriented model to semantic web languages

    PubMed Central

    Ježek, Petr; Mouček, Roman

    2015-01-01

    The article discusses two main approaches to building semantic structures for electrophysiological metadata: the use of conventional data structures, repositories, and programming languages on the one hand, and the use of formal representations of ontologies, known from knowledge representation, such as description logics or semantic web languages, on the other. Although knowledge engineering offers languages supporting richer semantic means of expression and technologically advanced approaches, conventional data structures and repositories are still popular among developers, administrators and users because of their simplicity, overall intelligibility, and lower demands on technical equipment. The choice of conventional data resources and repositories, however, raises the question of how and where to add semantics that cannot be naturally expressed using them. As one possible solution, this semantics can be added into the structures of the programming language that accesses and processes the underlying data. To support this idea we introduced a software prototype that enables its users to add semantically richer expressions into Java object-oriented code. This approach does not burden users with additional demands on the programming environment since reflective Java annotations were used as an entry point for these expressions. Moreover, the additional semantics need not be written by the programmer directly in the code, but can be collected from non-programmers using a graphical user interface. A mapping that allows the transformation of the semantically enriched Java code into the Semantic Web language OWL was proposed and implemented in a library named the Semantic Framework. This approach was validated by the integration of the Semantic Framework into the EEG/ERP Portal and by the subsequent registration of the EEG/ERP Portal in the Neuroscience Information Framework. PMID:25762923

  12. Semantic Web for data harmonization in Chinese medicine

    PubMed Central

    2010-01-01

    Scientific studies to investigate Chinese medicine with Western medicine have been generating a large amount of data to be shared preferably under a global data standard. This article provides an overview of Semantic Web and identifies some representative Semantic Web applications in Chinese medicine. Semantic Web is proposed as a standard for representing Chinese medicine data and facilitating their integration with Western medicine data. PMID:20205772

  13. SSWAP: A Simple Semantic Web Architecture and Protocol for semantic web services

    PubMed Central

    Gessler, Damian DG; Schiltz, Gary S; May, Greg D; Avraham, Shulamit; Town, Christopher D; Grant, David; Nelson, Rex T

    2009-01-01

    Background SSWAP (Simple Semantic Web Architecture and Protocol; pronounced "swap") is an architecture, protocol, and platform for using reasoning to semantically integrate heterogeneous disparate data and services on the web. SSWAP was developed as a hybrid semantic web services technology to overcome limitations found in both pure web service technologies and pure semantic web technologies. Results There are currently over 2400 resources published in SSWAP. Approximately two dozen are custom-written services for QTL (Quantitative Trait Loci) and mapping data for legumes and grasses (grains). The remaining are wrappers to Nucleic Acids Research Database and Web Server entries. As an architecture, SSWAP establishes how clients (users of data, services, and ontologies), providers (suppliers of data, services, and ontologies), and discovery servers (semantic search engines) interact to allow for the description, querying, discovery, invocation, and response of semantic web services. As a protocol, SSWAP provides the vocabulary and semantics to allow clients, providers, and discovery servers to engage in semantic web services. The protocol is based on the W3C-sanctioned first-order description logic language OWL DL. As an open source platform, a discovery server running at (as in to "swap info") uses the description logic reasoner Pellet to integrate semantic resources. The platform hosts an interactive guide to the protocol at , developer tools at , and a portal to third-party ontologies at (a "swap meet"). Conclusion SSWAP addresses the three basic requirements of a semantic web services architecture (i.e., a common syntax, shared semantic, and semantic discovery) while addressing three technology limitations common in distributed service systems: i.e., i) the fatal mutability of traditional interfaces, ii) the rigidity and fragility of static subsumption hierarchies, and iii) the confounding of content, structure, and presentation. SSWAP is novel by establishing

  14. Semantic Networks and Social Networks

    ERIC Educational Resources Information Center

    Downes, Stephen

    2005-01-01

    Purpose: To illustrate the need for social network metadata within semantic metadata. Design/methodology/approach: Surveys properties of social networks and the semantic web, suggests that social network analysis applies to semantic content, argues that semantic content is more searchable if social network metadata is merged with semantic web…

  15. Significance testing of rules in rule-based models of human problem solving

    NASA Technical Reports Server (NTRS)

    Lewis, C. M.; Hammer, J. M.

    1986-01-01

    Rule-based models of human problem solving have typically not been tested for statistical significance. Three methods of testing rules - analysis of variance, randomization, and contingency tables - are presented. Advantages and disadvantages of the methods are also described.
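
    The contingency-table option can be made concrete with a small sketch: a plain chi-square test of independence checks whether a rule's firing co-occurs with correct solutions more often than chance would predict. The counts below are invented for illustration, and the function is a generic 2x2 test, not the authors' specific procedure.

```python
# Chi-square test of independence on a 2x2 contingency table:
# does firing of a candidate rule co-occur with correct solutions
# more often than chance alone would predict?

def chi_square_2x2(a, b, c, d):
    """a, b, c, d = counts: [rule fired & correct, fired & wrong,
    not fired & correct, not fired & wrong]. Counts must be positive."""
    n = a + b + c + d
    row1, row2 = a + b, c + d
    col1, col2 = a + c, b + d
    expected = [row1 * col1 / n, row1 * col2 / n,
                row2 * col1 / n, row2 * col2 / n]
    observed = [a, b, c, d]
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# Hypothetical counts for one rule across 100 problem-solving episodes.
stat = chi_square_2x2(30, 10, 15, 45)
print(round(stat, 2))  # compare against the chi-square critical value, e.g. 3.84 at p=0.05
```

A statistic above the critical value suggests the rule's firing is genuinely associated with solution correctness rather than noise.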

  16. Application of rule-based data mining techniques to real time ATLAS Grid job monitoring data

    NASA Astrophysics Data System (ADS)

    Ahrens, R.; Harenberg, T.; Kalinin, S.; Mättig, P.; Sandhoff, M.; dos Santos, T.; Volkmer, F.

    2012-12-01

    The Job Execution Monitor (JEM) is a job-centric grid job monitoring software developed at the University of Wuppertal and integrated into the pilot-based PanDA job brokerage system leveraging physics analysis and Monte Carlo event production for the ATLAS experiment on the Worldwide LHC Computing Grid (WLCG). With JEM, job progress and grid worker node health can be supervised in real time by users, site admins and shift personnel. Imminent error conditions can be detected early and countermeasures can be initiated by the job's owner immediately. Grid site admins can access aggregated data of all monitored jobs to infer the site status and to detect job and grid worker node misbehavior. Shifters can use the same aggregated data to quickly react to site error conditions and broken production tasks. In this work, the application of novel data-centric rule-based methods and data-mining techniques to the real-time monitoring data is discussed. The usage of such automatic inference techniques on monitoring data to provide job and site health summary information to users and admins is presented. Finally, the provision of a secure real-time control and steering channel to the job as an extension of the presented monitoring software is considered, and a possible model of such a control method is presented.

  17. Designing caption production rules based on face, text, and motion detection

    NASA Astrophysics Data System (ADS)

    Chapdelaine, C.; Beaulieu, M.; Gagnon, L.

    2008-02-01

    Producing off-line captions for deaf and hearing-impaired people is a labor-intensive task that can require up to 18 hours of production per hour of film. Captions are placed manually close to the region of interest, but they must avoid masking human faces, text or any moving objects that might be relevant to the story flow. Our goal is to use image processing techniques to shorten the off-line caption production process by automatically placing the captions on the proper consecutive frames. We implemented a computer-assisted captioning software tool which integrates detection of faces, texts and visual motion regions. The near-frontal faces are detected using a cascade of weak classifiers and tracked through a particle filter. Then, frames are scanned to perform text spotting and build a region map suitable for text recognition. Finally, motion mapping is based on the Lucas-Kanade optical flow algorithm and provides MPEG-7 motion descriptors. The combined detected items are then fed to a rule-based algorithm to determine the best caption localization for the related sequences of frames. This paper focuses on the defined rules to assist the human captioners and the results of a user evaluation of this approach.
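
    The placement rules can be illustrated with a toy sketch: given bounding boxes from the face, text, and motion detectors, pick the highest-priority caption region that overlaps none of them. The slot geometry and priorities below are invented; the paper's actual rules are richer.

```python
# Minimal sketch of the rule idea: choose a caption slot that does not
# overlap any detected face/text/motion box. Boxes are (x1, y1, x2, y2).

def overlaps(a, b):
    """True if axis-aligned boxes a and b intersect."""
    return not (a[2] <= b[0] or b[2] <= a[0] or a[3] <= b[1] or b[3] <= a[1])

def place_caption(detected_boxes, frame_w=720, frame_h=480):
    slots = {  # candidate caption regions, in priority order (invented)
        "bottom": (0, frame_h - 80, frame_w, frame_h),
        "top": (0, 0, frame_w, 80),
    }
    for name, slot in slots.items():
        if not any(overlaps(slot, box) for box in detected_boxes):
            return name
    return "bottom"  # fall back to the default position

# A face detected in the lower third forces the caption to the top.
print(place_caption([(200, 380, 400, 470)]))  # → top
```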

  18. Enabling Semantic Interoperability for Earth System Science

    NASA Astrophysics Data System (ADS)

    Raskin, R.

    2004-12-01

    Data interoperability across heterogeneous systems can be hampered by differences in terminology, particularly when multiple scientific communities are involved. To reconcile differences in semantics, a common semantic framework was created as a collection of ontologies. Such a shared understanding of concepts enables ontology-aware software tools to understand the meaning of terms in documents and web pages. The ontologies were created as part of the Semantic Web for Earth and Environmental Terminology (SWEET) prototype. The ontologies provide a representation of Earth system science knowledge and associated data, organized in a scalable structure, building on the keywords developed by the NASA Global Change Master Directory (GCMD). An integrated search tool consults the ontologies to enable searches without an exact term match. The ontologies can be used within other applications (such as Earth Science Markup Language descriptors) and future semantic web services in Earth system science.

  19. Individual Differences in the Joint Effects of Semantic Priming and Word Frequency Revealed by RT Distributional Analyses: The Role of Lexical Integrity

    ERIC Educational Resources Information Center

    Yap, Melvin J.; Tse, Chi-Shing; Balota, David A.

    2009-01-01

    Word frequency and semantic priming effects are among the most robust effects in visual word recognition, and it has been generally assumed that these two variables produce interactive effects in lexical decision performance, with larger priming effects for low-frequency targets. The results from four lexical decision experiments indicate that the…

  20. The Development of Co-Speech Gesture and Its Semantic Integration with Speech in 6- to 12-Year-Old Children with Autism Spectrum Disorders

    ERIC Educational Resources Information Center

    So, Wing-Chee; Wong, Miranda Kit-Yi; Lui, Ming; Yip, Virginia

    2015-01-01

    Previous work leaves open the question of whether children with autism spectrum disorders aged 6-12 years have delay in producing gestures compared to their typically developing peers. This study examined gestural production among school-aged children in a naturalistic context and how their gestures are semantically related to the accompanying…

  1. Automatic semantic annotation of real-world web images.

    PubMed

    Wong, R C F; Leung, C H C

    2008-11-01

    As the number of web images is increasing at a rapid rate, searching them semantically presents a significant challenge. Many raw images are constantly uploaded with little meaningful direct annotation of semantic content, limiting their search and discovery. In this paper, we present a semantic annotation technique based on the use of image parametric dimensions and metadata. Using decision trees and rule induction, we develop a rule-based approach to formulate explicit annotations for images fully automatically, so that by the use of our method, semantic queries such as "sunset by the sea in autumn in New York" can be answered and indexed purely by machine. Our system is evaluated quantitatively using more than 100,000 web images. Experimental results indicate that this approach is able to deliver highly competent performance, attaining good recall and precision rates of sometimes over 80%. This approach enables a new degree of semantic richness to be automatically associated with images which previously could only be performed manually. PMID:18787242
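
    A minimal sketch of the rule-based annotation idea follows. The field names and thresholds are invented for illustration; the paper induces its rules from decision trees over real parametric dimensions and metadata rather than hand-writing them.

```python
# Illustrative if-then annotation rules over image capture parameters.
# All fields and thresholds here are hypothetical.

def annotate(meta):
    """Return a list of semantic labels inferred from capture metadata."""
    labels = []
    hour = meta.get("hour")  # local capture time, 0-23
    if hour is not None and 17 <= hour <= 20:
        labels.append("sunset")
    if meta.get("exposure_s", 0) > 0.5 and meta.get("hour", 12) >= 20:
        labels.append("night scene")  # long exposure late in the day
    if meta.get("focal_mm", 0) >= 200:
        labels.append("telephoto / distant subject")
    return labels

print(annotate({"hour": 18, "exposure_s": 0.01, "focal_mm": 300}))
# → ['sunset', 'telephoto / distant subject']
```

In the paper's pipeline, rules of this shape are learned automatically via rule induction rather than authored by hand.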

  2. Semantic prosody and judgment.

    PubMed

    Hauser, David J; Schwarz, Norbert

    2016-07-01

    Some words tend to co-occur exclusively with a positive or negative context in natural language use, even though such valence patterns are neither dictated by definitions nor part of the words' core meaning. These words contain semantic prosody, a subtle valenced meaning derived from co-occurrence in language. As language and thought are heavily intertwined, we hypothesized that semantic prosody can affect evaluative inferences about related ambiguous concepts. Participants inferred that an ambiguous medical outcome was more negative when it was caused, a verb with negative semantic prosody, than when it was produced, a synonymous verb with no semantic prosody (Studies 1a, 1b). Participants completed sentence fragments in a manner consistent with semantic prosody (Study 2), and semantic prosody affected various other judgments in line with evaluative inferences (estimates of an event's likelihood in Study 3). Finally, semantic prosody elicited both positive and negative evaluations of outcomes across a large set of semantically prosodic verbs (Study 4). Thus, semantic prosody can exert a strong influence on evaluative judgment. PMID:27243765

  3. LEARNING SEMANTICS-ENHANCED LANGUAGE MODELS APPLIED TO UNSUPERVISED WSD

    SciTech Connect

    VERSPOOR, KARIN; LIN, SHOU-DE

    2007-01-29

    An N-gram language model aims at capturing statistical syntactic word order information from corpora. Although the concept of language models has been applied extensively to handle a variety of NLP problems with reasonable success, the standard model does not incorporate semantic information, and consequently limits its applicability to semantic problems such as word sense disambiguation. We propose a framework that integrates semantic information into the language model schema, allowing a system to exploit both syntactic and semantic information to address NLP problems. Furthermore, acknowledging the limited availability of semantically annotated data, we discuss how the proposed model can be learned without annotated training examples. Finally, we report on a case study showing how the semantics-enhanced language model can be applied to unsupervised word sense disambiguation with promising results.

  4. Anomia as a Marker of Distinct Semantic Memory Impairments in Alzheimer’s Disease and Semantic Dementia

    PubMed Central

    Reilly, Jamie; Peelle, Jonathan E.; Antonucci, Sharon M.; Grossman, Murray

    2011-01-01

    Objective Many neurologically-constrained models of semantic memory have been informed by two primary temporal lobe pathologies: Alzheimer’s Disease (AD) and Semantic Dementia (SD). However, controversy persists regarding the nature of the semantic impairment associated with these patient populations. Some argue that AD presents as a disconnection syndrome in which linguistic impairment reflects difficulties in lexical or perceptual means of semantic access. In contrast, there is a wider consensus that SD reflects loss of core knowledge that underlies word and object meaning. Object naming provides a window into the integrity of semantic knowledge in these two populations. Method We examined naming accuracy, errors and the correlation of naming ability with neuropsychological measures (semantic ability, executive functioning, and working memory) in a large sample of patients with AD (n=36) and SD (n=21). Results Naming ability and naming errors differed between groups, as did neuropsychological predictors of naming ability. Despite a similar extent of baseline cognitive impairment, SD patients were more anomic than AD patients. Conclusions These results add to a growing body of literature supporting a dual impairment to semantic content and active semantic processing in AD, and confirm the fundamental deficit in semantic content in SD. We interpret these findings as supporting a model of semantic memory premised upon dynamic interactivity between the process and content of conceptual knowledge. PMID:21443339

  5. Neuronal Activation for Semantically Reversible Sentences

    PubMed Central

    Richardson, Fiona M.; Thomas, Michael S. C.; Price, Cathy J.

    2010-01-01

    Semantically reversible sentences are prone to misinterpretation and take longer for typically developing children and adults to comprehend; they are also particularly problematic for those with language difficulties such as aphasia or Specific Language Impairment. In our study we used fMRI to compare the processing of semantically reversible and nonreversible sentences in 41 healthy participants to identify how semantic reversibility influences neuronal activation. By including several linguistic and nonlinguistic conditions within our paradigm, we were also able to test whether the processing of semantically reversible sentences places additional load on sentence-specific processing, such as syntactic processing and syntactic-semantic integration, or on phonological working memory. Our results identified increased activation for reversible sentences in a region on the left temporal–parietal boundary, which was also activated when the same group of participants carried out an articulation task which involved saying “one, three” repeatedly. We conclude that the processing of semantically reversible sentences places additional demands on the subarticulation component of phonological working memory. PMID:19445603

  6. e-Science and biological pathway semantics

    PubMed Central

    Luciano, Joanne S; Stevens, Robert D

    2007-01-01

    Background The development of e-Science presents a major set of opportunities and challenges for the future progress of biological and life scientific research. Major new tools are required and corresponding demands are placed on the high-throughput data generated and used in these processes. Nowhere is the demand greater than in the semantic integration of these data. Semantic Web tools and technologies afford the chance to achieve this semantic integration. Since pathway knowledge is central to much of the scientific research today, it is a good test-bed for semantic integration. Within the context of biological pathways, the BioPAX initiative, part of a broader movement towards the standardization and integration of life science databases, forms a necessary prerequisite for the successful application of e-Science in health care and life science research. This paper examines whether BioPAX, an effort to overcome the barrier of disparate and heterogeneous pathway data sources, addresses the needs of e-Science. Results We demonstrate how BioPAX pathway data can be used to ask and answer some useful biological questions. We find that BioPAX comes close to meeting a broad range of e-Science needs, but certain semantic weaknesses mean that these goals are missed. We make a series of recommendations for re-modeling some aspects of BioPAX to better meet these needs. Conclusion Once these semantic weaknesses are addressed, it will be possible to integrate pathway information in a manner that would be useful in e-Science. PMID:17493286

  7. Anticipating Words and Their Gender: An Event-related Brain Potential Study of Semantic Integration, Gender Expectancy, and Gender Agreement in Spanish Sentence Reading

    PubMed Central

    Wicha, Nicole Y. Y.; Moreno, Eva M.; Kutas, Marta

    2012-01-01

    Recent studies indicate that the human brain attends to and uses grammatical gender cues during sentence comprehension. Here, we examine the nature and time course of the effect of gender on word-by-word sentence reading. Event-related brain potentials were recorded to an article and noun, while native Spanish speakers read medium- to high-constraint Spanish sentences for comprehension. The noun either fit the sentence meaning or not, and matched the preceding article in gender or not; in addition, the preceding article was either expected or unexpected based on prior sentence context. Semantically anomalous nouns elicited an N400. Gender-disagreeing nouns elicited a posterior late positivity (P600), replicating previous findings for words. Gender agreement and semantic congruity interacted in both the N400 window—with a larger negativity frontally for double violations—and the P600 window—with a larger positivity for semantic anomalies, relative to the prestimulus baseline. Finally, unexpected articles elicited an enhanced positivity (500–700 msec post onset) relative to expected articles. Overall, our data indicate that readers anticipate and attend to the gender of both articles and nouns, and use gender in real time to maintain agreement and to build sentence meaning. PMID:15453979

  8. Ontology Reuse in Geoscience Semantic Applications

    NASA Astrophysics Data System (ADS)

    Mayernik, M. S.; Gross, M. B.; Daniels, M. D.; Rowan, L. R.; Stott, D.; Maull, K. E.; Khan, H.; Corson-Rikert, J.

    2015-12-01

    The tension between local ontology development and wider ontology connections is fundamental to the Semantic web. It is often unclear, however, what the key decision points should be for new semantic web applications in deciding when to reuse existing ontologies and when to develop original ontologies. In addition, with the growth of semantic web ontologies and applications, new semantic web applications can struggle to efficiently and effectively identify and select ontologies to reuse. This presentation will describe the ontology comparison, selection, and consolidation effort within the EarthCollab project. UCAR, Cornell University, and UNAVCO are collaborating on the EarthCollab project to use semantic web technologies to enable the discovery of the research output from a diverse array of projects. The EarthCollab project is using the VIVO Semantic web software suite to increase discoverability of research information and data related to the following two geoscience-based communities: (1) the Bering Sea Project, an interdisciplinary field program whose data archive is hosted by NCAR's Earth Observing Laboratory (EOL), and (2) diverse research projects informed by geodesy through the UNAVCO geodetic facility and consortium. This presentation will outline EarthCollab use cases and provide an overview of key ontologies being used, including the VIVO-Integrated Semantic Framework (VIVO-ISF), Global Change Information System (GCIS), and Data Catalog (DCAT) ontologies. We will discuss issues related to bringing these ontologies together to provide a robust ontological structure to support the EarthCollab use cases. It is rare that a single pre-existing ontology meets all of a new application's needs. New projects need to stitch ontologies together in ways that fit into the broader semantic web ecosystem.

  9. Communication: General Semantics Perspectives.

    ERIC Educational Resources Information Center

    Thayer, Lee, Ed.

    This book contains the edited papers from the eleventh International Conference on General Semantics, titled "A Search for Relevance." The conference questioned, as a central theme, the relevance of general semantics in a world of wars and human misery. Reacting to a fundamental Korzybski-ian principle that man's view of reality is distorted by…

  10. The Semantic Learning Organization

    ERIC Educational Resources Information Center

    Sicilia, Miguel-Angel; Lytras, Miltiadis D.

    2005-01-01

    Purpose: The aim of this paper is introducing the concept of a "semantic learning organization" (SLO) as an extension of the concept of "learning organization" in the technological domain. Design/methodology/approach: The paper takes existing definitions and conceptualizations of both learning organizations and Semantic Web technology to develop…

  11. Aging and Semantic Activation.

    ERIC Educational Resources Information Center

    Howard, Darlene V.

    Three studies tested the theory that long term memory consists of a semantically organized network of concept nodes interconnected by leveled associations or relations, and that when a stimulus is processed, the corresponding concept node is assumed to be temporarily activated and this activation spreads to nearby semantically related nodes. In…

  12. Order Theoretical Semantic Recommendation

    SciTech Connect

    Joslyn, Cliff A.; Hogan, Emilie A.; Paulson, Patrick R.; Peterson, Elena S.; Stephan, Eric G.; Thomas, Dennis G.

    2013-07-23

    Mathematical concepts of order and ordering relations play multiple roles in semantic technologies. Discrete totally ordered data characterize both input streams and top-k rank-ordered recommendations and query output, while temporal attributes establish numerical total orders, either over time points or in the more complex case of start/end temporal intervals. Also of note are the fully partially ordered data, including both lattices and non-lattices, which actually dominate the semantic structure of ontological systems. Scalar semantic similarities over partially-ordered semantic data are traditionally used to return rank-ordered recommendations, but these require complementation with true metrics available over partially ordered sets. In this paper we report on our work in the foundations of partial order measurement in ontologies, with application to top-k semantic recommendation in workflows.
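
    As a toy illustration of measuring over partially ordered data, the sketch below compares two terms of a small non-tree is-a hierarchy by the overlap of their ancestor sets. This Jaccard-style distance is illustrative only (the hierarchy and term names are invented) and does not reproduce the metrics developed in the paper.

```python
# Distance between nodes of a partially ordered ontology, sketched as
# 1 minus the Jaccard overlap of their upward-reachable ancestor sets.

PARENTS = {  # toy is-a hierarchy (child -> parents); a non-tree poset
    "dog": ["mammal", "pet"], "cat": ["mammal", "pet"],
    "mammal": ["animal"], "pet": ["animal"], "animal": [],
}

def ancestors(node):
    """All nodes reachable upward from node, including node itself."""
    seen, stack = set(), [node]
    while stack:
        for p in PARENTS.get(stack.pop(), []):
            if p not in seen:
                seen.add(p)
                stack.append(p)
    return seen | {node}

def distance(a, b):
    A, B = ancestors(a), ancestors(b)
    return 1 - len(A & B) / len(A | B)

# "dog" and "cat" share mammal, pet, and animal among their ancestors.
print(round(distance("dog", "cat"), 3))  # → 0.4
```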

  13. A Metrics Taxonomy and Reporting Strategy for Rule-Based Alerts

    PubMed Central

    Krall, Michael; Gerace, Alexander

    2015-01-01

    Context: Because institutions rely on rule-based alerts as an important component of their safety and quality strategies, they should determine whether the alerts achieve the expected benefit. Objective: To develop and to test a method of reporting outcome metrics for rule-based electronic health record alerts on a large scale. Methods: We empirically developed an action-oriented alerts taxonomy according to structure, actions, and implicit intended process outcomes using a set of 333 rule-based alerts at Kaiser Permanente Northwest. Next we developed a method for producing metrics reports for alert classes. Finally, we applied this method to alert taxa. Main Outcome Measures: Outcome measures were the successful development of a rule-based alerts taxonomy and the demonstration of its application in a reporting strategy. Results: We identified 9 major and 17 overall classes of alerts. We developed a specific metric approach for 5 of these classes, including the 3 most numerous ones in our institution, accounting for 224 (67%) of our alerts. Some alert classes do not readily lend themselves to this approach. Conclusions: We developed a taxonomy for rule-based alerts and demonstrated its application in developing outcome metrics reports on a large scale. This information allows tuning or retiring alerts and may inform the need to develop complementary or alternative approaches to address organizational imperatives. A method that assigns alerts to classes each amenable to a particular reporting strategy could reduce the difficulty of producing metrics reports. PMID:26057684

  14. Semantics, Pragmatics, and the Nature of Semantic Theories

    ERIC Educational Resources Information Center

    Spewak, David Charles, Jr.

    2013-01-01

    The primary concern of this dissertation is determining the distinction between semantics and pragmatics and how context sensitivity should be accommodated within a semantic theory. I approach the question over how to distinguish semantics from pragmatics from a new angle by investigating what the objects of a semantic theory are, namely…

  15. Semantic processing of EHR data for clinical research.

    PubMed

    Sun, Hong; Depraetere, Kristof; De Roo, Jos; Mels, Giovanni; De Vloed, Boris; Twagirumukiza, Marc; Colaert, Dirk

    2015-12-01

    There is a growing need to semantically process and integrate clinical data from different sources for clinical research. This paper presents an approach to integrate EHRs from heterogeneous resources and generate integrated data in different data formats or semantics to support various clinical research applications. The proposed approach builds semantic data virtualization layers on top of data sources, which generate data in the requested semantics or formats on demand. This approach avoids upfront dumping to and synchronizing of the data with various representations. Data from different EHR systems are first mapped to RDF data with source semantics, and then converted to representations with harmonized domain semantics where domain ontologies and terminologies are used to improve reusability. It is also possible to further convert data to application semantics and store the converted results in clinical research databases, e.g. i2b2, OMOP, to support different clinical research settings. Semantic conversions between different representations are explicitly expressed using N3 rules and executed by an N3 Reasoner (EYE), which can also generate proofs of the conversion processes. The solution presented in this paper has been applied to real-world applications that process large scale EHR data. PMID:26515501
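
    The flavor of such a conversion rule can be shown in a few lines, with plain Python standing in for N3 and invented predicate names: triples carrying source-local predicates are rewritten into harmonized domain-semantics predicates.

```python
# Toy semantic conversion: map source-local RDF predicates from two EHR
# systems onto one harmonized domain predicate. Predicate names invented.

RULES = {  # source predicate -> harmonized domain predicate
    "ehr1:sysBP": "domain:systolic_blood_pressure",
    "ehr2:bp_sys": "domain:systolic_blood_pressure",
}

def convert(triples):
    """Rewrite (subject, predicate, object) triples via the rule table;
    unknown predicates pass through unchanged."""
    return [(s, RULES.get(p, p), o) for s, p, o in triples]

print(convert([("patient1", "ehr1:sysBP", "120"),
               ("patient2", "ehr2:bp_sys", "135")]))
# both triples now share the harmonized predicate
```

In the paper these mappings are expressed declaratively as N3 rules and executed by the EYE reasoner, which can additionally emit a proof of each conversion.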

  16. A Relation Routing Scheme for Distributed Semantic Media Query

    PubMed Central

    Liao, Zhuhua; Zhang, Guoqiang; Yi, Aiping; Zhang, Guoqing; Liang, Wei

    2013-01-01

    Performing complex semantic queries over large-scale distributed media contents is a challenging task for rich media applications. The dynamics and openness of data sources make it difficult to realize a query scheme that simultaneously achieves precision, scalability, and reliability. In this paper, a novel relation routing scheme (RRS) is proposed by renovating the routing model of Content Centric Network (CCN) for directly querying large-scale semantic media content. By using proper query model and routing mechanism, semantic queries with complex relation constraints from users can be guided towards potential media sources through semantic guider nodes. The scattered and fragmented query results can be integrated on their way back for semantic needs or to avoid duplication. Several new techniques, such as semantic-based naming, incomplete response avoidance, timeout checking, and semantic integration, are developed in this paper to improve the accuracy, efficiency, and practicality of the proposed approach. Both analytical and experimental results show that the proposed scheme is a promising and effective solution for complex semantic queries and integration over large-scale networks. PMID:24319383

  17. Semantic Alignment between ICD-11 and SNOMED CT.

    PubMed

    Rodrigues, Jean-Marie; Robinson, David; Della Mea, Vincenzo; Campbell, James; Rector, Alan; Schulz, Stefan; Brear, Hazel; Üstün, Bedirhan; Spackman, Kent; Chute, Christopher G; Millar, Jane; Solbrig, Harold; Brand Persson, Kristina

    2015-01-01

    Due to fundamental differences in design and editorial policies, semantic interoperability between two de facto standard terminologies in the healthcare domain--the International Classification of Diseases (ICD) and SNOMED CT (SCT)--requires combining two different approaches: (i) axiom-based, which states logically what is universally true, using an ontology language such as OWL; (ii) rule-based, expressed as queries on the axiom-based knowledge. We present the ICD-SCT harmonization process including: a) a new architecture for ICD-11, b) a protocol for the semantic alignment of ICD and SCT, and c) preliminary results of the alignment applied to more than half the domain currently covered by the draft ICD-11. PMID:26262160

  18. Evaluation of Semantic-Based Information Retrieval Methods in the Autism Phenotype Domain

    PubMed Central

    Hassanpour, Saeed; O’Connor, Martin J.; Das, Amar K.

    2011-01-01

    Biomedical ontologies are increasingly being used to improve information retrieval methods. In this paper, we present a novel information retrieval approach that exploits knowledge specified by the Semantic Web ontology and rule languages OWL and SWRL. We evaluate our approach using an autism ontology that has 156 SWRL rules defining 145 autism phenotypes. Our approach uses a vector space model to correlate how well these phenotypes relate to the publications used to define them. We compare a vector space phenotype representation using class hierarchies with one that extends this method to incorporate additional semantics encoded in SWRL rules. From a PubMed-extracted corpus of 75 articles, we show that the average rank of a related paper using the class hierarchy method is 4.6 whereas the average rank using the extended rule-based method is 3.3. Our results indicate that incorporating rule-based definitions in information retrieval methods can improve search for relevant publications. PMID:22195112
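
    The vector-space comparison can be sketched with toy data. The terms and documents below are invented; the study ranked a 75-article PubMed corpus against 145 phenotype definitions.

```python
# Minimal vector space model: rank documents against a phenotype's
# term vector by cosine similarity over raw term counts.

import math
from collections import Counter

def cosine(a, b):
    """Cosine similarity between two term-count vectors (Counters)."""
    common = set(a) & set(b)
    dot = sum(a[t] * b[t] for t in common)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

phenotype = Counter("delayed speech echolalia speech".split())
docs = {
    "paper_a": Counter("echolalia and delayed speech in toddlers".split()),
    "paper_b": Counter("gene expression in cell lines".split()),
}
ranked = sorted(docs, key=lambda d: cosine(phenotype, docs[d]), reverse=True)
print(ranked)  # → ['paper_a', 'paper_b']
```

The paper's extended method enriches the phenotype vector with terms drawn from SWRL rule bodies, which is what lowers the average rank of related articles from 4.6 to 3.3.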

  20. Hybrid neural net and rule based system for boiler monitoring and diagnosis

    SciTech Connect

    Kraft, T.; Okagaki, K.; Ishii, R.; Surko, P. ); Brandon, A.; DeWeese, A.; Peterson, S.; Bjordal, R. )

    1991-01-01

    A fully recurrent neural net is coupled with a rule-based expert system in this operator adviser system. The neural net has been trained to recognize normal high-efficiency operating behavior of the power plant boiler, and the rule-based expert system diagnoses problems and suggests maintenance and/or operator actions when the boiler strays outside the envelope of normal operating conditions. The fully recurrent neural net provides an accurate model of a boiler even when the load demand is changing rapidly and the boiler operating conditions vary over a wide range. The hybrid system has been quicker and easier to generate than a strictly rule-based one, and has been designed to be more easily portable to other units. This paper describes the ongoing development work for monitoring SDG&E's South Bay Plant, Unit 1.

  1. Hybrid neural network and rule-based pattern recognition system capable of self-modification

    SciTech Connect

    Glover, C.W.; Silliman, M.; Walker, M.; Spelt, P.F. ); Rao, N.S.V. . Dept. of Computer Science)

    1990-01-01

    This paper describes a hybrid neural network and rule-based pattern recognition system architecture which is capable of self-modification or learning. The central research issue to be addressed for a multiclassifier hybrid system is whether such a system can perform better than the two classifiers taken by themselves. The hybrid system employs a hierarchical architecture, and it can be interfaced with one or more sensors. Feature extraction routines operating on raw sensor data produce feature vectors which serve as inputs to neural network classifiers at the next level in the hierarchy. These low-level neural networks are trained to provide further discrimination of the sensor data. A set of feature vectors is formed from a concatenation of information from the feature extraction routines and the low-level neural network results. A rule-based classifier system uses this feature set to determine if certain expected environmental states, conditions, or objects are present in the sensors' current data stream. The rule-based system has been given an a priori set of models of the expected environmental states, conditions, or objects which it is expected to identify. The rule-based system forms many candidate directed graphs of various combinations of incoming features vectors, and it uses a suitably chosen metric to measure the similarity between candidate and model directed graphs. The rule-based system must decide if there is a match between one of the candidate graphs and a model graph. If a match is found, then the rule-based system invokes a routine to create and train a new high-level neural network from the appropriate feature vector data to recognize when this model state is present in future sensor data streams. 12 refs., 3 figs.

  2. A Semantic Graph Query Language

    SciTech Connect

    Kaplan, I L

    2006-10-16

    Semantic graphs can be used to organize large amounts of information from a number of sources into one unified structure. A semantic query language provides a foundation for extracting information from the semantic graph. The graph query language described here provides a simple, powerful method for querying semantic graphs.
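
As a rough illustration of the idea, a semantic graph can be modelled as a set of (subject, predicate, object) triples and queried with simple patterns. This toy sketch is an assumption for illustration only, not the query language described in the report.

```python
def match(triples, pattern):
    """Yield triples matching an (s, p, o) pattern; None matches anything."""
    s, p, o = pattern
    for t in triples:
        if (s is None or t[0] == s) and \
           (p is None or t[1] == p) and \
           (o is None or t[2] == o):
            yield t

# Hypothetical graph unifying facts from two sources.
graph = {
    ("alice", "works_at", "LLNL"),
    ("bob", "works_at", "LLNL"),
    ("alice", "knows", "bob"),
}
# "Who works at LLNL?" as a single triple pattern.
employees = sorted(t[0] for t in match(graph, (None, "works_at", "LLNL")))
```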

  3. A Defense of Semantic Minimalism

    ERIC Educational Resources Information Center

    Kim, Su

    2012-01-01

    Semantic Minimalism is a position about the semantic content of declarative sentences, i.e., the content that is determined entirely by syntax. It is defined by the following two points: "Point 1": The semantic content is a complete/truth-conditional proposition. "Point 2": The semantic content is useful to a theory of…

  4. A rule-based systems approach to spacecraft communications configuration optimization

    NASA Technical Reports Server (NTRS)

    Rash, James L.; Wong, Yen F.; Cieplak, James J.

    1988-01-01

    An experimental rule-based system for optimizing user spacecraft communications configurations was developed at NASA to support mission planning for spacecraft that obtain telecommunications services through NASA's Tracking and Data Relay Satellite System. Designated Expert for Communications Configuration Optimization (ECCO), and implemented in the OPS5 production system language, the system has shown the validity of a rule-based systems approach to this optimization problem. The development of ECCO and the incremental optimization method on which it is based are discussed. A test case using hypothetical mission data is included to demonstrate the optimization concept.

  6. A rule-based systems approach to spacecraft communications configuration optimization

    NASA Astrophysics Data System (ADS)

    Rash, James L.; Wong, Yen F.; Cieplak, James J.

    An experimental rule-based system for optimizing user spacecraft communications configurations was developed at NASA to support mission planning for spacecraft that obtain telecommunications services through NASA's Tracking and Data Relay Satellite System. Designated Expert for Communications Configuration Optimization (ECCO), and implemented in the OPS5 production system language, the system has shown the validity of a rule-based systems approach to this optimization problem. The development of ECCO and the incremental optimization method on which it is based are discussed. A test case using hypothetical mission data is included to demonstrate the optimization concept.

  7. GetBonNie for building, analyzing and sharing rule-based models

    SciTech Connect

    Hu, Bin

    2008-01-01

    GetBonNie is a suite of web-based services for building, analyzing, and sharing rule-based models specified according to the conventions of the BioNetGen language (BNGL). Services include (1) an applet for drawing, editing, and viewing graphs of BNGL; (2) a network-generation engine for translating a set of rules into a chemical reaction network; (3) simulation engines that implement generate-first, on-the-fly, and network-free methods for simulating rule-based models; and (4) a database for sharing models, parameter values, annotations, simulation tasks and results.
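
The network-generation service, which translates a set of rules into a concrete chemical reaction network, can be illustrated with a toy "generate-first" sketch. The data layout and rule form below are invented stand-ins for BioNetGen's far richer BNGL semantics.

```python
def apply_binding_rule(species, site_a, site_b, rate):
    """Generate one reaction per ordered pair of distinct species that
    expose the two required free binding sites."""
    reactions = []
    for x in species:
        for y in species:
            if site_a in x["free_sites"] and site_b in y["free_sites"] and x is not y:
                product = x["name"] + "." + y["name"]  # bound complex
                reactions.append((x["name"], y["name"], product, rate))
    return reactions

# Hypothetical species: A has a free site "b", B has a free site "a".
species = [
    {"name": "A", "free_sites": {"b"}},
    {"name": "B", "free_sites": {"a"}},
]
# One rule expands into the concrete reaction A + B -> A.B.
rxns = apply_binding_rule(species, "b", "a", rate=0.01)
```

The point of the rule-based formalism is that a single rule like this can stand in for combinatorially many concrete reactions once species carry multiple sites and states.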

  8. Traditional versus rule-based programming techniques: Application to the control of optional flight information

    NASA Technical Reports Server (NTRS)

    Ricks, Wendell R.; Abbott, Kathy H.

    1987-01-01

    The software design community is greatly concerned with the costs associated with a program's execution time and implementation. It is always desirable, and sometimes imperative, to choose the programming technique that minimizes all costs for a given application or type of application. A study is described that compared cost-related factors associated with traditional programming techniques to rule-based programming techniques for a specific application. The results of this study favored the traditional approach regarding execution efficiency, but favored the rule-based approach regarding programmer productivity (implementation ease). Although this study examined a specific application, the results should be widely applicable.

  9. Rule based artificial intelligence expert system for determination of upper extremity impairment rating.

    PubMed

    Lim, I; Walkup, R K; Vannier, M W

    1993-04-01

    Quantitative evaluation of upper extremity impairment, a percentage rating most often determined using a rule based procedure, has been implemented on a personal computer using an artificial intelligence, rule-based expert system (AI system). In this study, the rules given in Chapter 3 of the AMA Guides to the Evaluation of Permanent Impairment (Third Edition) were used to develop such an AI system for the Apple Macintosh. The program applies the rules from the Guides in a consistent and systematic fashion. It is faster and less error-prone than the manual method, and the results have a higher degree of precision, since intermediate values are not truncated. PMID:8334872
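
One core rule in this kind of rating is that regional impairments are combined rather than simply added. A minimal sketch of that combination step, using the combined-values formula A + B*(1 - A) commonly attributed to the AMA Guides, is below; it is a simplification of the chapter's full procedure, and the example values are invented.

```python
def combine(values):
    """Combine impairment fractions (0..1), largest first, using the
    combined-values formula A + B*(1 - A), so the total never exceeds 1."""
    total = 0.0
    for v in sorted(values, reverse=True):
        total = total + v * (1.0 - total)
    return total

# Two 50% regional impairments combine to 75%, not 100%.
combined = combine([0.5, 0.5])
```

Keeping intermediate values as floats, as here, avoids the truncation errors of the manual chart-lookup method that the abstract mentions.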

  10. Semantic-based crossmodal processing during visual suppression

    PubMed Central

    Cox, Dustin; Hong, Sang Wook

    2015-01-01

    To reveal the mechanisms underpinning the influence of auditory input on visual awareness, we examine (1) whether purely semantic-based multisensory integration facilitates access to visual awareness for familiar visual events, and (2) whether crossmodal semantic priming is the mechanism responsible for the semantic auditory influence on visual awareness. Using continuous flash suppression, we rendered dynamic and familiar visual events (e.g., a video clip of an approaching train) inaccessible to visual awareness. We manipulated the semantic auditory context of the videos by concurrently pairing them with a semantically matching soundtrack (congruent audiovisual condition), a semantically non-matching soundtrack (incongruent audiovisual condition), or with no soundtrack (neutral video-only condition). We found that participants identified the suppressed visual events significantly faster (an earlier breakup of suppression) in the congruent audiovisual condition compared to the incongruent audiovisual condition and video-only condition. However, this facilitatory influence of semantic auditory input was only observed when audiovisual stimulation co-occurred. Our results suggest that the enhanced visual processing with a semantically congruent auditory input occurs due to audiovisual crossmodal processing rather than semantic priming, which may occur even when visual information is not available to visual awareness. PMID:26082736
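
The analysis logic behind such a design, comparing identification times across the three auditory conditions, can be sketched minimally; the data and the simple mean comparison below are invented for illustration and are not the paper's statistics.

```python
def mean(xs):
    return sum(xs) / len(xs)

# Hypothetical suppression-breakup times (seconds) per condition.
times = {
    "congruent":   [2.1, 1.9, 2.3, 2.0],
    "incongruent": [3.0, 2.8, 3.2, 2.9],
    "video_only":  [2.9, 3.1, 2.7, 3.0],
}
means = {cond: mean(ts) for cond, ts in times.items()}
# The reported pattern: faster breakup (smaller time) when congruent.
fastest = min(means, key=means.get)
```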

  11. The MMI Semantic Framework: Rosetta Stones for Earth Sciences

    NASA Astrophysics Data System (ADS)

    Rueda, C.; Bermudez, L. E.; Graybeal, J.; Alexander, P.

    2009-12-01

    Semantic interoperability—the exchange of meaning among computer systems—is needed to successfully share data in Ocean Science and across all Earth sciences. The best approach toward semantic interoperability requires a designed framework, and operationally tested tools and infrastructure within that framework. Currently available technologies make a scientific semantic framework feasible, but its development requires sustainable architectural vision and development processes. This presentation outlines the MMI Semantic Framework, including recent progress on it and its client applications. The MMI Semantic Framework consists of tools, infrastructure, and operational and community procedures and best practices, to meet short-term and long-term semantic interoperability goals. The design and prioritization of the semantic framework capabilities are based on real-world scenarios in Earth observation systems. We describe some key use cases, as well as the associated requirements for building the overall infrastructure, which is realized through the MMI Ontology Registry and Repository. This system includes support for community creation and sharing of semantic content, ontology registration, version management, and seamless integration of user-friendly tools and application programming interfaces. The presentation describes the architectural components for semantic mediation, registry and repository for vocabularies, ontology, and term mappings. We show how the technologies and approaches in the framework can address community needs for managing and exchanging semantic information. We will demonstrate how different types of users and client applications exploit the tools and services for data aggregation, visualization, archiving, and integration. Specific examples from OOSTethys (http://www.oostethys.org) and the Ocean Observatories Initiative Cyberinfrastructure (http://www.oceanobservatories.org) will be cited. Finally, we show how semantic augmentation of web

  12. Trusting Crowdsourced Geospatial Semantics

    NASA Astrophysics Data System (ADS)

    Goodhue, P.; McNair, H.; Reitsma, F.

    2015-08-01

    The degree of trust one can place in information is one of the foremost limitations of crowdsourced geospatial information. As with the development of web technologies, the increased prevalence of semantics associated with geospatial information has increased accessibility and functionality. Semantics also provides an opportunity to extend indicators of trust for crowdsourced geospatial information that have largely focused on spatio-temporal and social aspects of that information. Comparing a feature's intrinsic and extrinsic properties to associated ontologies provides a means of semantically assessing the trustworthiness of crowdsourced geospatial information. The application of this approach to unconstrained semantic submissions then allows for a detailed assessment of the trust of these features whilst maintaining the descriptive thoroughness this mode of information submission affords. The resulting trust rating then becomes an attribute of the feature, not only indicating the trustworthiness of a specific feature but also allowing aggregation across multiple features to illustrate the overall trustworthiness of a dataset.
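
A minimal sketch of the idea, assuming a hand-written table of ontology-expected properties; the classes, property lists, and equal weighting here are invented for illustration.

```python
# Hypothetical ontology: class -> attributes a plausible feature should carry.
EXPECTED = {
    "drinking_fountain": {"location", "potable", "operator"},
    "bridge": {"location", "span_m", "material"},
}

def trust_score(feature):
    """Fraction of ontology-expected properties the feature actually carries."""
    expected = EXPECTED.get(feature["class"], set())
    if not expected:
        return 0.0
    present = expected & set(feature["attributes"])
    return len(present) / len(expected)

# A crowdsourced bridge feature with 2 of 3 expected properties present.
feature = {"class": "bridge",
           "attributes": {"location": "43.5S 172.6E", "span_m": 120}}
score = trust_score(feature)
```

The resulting score would then be stored as an attribute of the feature and averaged across a dataset, as the abstract describes.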

  13. Algebraic Semantics for Narrative

    ERIC Educational Resources Information Center

    Kahn, E.

    1974-01-01

    This paper uses discussion of Edmund Spenser's "The Faerie Queene" to present a theoretical framework for explaining the semantics of narrative discourse. The algebraic theory of finite automata is used. (CK)

  14. Rule-based approach to operating system selection: RMS vs. UNIX

    SciTech Connect

    Phifer, M.S.; Sadlowe, A.R.; Emrich, M.L.; Gadagkar, H.P.

    1988-10-01

    A rule-based system is under development for choosing computer operating systems. Following a brief historical account, this paper compares and contrasts the essential features of two operating systems, highlighting particular applications. AT&T's UNIX System and Datapoint Corporation's Resource Management System (RMS) are used as illustrative examples. 11 refs., 3 figs.

  15. Effectiveness of Visual Imagery versus Rule-Based Strategies in Teaching Spelling to Learning Disabled Students.

    ERIC Educational Resources Information Center

    Darch, Craig; Simpson, Robert G.

    1990-01-01

    Among 28 upper elementary learning-disabled students in a summer remedial program, those that were taught spelling with explicit rule-based strategies out-performed students presented with a visual imagery mnemonic on unit tests, a posttest, and a standardized spelling test. Contains 20 references. (SV)

  16. Effects of Multimedia on Cognitive Load, Self-Efficacy, and Multiple Rule-Based Problem Solving

    ERIC Educational Resources Information Center

    Zheng, Robert; McAlack, Matthew; Wilmes, Barbara; Kohler-Evans, Patty; Williamson, Jacquee

    2009-01-01

    This study investigates effects of multimedia on cognitive load, self-efficacy and learners' ability to solve multiple rule-based problems. Two hundred twenty-two college students were randomly assigned to interactive and non-interactive multimedia groups. Based on Engelkamp's multimodal theory, the present study investigates the role of…

  17. Haunted by a doppelgänger: irrelevant facial similarity affects rule-based judgments.

    PubMed

    von Helversen, Bettina; Herzog, Stefan M; Rieskamp, Jörg

    2014-01-01

    Judging other people is a common and important task. Every day professionals make decisions that affect the lives of other people when they diagnose medical conditions, grant parole, or hire new employees. To prevent discrimination, professional standards require that decision makers render accurate and unbiased judgments solely based on relevant information. Facial similarity to previously encountered persons can be a potential source of bias. Psychological research suggests that people only rely on similarity-based judgment strategies if the provided information does not allow them to make accurate rule-based judgments. Our study shows, however, that facial similarity to previously encountered persons influences judgment even in situations in which relevant information is available for making accurate rule-based judgments and where similarity is irrelevant for the task and relying on similarity is detrimental. In two experiments in an employment context we show that applicants who looked similar to high-performing former employees were judged as more suitable than applicants who looked similar to low-performing former employees. This similarity effect was found despite the fact that the participants used the relevant résumé information about the applicants by following a rule-based judgment strategy. These findings suggest that similarity-based and rule-based processes simultaneously underlie human judgment. PMID:23895921

  18. A rule-based expert system for chemical prioritization using effects-based chemical categories

    EPA Science Inventory

    A rule-based expert system (ES) was developed to predict chemical binding to the estrogen receptor (ER) patterned on the research approaches championed by Gilman Veith to whom this article and journal issue are dedicated. The ERES was built to be mechanistically-transparent and m...

  19. Applications of fuzzy sets to rule-based expert system development

    NASA Technical Reports Server (NTRS)

    Lea, Robert N.

    1989-01-01

    Problems of implementing rule-based expert systems using fuzzy sets are considered. A fuzzy logic software development shell is used that allows inclusion of both crisp and fuzzy rules in decision making and process control problems. Results are given that compare this type of expert system to a human expert in some specific applications. Advantages and disadvantages of such systems are discussed.
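
A minimal sketch of mixing crisp and fuzzy rules in the way such a shell allows; the triangular membership function, the rules, and the thresholds are invented for illustration.

```python
def triangular(x, lo, peak, hi):
    """Triangular fuzzy membership: 0 outside [lo, hi], 1 at peak."""
    if x <= lo or x >= hi:
        return 0.0
    if x <= peak:
        return (x - lo) / (peak - lo)
    return (hi - x) / (hi - peak)

def control_output(temp):
    # Fuzzy rule: IF temp is "hot" THEN run fan, scaled by membership degree.
    hot = triangular(temp, 25.0, 35.0, 45.0)
    # Crisp rule: IF temp > 40 THEN raise alarm (fires fully or not at all).
    alarm = temp > 40.0
    return {"fan": hot, "alarm": alarm}

out = control_output(30.0)  # halfway up the "hot" ramp, no alarm
```

The fuzzy rule degrades gracefully near the boundary, which is exactly the advantage over purely crisp thresholds in process control.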

  1. Presentation of clinical guidelines via a rule-based expert charting system.

    PubMed

    Schriger, D L; Baraff, L J; Hassanvand, M; Cretin, S

    1995-01-01

    This paper discusses the theoretical basis and cumulative experience with EDECS, the Emergency Department Expert Charting System. This rule-based expert system introduces clinical guidelines into the flow of patient care while creating the medical record and patient aftercare instructions. PMID:8591354

  2. Rule-Based Category Learning in Children: The Role of Age and Executive Functioning

    PubMed Central

    Rabi, Rahel; Minda, John Paul

    2014-01-01

    Rule-based category learning was examined in 4–11 year-olds and adults. Participants were asked to learn a set of novel perceptual categories in a classification learning task. Categorization performance improved with age, with younger children showing the strongest rule-based deficit relative to older children and adults. Model-based analyses provided insight regarding the type of strategy being used to solve the categorization task, demonstrating that the use of the task appropriate strategy increased with age. When children and adults who identified the correct categorization rule were compared, the performance deficit was no longer evident. Executive functions were also measured. While both working memory and inhibitory control were related to rule-based categorization and improved with age, working memory specifically was found to marginally mediate the age-related improvements in categorization. When analyses focused only on the sample of children, results showed that working memory ability and inhibitory control were associated with categorization performance and strategy use. The current findings track changes in categorization performance across childhood, demonstrating at which points performance begins to mature and resemble that of adults. Additionally, findings highlight the potential role that working memory and inhibitory control may play in rule-based category learning. PMID:24489658

  3. Semantic querying of relational data for clinical intelligence: a semantic web services-based approach

    PubMed Central

    2013-01-01

    Background Clinical Intelligence, as a research and engineering discipline, is dedicated to the development of tools for data analysis for the purposes of clinical research, surveillance, and effective health care management. Self-service ad hoc querying of clinical data is one desirable type of functionality. Since most of the data are currently stored in relational or similar form, ad hoc querying is problematic as it requires specialised technical skills and the knowledge of particular data schemas. Results A possible solution is semantic querying where the user formulates queries in terms of domain ontologies that are much easier to navigate and comprehend than data schemas. In this article, we are exploring the possibility of using SADI Semantic Web services for semantic querying of clinical data. We have developed a prototype of a semantic querying infrastructure for the surveillance of, and research on, hospital-acquired infections. Conclusions Our results suggest that SADI can support ad-hoc, self-service, semantic queries of relational data in a Clinical Intelligence context. The use of SADI compares favourably with approaches based on declarative semantic mappings from data schemas to ontologies, such as query rewriting and RDFizing by materialisation, because it can easily cope with situations when (i) some computation is required to turn relational data into RDF or OWL, e.g., to implement temporal reasoning, or (ii) integration with external data sources is necessary. PMID:23497556

  4. A probabilistic model of semantic plausibility in sentence processing.

    PubMed

    Padó, Ulrike; Crocker, Matthew W; Keller, Frank

    2009-07-01

    Experimental research shows that human sentence processing uses information from different levels of linguistic analysis, for example, lexical and syntactic preferences as well as semantic plausibility. Existing computational models of human sentence processing, however, have focused primarily on lexico-syntactic factors. Those models that do account for semantic plausibility effects lack a general model of human plausibility intuitions at the sentence level. Within a probabilistic framework, we propose a wide-coverage model that both assigns thematic roles to verb-argument pairs and determines a preferred interpretation by evaluating the plausibility of the resulting (verb, role, argument) triples. The model is trained on a corpus of role-annotated language data. We also present a transparent integration of the semantic model with an incremental probabilistic parser. We demonstrate that both the semantic plausibility model and the combined syntax/semantics model predict judgment and reading time data from the experimental literature. PMID:21585487
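
The (verb, role, argument) scoring idea can be sketched by estimating smoothed triple probabilities from a small role-annotated "corpus" and preferring the reading with the higher score. The counts and add-alpha smoothing below are invented; the published model is a trained, wide-coverage system.

```python
from collections import Counter

# Hypothetical role-annotated observations: (verb, role, argument).
corpus = [
    ("eat", "agent", "man"), ("eat", "patient", "sandwich"),
    ("eat", "patient", "apple"), ("eat", "agent", "dog"),
]
counts = Counter(corpus)
verb_role = Counter((v, r) for v, r, _ in corpus)

def plausibility(verb, role, arg, alpha=0.1, vocab=10):
    """Smoothed estimate of P(argument | verb, role)."""
    return ((counts[(verb, role, arg)] + alpha)
            / (verb_role[(verb, role)] + alpha * vocab))

# "sandwich" is a more plausible patient of "eat" than an agent of it.
p_patient = plausibility("eat", "patient", "sandwich")
p_agent = plausibility("eat", "agent", "sandwich")
```

In the paper's combined model, scores like these would be weighed alongside lexico-syntactic preferences by the incremental parser.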

  5. Adventures in Semantic Publishing: Exemplar Semantic Enhancements of a Research Article

    PubMed Central

    Shotton, David; Portwin, Katie; Klyne, Graham; Miles, Alistair

    2009-01-01

    Scientific innovation depends on finding, integrating, and re-using the products of previous research. Here we explore how recent developments in Web technology, particularly those related to the publication of data and metadata, might assist that process by providing semantic enhancements to journal articles within the mainstream process of scholarly journal publishing. We exemplify this by describing semantic enhancements we have made to a recent biomedical research article taken from PLoS Neglected Tropical Diseases, providing enrichment to its content and increased access to datasets within it. These semantic enhancements include provision of live DOIs and hyperlinks; semantic markup of textual terms, with links to relevant third-party information resources; interactive figures; a re-orderable reference list; a document summary containing a study summary, a tag cloud, and a citation analysis; and two novel types of semantic enrichment: the first, a Supporting Claims Tooltip to permit “Citations in Context”, and the second, Tag Trees that bring together semantically related terms. In addition, we have published downloadable spreadsheets containing data from within tables and figures, have enriched these with provenance information, and have demonstrated various types of data fusion (mashups) with results from other research articles and with Google Maps. We have also published machine-readable RDF metadata both about the article and about the references it cites, for which we developed a Citation Typing Ontology, CiTO (http://purl.org/net/cito/). The enhanced article, which is available at http://dx.doi.org/10.1371/journal.pntd.0000228.x001, presents a compelling existence proof of the possibilities of semantic publication. We hope the showcase of examples and ideas it contains, described in this paper, will excite the imaginations of researchers and publishers, stimulating them to explore the possibilities of semantic publishing for their own research

  6. Semantics-Based Interoperability Framework for the Geosciences

    NASA Astrophysics Data System (ADS)

    Sinha, A.; Malik, Z.; Raskin, R.; Barnes, C.; Fox, P.; McGuinness, D.; Lin, K.

    2008-12-01

    Interoperability between heterogeneous data, tools and services is required to transform data to knowledge. To meet geoscience-oriented societal challenges such as forcing of climate change induced by volcanic eruptions, we suggest the need to develop semantic interoperability for data, services, and processes. Because such scientific endeavors require integration of multiple databases associated with global enterprises, implicit semantic-based integration is impossible. Instead, explicit semantics are needed to facilitate interoperability and integration. Although different types of integration models are available (syntactic or semantic), we suggest that semantic interoperability is likely to be the most successful pathway. Clearly, the geoscience community would benefit from utilization of existing XML-based data models, such as GeoSciML, WaterML, etc., to rapidly advance semantic interoperability and integration. We recognize that such integration will require a "meanings-based search, reasoning and information brokering", which will be facilitated through inter-ontology relationships (ontologies defined for each discipline). We suggest that Markup languages (MLs) and ontologies can be seen as "data integration facilitators", working at different abstraction levels. Therefore, we propose to use an ontology-based data registration and discovery approach to complement markup languages through semantic data enrichment. Ontologies allow the use of formal and descriptive logic statements, which permits expressive query capabilities for data integration through reasoning. We have developed domain ontologies (EPONT) to capture the concept behind data. EPONT ontologies are associated with existing ontologies such as SUMO, DOLCE and SWEET. Although significant efforts have gone into developing data (object) ontologies, we advance the idea of developing semantic frameworks for additional ontologies that deal with processes and services. This evolutionary step will

  7. Semantic Services for Wikipedia

    NASA Astrophysics Data System (ADS)

    Wang, Haofen; Penin, Thomas; Fu, Linyun; Liu, Qiaoling; Xue, Guirong; Yu, Yong

    Wikipedia, a killer application in Web 2.0, has embraced the power of collaborative editing to harness collective intelligence. It features many attractive characteristics, like entity-based link graph, abundant categorization and semi-structured layout, and can serve as an ideal data source to extract high quality and well-structured data. In this chapter, we first propose several solutions to extract knowledge from Wikipedia. We not only consider information from the relational summaries of articles (infoboxes) but also semi-automatically extract it from the article text using the structured content available. Due to differences with information extraction from the Web, it is necessary to tackle new problems, like the lack of redundancy in Wikipedia, which is dealt with by extending traditional machine learning algorithms to work with few labeled data. Furthermore, we also exploit the widespread categories as a complementary way to discover additional knowledge. Benefiting from both structured and textual information, we additionally provide a suggestion service for Wikipedia authoring. With the aim to facilitate semantic reuse, our proposal provides users with facilities such as link, categories and infobox content suggestions. The proposed enhancements can be applied to attract more contributors and lighten the burden of professional editors. Finally, we developed an enhanced search system, which can ease the process of exploiting Wikipedia. To provide a user-friendly interface, it extends the faceted search interface with relation navigation and lets the user easily express complex information needs in an interactive way. In order to achieve efficient query answering, it extends scalable IR engines to index and search both the textual and structured information with an integrated ranking support.
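
The infobox-extraction step can be caricatured with a simple pattern match over wiki markup; real Wikipedia extraction must handle nested templates, references, and multi-line values, so this regex and sample are purely illustrative.

```python
import re

def parse_infobox(wikitext):
    """Extract '| key = value' fields from an infobox template body."""
    fields = {}
    for key, value in re.findall(r"\|\s*(\w+)\s*=\s*([^\n|]+)", wikitext):
        fields[key.strip()] = value.strip()
    return fields

# Hypothetical wikitext fragment.
sample = """{{Infobox settlement
| name = Shanghai
| population = 24870895
}}"""
info = parse_infobox(sample)
```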

  8. Remote semantic memory is impoverished in hippocampal amnesia.

    PubMed

    Klooster, Nathaniel B; Duff, Melissa C

    2015-12-01

    The necessity of the hippocampus for acquiring new semantic concepts is a topic of considerable debate. However, it is generally accepted that any role the hippocampus plays in semantic memory is time limited and that previously acquired information becomes independent of the hippocampus over time. This view, along with intact naming and word-definition matching performance in amnesia, has led to the notion that remote semantic memory is intact in patients with hippocampal amnesia. Motivated by perspectives of word learning as a protracted process where additional features and senses of a word are added over time, and by recent discoveries about the time course of hippocampal contributions to on-line relational processing, reconsolidation, and the flexible integration of information, we revisit the notion that remote semantic memory is intact in amnesia. Using measures of semantic richness and vocabulary depth from psycholinguistics and first and second language-learning studies, we examined how much information is associated with previously acquired, highly familiar words in a group of patients with bilateral hippocampal damage and amnesia. Relative to healthy demographically matched comparison participants and a group of brain-damaged comparison participants, the patients with hippocampal amnesia performed significantly worse on both productive and receptive measures of vocabulary depth and semantic richness. These findings suggest that remote semantic memory is impoverished in patients with hippocampal amnesia and that the hippocampus may play a role in the maintenance and updating of semantic memory beyond its initial acquisition. PMID:26474741

  9. The Semantic SPASE

    NASA Astrophysics Data System (ADS)

    Hughes, S.; Crichton, D.; Thieman, J.; Ramirez, P.; King, T.; Weiss, M.

    2005-12-01

    The Semantic SPASE (Space Physics Archive Search and Extract) prototype demonstrates the use of semantic web technologies to capture, document, and manage the SPASE data model, support facet- and text-based search, and provide flexible and intuitive user interfaces. The SPASE data model, under development since late 2003 by a consortium of space physics domain experts, is intended to serve as the basis for interoperability between independent data systems. To develop the Semantic SPASE prototype, the data model was first analyzed to determine the inherent object classes and their attributes. These were entered into Stanford Medical Informatics' Protege ontology tool and annotated using definitions from the SPASE documentation. Further analysis of the data model resulted in the addition of class relationships. Finally, attributes and relationships that support broad-scope interoperability were added from research associated with the Object-Oriented Data Technology task. To validate the ontology and produce a knowledge base, example data products were ingested. The capture of the data model as an ontology results in a more formal specification of the model. The Protege software is also a powerful management tool and supports plug-ins that produce several graphical notations as output. The stated purpose of the semantic web is to support machine understanding of web-based information. Protege provides an export capability to RDF/XML and RDFS/XML for this purpose. Several research efforts use RDF/XML knowledge bases to provide semantic search. MIT's Simile/Longwell project provides both facet- and text-based search using a suite of metadata browsers and the text-based search engine Lucene. Using the Protege-generated RDF knowledge base, a semantic search application was easily built and deployed to run as a web application. Configuration files specify the object attributes and values to be designated as facet (i.e., search) constraints. Semantic web technologies provide

  10. Auto-Generated Semantic Processing Services

    NASA Technical Reports Server (NTRS)

    Davis, Rodney; Hupf, Greg

    2009-01-01

    Auto-Generated Semantic Processing (AGSP) Services is a suite of software tools for automated generation of other computer programs, denoted cross-platform semantic adapters, that support interoperability of computer-based communication systems that utilize a variety of both new and legacy communication software running in a variety of operating- system/computer-hardware combinations. AGSP has numerous potential uses in military, space-exploration, and other government applications as well as in commercial telecommunications. The cross-platform semantic adapters take advantage of common features of computer- based communication systems to enforce semantics, messaging protocols, and standards of processing of streams of binary data to ensure integrity of data and consistency of meaning among interoperating systems. The auto-generation aspect of AGSP Services reduces development time and effort by emphasizing specification and minimizing implementation: In effect, the design, building, and debugging of software for effecting conversions among complex communication protocols, custom device mappings, and unique data-manipulation algorithms is replaced with metadata specifications that map to an abstract platform-independent communications model. AGSP Services is modular and has been shown to be easily integrable into new and legacy NASA flight and ground communication systems.

  11. Teaching Spelling to Students with Learning Disabilities: A Comparison of Rule-Based Strategies versus Traditional Instruction

    ERIC Educational Resources Information Center

    Darch, Craig; Eaves, Ronald C.; Crowe, D. Alan; Simmons, Kate; Conniff, Alexandra

    2006-01-01

    This study compared two instructional methods for teaching spelling to elementary students with learning disabilities (LD). Forty-two elementary students with LD were randomly assigned to one of two instructional groups to teach spelling words: (a) a rule-based strategy group that focused on teaching students spelling rules (based on the "Spelling…

  12. Rule-based mechanisms of learning for intelligent adaptive flight control

    NASA Technical Reports Server (NTRS)

    Handelman, David A.; Stengel, Robert F.

    1990-01-01

    How certain aspects of human learning can be used to characterize learning in intelligent adaptive control systems is investigated. Reflexive and declarative memory and learning are described. It is shown that model-based systems-theoretic adaptive control methods exhibit attributes of reflexive learning, whereas the problem-solving capabilities of knowledge-based systems of artificial intelligence are naturally suited for implementing declarative learning. Issues related to learning in knowledge-based control systems are addressed, with particular attention given to rule-based systems. A mechanism for real-time rule-based knowledge acquisition is suggested, and utilization of this mechanism within the context of failure diagnosis for fault-tolerant flight control is demonstrated.

  13. A self-learning rule base for command following in dynamical systems

    NASA Technical Reports Server (NTRS)

    Tsai, Wei K.; Lee, Hon-Mun; Parlos, Alexander

    1992-01-01

    In this paper, a self-learning Rule Base for command following in dynamical systems is presented. The learning is accomplished through reinforcement learning using an associative memory called SAM. The main advantage of SAM is that it is a function approximator with explicit storage of training samples. A learning algorithm patterned after dynamic programming is proposed. Two artificially created, unstable dynamical systems are used for testing, and the Rule Base is used to generate a feedback control to improve the command-following ability of the otherwise uncontrolled systems. The numerical results are very encouraging. The controlled systems exhibit more stable behavior and a better capability to follow reference commands. The rules resulting from the reinforcement learning are explicitly stored, and they can be modified or augmented by human experts. Due to the overlapping storage scheme of SAM, the stored rules are similar to fuzzy rules.

  14. Rule-based system for the fast identification of species of Indian Anopheline mosquitoes.

    PubMed

    Murty, U S; Jamil, K; Krishna, D; Reddy, P J

    1996-12-01

    In a developing country like India, classification and identification of the species of Anopheline mosquitoes is of paramount importance in control operations for mosquito-borne diseases. The WHO monograph, which describes the taxonomic data in the form of a pictorial key, is generally difficult for a non-taxonomist to understand. Utilizing the principles of the ID3 algorithm, a novel rule-based system for the fast identification of unknown species of Indian Anopheline mosquitoes has been developed. The rule-based system is user-friendly and menu-driven, and even a novice can use it to identify unknown species with little practice. The software is available on floppy disk at minimal cost and can be distributed on 5 1/4" or 3 1/2" disks. PMID:9021267
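
    The ID3 approach described above can be sketched as a small decision-tree builder over categorical morphological traits. The traits and species records below are invented placeholders, not the WHO key's actual characters:

```python
import math
from collections import Counter

# Hypothetical training records: traits -> species (illustrative only).
DATA = [
    {"palpi_banded": "yes", "wing_spotted": "yes", "species": "An. culicifacies"},
    {"palpi_banded": "yes", "wing_spotted": "no",  "species": "An. stephensi"},
    {"palpi_banded": "no",  "wing_spotted": "yes", "species": "An. fluviatilis"},
    {"palpi_banded": "no",  "wing_spotted": "no",  "species": "An. annularis"},
]

def entropy(rows):
    counts = Counter(r["species"] for r in rows)
    total = len(rows)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def best_attribute(rows, attrs):
    # ID3: pick the attribute with the highest information gain.
    base = entropy(rows)
    def gain(a):
        parts = {}
        for r in rows:
            parts.setdefault(r[a], []).append(r)
        return base - sum(len(p) / len(rows) * entropy(p) for p in parts.values())
    return max(attrs, key=gain)

def build_tree(rows, attrs):
    species = {r["species"] for r in rows}
    if len(species) == 1 or not attrs:
        return species.pop()
    a = best_attribute(rows, attrs)
    branches = {}
    for r in rows:
        branches.setdefault(r[a], []).append(r)
    rest = [x for x in attrs if x != a]
    return (a, {v: build_tree(p, rest) for v, p in branches.items()})

def identify(tree, sample):
    # Walk the tree, asking one trait question per level, until a leaf.
    while isinstance(tree, tuple):
        attr, branches = tree
        tree = branches[sample[attr]]
    return tree

tree = build_tree(DATA, ["palpi_banded", "wing_spotted"])
print(identify(tree, {"palpi_banded": "yes", "wing_spotted": "no"}))
```

The tree induced this way is easily rendered as the menu-driven question sequence the abstract describes.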

  15. Two-stage rule-based precision positioning control of a piezoelectrically actuated table

    NASA Astrophysics Data System (ADS)

    Kuo, W. M.; Tarng, Y. S.; Nian, C. Y.; Nurhadi, H.

    2010-05-01

    This article proposes a two-stage rule-based precision positioning control method for the linear piezoelectrically actuated table (LPAT). During the coarse-tuning stage, the LPAT is actuated by coarse voltages at a higher velocity to within 20 µm of the target; during the fine-tuning stage, it is driven steadily and accurately by fine voltages to reach the target position. The rule-based method establishes the control rules for the voltages and displacements of the two stages using statistical methods. The experimental results demonstrate that the proposed control method reaches steady state quickly, with a steady-state error of 0.02 µm or less for both small travel (±0.1 µm) and large travel (±20 mm).
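
    The coarse/fine hand-over described above can be sketched as a simple two-loop move. The step size, gain, and hand-over window here are invented stand-ins for the article's statistically derived rules:

```python
def two_stage_position(start, target, coarse_window=20.0, coarse_step=5.0,
                       fine_gain=0.5, tol=0.02):
    """Illustrative two-stage move (units: micrometres)."""
    pos = start
    # Coarse stage: large fixed steps until within the hand-over window.
    while abs(target - pos) > coarse_window:
        pos += coarse_step if target > pos else -coarse_step
    # Fine stage: small proportional corrections until within tolerance.
    while abs(target - pos) > tol:
        pos += fine_gain * (target - pos)
    return pos

final = two_stage_position(0.0, 1000.0)
print(final)
```

The fine-stage error halves on each step, so the loop converges quickly once the coarse stage has handed over.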

  16. Spatial Queries Entity Recognition and Disambiguation Using Rule-Based Approach

    NASA Astrophysics Data System (ADS)

    Hamzei, E.; Hakimpour, F.; Forati, A.

    2015-12-01

    In the digital world, search engines are a challenging research area, and a central issue in search-engine studies is query processing, whose aim is to understand the user's needs. If an unsuitable spatial query processing approach is employed, the results carry a high degree of ambiguity. To avoid such ambiguity, in this paper we present a new algorithm that relies on a rule-based system to process queries. Our algorithm is implemented in three basic steps: iteratively splitting the query; finding candidates for location names, location types, and spatial relationships; and finally checking the relationships logically and conceptually using a rule-based system. As we show in the paper, the proposed method has two major advantages: search engines can provide spatial analysis capabilities based on the specific process, and, because of its disambiguation technique, users reach more desirable results.
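
    The three steps above (splitting, candidate finding, rule checking) might look roughly like this; the gazetteer entries and the single disambiguation rule are invented for illustration:

```python
# Toy gazetteers; real systems would consult much larger resources.
PLACE_NAMES = {"tehran", "isfahan"}
PLACE_TYPES = {"restaurant", "hospital", "park"}
SPATIAL_RELS = {"near", "in", "north of"}

def parse_spatial_query(query):
    tokens = query.lower().split()
    parsed = {"type": None, "relation": None, "name": None}
    i = 0
    while i < len(tokens):
        # Greedily try two-word candidates first (e.g. "north of").
        two = " ".join(tokens[i:i + 2])
        if two in SPATIAL_RELS:
            parsed["relation"], i = two, i + 2
        elif tokens[i] in SPATIAL_RELS:
            parsed["relation"], i = tokens[i], i + 1
        elif tokens[i] in PLACE_TYPES:
            parsed["type"], i = tokens[i], i + 1
        elif tokens[i] in PLACE_NAMES:
            parsed["name"], i = tokens[i], i + 1
        else:
            i += 1  # filler word such as "find" or "a"
    # Rule check: a spatial relation needs a named place as its anchor.
    if parsed["relation"] and not parsed["name"]:
        raise ValueError("ambiguous query: spatial relation with no anchor")
    return parsed

print(parse_spatial_query("find a restaurant near Tehran"))
```

A real implementation would chain many such rules and rank competing interpretations rather than raise on the first conflict.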

  17. Creating an ontology driven rules base for an expert system for medical diagnosis.

    PubMed

    Bertaud Gounot, Valérie; Donfack, Valéry; Lasbleiz, Jérémy; Bourde, Annabel; Duvauferrier, Régis

    2011-01-01

    Expert systems of the 1980s failed because of the difficulty of maintaining large rule bases. The current work proposes a method to build and maintain rule bases grounded in ontologies (such as the NCIT). The process described here, for an expert system on plasma cell disorders, encompasses extraction of a sub-ontology and automatic, comprehensive generation of production rules. The creation of rules is based not directly on classes but on individuals (instances). Instances can be considered prototypes of diseases, formally defined by restrictions in the ontology; thus, it is possible to use this process to make diagnoses of diseases. The perspectives of this work are also considered: the process, described with an ontology formalized in OWL 1, can be extended by using an ontology in OWL 2, allowing reasoning over numerical data in addition to symbolic data. PMID:21893840
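
    The instance-to-rule generation step can be sketched as follows; the disease prototypes and findings are invented illustrations, not actual NCIT content:

```python
# Illustrative disease prototypes (instance -> defining findings).
PROTOTYPES = {
    "multiple myeloma": {"monoclonal_protein": True, "bone_lesions": True},
    "MGUS": {"monoclonal_protein": True, "bone_lesions": False},
}

def generate_rules(prototypes):
    # Each instance becomes one production rule:
    # IF all defining findings match THEN conclude the diagnosis.
    return [(dict(findings), disease) for disease, findings in prototypes.items()]

def diagnose(rules, patient):
    return [d for cond, d in rules
            if all(patient.get(k) == v for k, v in cond.items())]

rules = generate_rules(PROTOTYPES)
print(diagnose(rules, {"monoclonal_protein": True, "bone_lesions": False}))
# → ['MGUS']
```

Because the rules are generated mechanically from the ontology's instances, regenerating them after an ontology update replaces manual rule-base maintenance.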

  18. Development and Deployment of a Rule-Based Expert System for Autonomous Satellite Monitoring

    NASA Astrophysics Data System (ADS)

    Wong, L.; Kronberg, F.; Hopkins, A.; Machi, F.; Eastham, P.

    In compliance with NASA Administrator Daniel Goldin's call for faster, cheaper, better NASA projects, the Center for EUV Astrophysics (CEA), in cooperation with NASA Ames Research Center, has developed and deployed a partially autonomous satellite-telemetry monitoring system to monitor the health of the Extreme Ultraviolet Explorer (EUVE) payload. Originally, telemetry was monitored on a 24-hour basis by human operators. Using RTworks, a software package from Talarian Corporation, our development team has developed a rule-based expert system capable of detecting critical EUVE payload anomalies and notifying an anomaly coordinator. This paper discusses the process of capturing and codifying the knowledge of EUVE operations into rules and how our rule-based system is applied in autonomous EUVE operations.

  19. IMM/Serve: a rule-based program for childhood immunization.

    PubMed Central

    Miller, P. L.; Frawley, S. J.; Sayward, F. G.; Yasnoff, W. A.; Duncan, L.; Fleming, D. W.

    1996-01-01

    A rule-based program, IMM/Serve, is being developed to help guide childhood immunization, for initial use within Oregon. The program is designed primarily for automated use with an online immunization registry, but can also be used interactively by a single user. The paper describes IMM/Serve and discusses 1) the sources of complexity in immunization logic, 2) the potential advantages of a rule-based approach for representing that logic, and 3) the potential advantage of such a program evolving to become the standard of care. Related projects include 1) a computer-based tool to help verify the completeness of the logic, 2) a tool that allows a central part of the logic to be generated automatically, and 3) an approach that allows the logic to be visualized graphically. PMID:8947653
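
    As an illustration of why immunization logic suits a rule-based encoding, a toy forecasting rule might look like this. The schedule ages are invented for the example and are not a clinical recommendation:

```python
# Hypothetical schedule: vaccine -> minimum age in months for each dose.
SCHEDULE = {
    "DTaP": [2, 4, 6],
}

def next_dose(vaccine, doses_given, age_months):
    """Rule: recommend the next dose once the child reaches its minimum age."""
    due_ages = SCHEDULE[vaccine]
    if doses_given >= len(due_ages):
        return "series complete"
    due = due_ages[doses_given]
    if age_months >= due:
        return f"dose {doses_given + 1} due"
    return f"dose {doses_given + 1} at {due} months"

print(next_dose("DTaP", 1, 5))   # child aged 5 months with one dose
```

Real immunization logic layers many interacting constraints (minimum intervals, catch-up schedules, contraindications) on top of this, which is exactly the complexity the paper discusses.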

  20. The Cognitive and Neural Expression of Semantic Memory Impairment in Mild Cognitive Impairment and Early Alzheimer's Disease

    ERIC Educational Resources Information Center

    Joubert, Sven; Brambati, Simona M.; Ansado, Jennyfer; Barbeau, Emmanuel J.; Felician, Olivier; Didic, Mira; Lacombe, Jacinthe; Goldstein, Rachel; Chayer, Celine; Kergoat, Marie-Jeanne

    2010-01-01

    Semantic deficits in Alzheimer's disease have been widely documented, but little is known about the integrity of semantic memory in the prodromal stage of the illness. The aims of the present study were to: (i) investigate naming abilities and semantic memory in amnestic mild cognitive impairment (aMCI), early Alzheimer's disease (AD) compared to…

  1. The Semantic Web: From Representation to Realization

    NASA Astrophysics Data System (ADS)

    Thórisson, Kristinn R.; Spivack, Nova; Wissner, James M.

    A semantically-linked web of electronic information - the Semantic Web - promises numerous benefits including increased precision in automated information sorting, searching, organizing and summarizing. Realizing this requires significantly more reliable meta-information than is readily available today. It also requires a better way to represent information that supports unified management of diverse data and diverse manipulation methods: from basic keywords to various types of artificial intelligence, to the highest level of intelligent manipulation - the human mind. How this is best done is far from obvious. Relying solely on hand-crafted annotation and ontologies, or solely on artificial intelligence techniques, seems less likely to succeed than a combination of the two. In this paper we describe an integrated, complete solution to these challenges that has already been implemented and tested with hundreds of thousands of users. It is based on an ontological representational level we call SemCards that combines ontological rigour with flexible user interface constructs. SemCards are machine- and human-readable digital entities that allow non-experts to create and use semantic content, while empowering machines to better assist and participate in the process. SemCards enable users to easily create semantically-grounded data that in turn acts as examples for automation processes, creating a positive iterative feedback loop of metadata creation and refinement between user and machine. They provide a holistic solution to the Semantic Web, supporting powerful management of the full lifecycle of data, including its creation, retrieval, classification, sorting and sharing. We have implemented the SemCard technology on the semantic Web site Twine.com, showing that the technology is indeed versatile and scalable. Here we present the key ideas behind SemCards and describe the initial implementation of the technology.

  2. A two-stage evolutionary process for designing TSK fuzzy rule-based systems.

    PubMed

    Cordon, O; Herrera, F

    1999-01-01

    Nowadays, fuzzy rule-based systems are successfully applied to many different real-world problems. Unfortunately, relatively few well-structured design methodologies exist, and in many cases human experts are not able to express the knowledge needed to solve the problem in the form of fuzzy rules. Takagi-Sugeno-Kang (TSK) fuzzy rule-based systems were introduced to address this design problem, because they are usually identified from numerical data. In this paper we present a two-stage evolutionary process for designing TSK fuzzy rule-based systems from examples, combining a generation stage based on a (mu, lambda)-evolution strategy, in which fuzzy rules with different consequents compete among themselves to form part of a preliminary knowledge base, and a refinement stage in which both the antecedent and consequent parts of the fuzzy rules in this preliminary knowledge base are adapted by a hybrid evolutionary process composed of a genetic algorithm and an evolution strategy, to obtain a final knowledge base whose rules cooperate in the best possible way. Several aspects make this process different from others proposed to date: the design problem is addressed in two different stages; an angular coding of the consequent parameters allows us to search across the whole space of possible solutions; and the available knowledge about the system under identification is used to generate the initial populations of the evolutionary algorithms, which causes the search process to obtain good solutions more quickly. The performance of the proposed method is shown by solving two different problems: the fuzzy modeling of some three-dimensional surfaces and the computation of the maintenance costs of electrical medium-voltage lines in Spanish towns. The results obtained are compared with other kinds of techniques: evolutionary learning processes to design TSK and Mamdani-type fuzzy rule-based systems in the first case, and classical regression and neural modeling…
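
    For readers unfamiliar with TSK systems: a single inference step combines each rule's antecedent firing strength with a linear consequent via a weighted average. The rule parameters below are arbitrary placeholders, not the paper's evolved knowledge base:

```python
import math

# Two first-order TSK rules over one input variable x:
# (centre, width) of a Gaussian antecedent set, (a, b) of consequent y = a*x + b.
RULES = [
    ((0.0, 1.0), (1.0, 0.0)),
    ((5.0, 1.5), (-0.5, 4.0)),
]

def tsk_output(x):
    num = den = 0.0
    for (c, s), (a, b) in RULES:
        w = math.exp(-((x - c) / s) ** 2)   # firing strength of the rule
        num += w * (a * x + b)              # weight each linear consequent
        den += w
    return num / den                        # weighted-average defuzzification

print(round(tsk_output(0.0), 3))
```

The evolutionary process in the paper searches over exactly these antecedent and consequent parameters; the inference step itself stays this simple.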

  3. A New Rule-Based System for the Construction and Structural Characterization of Artificial Proteins

    NASA Astrophysics Data System (ADS)

    Štambuk, Nikola; Konjevoda, Paško; Gotovac, Nikola

    In this paper, we present a new rule-based system for an artificial protein design incorporating ternary amino acid polarity (polar, nonpolar, and neutral). It may be used to design de novo α and β protein fold structures and mixed class proteins. The targeted molecules are artificial proteins with important industrial and biomedical applications, related to the development of diagnostic-therapeutic peptide pharmaceuticals, antibody mimetics, peptide vaccines, new nanobiomaterials and engineered protein scaffolds.

  4. Feature Extraction Of Retinal Images Interfaced With A Rule-Based Expert System

    NASA Astrophysics Data System (ADS)

    Ishag, Naseem; Connell, Kevin; Bolton, John

    1988-12-01

    Feature vectors are automatically extracted from a library of digital retinal images after considerable image processing. Main features extracted are location of optic disc, cup-to-disc ratio using Hough transform techniques and histogram and binary enhancement algorithms, and blood vessel locations. These feature vectors are used to form a relational data base of the images. Relational operations are then used to extract pertinent information from the data base to form replies to queries from the rule-based expert system.

  5. A Metrics Taxonomy and Reporting Strategy for Rule-Based Alerts.

    PubMed

    Krall, Michael; Gerace, Alexander

    2015-01-01

    The authors developed an action-oriented alerts taxonomy, organized by structure, actions, and implicit intended process outcomes, using a set of 333 rule-based alerts at Kaiser Permanente Northwest (KPNW). They identified 9 major and 17 overall classes of alerts and developed a specific metric approach for 5 of these classes, including the 3 most numerous ones in KPNW, which together account for 224 (67%) of the alerts. PMID:26057684

  6. Temporal Representation in Semantic Graphs

    SciTech Connect

    Levandoski, J J; Abdulla, G M

    2007-08-07

    A wide range of knowledge discovery and analysis applications, ranging from business to biological, make use of semantic graphs when modeling relationships and concepts. Most of the semantic graphs used in these applications are assumed to be static pieces of information, meaning that the temporal evolution of concepts and relationships is not taken into account. Guided by the need for more advanced semantic graph queries involving temporal concepts, this paper surveys the existing work involving temporal representations in semantic graphs.

  7. Semantic web data warehousing for caGrid

    PubMed Central

    McCusker, James P; Phillips, Joshua A; Beltrán, Alejandra González; Finkelstein, Anthony; Krauthammer, Michael

    2009-01-01

    The National Cancer Institute (NCI) is developing caGrid as a means for sharing cancer-related data and services. As more data sets become available on caGrid, we need effective ways of accessing and integrating this information. Although the data models exposed on caGrid are semantically well annotated, it is currently up to the caGrid client to infer relationships between the different models and their classes. In this paper, we present a Semantic Web-based data warehouse (Corvus) for creating relationships among caGrid models. This is accomplished through the transformation of semantically-annotated caBIG® Unified Modeling Language (UML) information models into Web Ontology Language (OWL) ontologies that preserve those semantics. We demonstrate the validity of the approach by Semantic Extraction, Transformation and Loading (SETL) of data from two caGrid data sources, caTissue and caArray, as well as alignment and query of those sources in Corvus. We argue that semantic integration is necessary for integration of data from distributed web services and that Corvus is a useful way of accomplishing this. Our approach is generalizable and of broad utility to researchers facing similar integration challenges. PMID:19796399

  8. Semantic web data warehousing for caGrid.

    PubMed

    McCusker, James P; Phillips, Joshua A; González Beltrán, Alejandra; Finkelstein, Anthony; Krauthammer, Michael

    2009-01-01

    The National Cancer Institute (NCI) is developing caGrid as a means for sharing cancer-related data and services. As more data sets become available on caGrid, we need effective ways of accessing and integrating this information. Although the data models exposed on caGrid are semantically well annotated, it is currently up to the caGrid client to infer relationships between the different models and their classes. In this paper, we present a Semantic Web-based data warehouse (Corvus) for creating relationships among caGrid models. This is accomplished through the transformation of semantically-annotated caBIG Unified Modeling Language (UML) information models into Web Ontology Language (OWL) ontologies that preserve those semantics. We demonstrate the validity of the approach by Semantic Extraction, Transformation and Loading (SETL) of data from two caGrid data sources, caTissue and caArray, as well as alignment and query of those sources in Corvus. We argue that semantic integration is necessary for integration of data from distributed web services and that Corvus is a useful way of accomplishing this. Our approach is generalizable and of broad utility to researchers facing similar integration challenges. PMID:19796399

  9. Combined model- and rule-based controller synthesis with application to helicopter flight

    NASA Astrophysics Data System (ADS)

    Jiang, Tian-Yue

    This thesis deals with the synthesis of combined (nonlinear) model-based and (fuzzy logic) rule-based controllers, along with their application to the helicopter flight control problem. The synthesis involves superimposing two control techniques in order to meet both stability and performance objectives. One is the model-based control technique, which is based on inversion of an approximate model of the real system. The other is the rule-based control technique, which adaptively cancels the inversion errors caused by the approximate model inversion. There are two major aspects of the research effort in this thesis. The first is the development of the adaptive rule-based (fuzzy logic) controllers. The linguistic rule weights and defuzzification output weights in the controllers are adapted for ultimate boundedness of the tracking errors. Numerical results from a helicopter flight control problem indicate improvement and demonstrate the effectiveness of the control technique. The second aspect of this research work is the extension of the synthesis to account for control limits. In this thesis, a control-saturation-related rule bank, used in conjunction with the adaptive fuzzy logic controller, is designed to trade off system performance for closed-loop stability when a tendency towards control amplitude and/or rate saturation is detected. Simulation results from both a fixed-wing aircraft trajectory control problem and a helicopter flight control problem show the effectiveness of the synthesis method and the resulting controller in avoiding control saturations.

  10. Associations between rule-based parenting practices and child screen viewing: A cross-sectional study

    PubMed Central

    Kesten, Joanna M.; Sebire, Simon J.; Turner, Katrina M.; Stewart-Brown, Sarah; Bentley, Georgina; Jago, Russell

    2015-01-01

    Background Child screen viewing (SV) is positively associated with poor health indicators. Interventions addressing rule-based parenting practices may offer an effective means of limiting SV. This study examined associations between rule-based parenting practices (limit and collaborative rule setting) and SV in 6- to 8-year-old children. Methods An online survey of 735 mothers in 2011 assessed: time that children spent engaged in SV activities; and the use of limit and collaborative rule setting. Logistic regression was used to examine the extent to which limit and collaborative rule setting were associated with SV behaviours. Results ‘Always’ setting limits was associated with more TV viewing, computer, smartphone and game-console use, and a positive association was found between ‘always’ setting limits for game-console use and multi-SV (in girls). Associations were stronger in mothers of girls compared to mothers of boys. ‘Sometimes’ setting limits was associated with more TV viewing. There was no association between ‘sometimes’ setting limits and computer, game-console or smartphone use. There was a negative association between collaborative rule setting and game-console use in boys. Conclusions Limit setting is associated with greater SV. Collaborative rule setting may be effective for managing boys' game-console use. More research is needed to understand rule-based parenting practices. PMID:26844054

  11. An Evaluation and Implementation of Rule-Based Home Energy Management System Using the Rete Algorithm

    PubMed Central

    Fujita, Naotaka; Yoshihisa, Tomoki; Tsukamoto, Masahiko

    2014-01-01

    In recent years, sensors have become popular, and the Home Energy Management System (HEMS) plays an important role in saving energy without a decrease in QoL (Quality of Life). Many rule-based HEMSs have been proposed, and almost all of them assume “IF-THEN” rules. The Rete algorithm is a typical pattern-matching algorithm for IF-THEN rules. We have previously proposed a rule-based Home Energy Management System (HEMS) using the Rete algorithm. In the proposed system, rules for managing energy are processed by smart taps in the network, and the loads for processing rules and collecting data are distributed among the smart taps. In addition, the number of processes and the amount of collected data are reduced by processing rules based on the Rete algorithm. In this paper, we evaluate the proposed system by simulation. In the simulation environment, rules are processed by the smart tap that relates to the action part of each rule. In addition, we implemented the proposed system as a HEMS using smart taps. PMID:25136672
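
    A toy version of IF-THEN rule firing for a HEMS can be written as naive forward chaining; the real Rete algorithm additionally caches partial matches in a node network so it need not re-test every rule against every fact on each cycle. The device names and rules here are invented:

```python
# Toy HEMS rules: (IF conditions on facts, THEN (fact, new value)).
RULES = [
    ({"room_occupied": False, "light_on": True}, ("light_on", False)),
    ({"temperature_high": True, "ac_on": False}, ("ac_on", True)),
]

def run_rules(facts):
    """Naively re-match all rules until no rule can fire (fixpoint)."""
    changed = True
    while changed:
        changed = False
        for cond, (key, value) in RULES:
            if all(facts.get(k) == v for k, v in cond.items()) and facts.get(key) != value:
                facts[key] = value       # fire the rule's THEN part
                changed = True
    return facts

state = run_rules({"room_occupied": False, "light_on": True,
                   "temperature_high": True, "ac_on": False})
print(state)
```

The cost of this naive loop grows with rules × facts on every cycle, which is precisely the overhead Rete's match caching avoids.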

  12. An evaluation and implementation of rule-based Home Energy Management System using the Rete algorithm.

    PubMed

    Kawakami, Tomoya; Fujita, Naotaka; Yoshihisa, Tomoki; Tsukamoto, Masahiko

    2014-01-01

    In recent years, sensors have become popular, and the Home Energy Management System (HEMS) plays an important role in saving energy without a decrease in QoL (Quality of Life). Many rule-based HEMSs have been proposed, and almost all of them assume "IF-THEN" rules. The Rete algorithm is a typical pattern-matching algorithm for IF-THEN rules. We have previously proposed a rule-based Home Energy Management System (HEMS) using the Rete algorithm. In the proposed system, rules for managing energy are processed by smart taps in the network, and the loads for processing rules and collecting data are distributed among the smart taps. In addition, the number of processes and the amount of collected data are reduced by processing rules based on the Rete algorithm. In this paper, we evaluate the proposed system by simulation. In the simulation environment, rules are processed by the smart tap that relates to the action part of each rule. In addition, we implemented the proposed system as a HEMS using smart taps. PMID:25136672

  13. A rule-based approach for identifying obesity and its comorbidities in medical discharge summaries.

    PubMed

    Mishra, Ninad K; Cummo, David M; Arnzen, James J; Bonander, Jason

    2009-01-01

    OBJECTIVE Evaluate the effectiveness of a simple rule-based approach in classifying medical discharge summaries according to indicators for obesity and 15 associated comorbidities as part of the 2008 i2b2 Obesity Challenge. METHODS The authors applied a rule-based approach that looked for occurrences of morbidity-related keywords and identified the types of assertions in which those keywords occurred. The documents were then classified using a simple scoring algorithm based on a mapping of the assertion types to possible judgment categories. MEASUREMENTS Results for the challenge were evaluated based on macro F-measure. We report micro and macro F-measure results for all morbidities combined and for each morbidity separately. RESULTS Our rule-based approach achieved micro and macro F-measures of 0.97 and 0.77, respectively, ranking fifth among the entries submitted by 28 teams participating in the classification task based on textual judgments and substantially outperforming the average for the challenge. CONCLUSIONS As shown by its ranking in the challenge results, this approach performed relatively well under conditions in which limited training data existed for some judgment categories. Further, the approach held up well in relation to more complex approaches applied to this classification task. The approach could be enhanced by the addition of expert rules to model more complex medical reasoning. PMID:19390102
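
    The keyword-plus-assertion idea can be sketched as follows; the negation cues and the scoring rule are invented simplifications, not the authors' actual system:

```python
import re

NEGATIONS = ("no ", "denies ", "negative for ")

def classify(text, keyword):
    """Find the keyword, classify each mention's assertion, then score."""
    judgments = []
    for sentence in re.split(r"[.;]\s*", text.lower()):
        if keyword in sentence:
            if any(neg in sentence for neg in NEGATIONS):
                judgments.append("absent")
            else:
                judgments.append("present")
    if not judgments:
        return "unmentioned"
    # Simple scoring: any positive assertion outweighs negated mentions.
    return "present" if "present" in judgments else "absent"

note = "Patient denies diabetes. Obesity noted on exam; hypertension controlled."
print(classify(note, "diabetes"), classify(note, "obesity"))
```

A real system would handle negation scope, hedging ("possible"), and family history, which is where most of the remaining errors come from.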

  14. Converting a rule-based expert system into a belief network.

    PubMed

    Korver, M; Lucas, P J

    1993-01-01

    The theory of belief networks offers a relatively new approach for dealing with uncertain information in knowledge-based (expert) systems. In contrast with the heuristic techniques for reasoning with uncertainty employed in many rule-based expert systems, the theory of belief networks is mathematically sound, based on techniques from probability theory. It therefore seems attractive to convert existing rule-based expert systems into belief networks. In this article we discuss the design of a belief network reformulation of the diagnostic rule-based expert system HEPAR. For the purpose of this experiment we have studied several typical pieces of medical knowledge represented in the HEPAR system. It turned out that, due to the differences in the type of knowledge represented and in the formalism used to represent uncertainty, much of the medical knowledge required for building the belief network concerned could not be extracted from HEPAR. As a consequence, significant additional knowledge acquisition was required. However, the objects and attributes defined in the HEPAR system, as well as the conditions in production rules mentioning these objects and attributes, were useful for guiding the selection of the statistical variables for building the belief network. The mapping of objects and attributes in HEPAR to statistical variables is discussed in detail. PMID:8289533

  15. Strategies for adding adaptive learning mechanisms to rule-based diagnostic expert systems

    NASA Technical Reports Server (NTRS)

    Stclair, D. C.; Sabharwal, C. L.; Bond, W. E.; Hacke, Keith

    1988-01-01

    Rule-based diagnostic expert systems can be used to perform many of the diagnostic chores necessary in today's complex space systems. These expert systems typically take a set of symptoms as input and produce diagnostic advice as output. The primary objective of such expert systems is to provide accurate and comprehensive advice that can be used to help return the space system in question to nominal operation. The development and maintenance of diagnostic expert systems are time- and labor-intensive, since the services of both knowledge engineer(s) and domain expert(s) are required. The use of adaptive learning mechanisms to incrementally evaluate and refine rules promises to reduce both the time and labor costs associated with such systems. This paper describes the basic adaptive learning mechanisms of strengthening, weakening, generalization, discrimination, and discovery. Next, basic strategies are discussed for adding these learning mechanisms to rule-based diagnostic expert systems. These strategies support the incremental evaluation and refinement of rules in the knowledge base by comparing the set of advice given by the expert system (A) with the correct diagnosis (C). Techniques are described for selecting those rules in the knowledge base which should participate in adaptive learning. The strategies presented may be used with a wide variety of learning algorithms, and they are applicable to a large number of rule-based diagnostic expert systems. They may be used to provide either immediate or deferred updating of the knowledge base.
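
    The strengthening/weakening mechanisms, driven by comparing the advice produced with the correct diagnosis, might be sketched like this; the rule contents and learning rate are invented:

```python
def update_strengths(rules, fired, correct, rate=0.1):
    """rules: {rule_id: strength in [0, 1]}; fired: {rule_id: advice it produced};
    correct: the confirmed diagnosis."""
    for rule_id, advice in fired.items():
        if advice == correct:
            # Strengthen rules whose advice matched the correct diagnosis.
            rules[rule_id] = min(1.0, rules[rule_id] + rate)
        else:
            # Weaken rules that produced wrong advice.
            rules[rule_id] = max(0.0, rules[rule_id] - rate)
    return rules

rules = {"r1": 0.5, "r2": 0.5}
update_strengths(rules, {"r1": "pump_failure", "r2": "sensor_drift"}, "pump_failure")
print(rules)
```

Generalization, discrimination, and discovery go further, editing rule conditions rather than just their strengths, but the same A-versus-C comparison drives them.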

  16. Semantic Webs and Study Skills.

    ERIC Educational Resources Information Center

    Hoover, John J.; Rabideau, Debra K.

    1995-01-01

    Principles for ensuring effective use of semantic webbing in meeting study skill needs of students with learning problems are noted. Important study skills are listed, along with suggested semantic web topics for which subordinate ideas may be developed. Two semantic webs are presented, illustrating the study skills of multiple choice test-taking…

  17. Semantic Search of Web Services

    ERIC Educational Resources Information Center

    Hao, Ke

    2013-01-01

    This dissertation addresses semantic search of Web services using natural language processing. We first survey various existing approaches, focusing on the fact that the expensive costs of current semantic annotation frameworks result in limited use of semantic search for large scale applications. We then propose a vector space model based service…

  18. Exact hybrid particle/population simulation of rule-based models of biochemical systems.

    PubMed

    Hogg, Justin S; Harris, Leonard A; Stover, Lori J; Nair, Niketh S; Faeder, James R

    2014-04-01

    Detailed modeling and simulation of biochemical systems is complicated by the problem of combinatorial complexity, an explosion in the number of species and reactions due to myriad protein-protein interactions and post-translational modifications. Rule-based modeling overcomes this problem by representing molecules as structured objects and encoding their interactions as pattern-based rules. This greatly simplifies the process of model specification, avoiding the tedious and error prone task of manually enumerating all species and reactions that can potentially exist in a system. From a simulation perspective, rule-based models can be expanded algorithmically into fully-enumerated reaction networks and simulated using a variety of network-based simulation methods, such as ordinary differential equations or Gillespie's algorithm, provided that the network is not exceedingly large. Alternatively, rule-based models can be simulated directly using particle-based kinetic Monte Carlo methods. This "network-free" approach produces exact stochastic trajectories with a computational cost that is independent of network size. However, memory and run time costs increase with the number of particles, limiting the size of system that can be feasibly simulated. Here, we present a hybrid particle/population simulation method that combines the best attributes of both the network-based and network-free approaches. The method takes as input a rule-based model and a user-specified subset of species to treat as population variables rather than as particles. The model is then transformed by a process of "partial network expansion" into a dynamically equivalent form that can be simulated using a population-adapted network-free simulator. The transformation method has been implemented within the open-source rule-based modeling platform BioNetGen, and resulting hybrid models can be simulated using the particle-based simulator NFsim. Performance tests show that significant memory savings…
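
    As background, the network-based stochastic simulation the abstract refers to is Gillespie's algorithm. A minimal single-reaction version of the direct method, as a toy stand-in (not BioNetGen or NFsim code):

```python
import random

def gillespie_decay(a0, k, t_end, seed=0):
    """Gillespie direct method for a single degradation reaction A -> 0
    with rate constant k, starting from a0 molecules."""
    rng = random.Random(seed)
    t, a = 0.0, a0
    while a > 0:
        propensity = k * a
        t += rng.expovariate(propensity)   # exponential waiting time to next firing
        if t > t_end:
            break
        a -= 1                             # the reaction consumes one A
    return a

remaining = gillespie_decay(a0=100, k=0.1, t_end=5.0)
print(remaining)
```

With many reactions, the direct method also samples which reaction fires (proportional to its propensity); the hybrid method in the paper applies such population-level updates only to the user-designated species while tracking the rest as particles.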

  19. Exact Hybrid Particle/Population Simulation of Rule-Based Models of Biochemical Systems

    PubMed Central

    Stover, Lori J.; Nair, Niketh S.; Faeder, James R.

    2014-01-01

    Detailed modeling and simulation of biochemical systems is complicated by the problem of combinatorial complexity, an explosion in the number of species and reactions due to myriad protein-protein interactions and post-translational modifications. Rule-based modeling overcomes this problem by representing molecules as structured objects and encoding their interactions as pattern-based rules. This greatly simplifies the process of model specification, avoiding the tedious and error-prone task of manually enumerating all species and reactions that can potentially exist in a system. From a simulation perspective, rule-based models can be expanded algorithmically into fully-enumerated reaction networks and simulated using a variety of network-based simulation methods, such as ordinary differential equations or Gillespie's algorithm, provided that the network is not exceedingly large. Alternatively, rule-based models can be simulated directly using particle-based kinetic Monte Carlo methods. This “network-free” approach produces exact stochastic trajectories with a computational cost that is independent of network size. However, memory and run time costs increase with the number of particles, limiting the size of system that can be feasibly simulated. Here, we present a hybrid particle/population simulation method that combines the best attributes of both the network-based and network-free approaches. The method takes as input a rule-based model and a user-specified subset of species to treat as population variables rather than as particles. The model is then transformed by a process of “partial network expansion” into a dynamically equivalent form that can be simulated using a population-adapted network-free simulator. The transformation method has been implemented within the open-source rule-based modeling platform BioNetGen, and resulting hybrid models can be simulated using the particle-based simulator NFsim. Performance tests show that significant memory
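The core idea of the hybrid scheme can be illustrated in miniature. The sketch below (not BioNetGen/NFsim code; the ligand-receptor model and rate constants are invented for illustration) runs a Gillespie-style simulation in which one abundant species is tracked as a single population counter while the scarcer receptors remain individual particles:

```python
import random

random.seed(0)

# Hypothetical toy model: ligand L is a population variable,
# receptors are particles with a per-particle 'bound' flag.
k_on, k_off = 0.01, 0.1

def simulate(n_ligand=100, n_receptor=50, t_end=10.0):
    ligand = n_ligand                                          # population counter
    receptors = [{"bound": False} for _ in range(n_receptor)]  # particle list
    t = 0.0
    while t < t_end:
        free = [r for r in receptors if not r["bound"]]
        bound = [r for r in receptors if r["bound"]]
        a_on = k_on * ligand * len(free)   # binding propensity
        a_off = k_off * len(bound)         # unbinding propensity
        a_total = a_on + a_off
        if a_total == 0:
            break
        t += random.expovariate(a_total)   # time to next reaction event
        if random.random() * a_total < a_on:
            random.choice(free)["bound"] = True   # L + R -> L.R
            ligand -= 1
        else:
            random.choice(bound)["bound"] = False  # L.R -> L + R
            ligand += 1
    return ligand, sum(r["bound"] for r in receptors)

ligand, bound = simulate()
# mass conservation: each bound receptor consumed exactly one ligand
assert ligand + bound == 100
```

The memory saving is visible even here: the ligand costs one integer regardless of its copy number, while each particle carries its own state.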

  20. Semantator: annotating clinical narratives with semantic web ontologies.

    PubMed

    Song, Dezhao; Chute, Christopher G; Tao, Cui

    2012-01-01

    To facilitate clinical research, clinical data needs to be stored in a machine-processable and understandable way. Manually annotating clinical data is time-consuming. Automatic approaches (e.g., Natural Language Processing systems) have been adopted to convert such data into structured formats; however, the quality of such automatically extracted data may not always be satisfactory. In this paper, we propose Semantator, a semi-automatic tool for document annotation with Semantic Web ontologies. With a loaded free text document and an ontology, Semantator supports the creation/deletion of ontology instances for any document fragment, linking/disconnecting instances with the properties in the ontology, and also enables automatic annotation by connecting to the NCBO annotator and cTAKES. By representing annotations in Semantic Web standards, Semantator supports reasoning based upon the underlying semantics of the owl:disjointWith and owl:equivalentClass predicates. We present discussions based on user experiences of using Semantator. PMID:22779043

  1. Semantator: Annotating Clinical Narratives with Semantic Web Ontologies

    PubMed Central

    Song, Dezhao; Chute, Christopher G.; Tao, Cui

    2012-01-01

    To facilitate clinical research, clinical data needs to be stored in a machine-processable and understandable way. Manually annotating clinical data is time-consuming. Automatic approaches (e.g., Natural Language Processing systems) have been adopted to convert such data into structured formats; however, the quality of such automatically extracted data may not always be satisfactory. In this paper, we propose Semantator, a semi-automatic tool for document annotation with Semantic Web ontologies. With a loaded free text document and an ontology, Semantator supports the creation/deletion of ontology instances for any document fragment, linking/disconnecting instances with the properties in the ontology, and also enables automatic annotation by connecting to the NCBO annotator and cTAKES. By representing annotations in Semantic Web standards, Semantator supports reasoning based upon the underlying semantics of the owl:disjointWith and owl:equivalentClass predicates. We present discussions based on user experiences of using Semantator. PMID:22779043
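The kind of owl:disjointWith reasoning the abstract mentions can be sketched without a full OWL reasoner. Below is a minimal illustration (not Semantator's implementation; the example instance and classes are invented) that stores annotations as RDF-style triples and flags any instance typed with two classes declared disjoint:

```python
# Minimal sketch of disjointness checking over annotation triples.
DISJOINT = "owl:disjointWith"
TYPE = "rdf:type"

triples = {
    ("ex:Aspirin", TYPE, "ex:Medication"),       # annotation instance
    ("ex:Aspirin", TYPE, "ex:Disorder"),         # conflicting annotation
    ("ex:Medication", DISJOINT, "ex:Disorder"),  # ontology axiom
}

def disjoint_violations(triples):
    """Return instances typed with two classes declared disjoint."""
    disjoint = {(s, o) for s, p, o in triples if p == DISJOINT}
    disjoint |= {(o, s) for s, o in disjoint}    # disjointWith is symmetric
    types = {}
    for s, p, o in triples:
        if p == TYPE:
            types.setdefault(s, set()).add(o)
    return [(inst, a, b)
            for inst, classes in sorted(types.items())
            for a in sorted(classes) for b in sorted(classes)
            if (a, b) in disjoint and a < b]

violations = disjoint_violations(triples)
assert violations == [("ex:Aspirin", "ex:Disorder", "ex:Medication")]
```

In a real deployment this check would run over triples serialized in OWL/RDF, so a standard reasoner can draw the same conclusion.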

  2. Universal Semantics in Translation

    ERIC Educational Resources Information Center

    Wang, Zhenying

    2009-01-01

    What and how we translate are questions often argued about. No matter what kind of answers one may give, priority in translation should be granted to meaning, especially those meanings that exist in all concerned languages. In this paper the author defines them as universal sememes, and the study of them as universal semantics, of which…

  3. Latent Semantic Analysis.

    ERIC Educational Resources Information Center

    Dumais, Susan T.

    2004-01-01

    Presents a literature review that covers the following topics related to Latent Semantic Analysis (LSA): (1) LSA overview; (2) applications of LSA, including information retrieval (IR), information filtering, cross-language retrieval, and other IR-related LSA applications; (3) modeling human memory, including the relationship of LSA to other…

  4. Learning Semantic Query Suggestions

    NASA Astrophysics Data System (ADS)

    Meij, Edgar; Bron, Marc; Hollink, Laura; Huurnink, Bouke; de Rijke, Maarten

    An important application of semantic web technology is recognizing human-defined concepts in text. Query transformation is a strategy often used in search engines to derive queries that are able to return more useful search results than the original query and most popular search engines provide facilities that let users complete, specify, or reformulate their queries. We study the problem of semantic query suggestion, a special type of query transformation based on identifying semantic concepts contained in user queries. We use a feature-based approach in conjunction with supervised machine learning, augmenting term-based features with search history-based and concept-specific features. We apply our method to the task of linking queries from real-world query logs (the transaction logs of the Netherlands Institute for Sound and Vision) to the DBpedia knowledge base. We evaluate the utility of different machine learning algorithms, features, and feature types in identifying semantic concepts using a manually developed test bed and show significant improvements over an already high baseline. The resources developed for this paper, i.e., queries, human assessments, and extracted features, are available for download.
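The lexical part of such concept linking can be sketched simply. The toy scorer below (the concept labels, weights, and threshold are invented, and the paper's actual method adds search-history and concept-specific features plus supervised learning) matches query n-grams against concept labels:

```python
# Toy sketch of linking query phrases to knowledge-base concepts.
concepts = {
    "dbpedia:Amsterdam": "Amsterdam",
    "dbpedia:World_War_II": "World War II",
    "dbpedia:News_broadcasting": "News broadcasting",
}

def ngrams(tokens, n_max=3):
    """All 1..n_max-gram phrases of the query."""
    return [" ".join(tokens[i:i + n])
            for n in range(1, n_max + 1)
            for i in range(len(tokens) - n + 1)]

def suggest(query, threshold=0.5):
    tokens = query.lower().split()
    scored = []
    for uri, label in concepts.items():
        label_l = label.lower()
        best = 0.0
        for g in ngrams(tokens):
            if g == label_l:
                best = max(best, 1.0)   # exact label match
            elif g in label_l or label_l in g:
                best = max(best, 0.6)   # partial lexical overlap
        if best >= threshold:
            scored.append((best, uri))
    return [u for _, u in sorted(scored, reverse=True)]

suggestions = suggest("news from amsterdam")
assert suggestions == ["dbpedia:Amsterdam", "dbpedia:News_broadcasting"]
```

A supervised ranker would replace the hand-set weights with learned feature weights, which is where the paper's gains over the lexical baseline come from.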

  5. Environmental Attitudes Semantic Differential.

    ERIC Educational Resources Information Center

    Mehne, Paul R.; Goulard, Cary J.

    This booklet is an evaluation instrument which utilizes semantic differential data to assess environmental attitudes. Twelve concepts are included: regulated access to beaches, urban planning, dune vegetation, wetlands, future cities, reclaiming wetlands for building development, city parks, commercial development of beaches, existing cities,…

  6. Semantic Space Analyst

    Energy Science and Technology Software Center (ESTSC)

    2004-04-15

    The Semantic Space Analyst (SSA) is software for analyzing a text corpus, discovering relationships among terms, and allowing the user to explore that information in different ways. It includes features for displaying and laying out terms and relationships visually, for generating such maps from manual queries, and for discovering differences between corpora. Data can also be exported to Microsoft Excel.
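Term-relationship discovery of the kind SSA performs typically starts from co-occurrence counts. A minimal sketch (not SSA's code; the three-sentence corpus is invented) that counts sentence-level co-occurrence and ranks a term's neighbours:

```python
from collections import Counter
from itertools import combinations

# Toy corpus: each string stands in for one sentence.
corpus = [
    "semantic analysis of text corpora",
    "semantic maps reveal term relationships",
    "term relationships differ between corpora",
]

# Count how often each term pair co-occurs in a sentence.
pair_counts = Counter()
for sentence in corpus:
    terms = set(sentence.split())
    for a, b in combinations(sorted(terms), 2):
        pair_counts[(a, b)] += 1

def related(term):
    """Terms co-occurring with `term`, most frequent first."""
    scores = Counter()
    for (a, b), n in pair_counts.items():
        if a == term:
            scores[b] += n
        elif b == term:
            scores[a] += n
    return [t for t, _ in scores.most_common()]

assert related("term")[0] == "relationships"
```

Comparing two such pair-count tables is one simple way to surface the "differences between corpora" the description mentions.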

  7. THE TWO-LEVEL THEORY OF VERB MEANING: AN APPROACH TO INTEGRATING THE SEMANTICS OF ACTION WITH THE MIRROR NEURON SYSTEM

    PubMed Central

    Kemmerer, David; Castillo, Javier Gonzalez

    2010-01-01

    Verbs have two separate levels of meaning. One level reflects the uniqueness of every verb and is called the “root.” The other level consists of a more austere representation that is shared by all the verbs in a given class and is called the “event structure template.” We explore the following hypotheses about how, with specific reference to the motor features of action verbs, these two distinct levels of semantic representation might correspond to two distinct levels of the mirror neuron system. Hypothesis 1: Root-level motor features of verb meaning are partially subserved by somatotopically mapped mirror neurons in the left primary motor and/or premotor cortices. Hypothesis 2: Template-level motor features of verb meaning are partially subserved by representationally more schematic mirror neurons in Brodmann area 44 of the left inferior frontal gyrus. Evidence has been accumulating in support of the general neuroanatomical claims made by these two hypotheses—namely, that each level of verb meaning is associated with the designated cortical areas. However, as yet no studies have satisfied all the criteria necessary to support the more specific neurobiological claims made by the two hypotheses—namely, that each level of verb meaning is associated with mirror neurons in the pertinent brain regions. This would require demonstrating that within those regions the same neuronal populations are engaged during (a) the linguistic processing of particular motor features of verb meaning, (b) the execution of actions with the corresponding motor features, and (c) the observation of actions with the corresponding motor features. PMID:18996582

  8. Semantic text mining support for lignocellulose research

    PubMed Central

    2012-01-01

    Background Biofuels produced from biomass are considered to be promising sustainable alternatives to fossil fuels. The conversion of lignocellulose into fermentable sugars for biofuels production requires the use of enzyme cocktails that can efficiently and economically hydrolyze lignocellulosic biomass. As many fungi naturally break down lignocellulose, the identification and characterization of the enzymes involved is a key challenge in the research and development of biomass-derived products and fuels. One approach to meeting this challenge is to mine the rapidly-expanding repertoire of microbial genomes for enzymes with the appropriate catalytic properties. Results Semantic technologies, including natural language processing, ontologies, semantic Web services and Web-based collaboration tools, promise to support users in handling complex data, thereby facilitating knowledge-intensive tasks. An ongoing challenge is to select the appropriate technologies and combine them in a coherent system that brings measurable improvements to the users. We present our ongoing development of a semantic infrastructure in support of genomics-based lignocellulose research. Part of this effort is the automated curation of knowledge from information on fungal enzymes that is available in the literature and genome resources. Conclusions Working closely with fungal biology researchers who manually curate the existing literature, we developed ontological natural language processing pipelines integrated in a Web-based interface to assist them in two main tasks: mining the literature for relevant knowledge, and at the same time providing rich and semantically linked information. PMID:22595090

  9. Neural Substrates of Semantic Prospection – Evidence from the Dementias

    PubMed Central

    Irish, Muireann; Eyre, Nadine; Dermody, Nadene; O’Callaghan, Claire; Hodges, John R.; Hornberger, Michael; Piguet, Olivier

    2016-01-01

    The ability to envisage personally relevant events at a future time point represents an incredibly sophisticated cognitive endeavor and one that appears to be intimately linked to episodic memory integrity. Far less is known regarding the neurocognitive mechanisms underpinning the capacity to envisage non-personal future occurrences, known as semantic future thinking. Moreover the degree of overlap between the neural substrates supporting episodic and semantic forms of prospection remains unclear. To this end, we sought to investigate the capacity for episodic and semantic future thinking in Alzheimer’s disease (n = 15) and disease-matched behavioral-variant frontotemporal dementia (n = 15), neurodegenerative disorders characterized by significant medial temporal lobe (MTL) and frontal pathology. Participants completed an assessment of past and future thinking across personal (episodic) and non-personal (semantic) domains, as part of a larger neuropsychological battery investigating episodic and semantic processing, and their performance was contrasted with 20 age- and education-matched healthy older Controls. Participants underwent whole-brain T1-weighted structural imaging and voxel-based morphometry analysis was conducted to determine the relationship between gray matter integrity and episodic and semantic future thinking. Relative to Controls, both patient groups displayed marked future thinking impairments, extending across episodic and semantic domains. Analyses of covariance revealed that while episodic future thinking deficits could be explained solely in terms of episodic memory proficiency, semantic prospection deficits reflected the interplay between episodic and semantic processing. Distinct neural correlates emerged for each form of future simulation with differential involvement of prefrontal, lateral temporal, and medial temporal regions. Notably, the hippocampus was implicated irrespective of future thinking domain, with the suggestion of

  10. Neural Substrates of Semantic Prospection - Evidence from the Dementias.

    PubMed

    Irish, Muireann; Eyre, Nadine; Dermody, Nadene; O'Callaghan, Claire; Hodges, John R; Hornberger, Michael; Piguet, Olivier

    2016-01-01

    The ability to envisage personally relevant events at a future time point represents an incredibly sophisticated cognitive endeavor and one that appears to be intimately linked to episodic memory integrity. Far less is known regarding the neurocognitive mechanisms underpinning the capacity to envisage non-personal future occurrences, known as semantic future thinking. Moreover the degree of overlap between the neural substrates supporting episodic and semantic forms of prospection remains unclear. To this end, we sought to investigate the capacity for episodic and semantic future thinking in Alzheimer's disease (n = 15) and disease-matched behavioral-variant frontotemporal dementia (n = 15), neurodegenerative disorders characterized by significant medial temporal lobe (MTL) and frontal pathology. Participants completed an assessment of past and future thinking across personal (episodic) and non-personal (semantic) domains, as part of a larger neuropsychological battery investigating episodic and semantic processing, and their performance was contrasted with 20 age- and education-matched healthy older Controls. Participants underwent whole-brain T1-weighted structural imaging and voxel-based morphometry analysis was conducted to determine the relationship between gray matter integrity and episodic and semantic future thinking. Relative to Controls, both patient groups displayed marked future thinking impairments, extending across episodic and semantic domains. Analyses of covariance revealed that while episodic future thinking deficits could be explained solely in terms of episodic memory proficiency, semantic prospection deficits reflected the interplay between episodic and semantic processing. Distinct neural correlates emerged for each form of future simulation with differential involvement of prefrontal, lateral temporal, and medial temporal regions. Notably, the hippocampus was implicated irrespective of future thinking domain, with the suggestion of

  11. Applications and Methods Utilizing the Simple Semantic Web Architecture and Protocol (SSWAP) for Bioinformatics Resource Discovery and Disparate Data and Service Integration

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Scientific data integration and computational service discovery are challenges for the bioinformatic community. This process is made more difficult by the separate and independent construction of biological databases, which makes the exchange of scientific data between information resources difficu...

  12. Semantic Annotation of Existing Geo-Datasets a Case Study of Disaster Response in Netherlands

    NASA Astrophysics Data System (ADS)

    Mobasheri, A.; van Oosterom, P.; Zlatanova, S.; Bakillah, M.

    2013-05-01

    Use of relevant geo-information is one of the important issues in performing different tasks and processes in the disaster response phase. In order to save time and cost, services could be employed for integrating and extracting relevant, up-to-date geo-information. For this purpose, the semantics of geo-information should be explicitly defined. This paper presents our initial results in applying an approach for semantic annotation of existing geo-datasets. In this research, the process of injecting semantic descriptions into geo-datasets (information integration) is called semantic annotation. A web system architecture is presented, and the process of semantic annotation is illustrated using the Meta-Annotation approach. The approach is elaborated through an example in disaster response which utilizes geo-datasets in CityGML format and two Semantic Web languages, RDF and Notation3.

  13. Taxonomy, Ontology and Semantics at Johnson Space Center

    NASA Technical Reports Server (NTRS)

    Berndt, Sarah Ann

    2011-01-01

    At NASA Johnson Space Center (JSC), the Chief Knowledge Officer has been developing the JSC Taxonomy to capitalize on the accomplishments of yesterday while maintaining the flexibility needed for the evolving information environment of today. A clear vision and scope for the semantic system is integral to its success. The vision for the JSC Taxonomy is to connect information stovepipes to present a unified view of information and knowledge across the Center, across organizations, and across decades. Semantic search at JSC means seamless integration of disparate information sets into a single interface. Ever-increasing use, interest, and organizational participation mark successful integration and provide the framework for future application.

  14. UMLS as Knowledge Base-A Rule-Based Expert System Approach to Controlled Medical Vocabulary Management

    PubMed Central

    Cimino, James J.; Hripcsak, George; Johnson, Stephen B.; Friedman, Carol; Fink, Daniel J.; Clayton, Paul D.

    1990-01-01

    The National Library of Medicine is developing a Unified Medical Language System (UMLS) which addresses the need for integration of several large, nationally accepted vocabularies. This is important to the clinical information system under development at the Columbia-Presbyterian Medical Center (CPMC). We are using UMLS components as the core of our effort to integrate existing local CPMC vocabularies which are not among the source vocabularies of the UMLS. We are also using the UMLS to build a knowledge base of vocabulary structure and content such that logical rules can be developed to assist in the management of our integrated vocabularies. At present, the UMLS Semantic Network is used to organize terms which describe laboratory procedures. We have developed a set of rules for identifying undesirable conditions in our vocabulary. We have applied these rules to 526 laboratory test terms and have found ten cases (2%) of definite redundancy and sixty-eight cases (13%) of potential redundancy. The rules have also been used to organize the terminology in new ways that facilitate its management. Using the UMLS model as a vocabulary knowledge base allows us to apply an expert system approach to vocabulary integration and management.
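One of the paper's rule types, redundancy detection among laboratory test terms, can be sketched directly. The example below is illustrative only (the terms are invented, not UMLS content): terms that normalize to the same word sequence are flagged as definite redundancy, and terms that differ only in word order as potential redundancy:

```python
# Hypothetical vocabulary-audit rule for redundant term names.
terms = [
    "Glucose, serum",
    "Serum glucose",
    "glucose  serum",
    "Sodium, urine",
]

def normalize(term):
    """Lowercase, strip punctuation; return word sequence and word bag."""
    words = term.lower().replace(",", " ").split()
    return tuple(words), tuple(sorted(words))

def audit(terms):
    by_exact, by_bag = {}, {}
    definite, potential = [], []
    for t in terms:
        exact, bag = normalize(t)
        if exact in by_exact:
            definite.append((by_exact[exact], t))   # same normalized name
        elif bag in by_bag:
            potential.append((by_bag[bag], t))      # word-order variant
        by_exact.setdefault(exact, t)
        by_bag.setdefault(bag, t)
    return definite, potential

definite, potential = audit(terms)
assert definite == [("Glucose, serum", "glucose  serum")]
assert potential == [("Glucose, serum", "Serum glucose")]
```

The paper's actual rules operate over the UMLS Semantic Network rather than raw strings, but the pattern of mechanically flagging candidates for human review is the same.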

  15. Rule-based modeling: a computational approach for studying biomolecular site dynamics in cell signaling systems

    PubMed Central

    Chylek, Lily A.; Harris, Leonard A.; Tung, Chang-Shung; Faeder, James R.; Lopez, Carlos F.

    2013-01-01

    Rule-based modeling was developed to address the limitations of traditional approaches for modeling chemical kinetics in cell signaling systems. These systems consist of multiple interacting biomolecules (e.g., proteins), which themselves consist of multiple parts (e.g., domains, linear motifs, and sites of phosphorylation). Consequently, biomolecules that mediate information processing generally have the potential to interact in multiple ways, with the number of possible complexes and post-translational modification states tending to grow exponentially with the number of binary interactions considered. As a result, only large reaction networks capture all possible consequences of the molecular interactions that occur in a cell signaling system, which is problematic because traditional modeling approaches for chemical kinetics (e.g., ordinary differential equations) require explicit network specification. This problem is circumvented through representation of interactions in terms of local rules. With this approach, network specification is implicit and model specification is concise. Concise representation results in a coarse graining of chemical kinetics, which is introduced because all reactions implied by a rule inherit the rate law associated with that rule. Coarse graining can be appropriate if interactions are modular, and the coarseness of a model can be adjusted as needed. Rules can be specified using specialized model-specification languages, and recently developed tools designed for specification of rule-based models allow one to leverage powerful software engineering capabilities. A rule-based model comprises a set of rules, which can be processed by general-purpose simulation and analysis tools to achieve different objectives (e.g., to perform either a deterministic or stochastic simulation). PMID:24123887
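The combinatorial arithmetic behind this argument is easy to make concrete. The sketch below (hypothetical phosphosite names) contrasts the exponential species count of an explicit network with the linear number of rules needed to describe the same biology:

```python
from itertools import product

def enumerate_states(sites):
    """All modification states of one molecule: the explicit-network view."""
    return [dict(zip(sites, flags))
            for flags in product([False, True], repeat=len(sites))]

sites = ["Y317", "S473", "T308"]      # invented site names
states = enumerate_states(sites)
assert len(states) == 2 ** len(sites) == 8   # explicit species count

# The rule-based view: one phosphorylation rule per site suffices,
# because a rule's pattern ignores the state of the other sites.
rules = [f"phosphorylate {s}" for s in sites]
assert len(rules) == 3
```

With 10 sites the explicit network already needs 1024 species, while the rule count stays at 10; this is the modularity assumption (one rate law per rule) the abstract describes as coarse graining.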

  16. A conceptual model to empower software requirements conflict detection and resolution with rule-based reasoning

    NASA Astrophysics Data System (ADS)

    Ahmad, Sabrina; Jalil, Intan Ermahani A.; Ahmad, Sharifah Sakinah Syed

    2016-08-01

    It is seldom technical issues that impede the process of eliciting software requirements. The involvement of multiple stakeholders usually leads to conflicts, and therefore conflict detection and resolution effort is crucial. This paper forwards an improved conceptual model to assist the conflict detection and resolution effort, extending the model's ability and improving overall performance. The significance of the new model is that it empowers automated detection of conflicts and their severity levels with rule-based reasoning.

  17. Distinct pathways for rule-based retrieval and spatial mapping of memory representations in hippocampal neurons

    PubMed Central

    Navawongse, Rapeechai; Eichenbaum, Howard

    2013-01-01

    Hippocampal neurons encode events within the context in which they occurred, a fundamental feature of episodic memory. Here we explored the sources of event and context information represented by hippocampal neurons during the retrieval of object associations in rats. Temporary inactivation of the medial prefrontal cortex differentially reduced the selectivity of rule-based object associations represented by hippocampal neuronal firing patterns but did not affect spatial firing patterns. By contrast, inactivation of the medial entorhinal cortex resulted in a pervasive reorganization of hippocampal mappings of spatial context and events. These results suggest distinct and cooperative prefrontal and medial temporal mechanisms in memory representation. PMID:23325238

  18. Approach to verifying completeness and consistency in a rule-based expert system

    SciTech Connect

    Suwa, M.; Scott, A.C.; Shortliffe, E.H.

    1982-08-01

    We describe a program for verifying that a set of rules in an expert system comprehensively spans the knowledge of a specialized domain. The program has been devised and tested within the context of the ONCOCIN System, a rule-based consultant for clinical oncology. The stylized format of ONCOCIN's rules has allowed the automatic detection of a number of common errors as the knowledge base has been developed. This capability suggests a general mechanism for correcting many problems with knowledge base completeness and consistency before they can cause performance errors.
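The checks such a verifier performs can be sketched generically. The toy rules below are invented (not ONCOCIN content); the sketch detects conflicts (identical conditions with different conclusions) and completeness gaps (observed condition sets no rule covers):

```python
# Rules as (condition-set, conclusion) pairs; conditions are frozensets.
rules = [
    (frozenset({"fever", "rash"}), "protocol_A"),
    (frozenset({"fever", "rash"}), "protocol_B"),   # conflicts with rule 1
    (frozenset({"fever"}), "protocol_C"),
]

def find_conflicts(rules):
    """Same conditions, different conclusions: an inconsistency."""
    seen, conflicts = {}, []
    for cond, concl in rules:
        if cond in seen and seen[cond] != concl:
            conflicts.append((sorted(cond), seen[cond], concl))
        seen.setdefault(cond, concl)
    return conflicts

def uncovered(rules, observed_conditions):
    """Condition sets matched by no rule: a completeness gap."""
    return [c for c in observed_conditions
            if not any(cond <= c for cond, _ in rules)]

conflicts = find_conflicts(rules)
assert conflicts == [(["fever", "rash"], "protocol_A", "protocol_B")]
assert uncovered(rules, [frozenset({"cough"})]) == [frozenset({"cough"})]
```

The value of a stylized rule format, as in ONCOCIN, is precisely that conditions and conclusions are machine-readable enough for checks like these to run automatically.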

  19. Rule-based induction method for haplotype comparison and identification of candidate disease loci

    PubMed Central

    2012-01-01

    There is a need for methods that are able to identify rare variants that cause low or moderate penetrance disease susceptibility. To answer this need, we introduce a rule-based haplotype comparison method, Haplous, which identifies haplotypes within multiple samples from phased genotype data and compares them within and between sample groups. We demonstrate that Haplous is able to accurately identify haplotypes that are identical by descent, exclude common haplotypes in the studied population and select rare haplotypes from the data. Our analysis of three families with multiple individuals affected by lymphoma identified several interesting haplotypes shared by distantly related patients. PMID:22429919
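The core filtering idea, keeping haplotype segments shared by all affected individuals while excluding those also seen in population controls, can be sketched on toy data (the sequences below are invented; Haplous itself works on phased genotype data, not raw strings):

```python
# Invented phased haplotypes for three affected relatives and two controls.
affected = {
    "case1": "AGTCCGA",
    "case2": "AGTCTGA",
    "case3": "AGTCCGA",
}
controls = {"ctrl1": "AGGCCGA", "ctrl2": "TGTCCGA"}

def shared_rare_windows(affected, controls, width=4):
    """Windows identical in all cases and absent from all controls."""
    length = len(next(iter(affected.values())))
    hits = []
    for start in range(length - width + 1):
        window = {h[start:start + width] for h in affected.values()}
        if len(window) == 1:                       # shared by every case
            hap = window.pop()
            in_controls = any(c[start:start + width] == hap
                              for c in controls.values())
            if not in_controls:                    # rare in the population
                hits.append((start, hap))
    return hits

hits = shared_rare_windows(affected, controls)
assert hits == [(0, "AGTC")]
```

Surviving windows are candidate identical-by-descent regions, to be followed up as possible disease loci.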

  20. Semantic interoperability between clinical and public health information systems for improving public health services.

    PubMed

    Lopez, Diego M; Blobel, Bernd G M E

    2007-01-01

    Improving public health services requires comprehensively integrating all services, including medical, social, community, and public health ones. Therefore, developing integrated health information services has to start by considering the business processes, rules, and information semantics of the involved domains. The paper proposes a business and information architecture for the specification of a future-proof national integrated system, concretely the requirements for semantic integration between public health surveillance and clinical information systems. The architecture is a semantically interoperable approach because it describes business processes, rules, and information semantics based on national policy documents and expressed in a standard language such as the Unified Modeling Language (UML). Having the enterprise and information models formalized, the development of semantically interoperable Health IT components/services is supported. PMID:17901617

  1. From Data to Semantic Information

    NASA Astrophysics Data System (ADS)

    Floridi, Luciano

    2003-06-01

    There is no consensus yet on the definition of semantic information. This paper contributes to the current debate by criticising and revising the Standard Definition of semantic Information (SDI) as meaningful data, in favour of the Dretske-Grice approach: meaningful and well-formed data constitute semantic information only if they also qualify as contingently truthful. After a brief introduction, SDI is criticised for providing necessary but insufficient conditions for the definition of semantic information. SDI is incorrect because truth-values do not supervene on semantic information, and misinformation (that is, false semantic information) is not a type of semantic information, but pseudo-information, that is not semantic information at all. This is shown by arguing that none of the reasons for interpreting misinformation as a type of semantic information is convincing, whilst there are compelling reasons to treat it as pseudo-information. As a consequence, SDI is revised to include a necessary truth-condition. The last section summarises the main results of the paper and indicates the important implications of the revised definition for the analysis of the deflationary theories of truth, the standard definition of knowledge and the classic, quantitative theory of semantic information.

  2. Predicting the relative vulnerability of near-coastal species to climate change using a rule-based ecoinformatics approach

    EPA Science Inventory

    Background/Questions/Methods Near-coastal species are threatened by multiple climate change drivers, including temperature increases, ocean acidification, and sea level rise. To identify vulnerable habitats, geographic regions, and species, we developed a sequential, rule-based...

  3. Protein interaction sentence detection using multiple semantic kernels

    PubMed Central

    2011-01-01

    Background Detection of sentences that describe protein-protein interactions (PPIs) in biomedical publications is a challenging and unresolved pattern recognition problem. Many state-of-the-art approaches for this task employ kernel classification methods, in particular support vector machines (SVMs). In this work we propose a novel data integration approach that utilises semantic kernels and a kernel classification method that is a probabilistic analogue to SVMs. Semantic kernels are created from statistical information gathered from large amounts of unlabelled text using lexical semantic models. Several semantic kernels are then fused into an overall composite classification space. In this initial study, we use simple features in order to examine whether the use of combinations of kernels constructed using word-based semantic models can improve PPI sentence detection. Results We show that combinations of semantic kernels lead to statistically significant improvements in recognition rates and receiver operating characteristic (ROC) scores over the plain Gaussian kernel, when applied to a well-known labelled collection of abstracts. The proposed kernel composition method also allows us to automatically infer the most discriminative kernels. Conclusions The results from this paper indicate that using semantic information from unlabelled text, and combinations of such information, can be valuable for classification of short texts such as PPI sentences. This study, however, is only a first step in evaluation of semantic kernels and probabilistic multiple kernel learning in the context of PPI detection. The method described herein is modular, and can be applied with a variety of feature types, kernels, and semantic models, in order to facilitate full extraction of interacting proteins. PMID:21569604
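The kernel-fusion step described here has a simple algebraic form: a non-negative weighted sum of Gram matrices is again a valid kernel. A minimal numpy sketch (random stand-in features; real semantic kernels would come from lexical semantic models over unlabelled text):

```python
import numpy as np

rng = np.random.default_rng(0)
X1 = rng.normal(size=(5, 10))   # features from hypothetical semantic model 1
X2 = rng.normal(size=(5, 20))   # features from hypothetical semantic model 2

K1 = X1 @ X1.T                  # linear Gram matrix per representation
K2 = X2 @ X2.T
weights = [0.7, 0.3]            # illustrative fusion weights
K = weights[0] * K1 + weights[1] * K2

# A non-negative combination of PSD Gram matrices stays PSD,
# so K can be handed to any kernel classifier.
eigenvalues = np.linalg.eigvalsh(K)
assert np.all(eigenvalues >= -1e-9)
assert K.shape == (5, 5)
```

In the paper the weights are not fixed by hand; the probabilistic multiple-kernel learner infers them, which is how the most discriminative kernels are identified.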

  4. The value of the Semantic Web in the laboratory.

    PubMed

    Frey, Jeremy G

    2009-06-01

    The Semantic Web is beginning to impact on the wider chemical and physical sciences, beyond the earlier adopted bio-informatics. While useful in large-scale data driven science with automated processing, these technologies can also help integrate the work of smaller scale laboratories producing diverse data. The semantics aid the discovery, reliable re-use of data, provide improved provenance and facilitate automated processing by increased resilience to changes in presentation and reduced ambiguity. The Semantic Web, its tools and collections are not yet competitive with well-established solutions to current problems. It is in the reduced cost of instituting solutions to new problems that the versatility of Semantic Web-enabled data and resources will make their mark once the more general-purpose tools are more available. PMID:19508917

  5. Evaluation of a UMLS Auditing Process of Semantic Type Assignments

    PubMed Central

    Gu, Huanying; Hripcsak, George; Chen, Yan; Morrey, C. Paul; Elhanan, Gai; Cimino, James J.; Geller, James; Perl, Yehoshua

    2007-01-01

    The UMLS is a terminological system that integrates many source terminologies. Each concept in the UMLS is assigned one or more semantic types from the Semantic Network, an upper level ontology for biomedicine. Due to the complexity of the UMLS, errors exist in the semantic type assignments. Finding assignment errors may unearth modeling errors. Even with sophisticated tools, discovering assignment errors requires manual review. In this paper we describe the evaluation of an auditing project of UMLS semantic type assignments. We studied the performance of the auditors who reviewed potential errors. We found that four auditors, interacting according to a multi-step protocol, identified a high rate of errors (one or more errors in 81% of concepts studied) and that results were sufficiently reliable (0.67 to 0.70) for the two most common types of errors. However, reliability was low for each individual auditor, suggesting that review of potential errors is resource-intensive. PMID:18693845
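The abstract does not name its reliability statistic, but inter-rater agreement of this kind is commonly quantified with Cohen's kappa, sketched below on invented ratings (not the study's data):

```python
from collections import Counter

# Invented audit judgments from two raters over eight concepts.
rater_a = ["error", "error", "ok", "ok", "error", "ok", "ok", "error"]
rater_b = ["error", "ok",    "ok", "ok", "error", "ok", "error", "error"]

def cohens_kappa(a, b):
    """Agreement corrected for chance: (p_o - p_e) / (1 - p_e)."""
    n = len(a)
    observed = sum(x == y for x, y in zip(a, b)) / n
    freq_a, freq_b = Counter(a), Counter(b)
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / n ** 2
    return (observed - expected) / (1 - expected)

kappa = cohens_kappa(rater_a, rater_b)
assert abs(kappa - 0.5) < 1e-9
```

Values in the 0.6-0.7 band, like those the study reports, are conventionally read as substantial but imperfect agreement, consistent with the paper's conclusion that review remains resource-intensive.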

  7. A model-based simulator for testing rule-based decision support systems for mechanical ventilation of ARDS patients.

    PubMed Central

    Sailors, R. M.; East, T. D.

    1994-01-01

    A model-based simulator was developed for testing rule-based decision support systems that manage ventilator therapy of patients with the Adult Respiratory Distress Syndrome (ARDS). The simulator is based on a multi-compartment model of the human body and mathematical models of the gas exchange abnormalities associated with ARDS. Initial testing of this system indicates that model-based simulators are a viable tool for testing rule-based expert systems used in health-care. PMID:7949849
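
The mass-balance core of a multi-compartment simulator can be sketched generically. This is a toy linear transfer model with hypothetical rate constants, not the ARDS gas-exchange model the paper describes.

```python
def simulate(x0, rates, dt, steps):
    """Forward-Euler integration of linear transfer between compartments.

    x0: initial amount in each compartment.
    rates[i][j]: first-order transfer rate from compartment i to j
                 (the values used in the test are hypothetical, not ARDS physiology).
    """
    x = list(x0)
    n = len(x)
    for _ in range(steps):
        # amount moved along each directed edge during this time step
        flows = [[rates[i][j] * x[i] * dt for j in range(n)] for i in range(n)]
        for i in range(n):
            for j in range(n):
                if i != j:
                    x[i] -= flows[i][j]
                    x[j] += flows[i][j]
    return x
```

Mass is conserved exactly because every amount subtracted from one compartment is added to another; a real simulator layers gas-exchange abnormality models on top of such a skeleton.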

  8. Living With Semantic Dementia

    PubMed Central

    Sage, Karen; Wilkinson, Ray; Keady, John

    2014-01-01

    Semantic dementia is a variant of frontotemporal dementia and is a recently recognized diagnostic condition. There has been some research quantitatively examining care partner stress and burden in frontotemporal dementia. There are, however, few studies exploring the subjective experiences of family members caring for those with frontotemporal dementia. Increased knowledge of such experiences would allow service providers to tailor intervention, support, and information better. We used a case study design, with thematic narrative analysis applied to interview data, to describe the experiences of a wife and son caring for a husband/father with semantic dementia. Using this approach, we identified four themes: (a) living with routines, (b) policing and protecting, (c) making connections, and (d) being adaptive and flexible. Each of these themes was shared and extended, with the importance of routines in everyday life highlighted. The implications for policy, practice, and research are discussed. PMID:24532121

  9. Semantic interpretation of nominalizations

    SciTech Connect

    Hull, R.D.; Gomez, F.

    1996-12-31

    A computational approach to the semantic interpretation of nominalizations is described. Interpretation of nominalizations involves three tasks: deciding whether the nominalization is being used in a verbal or non-verbal sense; disambiguating the nominalized verb when a verbal sense is used; and determining the fillers of the thematic roles of the verbal concept or predicate of the nominalization. A verbal sense can be recognized by the presence of modifiers that represent the arguments of the verbal concept. It is these same modifiers which provide the semantic clues to disambiguate the nominalized verb. In the absence of explicit modifiers, heuristics are used to discriminate between verbal and non-verbal senses. A correspondence between verbs and their nominalizations is exploited so that only a small amount of additional knowledge is needed to handle the nominal form. These methods are tested in the domain of encyclopedic texts and the results are shown.

  10. Practical Semantic Astronomy

    NASA Astrophysics Data System (ADS)

    Graham, Matthew; Gray, N.; Burke, D.

    2010-01-01

    Many activities in the era of data-intensive astronomy are predicated upon some transference of domain knowledge and expertise from human to machine. The semantic infrastructure required to support this is no longer a pipe dream of computer science but a set of practical engineering challenges, more concerned with deployment and performance details than AI abstractions. The application of such ideas promises to help in such areas as contextual data access, exploiting distributed annotation and heterogeneous sources, and intelligent data dissemination and discovery. In this talk, we will review the status and use of semantic technologies in astronomy, particularly to address current problems in astroinformatics, with such projects as SKUA and AstroCollation.

  11. Meaningful Physical Changes Mediate Lexical-Semantic Integration: Top-Down and Form-Based Bottom-Up Information Sources Interact in the N400

    ERIC Educational Resources Information Center

    Lotze, Netaya; Tune, Sarah; Schlesewsky, Matthias; Bornkessel-Schlesewsky, Ina

    2011-01-01

    Models of how the human brain reconstructs an intended meaning from a linguistic input often draw upon the N400 event-related potential (ERP) component as evidence. Current accounts of the N400 emphasise either the role of contextually induced lexical preactivation of a critical word (Lau, Phillips, & Poeppel, 2008) or the ease of integration into…

  12. Parameters of Semantic Multisensory Integration Depend on Timing and Modality Order among People on the Autism Spectrum: Evidence from Event-Related Potentials

    ERIC Educational Resources Information Center

    Russo, N.; Mottron, L.; Burack, J. A.; Jemel, B.

    2012-01-01

    Individuals with autism spectrum disorders (ASD) report difficulty integrating simultaneously presented visual and auditory stimuli (Iarocci & McDonald, 2006), albeit showing enhanced perceptual processing of unisensory stimuli, as well as an enhanced role of perception in higher-order cognitive tasks (Enhanced Perceptual Functioning (EPF) model;…

  13. Towards rule-based metabolic databases: a requirement analysis based on KEGG.

    PubMed

    Richter, Stephan; Fetzer, Ingo; Thullner, Martin; Centler, Florian; Dittrich, Peter

    2015-01-01

    Knowledge of metabolic processes is collected in easily accessible online databases which are increasing rapidly in content and detail. Using these databases for the automatic construction of metabolic network models requires high accuracy and consistency. In this bipartite study we evaluate current accuracy and consistency problems using the KEGG database as a prominent example and propose design principles for dealing with such problems. In the first half, we present our computational approach for classifying inconsistencies and provide an overview of the classes of inconsistencies we identified. We detected inconsistencies both for database entries referring to substances and entries referring to reactions. In the second part, we present strategies to deal with the detected problem classes. We especially propose a rule-based database approach which allows for the inclusion of parameterised molecular species and parameterised reactions. Detailed case-studies and a comparison of explicit networks from KEGG with their anticipated rule-based representation underline the applicability and scalability of this approach. PMID:26547981
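
The proposed rule-based representation can be pictured as parameterised reaction templates that are expanded into explicit reactions on demand. A toy sketch follows; the species names and chemistry are illustrative, not KEGG's actual schema.

```python
def elongation_rule(n):
    """Hypothetical parameterised reaction: a carbon chain of length n grows by two."""
    return (f"FA({n}) + acetyl-CoA", f"FA({n + 2}) + CoA")

def instantiate(rule, parameter_values):
    """Expand one parameterised rule into a list of explicit reactions."""
    return [rule(n) for n in parameter_values]
```

One rule plus a parameter range replaces a whole family of near-duplicate explicit entries, which is the scalability argument the paper makes.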

  14. Automated detection of pain from facial expressions: a rule-based approach using AAM

    NASA Astrophysics Data System (ADS)

    Chen, Zhanli; Ansari, Rashid; Wilkie, Diana J.

    2012-02-01

    In this paper, we examine the problem of using video analysis to assess pain, an important problem especially for critically ill, non-communicative patients, and people with dementia. We propose and evaluate an automated method to detect the presence of pain manifested in patient videos using a unique and large collection of cancer patient videos captured in patient homes. The method is based on detecting pain-related facial action units defined in the Facial Action Coding System (FACS), which is widely used for objective assessment in pain analysis. In our research, a person-specific Active Appearance Model (AAM) based on the Project-Out Inverse Compositional Method is trained for each patient individually for the modeling purpose. A flexible representation of the shape model is used in a rule-based method that is better suited than the more commonly used classifier-based methods for application to the cancer patient videos, in which pain-related facial actions occur infrequently and more subtly. The rule-based method relies on feature points that provide facial action cues and are extracted from the shape vertices of the AAM, which have a natural correspondence to face muscular movement. In this paper, we investigate the detection of a commonly used set of pain-related action units in both the upper and lower face. Our detection results show good agreement with the results obtained by three trained FACS coders who independently reviewed and scored the action units in the cancer patient videos.

  15. A fuzzy rule based metamodel for monthly catchment nitrate fate simulations

    NASA Astrophysics Data System (ADS)

    van der Heijden, S.; Haberlandt, U.

    2015-12-01

    The high complexity of nitrate dynamics and corresponding deterministic models make it very appealing to employ easy, fast, and parsimonious modelling alternatives for decision support. This study presents a fuzzy rule based metamodel consisting of eight fuzzy modules, which is able to simulate nitrate fluxes in large watersheds from their diffuse sources via surface runoff, interflow, and base flow to the catchment outlet. The fuzzy rules are trained on a database established with a calibrated SWAT model for an investigation area of 1000 km2. The metamodel performs well on this training area and on two out of three validation areas in different landscapes, with a Nash-Sutcliffe coefficient of around 0.5-0.7 for the monthly nitrate calculations. The fuzzy model proves to be fast, requires only few readily available input data, and the rule based model structure facilitates a common-sense interpretation of the model, which makes the presented approach suitable for the development of decision support tools.
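
The Nash-Sutcliffe efficiency used to score the monthly nitrate simulations compares model error against the variance of the observations: 1 is a perfect fit, and 0 means the model does no better than predicting the observed mean. A sketch:

```python
def nash_sutcliffe(observed, simulated):
    """Nash-Sutcliffe efficiency: 1 is a perfect fit, 0 matches the mean predictor."""
    mean_obs = sum(observed) / len(observed)
    residual = sum((o - s) ** 2 for o, s in zip(observed, simulated))
    variance = sum((o - mean_obs) ** 2 for o in observed)
    return 1.0 - residual / variance
```

On this scale the reported 0.5-0.7 sits well above the mean predictor but short of a deterministic model's typical calibration fit.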

  16. A Rule Based Approach to ISS Interior Volume Control and Layout

    NASA Technical Reports Server (NTRS)

    Peacock, Brian; Maida, Jim; Fitts, David; Dory, Jonathan

    2001-01-01

    Traditional human factors design involves the development of human factors requirements based on a desire to accommodate a certain percentage of the intended user population. As the product is developed human factors evaluation involves comparison between the resulting design and the specifications. Sometimes performance metrics are involved that allow leniency in the design requirements given that the human performance result is satisfactory. Clearly such approaches may work but they give rise to uncertainty and negotiation. An alternative approach is to adopt human factors design rules that articulate a range of each design continuum over which there are varying outcome expectations and interactions with other variables, including time. These rules are based on a consensus of human factors specialists, designers, managers and customers. The International Space Station faces exactly this challenge in interior volume control, which is based on anthropometric, performance and subjective preference criteria. This paper describes the traditional approach and then proposes a rule-based alternative. The proposed rules involve spatial, temporal and importance dimensions. If successful this rule-based concept could be applied to many traditional human factors design variables and could lead to a more effective and efficient contribution of human factors input to the design process.

  17. Fuzzy rule based estimation of agricultural diffuse pollution concentration in streams.

    PubMed

    Singh, Raj Mohan

    2008-04-01

    Outflow from agricultural fields carries diffuse pollutants such as nutrients, pesticides and herbicides into nearby streams, a matter of serious concern for water managers and environmental researchers. Both the application of chemicals to the fields and their transport into streams are uncertain, which complicates reliable stream quality prediction. The chemical characteristics of the applied chemical and the percentage of area under application are among the main inputs that determine pollutant concentrations in streams, and both inputs and outputs may contain measurement errors. A fuzzy rule based model suits such uncertain inputs by incorporating overlapping membership functions for each input, even in limited data availability situations. In this study, this property of fuzzy sets to address uncertainty in the input-output relationship is utilized to estimate concentrations of the herbicide atrazine in a stream. Data from the White River basin, part of the Mississippi river system, are used to develop the fuzzy rule based models. The performance of the developed methodology is found encouraging. PMID:19295100
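
The core machinery of such a model is small: triangular membership functions on each input, min for rule firing strength, max aggregation across rules, and centroid defuzzification. A minimal Mamdani sketch with made-up membership parameters; the paper's actual rules and variables for atrazine are not reproduced here.

```python
def tri(x, a, b, c):
    """Triangular membership function: feet at a and c, peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def mamdani(inputs, rules, out_points):
    """Mamdani inference: min within a rule, max across rules, centroid defuzzification.

    rules: list of ([membership function per input], output membership function).
    """
    aggregated = []
    for y in out_points:
        degree = 0.0
        for antecedents, consequent in rules:
            # rule firing strength: weakest antecedent (fuzzy AND)
            firing = min(mf(v) for mf, v in zip(antecedents, inputs))
            # clip the consequent at the firing strength, keep the strongest rule
            degree = max(degree, min(firing, consequent(y)))
        aggregated.append(degree)
    denominator = sum(aggregated)
    if denominator == 0.0:
        return 0.0
    return sum(y * m for y, m in zip(out_points, aggregated)) / denominator
```

Overlapping triangles are what let a single crisp input activate several rules at once, which is how the model absorbs input uncertainty.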

  18. Net-Help: An On-Line Rule Based Help System For The Network

    NASA Astrophysics Data System (ADS)

    Pitchai, Anandhi; Chaganty, Srinivas; Morgan, Thomas W.

    1988-03-01

    The application of expert systems to network management is a promising area. Much work is in progress in the area of network diagnosis and maintenance. Here we explore the possibility of employing expert systems in a new way, namely in providing on-line help to users in a network environment. Since the knowledge base can be quite large and incrementally expanded, expert system techniques are ideal. This paper describes a research effort to design and develop a prototype for such a rule based expert system. As an initial step, Net-Help version 1 has been developed for users on a UNIX host networked with other hosts over Ethernet. The paper discusses the features and the usefulness of the system. It also deals with the implementation of the rule based Net-Help system on an AT&T 3B2 computer using the OPS83 production system language. A sample session and the related tree structure are included to illustrate the user interface and logical structure. Net-Help will serve as an on-line consultant which is interactive on both the user end and the network end. Network information is acquired as and when it is needed by Net-Help interactions. This is mainly done for validation checks performed before suggesting commands such as file transfer. Similar tools can also be used for tutoring purposes. Net-Help can be easily expanded to provide help in non-UNIX environments or when computers are networked using a different protocol.

  19. A multilayer perceptron solution to the match phase problem in rule-based artificial intelligence systems

    NASA Technical Reports Server (NTRS)

    Sartori, Michael A.; Passino, Kevin M.; Antsaklis, Panos J.

    1992-01-01

    In rule-based AI planning, expert, and learning systems, it is often the case that the left-hand-sides of the rules must be repeatedly compared to the contents of some 'working memory'. The traditional approach to solve such a 'match phase problem' for production systems is to use the Rete Match Algorithm. Here, a new technique using a multilayer perceptron, a particular artificial neural network model, is presented to solve the match phase problem for rule-based AI systems. A syntax for premise formulas (i.e., the left-hand-sides of the rules) is defined, and working memory is specified. From this, it is shown how to construct a multilayer perceptron that finds all of the rules which can be executed for the current situation in working memory. The complexity of the constructed multilayer perceptron is derived in terms of the maximum number of nodes and the required number of layers. A method for reducing the number of layers to at most three is also presented.
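
The construction can be illustrated for the simplest case, conjunctive premises over binary working-memory facts: one threshold unit per rule, firing exactly when all of its premise facts are present. This sketch uses hard step activations and omits the paper's general premise syntax and its layer-reduction results.

```python
def step_unit(inputs, weights, bias):
    """Hard-threshold (step) neuron."""
    return 1 if sum(w * x for w, x in zip(weights, inputs)) + bias > 0 else 0

def build_match_network(rule_premises, num_facts):
    """One unit per rule; the unit fires iff every premise fact is in working memory."""
    units = []
    for premise in rule_premises:
        weights = [1 if fact in premise else 0 for fact in range(num_facts)]
        # bias of -(|premise| - 0.5) makes the unit an AND gate over its premise facts
        units.append((weights, -len(premise) + 0.5))
    return units

def match(network, working_memory):
    """Return the indices of all rules whose premises are satisfied."""
    return [i for i, (w, b) in enumerate(network) if step_unit(working_memory, w, b)]
```

Like the Rete network it replaces, the whole match phase then reduces to one forward pass per working-memory state.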

  20. Mapping Rule-Based And Stochastic Constraints To Connection Architectures: Implication For Hierarchical Image Processing

    NASA Astrophysics Data System (ADS)

    Miller, Michael I.; Roysam, Badrinath; Smith, Kurt R.

    1988-10-01

    Essential to the solution of ill posed problems in vision and image processing is the need to use object constraints in the reconstruction. While Bayesian methods have shown the greatest promise, a fundamental difficulty has persisted in that many of the available constraints are in the form of deterministic rules rather than as probability distributions and are thus not readily incorporated as Bayesian priors. In this paper, we propose a general method for mapping a large class of rule-based constraints to their equivalent stochastic Gibbs' distribution representation. This mapping allows us to solve stochastic estimation problems over rule-generated constraint spaces within a Bayesian framework. As part of this approach we derive a method based on Langevin's stochastic differential equation and a regularization technique based on the classical autologistic transfer function that allows us to update every site simultaneously regardless of the neighbourhood structure. This allows us to implement a completely parallel method for generating the constraint sets corresponding to the regular grammar languages on massively parallel networks. We illustrate these ideas by formulating the image reconstruction problem based on a hierarchy of rule-based and stochastic constraints, and derive a fully parallel estimator structure. We also present results computed on the AMT DAP500 massively parallel digital computer, a mesh-connected 32x32 array of processing elements which are configured in a Single-Instruction, Multiple Data stream architecture.

  1. Models of Relevant Cue Integration in Name Retrieval

    ERIC Educational Resources Information Center

    Lombardi, Luigi; Sartori, Giuseppe

    2007-01-01

    Semantic features have different levels of importance in indexing a target concept. The article proposes that semantic relevance, an algorithmically derived measure based on concept descriptions, may efficiently capture the relative importance of different semantic features. Three models of how semantic features are integrated in terms of…

  2. Semantic Enhancement for Enterprise Data Management

    NASA Astrophysics Data System (ADS)

    Ma, Li; Sun, Xingzhi; Cao, Feng; Wang, Chen; Wang, Xiaoyuan; Kanellos, Nick; Wolfson, Dan; Pan, Yue

    Taking customer data as an example, the paper presents an approach to enhance the management of enterprise data by using Semantic Web technologies. Customer data is the most important kind of core business entity a company uses repeatedly across many business processes and systems, and customer data management (CDM) is becoming critical for enterprises because it keeps a single, complete and accurate record of customers across the enterprise. Existing CDM systems focus on integrating customer data from all customer-facing channels and front and back office systems through multiple interfaces, as well as publishing customer data to different applications. To make effective use of the CDM system, this paper investigates semantic query and analysis over the integrated and centralized customer data, enabling automatic classification and relationship discovery. We have implemented these features over IBM Websphere Customer Center, and shown the prototype to our clients. We believe that our study and experiences are valuable for both the Semantic Web and data management communities.

  3. Extraction Of Adverse Events From Clinical Documents To Support Decision Making Using Semantic Preprocessing.

    PubMed

    Gaebel, Jan; Kolter, Till; Arlt, Felix; Denecke, Kerstin

    2015-01-01

    Clinical documentation is usually stored in unstructured format in electronic health records (EHR). Processing the information is inconvenient and time consuming and should be enhanced by computer systems. In this paper, a rule-based method is introduced that identifies adverse events that occurred during treatment and are documented in the EHR. For this purpose, clinical documents are transformed into a semantic structure from which adverse events are extracted. The method is evaluated in a user study with neurosurgeons. In comparison to a bag-of-words classification using support vector machines, our approach achieved comparably good results of 65% recall and 78% precision. In conclusion, the rule-based method generates promising results that can support physicians' decision making. Because of the structured format, the data can be reused for other purposes as well. PMID:26262330

  4. Graph Mining Meets the Semantic Web

    SciTech Connect

    Lee, Sangkeun; Sukumar, Sreenivas R; Lim, Seung-Hwan

    2015-01-01

    The Resource Description Framework (RDF) and SPARQL Protocol and RDF Query Language (SPARQL) were introduced about a decade ago to enable flexible schema-free data interchange on the Semantic Web. Today, data scientists use the framework as a scalable graph representation for integrating, querying, exploring and analyzing data sets hosted at different sources. With increasing adoption, the need for graph mining capabilities for the Semantic Web has emerged. We address that need through implementation of three popular iterative Graph Mining algorithms (Triangle count, Connected component analysis, and PageRank). We implement these algorithms as SPARQL queries, wrapped within Python scripts. We evaluate the performance of our implementation on 6 real world data sets and show graph mining algorithms (that have a linear-algebra formulation) can indeed be unleashed on data represented as RDF graphs using the SPARQL query interface.
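
The triangle count the authors express as a SPARQL query has a compact set-intersection formulation, which is also the intuition behind its linear-algebra version. A plain-Python sketch over an undirected edge list (each edge listed once, no self-loops); this illustrates the algorithm itself, not the SPARQL wrapping described in the paper.

```python
def triangle_count(edges):
    """Count triangles in an undirected graph given as (u, v) pairs.

    Assumes each edge appears exactly once and there are no self-loops.
    """
    adjacency = {}
    for u, v in edges:
        adjacency.setdefault(u, set()).add(v)
        adjacency.setdefault(v, set()).add(u)
    # common neighbours of an edge's endpoints each close one triangle
    per_edge = sum(len(adjacency[u] & adjacency[v]) for u, v in edges)
    # every triangle is seen once from each of its three edges
    return per_edge // 3
```

In SPARQL the same join of two adjacency sets becomes a three-triple-pattern query with an ordering filter to avoid the triple counting.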

  5. Using semantic information for processing negation and disjunction in logic programs

    SciTech Connect

    Gaasterland, T. ); Lobo, J. )

    1993-01-01

    There are many applications in which integrity constraints can play an important role. An example is the semantic query optimization method developed by Chakravarthy, Grant, and Minker for definite deductive databases. They use integrity constraints during query processing to prevent the exploration of search space that is bound to fail. In this paper, the authors generalize the semantic query optimization method to apply to negated atoms. The generalized method is referred to as semantic compilation. They show that semantic compilation provides an alternative search space for negative query literals. They also show how semantic compilation can be used to transform a disjunctive database with or without functions and denial constraints without negation into a new disjunctive database that complies with the integrity constraints.

  7. Semi-automatic conversion of BioProp semantic annotation to PASBio annotation

    PubMed Central

    Tsai, Richard Tzong-Han; Dai, Hong-Jie; Huang, Chi-Hsin; Hsu, Wen-Lian

    2008-01-01

    Background Semantic role labeling (SRL) is an important text analysis technique. In SRL, sentences are represented by one or more predicate-argument structures (PAS). Each PAS is composed of a predicate (verb) and several arguments (noun phrases, adverbial phrases, etc.) with different semantic roles, including main arguments (agent or patient) as well as adjunct arguments (time, manner, or location). PropBank is the most widely used PAS corpus and annotation format in the newswire domain. In the biomedical field, however, more detailed and restrictive PAS annotation formats such as PASBio are popular. Unfortunately, due to the lack of an annotated PASBio corpus, no publicly available machine-learning (ML) based SRL systems based on PASBio have been developed. In previous work, we constructed a biomedical corpus based on the PropBank standard called BioProp, on which we developed an ML-based SRL system, BIOSMILE. In this paper, we aim to build a system to convert BIOSMILE's BioProp annotation output to PASBio annotation. Our system consists of BIOSMILE in combination with a BioProp-PASBio rule-based converter, and an additional semi-automatic rule generator. Results Our first experiment evaluated our rule-based converter's performance independently from BIOSMILE performance. The converter achieved an F-score of 85.29%. The second experiment evaluated the combined system (BIOSMILE + rule-based converter). The system achieved an F-score of 69.08% for PASBio's 29 verbs. Conclusion Our approach allows PAS conversion between BioProp and PASBio annotation using BIOSMILE alongside our newly developed semi-automatic rule generator and rule-based converter. Our system can match the performance of other state-of-the-art domain-specific ML-based SRL systems and can be easily customized for PASBio application development. PMID:19091017

  8. Redundancy in perceptual and linguistic experience: comparing feature-based and distributional models of semantic representation.

    PubMed

    Riordan, Brian; Jones, Michael N

    2011-04-01

    Since their inception, distributional models of semantics have been criticized as inadequate cognitive theories of human semantic learning and representation. A principal challenge is that the representations derived by distributional models are purely symbolic and are not grounded in perception and action; this challenge has led many to favor feature-based models of semantic representation. We argue that the amount of perceptual and other semantic information that can be learned from purely distributional statistics has been underappreciated. We compare the representations of three feature-based and nine distributional models using a semantic clustering task. Several distributional models demonstrated semantic clustering comparable with that based on feature-based representations. Furthermore, when trained on child-directed speech, the same distributional models perform as well as sensorimotor-based feature representations of children's lexical semantic knowledge. These results suggest that, to a large extent, information relevant for extracting semantic categories is redundantly coded in perceptual and linguistic experience. Detailed analyses of the semantic clusters of the feature-based and distributional models also reveal that the models make use of complementary cues to semantic organization from the two data streams. Rather than conceptualizing feature-based and distributional models as competing theories, we argue that future focus should be on understanding the cognitive mechanisms humans use to integrate the two sources. PMID:25164298
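
The kind of purely distributional statistic at issue can be made concrete in a few lines: count co-occurrences within a word window, then compare words by cosine similarity of their count vectors. A toy sketch; the distributional models in the comparison add weighting and dimensionality reduction on top of counts like these.

```python
from collections import defaultdict
from math import sqrt

def cooccurrence_vectors(sentences, window=2):
    """For every word, count the words appearing within `window` positions of it."""
    vectors = defaultdict(lambda: defaultdict(int))
    for words in sentences:
        for i, w in enumerate(words):
            lo, hi = max(0, i - window), min(len(words), i + window + 1)
            for j in range(lo, hi):
                if j != i:
                    vectors[w][words[j]] += 1
    return vectors

def cosine(u, v):
    """Cosine similarity of two sparse count vectors (dicts)."""
    dot = sum(count * v.get(word, 0) for word, count in u.items())
    norm_u = sqrt(sum(c * c for c in u.values()))
    norm_v = sqrt(sum(c * c for c in v.values()))
    return dot / (norm_u * norm_v) if norm_u and norm_v else 0.0
```

Words that share contexts ("dog" and "cat" both co-occurring with "runs") end up with similar vectors, which is how distributional models recover semantic categories without any perceptual features.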

  9. A federated semantic metadata registry framework for enabling interoperability across clinical research and care domains.

    PubMed

    Sinaci, A Anil; Laleci Erturkmen, Gokce B

    2013-10-01

    In order to enable secondary use of Electronic Health Records (EHRs) by bridging the interoperability gap between clinical care and research domains, in this paper, a unified methodology and the supporting framework is introduced which brings together the power of metadata registries (MDR) and semantic web technologies. We introduce a federated semantic metadata registry framework by extending the ISO/IEC 11179 standard, and enable integration of data element registries through Linked Open Data (LOD) principles where each Common Data Element (CDE) can be uniquely referenced, queried and processed to enable the syntactic and semantic interoperability. Each CDE and their components are maintained as LOD resources enabling semantic links with other CDEs, terminology systems and with implementation dependent content models; hence facilitating semantic search, more effective reuse and semantic interoperability across different application domains. There are several important efforts addressing the semantic interoperability in healthcare domain such as IHE DEX profile proposal, CDISC SHARE and CDISC2RDF. Our architecture complements these by providing a framework to interlink existing data element registries and repositories for multiplying their potential for semantic interoperability to a greater extent. Open source implementation of the federated semantic MDR framework presented in this paper is the core of the semantic interoperability layer of the SALUS project which enables the execution of the post marketing safety analysis studies on top of existing EHR systems. PMID:23751263

  10. Rule-based knowledge aggregation for large-scale protein sequence analysis of influenza A viruses

    PubMed Central

    Miotto, Olivo; Tan, Tin Wee; Brusic, Vladimir

    2008-01-01

    Background The explosive growth of biological data provides opportunities for new statistical and comparative analyses of large information sets, such as alignments comprising tens of thousands of sequences. In such studies, sequence annotations frequently play an essential role, and reliable results depend on metadata quality. However, the semantic heterogeneity and annotation inconsistencies in biological databases greatly increase the complexity of aggregating and cleaning metadata. Manual curation of datasets, traditionally favoured by life scientists, is impractical for studies involving thousands of records. In this study, we investigate quality issues that affect major public databases, and quantify the effectiveness of an automated metadata extraction approach that combines structural and semantic rules. We applied this approach to more than 90,000 influenza A records, to annotate sequences with protein name, virus subtype, isolate, host, geographic origin, and year of isolation. Results Over 40,000 annotated Influenza A protein sequences were collected by combining information from more than 90,000 documents from NCBI public databases. Metadata values were automatically extracted, aggregated and reconciled from several document fields by applying user-defined structural rules. For each property, values were recovered from ≥88.8% of records, with accuracy exceeding 96% in most cases. Because of semantic heterogeneity, each property required up to six different structural rules to be combined. Significant quality differences between databases were found: GenBank documents yield values more reliably than documents extracted from GenPept. Using a simple set of semantic rules and a reasoner, we reconstructed relationships between sequences from the same isolate, thus identifying 7640 isolates. Validation of isolate metadata against a simple ontology highlighted more than 400 inconsistencies, leading to over 3,000 property value corrections. Conclusion To
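
A structural extraction rule of the kind described can be pictured as a field-specific pattern applied to a record's definition line. The patterns, field names, and line format below are illustrative inventions, not the rule set the authors applied to GenBank/GenPept records.

```python
import re

# Hypothetical structural rules: one pattern per metadata property.
RULES = {
    "subtype": re.compile(r"\((H\d+N\d+)\)"),
    "year": re.compile(r"/(\d{4})\("),
    "isolate": re.compile(r"\((A/[^()]+)\("),
}

def extract(definition_line, rules=RULES):
    """Apply each structural rule to a definition line, keeping the fields that match."""
    record = {}
    for field, pattern in rules.items():
        match = pattern.search(definition_line)
        if match:
            record[field] = match.group(1)
    return record
```

Because real definition lines vary in layout, the paper combines up to six such rules per property and reconciles their outputs, rather than trusting any single pattern.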

  11. A combined hydrological and fuzzy rule based model for catchment scale nitrate dynamics

    NASA Astrophysics Data System (ADS)

    Shrestha, R. R.; Bardossy, A.; Rode, M.

    2006-12-01

    The diffuse nitrate pollution in rivers is driven by a complex interaction of hydrological and bio-chemical processes. Due to limited process understanding and restricted data availability, physically based approaches for the simulation of these dynamics are still associated with large uncertainties. Therefore we developed a combined deterministic and data-driven approach, which consists of a spatially distributed water balance model, WaSiM-ETH, for the simulation of the different hydrological flow components and a fuzzy rule based model (FRBM) for the simulation of nitrate concentration in the river. Hydrological flow components are considered the dominant driving variables for the dynamic behaviour of nitrate concentrations in surface water. The TOPMODEL approach is used for water balance and simulation of runoff components in the WaSiM-ETH model. The model is calibrated using the automatic parameter estimation program PEST. The simulated subsurface and surface runoff components are taken as input variables for the fuzzy rule based nitrate transport model. In addition to these runoff components, mean air temperature is included as an input variable to account for seasonal variability. The FRBM consists of a Mamdani-type "IF-THEN" fuzzy rule system with triangular membership functions in both the input and the output. Thirteen rule systems are used to represent the dynamics of the nitrate concentration in the river. The fuzzy rules are derived from a combination of expert knowledge and the input-output data using the simulated annealing optimization algorithm. The study was undertaken using 6 years of daily hydrological and nutrient time series data from the Weida catchment, which is a 100 km2 subcatchment of the Weisse Elster in the Elbe river basin, Germany. The results of the study show that the combined deterministic and fuzzy rule based model can give a good simulation of catchment scale nitrate dynamics. The WaSiM-ETH model produced a very good match
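The Mamdani-type inference the abstract describes can be sketched as follows. The two rules, membership breakpoints, and output centroids below are invented for illustration; the paper's actual model uses thirteen rule systems calibrated by simulated annealing.

```python
def tri(x, a, b, c):
    """Triangular membership function: rises from a to b, falls from b to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def mamdani_nitrate(subsurface_flow, temperature):
    """Two illustrative rules (values hypothetical):
    R1: IF flow is HIGH AND temp is LOW  THEN nitrate is HIGH
    R2: IF flow is LOW  AND temp is HIGH THEN nitrate is LOW
    AND is min; defuzzification is a weighted mean of output centroids."""
    w1 = min(tri(subsurface_flow, 5, 10, 15), tri(temperature, -5, 0, 10))
    w2 = min(tri(subsurface_flow, 0, 3, 8), tri(temperature, 10, 20, 30))
    centroids = {1: 8.0, 2: 2.0}  # hypothetical output-set centroids, mg/l
    total = w1 + w2
    if total == 0:
        return None  # no rule fires for this input
    return (w1 * centroids[1] + w2 * centroids[2]) / total
```

High subsurface flow in a cold season activates R1 and yields the high-nitrate consequent, matching the seasonal dynamics the model is meant to capture.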

  12. Non-semantic contributions to "semantic" redundancy gain.

    PubMed

    Shepherdson, Peter; Miller, Jeff

    2016-08-01

    Recently, two groups of researchers have reported redundancy gains (enhanced performance with multiple, redundant targets) in tasks requiring semantic categorization. Here we report two experiments aimed at determining whether the gains found by one of these groups resulted from some form of semantic coactivation. We asked undergraduate psychology students to complete choice RT tasks requiring the semantic categorization of visually presented words, and compared performance with redundant targets from the same semantic category to performance with redundant targets from different semantic categories. If the redundancy gains resulted from the combination of information at a semantic level, they should have been greater in the former than the latter situation. However, our results showed no significant differences in redundancy gain (for latency and accuracy) between same-category and different-category conditions, despite gains appearing in both conditions. Thus, we suggest that redundancy gain in the semantic categorization task may result entirely from statistical facilitation or combination of information at non-semantic levels. PMID:26339718

  13. SAS- Semantic Annotation Service for Geoscience resources on the web

    NASA Astrophysics Data System (ADS)

    Elag, M.; Kumar, P.; Marini, L.; Li, R.; Jiang, P.

    2015-12-01

    There is a growing need for increased integration across the data and model resources that are disseminated on the web to advance their reuse across different earth science applications. Meaningful reuse of resources requires semantic metadata to realize the semantic web vision for allowing pragmatic linkage and integration among resources. Semantic metadata associates standard metadata with resources to turn them into semantically-enabled resources on the web. However, the lack of a common standardized metadata framework, as well as the uncoordinated use of metadata fields across different geo-information systems, has led to a situation in which standards and related Standard Names abound. To address this need, we have designed SAS to provide a bridge between the core ontologies required to annotate resources and information systems in order to enable queries and analysis over annotations from a single environment (web). SAS is one of the services provided by the Geosemantic framework, a decentralized semantic framework to support the integration between models and data and to allow semantically heterogeneous resources to interact with minimum human intervention. Here we present the design of SAS and demonstrate its application for annotating data and models. First we describe how predicates and their attributes are extracted from standards and ingested into the knowledge base of the Geosemantic framework. Then we illustrate the application of SAS in annotating data managed by SEAD and annotating simulation models that have a web interface. SAS is a step in a broader approach to raise the quality of geoscience data and models published on the web and to allow users to better search for, access, and use existing resources based on standard vocabularies that are encoded and published using semantic technologies.

  14. The Semantic Mapping of Archival Metadata to the CIDOC CRM Ontology

    ERIC Educational Resources Information Center

    Bountouri, Lina; Gergatsoulis, Manolis

    2011-01-01

    In this article we analyze the main semantics of archival description, expressed through Encoded Archival Description (EAD). Our main target is to map the semantics of EAD to the CIDOC Conceptual Reference Model (CIDOC CRM) ontology as part of a wider integration architecture of cultural heritage metadata. Through this analysis, it is concluded…

  15. EIIS: An Educational Information Intelligent Search Engine Supported by Semantic Services

    ERIC Educational Resources Information Center

    Huang, Chang-Qin; Duan, Ru-Lin; Tang, Yong; Zhu, Zhi-Ting; Yan, Yong-Jian; Guo, Yu-Qing

    2011-01-01

    The semantic web brings a new opportunity for efficient information organization and search. To meet the special requirements of the educational field, this paper proposes an intelligent search engine enabled by educational semantic support service, where three kinds of searches are integrated into Educational Information Intelligent Search (EIIS)…

  16. Neural changes associated with semantic processing in healthy aging despite intact behavioral performance.

    PubMed

    Lacombe, Jacinthe; Jolicoeur, Pierre; Grimault, Stephan; Pineault, Jessica; Joubert, Sven

    2015-10-01

    Semantic memory recruits an extensive neural network including the left inferior prefrontal cortex (IPC) and the left temporoparietal region, which are involved in semantic control processes, as well as the anterior temporal lobe region (ATL) which is considered to be involved in processing semantic information at a central level. However, little is known about the underlying neuronal integrity of the semantic network in normal aging. Young and older healthy adults carried out a semantic judgment task while their cortical activity was recorded using magnetoencephalography (MEG). Despite equivalent behavioral performance, young adults activated the left IPC to a greater extent than older adults, while the latter group recruited the temporoparietal region bilaterally and the left ATL to a greater extent than younger adults. Results indicate that significant neuronal changes occur in normal aging, mainly in regions underlying semantic control processes, despite an apparent stability in performance at the behavioral level. PMID:26282079

  17. From Science to e-Science to Semantic e-Science: A Heliophysics Case Study

    NASA Technical Reports Server (NTRS)

    Narock, Thomas; Fox, Peter

    2011-01-01

    The past few years have witnessed unparalleled efforts to make scientific data web accessible. The Semantic Web has proven invaluable in this effort; however, much of the literature is devoted to system design, ontology creation, and trials and tribulations of current technologies. In order to fully develop the nascent field of Semantic e-Science we must also evaluate systems in real-world settings. We describe a case study within the field of Heliophysics and provide a comparison of the evolutionary stages of data discovery, from manual to semantically enabled. We describe the socio-technical implications of moving toward automated and intelligent data discovery. In doing so, we highlight how this process enhances what is currently being done manually in various scientific disciplines. Our case study illustrates that Semantic e-Science is more than just semantic search. The integration of search with web services, relational databases, and other cyberinfrastructure is a central tenet of our case study and one that we believe has applicability as a generalized research area within Semantic e-Science. This case study illustrates a specific example of the benefits, and limitations, of semantically replicating data discovery. We show examples of significant reductions in time and effort enabled by Semantic e-Science; yet, we argue that a "complete" solution requires integrating semantic search with other research areas such as data provenance and web services.

  18. Description of a rule-based system for the i2b2 challenge in natural language processing for clinical data.

    PubMed

    Childs, Lois C; Enelow, Robert; Simonsen, Lone; Heintzelman, Norris H; Kowalski, Kimberly M; Taylor, Robert J

    2009-01-01

    The Obesity Challenge, sponsored by Informatics for Integrating Biology and the Bedside (i2b2), a National Center for Biomedical Computing, asked participants to build software systems that could "read" a patient's clinical discharge summary and replicate the judgments of physicians in evaluating presence or absence of obesity and 15 comorbidities. The authors describe their methodology and discuss the results of applying Lockheed Martin's rule-based natural language processing (NLP) capability, ClinREAD. We tailored ClinREAD with medical domain expertise to create assigned default judgments based on the most probable results as defined in the ground truth. It then used rules to collect evidence similar to the evidence that the human judges likely relied upon, and applied a logic module to weigh the strength of all evidence collected to arrive at final judgments. The Challenge results suggest that rule-based systems guided by human medical expertise are capable of solving complex problems in machine processing of medical text. PMID:19390103

  19. Description of a Rule-based System for the i2b2 Challenge in Natural Language Processing for Clinical Data

    PubMed Central

    Childs, Lois C.; Enelow, Robert; Simonsen, Lone; Heintzelman, Norris H.; Kowalski, Kimberly M.; Taylor, Robert J.

    2009-01-01

    The Obesity Challenge, sponsored by Informatics for Integrating Biology and the Bedside (i2b2), a National Center for Biomedical Computing, asked participants to build software systems that could “read” a patient's clinical discharge summary and replicate the judgments of physicians in evaluating presence or absence of obesity and 15 comorbidities. The authors describe their methodology and discuss the results of applying Lockheed Martin's rule-based natural language processing (NLP) capability, ClinREAD. We tailored ClinREAD with medical domain expertise to create assigned default judgments based on the most probable results as defined in the ground truth. It then used rules to collect evidence similar to the evidence that the human judges likely relied upon, and applied a logic module to weigh the strength of all evidence collected to arrive at final judgments. The Challenge results suggest that rule-based systems guided by human medical expertise are capable of solving complex problems in machine processing of medical text. PMID:19390103
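The weigh-the-evidence step this abstract describes can be reduced to a small sketch: rules collect weighted cues from the discharge summary, and the sign of the sum gives the judgment. The cue lexicon, weights, and tie-breaking default below are invented for illustration and are not ClinREAD's actual logic.

```python
def judge_comorbidity(evidence):
    """Sum weighted positive and negative cues collected by upstream
    rules; positive total -> 'present', negative -> 'absent'.
    Ties fall back to a default (here 'present', standing in for the
    most probable class in the ground truth)."""
    weights = {"documented": 2.0, "suggestive": 1.0,
               "denied": -2.0, "ruled out": -2.5}  # hypothetical values
    score = sum(weights.get(cue, 0.0) for cue in evidence)
    if score > 0:
        return "present"
    if score < 0:
        return "absent"
    return "present"  # hypothetical default from class priors
```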

  20. Modeling for (physical) biologists: an introduction to the rule-based approach

    PubMed Central

    Chylek, Lily A; Harris, Leonard A; Faeder, James R; Hlavacek, William S

    2015-01-01

    Models that capture the chemical kinetics of cellular regulatory networks can be specified in terms of rules for biomolecular interactions. A rule defines a generalized reaction, meaning a reaction that permits multiple reactants, each capable of participating in a characteristic transformation and each possessing certain, specified properties, which may be local, such as the state of a particular site or domain of a protein. In other words, a rule defines a transformation and the properties that reactants must possess to participate in the transformation. A rule also provides a rate law. A rule-based approach to modeling enables consideration of mechanistic details at the level of functional sites of biomolecules and provides a facile and visual means for constructing computational models, which can be analyzed to study how system-level behaviors emerge from component interactions. PMID:26178138
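The idea of a rule as a generalized reaction can be sketched concretely: one rule expands into many concrete reactions, one per pair of reactants satisfying the rule's site conditions. The agent encoding, site names, and rate constant below are hypothetical, in the spirit of rule-based languages rather than any particular one.

```python
from itertools import product

def apply_binding_rule(agents):
    """One rule: A binds B whenever A's site 'y' is unphosphorylated
    ('u'), regardless of A's other sites. The rule expands into one
    concrete reaction (with rate constant k_on) per qualifying pair."""
    k_on = 1e6  # hypothetical association rate constant
    eligible_As = [a for a in agents if a["type"] == "A" and a["y"] == "u"]
    all_Bs = [b for b in agents if b["type"] == "B"]
    return [(a["id"], b["id"], k_on) for a, b in product(eligible_As, all_Bs)]
```

The point of the formalism is that the modeler writes the single rule; the combinatorics of which concrete species can react is generated mechanically.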

  1. Reliability Assessment and Robustness Study for Key Navigation Components using Belief Rule Based System

    NASA Astrophysics Data System (ADS)

    You, Yuan; Wang, Liuying; Chang, Leilei; Ling, Xiaodong; Sun, Nan

    2016-02-01

    The gyro device is the key navigation component for maritime tracking and control, and gyro shift is the key factor which influences the performance of the gyro device, which makes conducting the reliability analysis on the gyro device very important. For gyro device reliability analysis, residual life probability prediction plays an essential role, although the process adopted by existing studies is complex. In this study the Belief Rule Base (BRB) system is applied to model the relationship between the time as the input and the residual life probability as the output. Two scenarios are designed to study the robustness of the proposed BRB prediction model. The comparative results show that the BRB prediction model performs better in Scenario II, when the new referenced values are predictable.
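The input-activates-rules mechanism of a belief rule base can be sketched in its simplest form: each rule anchors a referenced input value to an output, and an input between two referenced values activates both bracketing rules with distance-based weights. This is an illustrative reduction, not the paper's model.

```python
def brb_predict(t, rules):
    """Minimal belief-rule-base sketch: 'rules' maps referenced time
    values to residual-life probabilities. An input activates the two
    bracketing rules with weights from linear distance matching."""
    times = sorted(rules)
    if t <= times[0]:
        return rules[times[0]]
    if t >= times[-1]:
        return rules[times[-1]]
    for lo, hi in zip(times, times[1:]):
        if lo <= t <= hi:
            w = (hi - t) / (hi - lo)  # activation weight of the lower rule
            return w * rules[lo] + (1 - w) * rules[hi]
```

Robustness studies of the kind the paper describes then ask how predictions degrade when the referenced values themselves are perturbed.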

  2. Rule-based modelling and simulation of biochemical systems with molecular finite automata.

    PubMed

    Yang, J; Meng, X; Hlavacek, W S

    2010-11-01

    The authors propose a theoretical formalism, molecular finite automata (MFA), to describe individual proteins as rule-based computing machines. The MFA formalism provides a framework for modelling individual protein behaviours and systems-level dynamics via construction of programmable and executable machines. Models specified within this formalism explicitly represent the context-sensitive dynamics of individual proteins driven by external inputs and represent protein-protein interactions as synchronised machine reconfigurations. Both deterministic and stochastic simulations can be applied to quantitatively compute the dynamics of MFA models. They apply the MFA formalism to model and simulate a simple example of a signal-transduction system that involves a MAP kinase cascade and a scaffold protein. PMID:21073243
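The core idea of a protein as a finite automaton can be sketched in a few lines: inputs (e.g. kinase or phosphatase events) drive transitions between internal states, and inputs with no defined transition leave the state unchanged. States and inputs here are illustrative, not taken from the paper's MAP kinase example.

```python
class MolecularFiniteAutomaton:
    """A protein modelled as a finite automaton: a transition table maps
    (current_state, input) pairs to next states."""
    def __init__(self):
        self.state = "inactive"
        self.transitions = {
            ("inactive", "phosphorylate"): "active",
            ("active", "dephosphorylate"): "inactive",
        }
    def step(self, inp):
        # Undefined (state, input) pairs are ignored: state is unchanged.
        self.state = self.transitions.get((self.state, inp), self.state)
        return self.state
```

Protein-protein interactions would then appear as synchronised transitions across two such machines, which is the "machine reconfiguration" view the abstract mentions.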

  3. Modeling for (physical) biologists: an introduction to the rule-based approach

    NASA Astrophysics Data System (ADS)

    Chylek, Lily A.; Harris, Leonard A.; Faeder, James R.; Hlavacek, William S.

    2015-07-01

    Models that capture the chemical kinetics of cellular regulatory networks can be specified in terms of rules for biomolecular interactions. A rule defines a generalized reaction, meaning a reaction that permits multiple reactants, each capable of participating in a characteristic transformation and each possessing certain, specified properties, which may be local, such as the state of a particular site or domain of a protein. In other words, a rule defines a transformation and the properties that reactants must possess to participate in the transformation. A rule also provides a rate law. A rule-based approach to modeling enables consideration of mechanistic details at the level of functional sites of biomolecules and provides a facile and visual means for constructing computational models, which can be analyzed to study how system-level behaviors emerge from component interactions.

  4. Modeling for (physical) biologists: an introduction to the rule-based approach.

    PubMed

    Chylek, Lily A; Harris, Leonard A; Faeder, James R; Hlavacek, William S

    2015-07-01

    Models that capture the chemical kinetics of cellular regulatory networks can be specified in terms of rules for biomolecular interactions. A rule defines a generalized reaction, meaning a reaction that permits multiple reactants, each capable of participating in a characteristic transformation and each possessing certain, specified properties, which may be local, such as the state of a particular site or domain of a protein. In other words, a rule defines a transformation and the properties that reactants must possess to participate in the transformation. A rule also provides a rate law. A rule-based approach to modeling enables consideration of mechanistic details at the level of functional sites of biomolecules and provides a facile and visual means for constructing computational models, which can be analyzed to study how system-level behaviors emerge from component interactions. PMID:26178138

  5. Rule-Based vs. Behavior-Based Self-Deployment for Mobile Wireless Sensor Networks

    PubMed Central

    Urdiales, Cristina; Aguilera, Francisco; González-Parada, Eva; Cano-García, Jose; Sandoval, Francisco

    2016-01-01

    In mobile wireless sensor networks (MWSN), nodes are allowed to move autonomously for deployment. This process is meant: (i) to achieve good coverage; and (ii) to distribute the communication load as homogeneously as possible. Rather than optimizing deployment, reactive algorithms are based on a set of rules or behaviors, so nodes can determine when to move. This paper presents an experimental evaluation of both reactive deployment approaches: rule-based and behavior-based ones. Specifically, we compare a backbone dispersion algorithm with a social potential fields algorithm. Most tests are done under simulation for a large number of nodes in environments with and without obstacles. Results are validated using a small robot network in the real world. Our results show that behavior-based deployment tends to provide better coverage and communication balance, especially for a large number of nodes in areas with obstacles. PMID:27399709

  6. Rule-Based vs. Behavior-Based Self-Deployment for Mobile Wireless Sensor Networks.

    PubMed

    Urdiales, Cristina; Aguilera, Francisco; González-Parada, Eva; Cano-García, Jose; Sandoval, Francisco

    2016-01-01

    In mobile wireless sensor networks (MWSN), nodes are allowed to move autonomously for deployment. This process is meant: (i) to achieve good coverage; and (ii) to distribute the communication load as homogeneously as possible. Rather than optimizing deployment, reactive algorithms are based on a set of rules or behaviors, so nodes can determine when to move. This paper presents an experimental evaluation of both reactive deployment approaches: rule-based and behavior-based ones. Specifically, we compare a backbone dispersion algorithm with a social potential fields algorithm. Most tests are done under simulation for a large number of nodes in environments with and without obstacles. Results are validated using a small robot network in the real world. Our results show that behavior-based deployment tends to provide better coverage and communication balance, especially for a large number of nodes in areas with obstacles. PMID:27399709

  7. A Rule-Based Modeling for the Description of Flexible and Self-healing Business Processes

    NASA Astrophysics Data System (ADS)

    Boukhebouze, Mohamed; Amghar, Youssef; Benharkat, Aïcha-Nabila; Maamar, Zakaria

    In this paper we discuss the importance of ensuring that business processes are at the same time robust and agile. To this end, we consider reviewing the way business processes are managed. For instance we consider offering a flexible way to model processes so that changes in regulations are handled through some self-healing mechanisms. These changes may raise exceptions at run-time if not properly reflected on these processes. To this end we propose a new rule-based model that adopts the ECA rules and is built upon formal tools. The business logic of a process can be summarized with a set of rules that implement an organization's policies. Each business rule is formalized using our ECAPE formalism (Event-Condition-Action-Postcondition-postEvent). This formalism allows translating a process into a graph of rules that is analyzed in terms of reliability and flexibility.
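The ECAPE shape can be sketched as a data structure plus a firing loop: an event selects rules, a condition guards execution, an action mutates process state, and a satisfied postcondition emits a post-event that can chain further rules. Field names and the example rule below are illustrative; the paper's formalism is richer than this reduction.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class ECAPERule:
    """Event-Condition-Action-Postcondition-postEvent rule (sketch)."""
    event: str
    condition: Callable[[dict], bool]
    action: Callable[[dict], None]
    postcondition: Callable[[dict], bool]
    post_event: str

def fire(rules, event, ctx):
    """Fire all rules matching the event whose condition holds.
    A failed postcondition emits no post-event -- the hook where
    self-healing logic could attach."""
    emitted = []
    for r in rules:
        if r.event == event and r.condition(ctx):
            r.action(ctx)
            if r.postcondition(ctx):
                emitted.append(r.post_event)
    return emitted
```

Chaining post-events back into `fire` is what turns a flat rule set into the graph of rules the abstract refers to.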

  8. ChemTok: A New Rule Based Tokenizer for Chemical Named Entity Recognition.

    PubMed

    Akkasi, Abbas; Varoğlu, Ekrem; Dimililer, Nazife

    2016-01-01

    Named Entity Recognition (NER) from text constitutes the first step in many text mining applications. The most important preliminary step for NER systems using machine learning approaches is tokenization, where raw text is segmented into tokens. This study proposes an enhanced rule based tokenizer, ChemTok, which utilizes rules extracted mainly from the training data set. The main novelty of ChemTok is the use of the extracted rules to merge the tokens split in the previous steps, thus producing longer and more discriminative tokens. ChemTok is compared to the tokenization methods utilized by ChemSpot and tmChem. Support Vector Machines and Conditional Random Fields are employed as the learning algorithms. The experimental results show that the classifiers trained on the output of ChemTok outperform all classifiers trained on the output of the other two tokenizers in terms of classification performance and the number of incorrectly segmented entities. PMID:26942193
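The merge step that the abstract identifies as ChemTok's main novelty can be sketched with one hypothetical rule: rejoin tokens that a naive splitter separated around a hyphen, so a chemical name survives as a single, more discriminative token. ChemTok's actual rules are learned from the training data and are more varied than this.

```python
def merge_around_hyphen(tokens):
    """Illustrative merge rule: '1', '-', 'butanol' -> '1-butanol'."""
    merged = []
    i = 0
    while i < len(tokens):
        if tokens[i] == "-" and merged and i + 1 < len(tokens):
            # Fuse the previous token, the hyphen, and the next token.
            merged[-1] = merged[-1] + "-" + tokens[i + 1]
            i += 2
        else:
            merged.append(tokens[i])
            i += 1
    return merged
```

Downstream SVM or CRF taggers then see `1-butanol` as one candidate entity instead of three fragments, which is the source of the reported gains.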

  9. A rule-based screening environmental risk assessment tool derived from EUSES.

    PubMed

    Verdonck, Frederik A M; Boeije, Geert; Vandenberghe, Veronique; Comber, Mike; de Wolf, Watze; Feijtel, Tom; Holt, Martin; Koch, Volker; Lecloux, André; Siebel-Sauer, Angela; Vanrolleghem, Peter A

    2005-03-01

    Within the context and scope of the forthcoming European Union chemical regulations (REACH), there is a need to be able to prioritise the chemicals for evaluation. Therefore, a simple, pragmatic and adequately conservative approach for the identification of substances of very low or no immediate concern at an early stage is presented. The fundamental principles and basic concepts are derived from the EU Technical Guidance Document and EUSES, and are translated into an easy-to-use rule-based system. For this development, the effect on risk characterisation ratios (RCRs) of the key environmental parameters in EUSES was quantified (taking into account several standardised chemical release scenarios). Using statistical analysis, ranges were identified for each key parameter, within which the end result of the assessment was not significantly affected. This information was then translated into a lookup table from which environmental risk characterisation ratios can be directly read as a function of a few parameters. PMID:15667838
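The lookup-table idea the abstract ends on can be sketched as follows: parameter ranges are binned, and each bin combination indexes a pre-computed risk characterisation ratio. The parameters, bin boundaries, and RCR values below are invented for illustration and bear no relation to the tool's actual table.

```python
def screening_rcr(log_kow, tonnage_band):
    """Hypothetical screening lookup: bin a key parameter and read the
    pre-computed risk characterisation ratio for that bin combination."""
    table = {
        ("low", "small"): 0.01,
        ("low", "large"): 0.1,
        ("high", "small"): 0.5,
        ("high", "large"): 2.0,
    }  # invented RCR values
    kow_bin = "high" if log_kow >= 4 else "low"
    return table[(kow_bin, tonnage_band)]
```

An RCR well below 1 from such a conservative table is what lets a substance be flagged as of very low or no immediate concern without a full EUSES run.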

  10. ChemTok: A New Rule Based Tokenizer for Chemical Named Entity Recognition

    PubMed Central

    Akkasi, Abbas; Varoğlu, Ekrem; Dimililer, Nazife

    2016-01-01

    Named Entity Recognition (NER) from text constitutes the first step in many text mining applications. The most important preliminary step for NER systems using machine learning approaches is tokenization where raw text is segmented into tokens. This study proposes an enhanced rule based tokenizer, ChemTok, which utilizes rules extracted mainly from the train data set. The main novelty of ChemTok is the use of the extracted rules in order to merge the tokens split in the previous steps, thus producing longer and more discriminative tokens. ChemTok is compared to the tokenization methods utilized by ChemSpot and tmChem. Support Vector Machines and Conditional Random Fields are employed as the learning algorithms. The experimental results show that the classifiers trained on the output of ChemTok outperforms all classifiers trained on the output of the other two tokenizers in terms of classification performance, and the number of incorrectly segmented entities. PMID:26942193

  11. Connecting the dots: rule-based decision support systems in the modern EMR era.

    PubMed

    Herasevich, Vitaly; Kor, Daryl J; Subramanian, Arun; Pickering, Brian W

    2013-08-01

    The intensive care unit (ICU) environment is rich in both medical device and electronic medical record (EMR) data. The ICU patient population is particularly vulnerable to medical error or delayed medical intervention, both of which are associated with excess morbidity, mortality and cost. The development and deployment of smart alarms, computerized decision support systems (DSS) and "sniffers" within ICU clinical information systems has the potential to improve the safety and outcomes of critically ill hospitalized patients. However, the current generations of alerts, run largely through bedside monitors, are far from ideal and rarely support the clinician in the early recognition of complex physiologic syndromes or deviations from expected care pathways. False alerts and alert fatigue remain prevalent. In the coming era of widespread EMR implementation, novel medical informatics methods may be adaptable to the development of next generation, rule-based DSS. PMID:23456293
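A rule-based "sniffer" of the kind the abstract describes can be sketched as a rule over several EMR-derived variables at once, rather than a single-channel bedside threshold. The toy rule below uses simplified SIRS-like criteria purely for illustration; it is not a clinical algorithm.

```python
def sepsis_sniffer(vitals):
    """Toy multi-variable sniffer: flag when at least two simplified
    SIRS-like conditions co-occur. Defaults stand in for missing data."""
    hits = sum([
        vitals.get("temp_c", 37) > 38 or vitals.get("temp_c", 37) < 36,
        vitals.get("hr", 80) > 90,
        vitals.get("rr", 14) > 20,
        vitals.get("wbc", 8) > 12 or vitals.get("wbc", 8) < 4,
    ])
    return hits >= 2
```

Requiring conjunctions across variables is one of the basic levers such systems use to trade single-signal sensitivity for fewer false alerts.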

  12. A rule-based decision support application for laboratory investigations management.

    PubMed Central

    Boon-Falleur, L.; Sokal, E.; Peters, M.; Ketelslegers, J. M.

    1995-01-01

    The appropriate management of clinical laboratory requests in specialised clinical units often requires the adherence to pre-defined protocols. We evaluated the impact of a rule-based expert system for clinical laboratory investigations management in a pediatric liver transplantation unit of our hospital. After one year, we observed an overall reduction in laboratory resources consumption for transplanted patients (-27%) and a decrease in the percentage of "STAT" requested tests (-44%). The percentage of tests ordered in agreement with the protocols for those patients increased from 33% before the introduction of the expert system to 45% when the system was used. The system was perceived by the clinicians as increasing the overall benefits in use of clinical resources, improving the laboratory data management, and saving time for the execution of laboratory ancillary tasks. PMID:8563292

  13. XCUT: A rule-based expert system for the automated process planning of machined parts

    SciTech Connect

    Brooks, S.L.; Hummel, K.E.; Wolf, M.L.

    1987-06-01

    Automated process planning is becoming a popular research and development topic in engineering and applied artificial intelligence. It is generally defined as the automatic planning of the manufacturing procedures for producing a part from a CAD based product definition. An automated process planning system, XCUT, is currently being developed using rule-based expert system techniques. XCUT will generate process plans for the production of machined piece-parts, given a geometric description of a part's features. The system currently is focused on operation planning for prismatic parts on multi-axis CNC milling machines. To date, moderately complex 2-1/2D prismatic parts have successfully been planned for with approximately 300 rules in the knowledge base. This paper will describe the XCUT system, system architecture, knowledge representation, plan development sequence, and issues in applying expert system technology to automated process planning. 16 refs.

  14. Knowledge acquisition and knowledge representation in a rule-based expert system.

    PubMed

    Chang, B L; Hirsch, M

    1991-01-01

    It is important to understand and describe how nurses make diagnostic decisions. This article describes the process by which knowledge is acquired and represented in a rule-based expert system for nursing diagnosis. Knowledge acquisition was obtained by tapping the expertise of clinical nurse specialists who were able to articulate the elements present in their diagnostic decisions. Knowledge representation was achieved using a commercially available software package, VP Expert (Berkeley, CA). The clinical nurse specialists contributed many of the heuristics in the determination of self-care deficit as a nursing diagnosis. Three models of rules for the determination of self-care deficit (bathing) are provided. These models represent a method for discriminating between levels of patient dependence. A description of rules that define specific causes, such as immobility, is also included. They will be tested in the clinical setting. PMID:1933658

  15. A rule-based steam distribution system for petrochemical plant operation

    SciTech Connect

    Yi, H.S.; Yeo, Y.K.; Kim, J.K.; Kim, M.K.; Kang, S.S.

    1998-03-01

    A rule-based expert system for the optimal operation of plantwide steam distribution systems is proposed to minimize the net cost of providing energy to the plant. The system is based on the steady-state modeling and simulation of steam generation processes and steam distribution networks. Modeling of steam generation processes and steam distribution networks was performed based on actual plant operation data. Heuristic operational knowledge obtained from experienced plant engineers is incorporated in the form of IF-THEN rules. The proposed system could provide operational information when there were changes in the grade and amount of steam demand. The letdown amount from the very high pressure steam (VS) header and the amount of VS produced at the boiler showed good agreement with those of actual operational data. The prediction of an increase of boiler load caused by self-consumed steam made it possible to prevent an unexpected sudden increase of electricity demand.

  16. Fuzzy rule-based expert system for short-range seismic prediction

    NASA Astrophysics Data System (ADS)

    Klose, Christian D.

    2002-04-01

    In Switzerland, the 57 km long Gotthard base tunnel is being built as part of the AlpTransit project with a maximum overburden of 2000 m. Because of the difficult geological situation, an exploratory tunnel was set up to explore the Piora-Mulde, a possibly unstable carbonate rock zone in south Central Switzerland. A geophysical forecast system called tunnel seismic prediction has been designed by Amberg Measuring Technique Ltd. to explore the geology during the tunnelling process. But several difficulties occurred in interpretation of the seismic data in terms of geological forecasts. This paper describes a simple approach to geological interpretation of seismic diffraction stack images by using fuzzy logic. A fuzzy rule-based expert system was developed to classify lithologic and tectonic rock units by interpreting those seismic images.

  17. MYCIN and NEOMYCIN: two approaches to generating explanations in rule-based expert systems.

    PubMed

    Sotos, J G

    1990-10-01

    The prototypical rule-based expert system is MYCIN, a computer program developed in the 1970's to diagnose and recommend therapy for serious infections. MYCIN is able to explain its reasoning at any point in a consultation by listing the rules it has under consideration at that moment. However, when MYCIN's rules were used as the subject matter for a computerized infectious disease tutoring system, it became apparent that these rules contained implicit knowledge about how to perform diagnostic tasks and that this knowledge was inaccessible to the explanation system and, therefore, to students. This paper briefly describes NEOMYCIN, an expert system that makes this implicit knowledge explicit, and shows the effect that this reconfiguration of knowledge has on generating explanations. PMID:2241738

  18. The diagnosis of microcytic anemia by a rule-based expert system using VP-Expert.

    PubMed

    O'Connor, M L; McKinney, T

    1989-09-01

    We describe our experience in creating a rule-based expert system for the interpretation of microcytic anemia using the expert system development tool, VP-Expert, running on an IBM personal computer. VP-Expert processes data (complete blood cell count results, age, and sex) according to a set of user-written logic rules (our program) to reach conclusions as to the following causes of microcytic anemia: alpha- and beta-thalassemia trait, iron deficiency, and anemia of chronic disease. Our expert system was tested using previously interpreted complete blood cell count data. In most instances, there was good agreement between the expert system and its pathologist-author, but many discrepancies were found in the interpretation of anemia of chronic disease. We conclude that VP-Expert has a useful level of power and flexibility, yet is simple enough that individuals with modest programming experience can create their own expert systems. Limitations of such expert systems are discussed. PMID:2774865

  19. A token-flow paradigm for verification of rule-based expert systems.

    PubMed

    Wu, C H; Lee, S J

    2000-01-01

    This paper presents a novel approach to the verification of rule-based systems (RBSs). A graph structure, called the rule-dependency graph (RDG), is introduced to describe the dependency relationship among the rules of an RBS, in which each type of improper knowledge forms a specific topological structure. Knowledge verification is then performed by searching for such topological structures through a token-flow paradigm. An algorithm is provided, which automatically generates a minimally sufficient set of literals as test tokens in the detection procedure. The proposed scheme can be applied to rules of non-Horn clause form in both propositional and first-order logic, and restrictions imposed by other graph-based approaches can be avoided. Furthermore, explicit and potential anomalies of RBSs can be correctly found, and efficient run-time validation is made possible. PMID:18252394
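
    One anomaly type commonly sought in RBS verification, a circular rule chain, shows up as a cycle in a rule-dependency graph. The following is a generic depth-first-search sketch of that single check, not the paper's token-flow algorithm; the rule names are hypothetical.

```python
# Detect one RBS anomaly, circular inference, as a cycle in a
# rule-dependency graph given as dict: rule -> rules it enables.

def has_cycle(graph):
    """Depth-first search with the classic white/gray/black coloring."""
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {node: WHITE for node in graph}

    def visit(node):
        color[node] = GRAY              # on the current DFS path
        for nxt in graph.get(node, []):
            c = color.get(nxt, WHITE)
            if c == GRAY:               # back edge: circular rule chain
                return True
            if c == WHITE and visit(nxt):
                return True
        color[node] = BLACK             # fully explored, no cycle through here
        return False

    return any(visit(n) for n in graph if color[n] == WHITE)

print(has_cycle({"r1": ["r2"], "r2": ["r3"], "r3": ["r1"]}))  # circular chain
```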

  20. Auto-control of pumping operations in sewerage systems by rule-based fuzzy neural networks

    NASA Astrophysics Data System (ADS)

    Chiang, Y.-M.; Chang, L.-C.; Tsai, M.-J.; Wang, Y.-F.; Chang, F.-J.

    2011-01-01

    Pumping stations play an important role in flood mitigation in metropolitan areas. The existing sewerage systems, however, are facing a great challenge of fast rising peak flow resulting from urbanization and climate change. It is imperative to construct an efficient and accurate operating prediction model for pumping stations to simulate the drainage mechanism for discharging the rainwater in advance. In this study, we propose two rule-based fuzzy neural networks, the adaptive neuro-fuzzy inference system (ANFIS) and the counterpropagation fuzzy neural network (CFNN), for on-line prediction of the number of open and closed pumps of a pivotal pumping station in Taipei city up to a lead time of 20 min. ANFIS outperforms CFNN in terms of model efficiency, accuracy, and correctness. Furthermore, the results not only show that the predicted water levels contribute to successful pumping station operation but also demonstrate the applicability and reliability of ANFIS in automatically controlling the urban sewerage systems.

  1. Auto-control of pumping operations in sewerage systems by rule-based fuzzy neural networks

    NASA Astrophysics Data System (ADS)

    Chiang, Y.-M.; Chang, L.-C.; Tsai, M.-J.; Wang, Y.-F.; Chang, F.-J.

    2010-09-01

    Pumping stations play an important role in flood mitigation in metropolitan areas. The existing sewerage systems, however, are facing a great challenge of fast rising peak flow resulting from urbanization and climate change. It is imperative to construct an efficient and accurate operating prediction model for pumping stations to simulate the drainage mechanism for discharging the rainwater in advance. In this study, we propose two rule-based fuzzy neural networks, the adaptive neuro-fuzzy inference system (ANFIS) and the counterpropagation fuzzy neural network (CFNN), for on-line prediction of the number of open and closed pumps of a pivotal pumping station in Taipei city up to a lead time of 20 min. ANFIS outperforms CFNN in terms of model efficiency, accuracy, and correctness. Furthermore, the results not only show that the predicted water levels contribute to successful pumping station operation but also demonstrate the applicability and reliability of ANFIS in automatically controlling the urban sewerage systems.

  2. Semantic Knowledge for Famous Names in Mild Cognitive Impairment

    PubMed Central

    Seidenberg, Michael; Guidotti, Leslie; Nielson, Kristy A.; Woodard, John L.; Durgerian, Sally; Zhang, Qi; Gander, Amelia; Antuono, Piero; Rao, Stephen M.

    2008-01-01

    Person identification represents a unique category of semantic knowledge that is commonly impaired in Alzheimer's Disease (AD), but has received relatively little investigation in patients with Mild Cognitive Impairment (MCI). The current study examined the retrieval of semantic knowledge for famous names from three time epochs (recent, remote, and enduring) in two participant groups: 23 aMCI patients and 23 healthy elderly controls. The aMCI group was less accurate and produced less semantic knowledge than controls for famous names. Names from the enduring period were recognized faster than both recent and remote names in both groups, and remote names were recognized more quickly than recent names. Episodic memory performance was correlated with greater semantic knowledge, particularly for recent names. We suggest that the anterograde memory deficits in the aMCI group interfere with learning of recent famous names and, as a result, produce difficulties with updating and integrating new semantic information with previously stored information. The implications of these findings for characterizing semantic memory deficits in MCI are discussed. PMID:19128524

  3. Accurate crop classification using hierarchical genetic fuzzy rule-based systems

    NASA Astrophysics Data System (ADS)

    Topaloglou, Charalampos A.; Mylonas, Stelios K.; Stavrakoudis, Dimitris G.; Mastorocostas, Paris A.; Theocharis, John B.

    2014-10-01

    This paper investigates the effectiveness of an advanced classification system for accurate crop classification using very high resolution (VHR) satellite imagery. Specifically, a recently proposed genetic fuzzy rule-based classification system (GFRBCS) is employed, namely, the Hierarchical Rule-based Linguistic Classifier (HiRLiC). HiRLiC's model comprises a small set of simple IF-THEN fuzzy rules, easily interpretable by humans. One of its most important attributes is that its learning algorithm requires minimum user interaction, since the most important learning parameters affecting the classification accuracy are determined by the learning algorithm automatically. HiRLiC is applied in a challenging crop classification task, using a SPOT5 satellite image over an intensively cultivated area in a lake-wetland ecosystem in northern Greece. A rich set of higher-order spectral and textural features is derived from the initial bands of the (pan-sharpened) image, resulting in an input space comprising 119 features. The experimental analysis proves that HiRLiC compares favorably to other interpretable classifiers of the literature, both in terms of structural complexity and classification accuracy. Its testing accuracy was very close to that obtained by complex state-of-the-art classification systems, such as the support vector machines (SVM) and random forest (RF) classifiers. Nevertheless, visual inspection of the derived classification maps shows that HiRLiC is characterized by higher generalization properties, providing more homogeneous classifications than the competitors. Moreover, the runtime requirements for producing the thematic map were orders of magnitude lower than those of the competitors.

  4. Semantic Research for Digital Libraries.

    ERIC Educational Resources Information Center

    Chen, Hsinchun

    1999-01-01

    Discusses the need for semantic research in digital libraries to help overcome interoperability problems. Highlights include federal initiatives; semantic analysis; knowledge representations; human-computer interactions and information visualization; and the University of Illinois DLI (Digital Libraries Initiative) project through partnership with…

  5. Semantic Analysis in Machine Translation.

    ERIC Educational Resources Information Center

    Skorokhodko, E. F.

    1970-01-01

    In many cases machine translation does not produce satisfactory results within the framework of purely formal (morphological and syntactic) analysis, particularly in cases of syntactic and lexical homonymy. An algorithm for syntactic-semantic analysis is proposed, and its principles of operation are described. The syntactic-semantic structure is…

  6. Semantic Feature Distinctiveness and Frequency

    ERIC Educational Resources Information Center

    Lamb, Katherine M.

    2012-01-01

    Lexical access is the process in which the basic components of meaning in language, the lexical entries (words), are activated. This activation is based on the organization and representational structure of the lexical entries. Semantic features of words, which are the prominent semantic characteristics of a word concept, provide important information…

  7. Semantic Tools in Information Retrieval.

    ERIC Educational Resources Information Center

    Rubinoff, Morris; Stone, Don C.

    This report discusses the problem of the meanings of words used in information retrieval systems, and shows how semantic tools can aid in the communication which takes place between indexers and searchers via index terms. After treating the differing use of semantic tools in different types of systems, two tools (classification tables and…

  8. Semantic Processing of Mathematical Gestures

    ERIC Educational Resources Information Center

    Lim, Vanessa K.; Wilson, Anna J.; Hamm, Jeff P.; Phillips, Nicola; Iwabuchi, Sarina J.; Corballis, Michael C.; Arzarello, Ferdinando; Thomas, Michael O. J.

    2009-01-01

    Objective: To examine whether or not university mathematics students semantically process gestures depicting mathematical functions (mathematical gestures) similarly to the way they process action gestures and sentences. Semantic processing was indexed by the N400 effect. Results: The N400 effect elicited by words primed with mathematical gestures…

  9. The semantic planetary data system

    NASA Technical Reports Server (NTRS)

    Hughes, J. Steven; Crichton, Daniel; Kelly, Sean; Mattmann, Chris

    2005-01-01

    This paper will provide a brief overview of the PDS data model and the PDS catalog. It will then describe the implementation of the Semantic PDS, including the development of the formal ontology, the generation of RDFS/XML and RDF/XML data sets, and the building of the semantic search application.

  10. eFSM--a novel online neural-fuzzy semantic memory model.

    PubMed

    Tung, Whye Loon; Quek, Chai

    2010-01-01

    Fuzzy rule-based systems (FRBSs) have been successfully applied to many areas. However, traditional fuzzy systems are often manually crafted, and their rule bases that represent the acquired knowledge are static and cannot be trained to improve the modeling performance. This subsequently leads to intensive research on the autonomous construction and tuning of a fuzzy system directly from the observed training data to address the knowledge acquisition bottleneck, resulting in well-established hybrids such as neural-fuzzy systems (NFSs) and genetic fuzzy systems (GFSs). However, the complex and dynamic nature of real-world problems demands that fuzzy rule-based systems and models be able to adapt their parameters and ultimately evolve their rule bases to address the nonstationary (time-varying) characteristics of their operating environments. Recently, considerable research efforts have been directed to the study of evolving Takagi-Sugeno (T-S)-type NFSs based on the concept of incremental learning. In contrast, there are very few incremental learning Mamdani-type NFSs reported in the literature. Hence, this paper presents the evolving neural-fuzzy semantic memory (eFSM) model, a neural-fuzzy Mamdani architecture with a data-driven progressively adaptive structure (i.e., rule base) based on incremental learning. Issues related to the incremental learning of the eFSM rule base are carefully investigated, and a novel parameter learning approach is proposed for the tuning of the fuzzy set parameters in eFSM. The proposed eFSM model elicits highly interpretable semantic knowledge in the form of Mamdani-type if-then fuzzy rules from low-level numeric training data. These Mamdani fuzzy rules define the computing structure of eFSM and are incrementally learned with the arrival of each training data sample. New rules are constructed from the emergence of novel training data and obsolete fuzzy rules that no longer describe the recently observed data trends are pruned. This
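
    To make the Mamdani-type if-then rule idea concrete, the sketch below fires two fuzzy rules and defuzzifies with a weighted average of singleton rule outputs. This is a common simplification: a full Mamdani system would aggregate output fuzzy sets and take a centroid. All membership functions and rule consequents here are invented, not eFSM's learned rules.

```python
# Two hand-written Mamdani-style rules over inputs in [0, 1]:
#   Rule 1: IF x is LOW  THEN y is LOW  (consequent singleton y = 0.0)
#   Rule 2: IF x is HIGH THEN y is HIGH (consequent singleton y = 1.0)

def mu_low(x):
    """Membership of x in the fuzzy set LOW."""
    return max(0.0, min(1.0, (0.5 - x) / 0.5))

def mu_high(x):
    """Membership of x in the fuzzy set HIGH."""
    return max(0.0, min(1.0, (x - 0.5) / 0.5))

def infer(x):
    w1, w2 = mu_low(x), mu_high(x)      # rule firing strengths
    if w1 + w2 == 0.0:
        return 0.5                      # no rule fires: mid-range fallback
    return (w1 * 0.0 + w2 * 1.0) / (w1 + w2)   # weighted-average defuzzification

print(infer(0.25), infer(0.75))
```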

  11. A Semantic Grid Oriented to E-Tourism

    NASA Astrophysics Data System (ADS)

    Zhang, Xiao Ming

    With the increasing complexity of tourism business models and tasks, there is a clear need for a next-generation e-Tourism infrastructure to support flexible automation, integration, computation, storage, and collaboration. Currently several enabling technologies such as the semantic Web, Web services, agents and grid computing have been applied in different e-Tourism applications; however, there is no unified framework able to integrate all of them. This paper therefore presents a promising e-Tourism framework based on the emerging semantic grid, in which a number of key design issues are discussed, including architecture, ontology structure, semantic reconciliation, service and resource discovery, role-based authorization and intelligent agents. The paper finally provides the implementation of the framework.

  12. Discriminative Semantic Subspace Analysis for Relevance Feedback.

    PubMed

    Zhang, Lining; Shum, Hubert P H; Shao, Ling

    2016-03-01

    Content-based image retrieval (CBIR) has attracted much attention during the past decades for its potential practical applications to image database management. A variety of relevance feedback (RF) schemes have been designed to bridge the gap between low-level visual features and high-level semantic concepts for an image retrieval task. In the process of RF, it would be impractical or too expensive to provide explicit class label information for each image. Instead, similar or dissimilar pairwise constraints between two images can be acquired more easily. However, most of the conventional RF approaches can only deal with training images with explicit class label information. In this paper, we propose a novel discriminative semantic subspace analysis (DSSA) method, which can directly learn a semantic subspace from similar and dissimilar pairwise constraints without using any explicit class label information. In particular, DSSA can effectively integrate the local geometry of labeled similar images, the discriminative information between labeled similar and dissimilar images, and the local geometry of labeled and unlabeled images together to learn a reliable subspace. Compared with the popular distance metric analysis approaches, our method can also learn a distance metric but perform more effectively when dealing with high-dimensional images. Extensive experiments on both the synthetic data sets and a real-world image database demonstrate the effectiveness of the proposed scheme in improving the performance of the CBIR. PMID:26780793

  13. Semantic Web for Manufacturing Web Services

    SciTech Connect

    Kulvatunyou, Boonserm; Ivezic, Nenad

    2002-06-01

    As markets become unexpectedly turbulent with a shortened product life cycle and a power shift towards buyers, the need for methods to rapidly and cost-effectively develop products, production facilities and supporting software is becoming urgent. The use of a virtual enterprise plays a vital role in surviving turbulent markets. However, its success requires reliable and large-scale interoperation among trading partners via a semantic web of trading partners' services whose properties, capabilities, and interfaces are encoded in an unambiguous as well as computer-understandable form. This paper demonstrates a promising approach to integration and interoperation between a design house and a manufacturer by developing semantic web services for business and engineering transactions. To this end, detailed activity and information flow diagrams are developed, in which the two trading partners exchange messages and documents. The properties and capabilities of the manufacturer sites are defined using DARPA Agent Markup Language (DAML) ontology definition language. The prototype development of semantic webs shows that enterprises can widely interoperate in an unambiguous and autonomous manner; hence, virtual enterprise is realizable at a low cost.

  14. Latent semantic analysis.

    PubMed

    Evangelopoulos, Nicholas E

    2013-11-01

    This article reviews latent semantic analysis (LSA), a theory of meaning as well as a method for extracting that meaning from passages of text, based on statistical computations over a collection of documents. LSA as a theory of meaning defines a latent semantic space where documents and individual words are represented as vectors. LSA as a computational technique uses linear algebra to extract dimensions that represent that space. This representation enables the computation of similarity among terms and documents, categorization of terms and documents, and summarization of large collections of documents using automated procedures that mimic the way humans perform similar cognitive tasks. We present some technical details, various illustrative examples, and discuss a number of applications from linguistics, psychology, cognitive science, education, information science, and analysis of textual data in general. WIREs Cogn Sci 2013, 4:683-692. doi: 10.1002/wcs.1254 PMID:26304272
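
    The linear-algebra core of LSA can be sketched in a few lines: factor a term-document count matrix with the SVD, keep the top-k singular dimensions, and compare documents by cosine similarity in the reduced space. The toy counts below are invented for illustration.

```python
# Minimal LSA sketch: truncated SVD of a small term-document matrix.
import numpy as np

# rows = terms, columns = documents (toy counts)
X = np.array([[2, 0, 1],
              [1, 0, 0],
              [0, 3, 1],
              [0, 2, 0]], dtype=float)

U, s, Vt = np.linalg.svd(X, full_matrices=False)
k = 2                                  # keep the top-k latent dimensions
docs = (np.diag(s[:k]) @ Vt[:k]).T     # document vectors in latent space

def cos(a, b):
    """Cosine similarity of two vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# documents 1 and 2 share a term, documents 0 and 1 share none
print(cos(docs[1], docs[2]), cos(docs[0], docs[1]))
```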

  15. "Pre-Semantic" Cognition Revisited: Critical Differences between Semantic Aphasia and Semantic Dementia

    ERIC Educational Resources Information Center

    Jefferies, Elizabeth; Rogers, Timothy T.; Hopper, Samantha; Lambon Ralph, Matthew A.

    2010-01-01

    Patients with semantic dementia show a specific pattern of impairment on both verbal and non-verbal "pre-semantic" tasks, e.g., reading aloud, past tense generation, spelling to dictation, lexical decision, object decision, colour decision and delayed picture copying. All seven tasks are characterised by poorer performance for items that are…

  16. Semantic information extracting system for classification of radiological reports in radiology information system (RIS)

    NASA Astrophysics Data System (ADS)

    Shi, Liehang; Ling, Tonghui; Zhang, Jianguo

    2016-03-01

    Radiologists currently use a variety of terminologies and standards in most hospitals in China, and there are often multiple terminologies in use across different sections of one department. In this presentation, we introduce a medical semantic comprehension system (MedSCS) to extract semantic information about clinical findings and conclusions from free-text radiology reports so that the reports can be classified correctly based on medical term indexing standards such as RadLex or SNOMED-CT. Our system (MedSCS) is based on both rule-based methods and statistics-based methods, which improves the performance and the scalability of MedSCS. In order to evaluate the overall performance of the system and measure the accuracy of the outcomes, we developed computation methods to calculate precision rate, recall rate, F-score and exact confidence interval.
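
    The evaluation metrics named above are standard; as a reminder of how they relate, here is a minimal computation of precision, recall and F-score for a binary labelling task. The toy labels are invented, and the confidence-interval computation is omitted.

```python
# Precision, recall and F-score for one positive class.

def prf(gold, pred, positive):
    """Return (precision, recall, F-score) for the given positive label."""
    tp = sum(1 for g, p in zip(gold, pred) if g == p == positive)
    fp = sum(1 for g, p in zip(gold, pred) if p == positive and g != positive)
    fn = sum(1 for g, p in zip(gold, pred) if g == positive and p != positive)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f_score = (2 * precision * recall / (precision + recall)
               if precision + recall else 0.0)
    return precision, recall, f_score

gold = ["mass", "normal", "mass", "mass"]   # hypothetical report labels
pred = ["mass", "mass", "normal", "mass"]
print(prf(gold, pred, "mass"))
```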

  17. X-Informatics: Practical Semantic Science

    NASA Astrophysics Data System (ADS)

    Borne, K. D.

    2009-12-01

    The discipline of data science is merging with multiple science disciplines to form new X-informatics research disciplines. They are almost too numerous to name, but they include geoinformatics, bioinformatics, cheminformatics, biodiversity informatics, ecoinformatics, materials informatics, and the emerging discipline of astroinformatics. Within any X-informatics discipline, the information granules are unique to that discipline -- e.g., gene sequences in bio, the sky object in astro, and the spatial object in geo (such as points, lines, and polygons in the vector model, and pixels in the raster model). Nevertheless the goals are similar: transparent data re-use across subdisciplines and within education settings, information and data integration and fusion, personalization of user interactions with the data collection, semantic search and retrieval, and knowledge discovery. The implementation of an X-informatics framework enables these semantic e-science research goals. We describe the concepts, challenges, and new developments associated with the new discipline of astroinformatics, and how geoinformatics provides valuable lessons learned and a model for practical semantic science within a traditional science discipline through the accretion of data science methodologies (such as formal metadata creation, data models, data mining, information retrieval, knowledge engineering, provenance, taxonomies, and ontologies). The emerging concept of data-as-a-service (DaaS) builds upon the concept of smart data (or data DNA) for intelligent data management, automated workflows, and intelligent processing. Smart data, defined through X-informatics, enables several practical semantic science use cases, including self-discovery, data intelligence, automatic recommendations, relevance analysis, dimension reduction, feature selection, constraint-based mining, interdisciplinary data re-use, knowledge-sharing, data use in education, and more. We describe these concepts within the

  18. A rule-based approach for the correlation of alarms to support Disaster and Emergency Management

    NASA Astrophysics Data System (ADS)

    Gloria, M.; Minei, G.; Lersi, V.; Pasquariello, D.; Monti, C.; Saitto, A.

    2009-04-01

    Key words: Simple Event Correlator, Agent Platform, Ontology, Semantic Web, Distributed Systems, Emergency Management. The importance of recognizing the type of an emergency in order to control critical situations and protect citizens has long been acknowledged. This aspect is therefore central to the proper management of a hazardous event. In this work we present a solution for the recognition of emergency typology adopted by an Italian research project called CI6 (Centro Integrato per Servizi di Emergenza Innovativi). In our approach, CI6 receives alarms from citizens or from people involved in the response (for example: police, operators of 112, and so on). CI6 represents each alarm as a set of information, including a text description obtained when the user reports the danger and a pair of coordinates for its location. The system analyses the text and automatically infers the type of emergency by means of a set of parsing and inference rules applied by an independent module: an event correlator operating on logs, called Simple Event Correlator (SEC). SEC, integrated in CI6's platform, is an open-source and platform-independent event correlation tool. SEC accepts input both from files and from standard input, making it flexible because it can be coupled with any application that is able to write its output to a file stream. The SEC configuration is stored in text files as rules, each rule specifying an event matching condition, an action list, and optionally a Boolean expression whose truth value decides whether the rule can be applied at a given moment. SEC can produce output events by executing user-specified shell scripts or programs, by writing messages to files, and by various other means. SEC has been successfully applied in various domains like network management, system monitoring, data security, intrusion detection, log file monitoring and analysis, etc.; it has been used or integrated with many
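
    The parsing-rule idea, matching conditions over alarm text that infer an emergency type, can be sketched with plain regular expressions. The patterns and type labels below are invented and far simpler than SEC's rule language, which also supports action lists and correlation of events over time.

```python
# Toy text-rule classifier for alarm messages (illustrative patterns only).
import re

RULES = [
    (re.compile(r"\b(fire|smoke|burning)\b", re.IGNORECASE), "fire"),
    (re.compile(r"\b(flood|water level|overflow)\b", re.IGNORECASE), "flood"),
    (re.compile(r"\b(collision|crash|accident)\b", re.IGNORECASE), "road accident"),
]

def classify_alarm(text):
    """Return the emergency type of the first rule whose pattern matches."""
    for pattern, emergency_type in RULES:
        if pattern.search(text):
            return emergency_type
    return "unknown"

print(classify_alarm("Smoke reported near the station"))  # fire
```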

  19. Mapping the Structure of Semantic Memory

    ERIC Educational Resources Information Center

    Morais, Ana Sofia; Olsson, Henrik; Schooler, Lael J.

    2013-01-01

    Aggregating snippets from the semantic memories of many individuals may not yield a good map of an individual's semantic memory. The authors analyze the structure of semantic networks that they sampled from individuals through a new snowball sampling paradigm during approximately 6 weeks of 1-hr daily sessions. The semantic networks of individuals…

  20. Semantic enrichment for medical ontologies.

    PubMed

    Lee, Yugyung; Geller, James

    2006-04-01

    The Unified Medical Language System (UMLS) contains two separate but interconnected knowledge structures, the Semantic Network (upper level) and the Metathesaurus (lower level). In this paper, we examine how the use of such a two-level structure in the medical field has led to notable advances in terminologies and ontologies. However, most ontologies and terminologies do not have such a two-level structure. Therefore, we present a method, called semantic enrichment, which generates a two-level ontology from a given one-level terminology and an auxiliary two-level ontology. During semantic enrichment, concepts of the one-level terminology are assigned to semantic types, which are the building blocks of the upper level of the auxiliary two-level ontology. The result of this process is the desired new two-level ontology. We discuss semantic enrichment of two example terminologies and how we approach the implementation of semantic enrichment in the medical domain. This implementation performs a major part of the semantic enrichment process with the medical terminologies, with difficult cases left to a human expert. PMID:16185937
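
    A minimal sketch of the enrichment step: each concept of a one-level terminology is looked up in an auxiliary two-level ontology to receive a semantic type, and unmatched concepts are set aside for a human expert. The concept names and semantic types below are illustrative stand-ins, not actual UMLS content.

```python
# Toy semantic enrichment: assign semantic types to flat-terminology concepts.

# auxiliary two-level ontology: concept -> semantic type (illustrative)
AUXILIARY = {
    "diabetes mellitus": "Disease or Syndrome",
    "insulin": "Pharmacologic Substance",
}

def enrich(terminology):
    """Attach a semantic type to each concept; unmatched ones go to an expert."""
    enriched, unresolved = {}, []
    for concept in terminology:
        if concept in AUXILIARY:
            enriched[concept] = AUXILIARY[concept]
        else:
            unresolved.append(concept)
    return enriched, unresolved

print(enrich(["insulin", "diabetes mellitus", "glucometer"]))
```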

  1. Exploiting Recurring Structure in a Semantic Network

    NASA Technical Reports Server (NTRS)

    Wolfe, Shawn R.; Keller, Richard M.

    2004-01-01

    With the growing popularity of the Semantic Web, an increasing amount of information is becoming available in machine interpretable, semantically structured networks. Within these semantic networks are recurring structures that could be mined by existing or novel knowledge discovery methods. The mining of these semantic structures represents an interesting area that focuses on mining both for and from the Semantic Web, with surprising applicability to problems confronting the developers of Semantic Web applications. In this paper, we present representative examples of recurring structures and show how these structures could be used to increase the utility of a semantic repository deployed at NASA.

  2. Receptive vocabulary and semantic knowledge in children with SLI and children with Down syndrome.

    PubMed

    Laws, Glynis; Briscoe, Josie; Ang, Su-Yin; Brown, Heather; Hermena, Ehab; Kapikian, Anna

    2015-01-01

    Receptive vocabulary and associated semantic knowledge were compared within and between groups of children with specific language impairment (SLI), children with Down syndrome (DS), and typically developing children. To overcome the potential confounding effects of speech or language difficulties on verbal tests of semantic knowledge, a novel task was devised based on picture-based semantic association tests used to assess adult patients with semantic dementia. Receptive vocabulary, measured by word-picture matching, of children with SLI was weak relative to chronological age and to nonverbal mental age but their semantic knowledge, probed across the same lexical items, did not differ significantly from that of vocabulary-matched typically developing children. By contrast, although receptive vocabulary of children with DS was a relative strength compared to nonverbal cognitive abilities (p < .0001), DS was associated with a significant deficit in semantic knowledge (p < .0001) indicative of dissociation between word-picture matching vocabulary and depth of semantic knowledge. Overall, these data challenge the integrity of semantic-conceptual development in DS and imply that contemporary theories of semantic cognition should also seek to incorporate evidence from atypical conceptual development. PMID:24830646

  3. Using rule-based shot dose assignment in model-based MPC applications

    NASA Astrophysics Data System (ADS)

    Bork, Ingo; Buck, Peter; Wang, Lin; Müller, Uwe

    2014-10-01

    Shrinking feature sizes and the need for tighter CD (Critical Dimension) control require the introduction of new technologies in mask making processes. One of those methods is the dose assignment of individual shots on VSB (Variable Shaped Beam) mask writers to compensate CD non-linearity effects and improve dose edge slope. Using increased dose levels only for most critical features, generally only for the smallest CDs on a mask, the change in mask write time is minimal while the increase in image quality can be significant. This paper describes a method combining rule-based shot dose assignment with model-based shot size correction. This combination proves to be very efficient in correcting mask linearity errors while also improving dose edge slope of small features. Shot dose assignment is based on tables assigning certain dose levels to a range of feature sizes. The dose to feature size assignment is derived from mask measurements in such a way that shape corrections are kept to a minimum. For example, if a 50nm drawn line on mask results in a 45nm chrome line using nominal dose, a dose level is chosen which is closest to getting the line back on target. Since CD non-linearity is different for lines, line-ends and contacts, different tables are generated for the different shape categories. The actual dose assignment is done via DRC rules in a pre-processing step before executing the shape correction in the MPC engine. Dose assignment to line ends can be restricted to critical line/space dimensions since it might not be required for all line ends. In addition, adding dose assignment to a wide range of line ends might increase shot count which is undesirable. The dose assignment algorithm is very flexible and can be adjusted based on the type of layer and the best balance between accuracy and shot count. These methods can be optimized for the number of dose levels available for specific mask writers. The MPC engine now needs to be able to handle different dose
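
    The table-driven dose assignment described above can be sketched as a simple range lookup executed before model-based correction. The CD ranges and relative dose factors below are invented, not mask measurements.

```python
# Toy dose-assignment table: feature-size ranges map to relative dose levels,
# with the smallest CDs receiving the largest boost (hypothetical values).

DOSE_TABLE = [          # (min_nm, max_nm, relative dose)
    (0,   60,  1.20),
    (60,  80,  1.10),
    (80, 100,  1.05),
]

def assign_dose(cd_nm, default=1.0):
    """Look up the relative dose for a drawn CD in nanometres."""
    for lo, hi, dose in DOSE_TABLE:
        if lo <= cd_nm < hi:
            return dose
    return default      # large features keep nominal dose

print([assign_dose(cd) for cd in (50, 70, 90, 150)])  # [1.2, 1.1, 1.05, 1.0]
```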

  4. Panacea, a semantic-enabled drug recommendations discovery framework

    PubMed Central

    2014-01-01

    Background Personalized drug prescription can benefit from the use of intelligent information management and sharing. International standard classifications and terminologies have been developed in order to provide unique and unambiguous information representation. Such standards can be used as the basis of automated decision support systems for providing drug-drug and drug-disease interaction discovery. Additionally, Semantic Web technologies have been proposed in earlier works, in order to support such systems. Results The paper presents Panacea, a semantic framework capable of offering drug-drug and drug-disease interaction discovery. For enabling this kind of service, medical information and terminology had to be translated to ontological terms and be appropriately coupled with medical knowledge of the field. International standard classifications and terminologies provide the backbone of the common representation of medical data, while the medical knowledge of drug interactions is represented by a rule base which makes use of the aforementioned standards. Representation is based on a lightweight ontology. A layered reasoning approach is implemented where at the first layer ontological inference is used in order to discover underlying knowledge, while at the second layer a two-step rule selection strategy is followed, resulting in a computationally efficient reasoning approach. Details of the system architecture are presented while also giving an outline of the difficulties that had to be overcome. Conclusions Panacea is evaluated both in terms of quality of recommendations against real clinical data and performance. The recommendation-quality evaluation gave useful insights regarding requirements for real world deployment and revealed several parameters that affected the recommendation results. Performance-wise, Panacea is compared to a previous published work by the authors, a service for drug recommendations named GalenOWL, and presents their differences in
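
    At its simplest, a drug-drug interaction rule is a pair of drugs plus a warning, checked against a prescription. The sketch below shows only that flat-rule idea; Panacea's actual pipeline couples standard terminologies with ontological inference and a two-step rule selection strategy. The single rule shown (warfarin with aspirin) is a textbook interaction example, not taken from the paper's rule base.

```python
# Toy drug-drug interaction check in the spirit of a rule base.

INTERACTION_RULES = [
    # (drug_a, drug_b, warning)
    ("warfarin", "aspirin", "increased bleeding risk"),
]

def check_prescription(drugs):
    """Return the warnings of all interaction rules triggered by a drug list."""
    names = {d.lower() for d in drugs}
    return [warning for a, b, warning in INTERACTION_RULES
            if a in names and b in names]

print(check_prescription(["Aspirin", "Warfarin"]))  # the bleeding-risk rule fires
```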

  5. Improving protein coreference resolution by simple semantic classification

    PubMed Central

    2012-01-01

    Background Current research has shown that major difficulties in event extraction for the biomedical domain are traceable to coreference. Therefore, coreference resolution is believed to be useful for improving event extraction. To address coreference resolution in molecular biology literature, the Protein Coreference (COREF) task was arranged in the BioNLP Shared Task (BioNLP-ST, hereafter) 2011 as a supporting task. However, the shared task results indicated that transferring coreference resolution methods developed for other domains to the biological domain was not straightforward, due to domain differences in the coreference phenomena. Results We analyzed the contribution of domain-specific information, including information that indicates the protein type, in a rule-based protein coreference resolution system. In particular, the domain-specific information is encoded into semantic classification modules whose output is used in different components of the coreference resolution. We compared our system with the top four systems in the BioNLP-ST 2011; surprisingly, we found that even the minimal configuration outperformed the best system in the BioNLP-ST 2011. Analysis of the experimental results revealed that semantic classification using protein information contributed an increase in F-score of 2.3% on the test data and 4.0% on the development data. Conclusions The use of domain-specific information in semantic classification is important for effective coreference resolution. Since it is difficult to transfer domain-specific information across different domains, we need to continue to seek methods for utilizing such information in coreference resolution. PMID:23157272
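The role of semantic classification in a rule-based resolver can be sketched roughly as follows: classify each candidate antecedent, keep only those compatible with the class the anaphor expects, and fall back to a recency rule. The tiny protein lexicon and the nearest-preceding-mention heuristic are illustrative assumptions, not the paper's actual modules.

```python
# Minimal sketch: semantic-class filtering of antecedent candidates in
# rule-based protein coreference. Lexicon and heuristic are invented.
PROTEIN_LEXICON = {"p53", "TRADD", "NF-kappaB"}

def semantic_class(mention):
    """Toy stand-in for the paper's semantic classification modules."""
    return "protein" if mention in PROTEIN_LEXICON else "other"

def resolve(anaphor_class, candidates):
    """Keep only candidates whose semantic class matches the class the
    anaphor expects (e.g. 'the protein' expects a protein), then pick
    the nearest preceding one."""
    compatible = [c for c in candidates if semantic_class(c) == anaphor_class]
    return compatible[-1] if compatible else None

# 'the protein' resolves to the nearest protein mention, skipping 'cell'
print(resolve("protein", ["p53", "cell", "TRADD"]))  # TRADD
```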

  6. From autopoiesis to semantic closure.

    PubMed

    Stewart, J

    2000-01-01

    This article addresses the question of providing an adequate mathematical formulation for the concepts of autopoiesis and closure under efficient cause. What is required is metaphorically equivalent to reducing the act of writing a set of mathematical equations, habitually effected by a human mathematician, to the ongoing function of the system itself. This, in turn, raises the question of the relationship between autopoiesis and semantics. The hypothesis suggested is that whereas semantics clearly requires autopoiesis, it may also be the case that autopoiesis itself can only be materially realized in a system that is characterized by a semantic dimension. PMID:10818567

  7. Workspaces in the Semantic Web

    NASA Technical Reports Server (NTRS)

    Wolfe, Shawn R.; Keller, Richard M.

    2005-01-01

    Due to the recency and relatively limited adoption of Semantic Web technologies, practical issues related to technology scaling have received less attention than foundational issues. Nonetheless, these issues must be addressed if the Semantic Web is to realize its full potential. In particular, we concentrate on the lack of scoping methods that reduce the size of semantic information spaces so they are more efficient to work with and more relevant to an agent's needs. We provide some intuition to motivate the need for such reduced information spaces, called workspaces, give a formal definition, and suggest possible methods of deriving them.

  8. High Performance Descriptive Semantic Analysis of Semantic Graph Databases

    SciTech Connect

    Joslyn, Cliff A.; Adolf, Robert D.; al-Saffar, Sinan; Feo, John T.; Haglin, David J.; Mackey, Greg E.; Mizell, David W.

    2011-06-02

    As semantic graph database technology grows to address components ranging from extant large triple stores to SPARQL endpoints over SQL-structured relational databases, it will become increasingly important to be able to understand their inherent semantic structure, whether codified in explicit ontologies or not. Our group is researching novel methods for what we call descriptive semantic analysis of RDF triplestores, to serve purposes of analysis, interpretation, visualization, and optimization. But data size and computational complexity make it increasingly necessary to bring high performance computational resources to bear on this task. Our research group built a novel high performance hybrid system comprising computational capability for semantic graph database processing utilizing the large multi-threaded architecture of the Cray XMT platform, conventional servers, and large data stores. In this paper we describe that architecture and our methods, and present the results of our analyses of basic properties, connected components, namespace interaction, and typed paths for the Billion Triple Challenge 2010 dataset.

  9. Matching Alternative Addresses: a Semantic Web Approach

    NASA Astrophysics Data System (ADS)

    Ariannamazi, S.; Karimipour, F.; Hakimpour, F.

    2015-12-01

    Rapid development of crowd-sourced or volunteered geographic information (VGI) provides opportunities for authorities that deal with geospatial information. Heterogeneity of multiple data sources and inconsistency of data types are key characteristics of VGI datasets. The expansion of cities has resulted in a growing number of POIs in OpenStreetMap, a well-known VGI source, which causes the datasets to become outdated in short periods of time. Changes to the spatial and aspatial attributes of features, such as names and addresses, can cause confusion or ambiguity in processes that require a feature's literal information, such as addressing and geocoding. VGI sources neither conform to specific vocabularies nor remain in a specific schema for long periods of time. As a result, the integration of VGI sources is crucial and inevitable in order to avoid duplication and the waste of resources. Information integration can be used to match features and qualify different annotation alternatives for disambiguation. This study enhances the search capabilities of geospatial tools with applications able to understand user terminology, pursuing an efficient way of finding desired results. The Semantic Web is a capable tool for developing technologies that deal with lexical and numerical calculations and estimations. A vast amount of literal-spatial data demonstrates the capability of linguistic information in knowledge modeling, but these resources need to be harmonized based on Semantic Web standards. The process of making addresses homogeneous yields a helpful tool based on spatial data integration and lexical annotation matching and disambiguation.
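A first step toward harmonizing alternative addresses is purely lexical: normalize each address string to a canonical token set, then compare the sets. The abbreviation table, the Jaccard similarity measure, and the 0.8 threshold below are illustrative assumptions, not the paper's method.

```python
# Illustrative sketch of lexical address harmonization before matching.
# Abbreviation table and similarity threshold are invented.
ABBREV = {"st": "street", "ave": "avenue", "blvd": "boulevard"}

def normalize(address):
    """Lowercase, strip punctuation, expand common abbreviations."""
    tokens = address.lower().replace(",", " ").replace(".", " ").split()
    return {ABBREV.get(t, t) for t in tokens}

def match(a, b, threshold=0.8):
    """Jaccard similarity of the normalized token sets."""
    ta, tb = normalize(a), normalize(b)
    return len(ta & tb) / len(ta | tb) >= threshold

print(match("12 Main St.", "12 main street"))  # True
```

Normalization makes "St." and "street" land on the same token, so the two spellings of the address compare as identical sets.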

  10. Does semantic redundancy gain result from multiple semantic priming?

    PubMed

    Schröter, Hannes; Bratzke, Daniel; Fiedler, Anja; Birngruber, Teresa

    2015-10-01

    Fiedler, Schröter, and Ulrich (2013) reported faster responses to a single written word when the semantic content of this word (e.g., "elephant") matched both targets (e.g., "animal", "gray") as compared to a single target (e.g., "animal", "brown"). This semantic redundancy gain was explained by statistical facilitation due to a race of independent memory retrieval processes. The present experiment addresses one alternative explanation, namely that semantic redundancy gain results from multiple pre-activation of words that match both targets. In different blocks of trials, participants performed a redundant-targets task and a lexical decision task. The targets of the redundant-targets task served as primes in the lexical decision task. Replicating the findings of Fiedler et al., a semantic redundancy gain was observed in the redundant-targets task. Crucially, however, there was no evidence of a multiple semantic priming effect in the lexical decision task. This result suggests that semantic redundancy gain cannot be explained by multiple pre-activation of words that match both targets. PMID:26342771

  11. Syntactic and semantic processing of Chinese middle sentences: evidence from event-related potentials.

    PubMed

    Zeng, Tao; Mao, Wen; Lu, Qing

    2016-05-25

    Scalp-recorded event-related potentials are known to be sensitive to particular aspects of sentence processing. The N400 component is widely recognized as an effect closely related to lexical-semantic processing. The absence of an N400 effect in participants performing tasks in Indo-European languages has been considered evidence that failed syntactic category processing blocks lexical-semantic integration and that syntactic structure building is a prerequisite of semantic analysis. An event-related potential experiment was designed to investigate whether such syntactic primacy applies equally to Chinese sentence processing. Besides correct middles, sentences with either a single semantic or a single syntactic violation, as well as a double syntactic and semantic anomaly, were used in the present research. Results showed that both the purely semantic and the combined violation induced a broad negativity in the 300-500 ms time window, indicating the independence of lexical-semantic integration. These findings provide solid evidence that lexical-semantic parsing plays a crucial role in Chinese sentence comprehension. PMID:27028353

  12. Hybrid OPC technique using model based and rule-based flows

    NASA Astrophysics Data System (ADS)

    Harb, Mohammed; Abdelghany, Hesham

    2013-04-01

    To transfer an electronic circuit from design to silicon, many stages are involved in between. As technology evolves, design shapes get closer to each other. Since the wavelength of the lithography process has not improved beyond 193 nm, optical interference is a problem that needs to be accounted for by Optical Proximity Correction (OPC) algorithms. In earlier technology nodes, simple OPC was applied to the design based on spatial rules. This is no longer sufficient at recent nodes, since more optical interference occurs as designs scale down aggressively. Model-based OPC now produces more accurate results, but at the cost of increased run time. Electronic Design Automation (EDA) companies compete to offer tools that provide both accuracy and run-time efficiency. In this paper, we show that optimal usage of some of these tools can ensure OPC accuracy with better run time. The hybrid OPC technique uses classic rule-based OPC in a modern fashion, considering the optical parameters instead of spatial metrics only. Combined with conventional model-based OPC, the whole flow shows better results in terms of accuracy and run time.

  13. Overcoming rule-based rigidity and connectionist limitations through massively-parallel case-based reasoning

    NASA Technical Reports Server (NTRS)

    Barnden, John; Srinivas, Kankanahalli

    1990-01-01

    Symbol manipulation as used in traditional Artificial Intelligence has been criticized by neural net researchers for being excessively inflexible and sequential. On the other hand, the application of neural net techniques to the types of high-level cognitive processing studied in traditional artificial intelligence presents major problems as well. A promising way out of this impasse is to build neural net models that accomplish massively parallel case-based reasoning. Case-based reasoning, which has received much attention recently, is essentially the same as analogy-based reasoning, and avoids many of the problems leveled at traditional artificial intelligence. Further problems are avoided by doing many strands of case-based reasoning in parallel, and by implementing the whole system as a neural net. In addition, such a system provides an approach to some aspects of the problems of noise, uncertainty and novelty in reasoning systems. The current neural net system (Conposit), which performs standard rule-based reasoning, is being modified into a massively parallel case-based reasoning version.

  14. Fuzzy Rule-Based Classification System for Assessing Coronary Artery Disease

    PubMed Central

    Mohammadpour, Reza Ali; Abedi, Seyed Mohammad; Bagheri, Somayeh; Ghaemian, Ali

    2015-01-01

    The aim of this study was to determine the accuracy of fuzzy rule-based classification that could noninvasively predict CAD based on a myocardial perfusion scan test and clinical-epidemiological variables. This was a cross-sectional study in which the characteristics, the results of myocardial perfusion scan (MPS), and coronary artery angiography of 115 patients, 62 (53.9%) males, in Mazandaran Heart Center in the north of Iran were collected. We defined membership functions for the medical variables by reviewing the related literature. To improve the classification performance, we used the methods of Ishibuchi et al. and Nozaki et al., adjusting the grade of certainty CFj of each rule. The system includes 144 rules, and the antecedent part of every rule has more than one clause. The coronary artery disease data used in this paper contained 115 samples. The data were classified into four classes, namely, classes 1 (normal), 2 (stenosis in one single vessel), 3 (stenosis in two vessels), and 4 (stenosis in three vessels), which had 39, 35, 17, and 24 subjects, respectively. The accuracy of the fuzzy if-then rule classification was 92.8 percent when the classification result was based on rule selection by an expert, and 91.9 percent when the classification result was obtained according to the equation. To increase the classification rate, we deleted extra rules to reduce the number of fuzzy rules after introducing the membership functions. PMID:26448783
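The core mechanism of an Ishibuchi-style fuzzy rule-based classifier, where the winning rule maximizes the product of antecedent membership and certainty grade CF, can be sketched in a few lines. The membership functions, class labels, and CF values here are invented for illustration; the paper's system uses 144 rules with multi-clause antecedents.

```python
# Hedged sketch of fuzzy if-then classification with certainty grades.
# Two single-clause toy rules over one variable; all numbers invented.
def low(x):  return max(0.0, min(1.0, (120 - x) / 40))
def high(x): return max(0.0, min(1.0, (x - 100) / 40))

# each rule: (antecedent membership function, class label, certainty CF)
RULES = [
    (low,  "normal",   0.9),
    (high, "stenosis", 0.8),
]

def classify(x):
    """Winner-take-all: the class of the rule maximizing mu(x) * CF."""
    scores = [(mu(x) * cf, label) for mu, label, cf in RULES]
    best_score, best_label = max(scores)
    return best_label if best_score > 0 else None

print(classify(150))  # stenosis
print(classify(80))   # normal
```

Adjusting the CF values, as in the Nozaki et al. learning scheme, shifts the decision boundary without touching the membership functions.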

  15. Classification of a set of vectors using self-organizing map- and rule-based technique

    NASA Astrophysics Data System (ADS)

    Ae, Tadashi; Okaniwa, Kaishirou; Nosaka, Kenzaburou

    2005-02-01

    Various objects, such as pictures, music, and texts, exist in our environment. We form a view of these objects by looking, reading, or listening. Our view is deeply connected with our behavior and is very important for understanding it. We form a view of an object and decide the next action (data selection, etc.) based on that view. Such a series of actions constructs a sequence. Therefore, we propose a method that acquires a view as a vector from several words describing the view, and applies the vector to sequence generation. We focus on sequences of data that a user selects from a multimedia database containing pictures, music, movies, etc. These data cannot be stereotyped because each user's view of them differs. Therefore, we represent the structure of the multimedia database by the vector representing the user's view and the stereotyped vector, and acquire sequences containing the structure as elements. Such vectors can be classified by a Self-Organizing Map (SOM). A Hidden Markov Model (HMM) is a method for generating sequences. We therefore use an HMM in which each state corresponds to a representative vector of the user's view, and acquire sequences containing changes of the user's view. We call it the Vector-state Markov Model (VMM). We introduce rough set theory as a rule-based technique, which plays the role of classifying sets of data such as the set "Tour".

  16. Robust Unsupervised Arousal Rating: A Rule-Based Framework with Knowledge-Inspired Vocal Features

    PubMed Central

    Bone, Daniel; Lee, Chi-Chun; Narayanan, Shrikanth

    2015-01-01

    Studies in classifying affect from vocal cues have produced exceptional within-corpus results, especially for arousal (activation or stress); yet cross-corpora affect recognition has only recently garnered attention. An essential requirement of many behavioral studies is affect scoring that generalizes across different social contexts and data conditions. We present a robust, unsupervised (rule-based) method for providing a scale-continuous, bounded arousal rating operating on the vocal signal. The method incorporates just three knowledge-inspired features chosen based on empirical and theoretical evidence. It constructs a speaker’s baseline model for each feature separately, and then computes single-feature arousal scores. Lastly, it advantageously fuses the single-feature arousal scores into a final rating without knowledge of the true affect. The baseline data is preferably labeled as neutral, but some initial evidence is provided to suggest that no labeled data is required in certain cases. The proposed method is compared to a state-of-the-art supervised technique which employs a high-dimensional feature set. The proposed framework achieves highly-competitive performance with additional benefits. The measure is interpretable, scale-continuous as opposed to discrete, and can operate without any affective labeling. An accompanying Matlab tool is made available with the paper. PMID:25705327
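The unsupervised rating idea above, scoring each vocal feature against a speaker baseline and fusing the per-feature scores without affect labels, can be sketched roughly as follows. The three feature names, the z-score squashing, and the plain averaging fusion are assumptions for illustration; the paper's actual fusion is more sophisticated.

```python
# Rough sketch: per-feature baseline scoring plus label-free fusion,
# in the spirit of the abstract. Feature names and fusion are invented.
import statistics

def baseline_score(value, baseline):
    """Deviation of a feature value from the speaker's neutral baseline,
    squashed to a bounded, scale-continuous rating in [0, 1]."""
    mu = statistics.mean(baseline)
    sigma = statistics.pstdev(baseline) or 1.0
    z = (value - mu) / sigma
    return min(1.0, max(0.0, 0.5 + z / 4))

def arousal(frame, baselines):
    """Fuse single-feature scores by simple averaging (an assumption)."""
    scores = [baseline_score(frame[k], baselines[k]) for k in frame]
    return sum(scores) / len(scores)

baselines = {"pitch": [100, 105, 110], "intensity": [60, 62, 64],
             "rate": [4.0, 4.2, 4.4]}
frame = {"pitch": 130, "intensity": 70, "rate": 5.0}
print(arousal(frame, baselines))  # 1.0 (all features far above baseline)
```

Because each feature is scored against the same speaker's own baseline, the rating generalizes across speakers and recording conditions without any affective labels.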

  17. Rule-based fuzzy vector median filters for 3D phase contrast MRI segmentation

    NASA Astrophysics Data System (ADS)

    Sundareswaran, Kartik S.; Frakes, David H.; Yoganathan, Ajit P.

    2008-02-01

    Recent technological advances have contributed to the advent of phase contrast magnetic resonance imaging (PCMRI) as standard practice in clinical environments. In particular, decreased scan times have made using the modality more feasible. PCMRI is now a common tool for flow quantification and for more complex vector field analyses that target the early detection of problematic flow conditions. Segmentation is one component of this type of application that can dramatically impact the accuracy of the final product. Vascular segmentation in general is a long-standing problem that has received significant attention. Segmentation in the context of PCMRI data, however, has been explored less and can benefit from object-based image processing techniques that incorporate fluid-specific information. Here we present a fuzzy rule-based adaptive vector median filtering (FAVMF) algorithm that, in combination with active contour modeling, facilitates high-quality PCMRI segmentation while mitigating the effects of noise. The FAVMF technique was tested on 111 synthetically generated PCMRI slices and on 15 patients with congenital heart disease. The results were compared to other multi-dimensional filters, namely the adaptive vector median filter, the adaptive vector directional filter, and the scalar low-pass filter commonly used in PCMRI applications. FAVMF significantly outperformed the standard filtering methods (p < 0.0001). Two conclusions can be drawn from these results: a) filtering should be performed after vessel segmentation of PCMRI; b) vector-based filtering methods should be used instead of scalar techniques.
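The vector median operation underlying FAVMF can be shown in a minimal form: within a neighborhood of velocity vectors, keep the vector that minimizes the sum of Euclidean distances to all others, which suppresses noisy outliers without averaging components independently as a scalar filter would. The fuzzy rule-based adaptation of the actual FAVMF is omitted from this sketch.

```python
# Sketch of the plain vector median filter (no fuzzy adaptation).
import math

def vector_median(window):
    """window: list of (vx, vy, vz) velocity vectors from a PCMRI
    neighborhood; returns the vector closest to all the others."""
    def total_dist(v):
        return sum(math.dist(v, w) for w in window)
    return min(window, key=total_dist)

# one noisy outlier among coherent flow vectors is rejected
window = [(1.0, 0.0, 0.0), (1.1, 0.1, 0.0), (9.0, 9.0, 9.0)]
print(vector_median(window))
```

Unlike a component-wise scalar filter, the output is always one of the measured vectors, so physically implausible averaged velocities are never introduced.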

  18. Reliability and performance evaluation of systems containing embedded rule-based expert systems

    NASA Technical Reports Server (NTRS)

    Beaton, Robert M.; Adams, Milton B.; Harrison, James V. A.

    1989-01-01

    A method for evaluating the reliability of real-time systems containing embedded rule-based expert systems is proposed and investigated. It is a three-stage technique that addresses the impact of knowledge-base uncertainties on the performance of expert systems. In the first stage, a Markov reliability model of the system is developed which identifies the key performance parameters of the expert system. In the second stage, the evaluation method is used to determine the values of the expert system's key performance parameters. The performance parameters can be evaluated directly by using a probabilistic model of uncertainties in the knowledge base or by using sensitivity analyses. In the third and final stage, the performance parameters of the expert system are combined with performance parameters of other system components and subsystems to evaluate the reliability and performance of the complete system. The evaluation method is demonstrated in the context of a simple expert system used to supervise the performance of an FDI algorithm associated with an aircraft longitudinal flight-control system.
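The first stage can be pictured with a toy two-state Markov reliability model (operational to failed), where the per-step failure probability stands in for a key performance parameter of the embedded expert system, and a sensitivity sweep over that parameter stands in for the second-stage analysis. The numbers and the two-state structure are illustrative assumptions only.

```python
# Toy two-state discrete-time Markov reliability model; the failure
# probability is treated as an expert-system performance parameter.
def reliability(p_fail_per_step, steps):
    """Probability the system remains in the operational state after
    `steps` transitions of the chain."""
    p_ok = 1.0
    for _ in range(steps):
        p_ok *= (1.0 - p_fail_per_step)   # survive one more step
    return p_ok

# sensitivity analysis over the performance parameter
for p in (0.001, 0.01):
    print(p, round(reliability(p, 100), 3))
```

Sweeping the parameter shows how strongly overall system reliability depends on the expert system's per-step performance, which is the point of the sensitivity analysis stage.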

  19. Analysis, Simulation, and Verification of Knowledge-Based, Rule-Based, and Expert Systems

    NASA Technical Reports Server (NTRS)

    Hinchey, Mike; Rash, James; Erickson, John; Gracanin, Denis; Rouff, Chris

    2010-01-01

    Mathematically sound techniques are used to view a knowledge-based system (KBS) as a set of processes executing in parallel and being enabled in response to specific rules being fired. The set of processes can be manipulated, examined, analyzed, and used in a simulation. The tool that embodies this technology may warn developers of errors in their rules, but may also highlight rules (or sets of rules) in the system that are underspecified (or overspecified) and need to be corrected for the KBS to operate as intended. The rules embodied in a KBS specify the allowed situations, events, and/or results of the system they describe. In that sense, they provide a very abstract specification of a system. The system is implemented through the combination of the system specification together with an appropriate inference engine, independent of the algorithm used in that inference engine. Viewing the rule base as a major component of the specification, and choosing an appropriate specification notation to represent it, reveals how additional power can be derived from an approach to the knowledge-base system that involves analysis, simulation, and verification. This innovative approach requires no special knowledge of the rules, and allows a general approach where standardized analysis, verification, simulation, and model checking techniques can be applied to the KBS.

  20. Space communications scheduler: A rule-based approach to adaptive deadline scheduling

    NASA Technical Reports Server (NTRS)

    Straguzzi, Nicholas

    1990-01-01

    Job scheduling is a deceptively complex subfield of computer science. The highly combinatorial nature of the problem, which is NP-complete in nearly all cases, requires a scheduling program to intelligently traverse an immense search tree to create the best possible schedule in a minimal amount of time. In addition, the program must continually make adjustments to the initial schedule when faced with last-minute user requests, cancellations, unexpected device failures, etc. A good scheduler must be quick, flexible, and efficient, even at the expense of generating slightly less-than-optimal schedules. The Space Communications Scheduler (SCS) is an intelligent rule-based scheduling system. SCS is an adaptive deadline scheduler which allocates modular communications resources to meet an ordered set of user-specified job requests on board the NASA Space Station. SCS uses pattern-matching techniques to detect potential conflicts through algorithmic and heuristic means. As a result, the system generates and maintains high-density schedules without relying heavily on backtracking or blind search techniques. SCS is suitable for many common real-world applications.
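The flavor of adaptive deadline scheduling without backtracking can be conveyed with a greedy earliest-deadline-first sketch over a single communications resource: requests are taken in deadline order, and any request that cannot finish in time is rejected rather than triggering a search. The job fields and the single-resource assumption are simplifications, not the SCS design.

```python
# Hedged sketch: greedy earliest-deadline-first scheduling with
# conflict rejection, in the spirit of an adaptive deadline scheduler.
def schedule(jobs):
    """jobs: list of (name, duration, deadline) on one shared resource.
    Returns (accepted names in execution order, rejected names)."""
    accepted, rejected, t = [], [], 0
    for name, duration, deadline in sorted(jobs, key=lambda j: j[2]):
        if t + duration <= deadline:
            t += duration               # job fits before its deadline
            accepted.append(name)
        else:
            rejected.append(name)       # conflict: bump the job, no search

    return accepted, rejected

jobs = [("telemetry", 3, 4), ("uplink", 2, 5), ("payload", 4, 6)]
print(schedule(jobs))  # (['telemetry', 'uplink'], ['payload'])
```

Because each request is placed or rejected in one pass, last-minute additions can be handled by re-running the pass over the updated request set rather than by backtracking.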