Science.gov

Sample records for rule-based semantic integration

  1. Semantic classification of diseases in discharge summaries using a context-aware rule-based classifier.

    PubMed

    Solt, Illés; Tikk, Domonkos; Gál, Viktor; Kardkovács, Zsolt T

    2009-01-01

    OBJECTIVE Automated, disease-specific classification of textual clinical discharge summaries is of great importance in the life sciences, as it helps physicians conduct medical studies by providing statistically relevant data for analysis. This can be further facilitated if, when labeling discharge summaries, semantic labels are also extracted from the text, such as whether a given disease is present, absent, or questionable in a patient, or is unmentioned in the document. The authors present a classification technique that successfully solves this semantic classification task. DESIGN The authors introduce a context-aware rule-based semantic classification technique for use on clinical discharge summaries. The classification is performed in successive steps. First, misleading parts are removed from the text; the text is then partitioned into positive, negative, and uncertain context segments, and a sequence of binary classifiers is applied to assign the appropriate semantic labels. MEASUREMENT For evaluation the authors used the documents of the i2b2 Obesity Challenge and adopted its evaluation measures, F1-macro and F1-micro. RESULTS On the two subtasks of the Obesity Challenge (textual and intuitive classification) the system performed very well, achieving an F1-macro of 0.80 on the textual task and 0.67 on the intuitive task, and placing second on the textual subtask and first on the intuitive subtask of the challenge. CONCLUSIONS The authors show that a simple rule-based classifier can tackle the semantic classification task more successfully than machine learning techniques when the training data are limited and some semantic labels are very sparse. PMID:19390101
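
    The cue-based context labeling that such a classifier performs can be sketched as follows. The cue lists and rules here are illustrative stand-ins only, not the authors' actual system, which partitions whole documents into positive, negative, and uncertain segments before applying binary classifiers.

```python
# Hypothetical cue lists; the paper's actual rule set is far more extensive.
NEGATION_CUES = ["no evidence of", "denies", "negative for", "without"]
UNCERTAINTY_CUES = ["possible", "questionable", "rule out", "suspected"]

def classify_mention(sentence: str, disease: str) -> str:
    """Assign one of the four semantic labels to a disease mention."""
    s = sentence.lower()
    if disease.lower() not in s:
        return "unmentioned"
    if any(cue in s for cue in NEGATION_CUES):
        return "absent"
    if any(cue in s for cue in UNCERTAINTY_CUES):
        return "questionable"
    return "present"

print(classify_mention("Patient denies chest pain and asthma.", "asthma"))  # absent
print(classify_mention("Possible early diabetes mellitus.", "diabetes"))    # questionable
```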

  2. Annotation of rule-based models with formal semantics to enable creation, analysis, reuse and visualization

    PubMed Central

    Misirli, Goksel; Cavaliere, Matteo; Waites, William; Pocock, Matthew; Madsen, Curtis; Gilfellon, Owen; Honorato-Zimmer, Ricardo; Zuliani, Paolo; Danos, Vincent; Wipat, Anil

    2016-01-01

    Motivation: Biological systems are complex and challenging to model and therefore model reuse is highly desirable. To promote model reuse, models should include both information about the specifics of simulations and the underlying biology in the form of metadata. The availability of computationally tractable metadata is especially important for the effective automated interpretation and processing of models. Metadata are typically represented as machine-readable annotations which enhance programmatic access to information about models. Rule-based languages have emerged as a modelling framework to represent the complexity of biological systems. Annotation approaches have been widely used for reaction-based formalisms such as SBML. However, rule-based languages still lack a rich annotation framework to add semantic information, such as machine-readable descriptions, to the components of a model. Results: We present an annotation framework and guidelines for annotating rule-based models, encoded in the commonly used Kappa and BioNetGen languages. We adapt widely adopted annotation approaches to rule-based models. We initially propose a syntax to store machine-readable annotations and describe a mapping between rule-based modelling entities, such as agents and rules, and their annotations. We then describe an ontology to both annotate these models and capture the information contained therein, and demonstrate annotating these models using examples. Finally, we present a proof of concept tool for extracting annotations from a model that can be queried and analyzed in a uniform way. The uniform representation of the annotations can be used to facilitate the creation, analysis, reuse and visualization of rule-based models. Although the examples are given using specific implementations, the proposed techniques can be applied to rule-based models in general. Availability and implementation: The annotation ontology for rule-based models can be found at http

  3. An HL7-CDA wrapper for facilitating semantic interoperability to rule-based Clinical Decision Support Systems.

    PubMed

    Sáez, Carlos; Bresó, Adrián; Vicente, Javier; Robles, Montserrat; García-Gómez, Juan Miguel

    2013-03-01

    The success of Clinical Decision Support Systems (CDSS) greatly depends on their capability of being integrated into Health Information Systems (HIS). Several proposals have been published to date to permit a CDSS to gather patient data from an HIS. Some base the CDSS data input on the HL7 reference model; however, they are tailored to specific CDSS or clinical-guideline technologies, or do not focus on standardizing the CDSS's resultant knowledge. We propose a solution for facilitating semantic interoperability for rule-based CDSS, focusing on standardized input and output documents conforming to an HL7-CDA wrapper. We define the HL7-CDA restrictions in an HL7-CDA implementation guide. Patient data and rule inference results are mapped to and from the CDSS, respectively, by means of a binding method based on an XML binding file. As an independent clinical document, the results of a CDSS can have clinical and legal validity. The proposed solution is being applied in a CDSS that provides patient-specific recommendations for the care management of outpatients with diabetes mellitus. PMID:23199936
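
    The binding-file idea can be illustrated with a toy document. The element names, variables, and values below are hypothetical; real HL7-CDA documents use namespaced, coded entries, and the paper's binding file maps both inputs and inference results.

```python
import xml.etree.ElementTree as ET

# Hypothetical CDA-like fragment standing in for a real HL7-CDA document.
doc = """<observationSet>
  <observation code="glucose" value="182" unit="mg/dL"/>
  <observation code="hba1c" value="8.1" unit="%"/>
</observationSet>"""

# Binding table: rule-engine variable -> XPath-style location in the document.
BINDINGS = {
    "blood_glucose": ".//observation[@code='glucose']",
    "hba1c": ".//observation[@code='hba1c']",
}

def bind_inputs(xml_text: str) -> dict:
    """Extract rule inputs from a clinical document via the binding table."""
    root = ET.fromstring(xml_text)
    inputs = {}
    for var, path in BINDINGS.items():
        node = root.find(path)
        if node is not None:
            inputs[var] = float(node.get("value"))
    return inputs

print(bind_inputs(doc))  # {'blood_glucose': 182.0, 'hba1c': 8.1}
```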

  4. Automatic extraction of semantic relations between medical entities: a rule based approach

    PubMed Central

    2011-01-01

    Background Information extraction is a complex task which is necessary to develop high-precision information retrieval tools. In this paper, we present the platform MeTAE (Medical Texts Annotation and Exploration). MeTAE allows (i) the extraction and annotation of medical entities and relationships from medical texts and (ii) the semantic exploration of the produced RDF annotations. Results Our annotation approach relies on linguistic patterns and domain knowledge and consists of two steps: (i) recognition of medical entities and (ii) identification of the correct semantic relation between each pair of entities. The first step is achieved by an enhanced use of MetaMap, which improves the precision obtained by MetaMap by 19.59% in our evaluation. The second step relies on linguistic patterns which are built semi-automatically from a corpus selected according to semantic criteria. We evaluate our system's ability to identify medical entities of 16 types. We also evaluate the extraction of treatment relations between a treatment (e.g. medication) and a problem (e.g. disease), obtaining 75.72% precision and 60.46% recall. Conclusions According to our experiments, using an external sentence segmenter and noun phrase chunker may improve the precision of MetaMap-based medical entity recognition. Our pattern-based relation extraction method obtains good precision and recall with respect to related work. A more precise comparison with related approaches remains difficult, however, given the differences in corpora and in the exact nature of the extracted relations. The selection of MEDLINE articles through queries related to known drug-disease pairs enabled us to obtain a more focused corpus of relevant examples of treatment relations than a more general MEDLINE query. PMID:22166723
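
    A minimal sketch of pattern-based treatment-relation extraction, assuming entities have already been recognized in the text. The patterns below are invented examples, not MeTAE's actual semi-automatically built patterns.

```python
import re

# Hypothetical linguistic patterns linking a treatment to a problem.
TREATMENT_PATTERNS = [
    re.compile(r"(?P<treatment>\w+) (?:is used to treat|is indicated for) (?P<problem>[\w ]+)"),
    re.compile(r"(?P<problem>[\w ]+) (?:is treated with|responds to) (?P<treatment>\w+)"),
]

def extract_treats(sentence: str):
    """Return (treatment, problem) pairs matched by any pattern."""
    pairs = []
    for pat in TREATMENT_PATTERNS:
        for m in pat.finditer(sentence.lower()):
            pairs.append((m.group("treatment").strip(), m.group("problem").strip()))
    return pairs

print(extract_treats("Metformin is used to treat type 2 diabetes"))
```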

  5. Rule-based and information-integration category learning in normal aging.

    PubMed

    Maddox, W Todd; Pacheco, Jennifer; Reeves, Maia; Zhu, Bo; Schnyer, David M

    2010-08-01

    The basal ganglia and prefrontal cortex play critical roles in category learning, and both regions show age-related structural and functional declines. The current study examined rule-based and information-integration category learning in groups of older and younger adults. Rule-based learning is thought to involve explicit, frontally mediated processes, whereas information-integration learning is thought to involve implicit, striatally mediated processes. As a group, older adults showed both rule-based and information-integration deficits. A series of models was applied to provide insight into the type of strategy used to solve the task. Interestingly, when the analyses focused only on participants who used the task-appropriate strategy in the final block of trials, the age-related rule-based deficit disappeared whereas the information-integration deficit remained. For this group of individuals, the final-block information-integration deficit was due to less consistent application of the task-appropriate strategy by older adults, who shifted from an explicit hypothesis-testing strategy to the task-appropriate strategy later in learning. In addition, use of the task-appropriate strategy was associated with less interference and better inhibitory control for both rule-based and information-integration learning, whereas use of the task-appropriate strategy was associated with greater working memory and better new verbal learning only for the rule-based task. These results suggest that normal aging impacts both forms of category learning and that there are important similarities and differences in the explanatory locus of these deficits. The data also support a two-component model of information-integration category learning that includes a striatal component that mediates procedural-based learning and a prefrontal cortical component that mediates the transition from hypothesis-testing to procedural-based strategies.

  6. A Rule-Based Expert System as an Integrated Resource in an Outpatient Clinic Information System

    PubMed Central

    Wilton, Richard

    1990-01-01

    A rule-based expert system can be integrated in a useful way into a microcomputer-based clinical information system by using symmetric data-communication methods and intuitive user-interface design. To users of the computer system, the expert system appears as one of several distributed information resources, among which are database management systems and a gateway to a mainframe computing system. Transparent access to the expert system is based on the use of both commercial and public-domain data-communication standards.

  7. The Recall of Verbal Material Accompanying Semantically Well-Integrated and Semantically Poorly-Integrated Sentences.

    ERIC Educational Resources Information Center

    Rosenberg, Sheldon

    This study was designed to test the hypothesis that the recall of verbal material (critical material) accompanying semantically well integrated (SWI) sentences will be superior to the recall of verbal material accompanying semantically poorly integrated (SPI) sentences. This hypothesis was based upon the conclusion derived from previous research…

  8. Project Integration Architecture: Formulation of Semantic Parameters

    NASA Technical Reports Server (NTRS)

    Jones, William Henry

    2005-01-01

    One of several key elements of the Project Integration Architecture (PIA) is the intention to formulate parameter objects which convey meaningful semantic information. In so doing, it is expected that a level of automation can be achieved in the consumption of information content by PIA-consuming clients outside the programmatic boundary of a presenting PIA-wrapped application. This paper discusses the steps that have been recently taken in formulating such semantically-meaningful parameters.

  9. Semantic web for integrated network analysis in biomedicine.

    PubMed

    Chen, Huajun; Ding, Li; Wu, Zhaohui; Yu, Tong; Dhanapalan, Lavanya; Chen, Jake Y

    2009-03-01

    The Semantic Web technology enables integration of heterogeneous data on the World Wide Web by making the semantics of data explicit through formal ontologies. In this article, we survey the feasibility and state of the art of utilizing Semantic Web technology to represent, integrate and analyze the knowledge in various biomedical networks. We introduce a new conceptual framework, semantic graph mining, to enable researchers to integrate graph mining with ontology reasoning in network data analysis. Through four case studies, we demonstrate how semantic graph mining can be applied to the analysis of disease-causal genes, Gene Ontology category cross-talk, drug efficacy, and herb-drug interactions.

  10. Semantic Web meets Integrative Biology: a survey.

    PubMed

    Chen, Huajun; Yu, Tong; Chen, Jake Y

    2013-01-01

    Integrative Biology (IB) uses experimental or computational quantitative technologies to characterize biological systems at the molecular, cellular, tissue and population levels. IB typically involves the integration of data, knowledge and capabilities across disciplinary boundaries in order to solve complex problems. We identify a series of bioinformatics problems posed by interdisciplinary integration: (i) data integration that interconnects structured data across related biomedical domains; (ii) ontology integration that brings jargon, terminologies and taxonomies from various disciplines into a unified network of ontologies; (iii) knowledge integration that integrates disparate knowledge elements from multiple sources; (iv) service integration that builds applications out of services provided by different vendors. We argue that IB can benefit significantly from the integration solutions enabled by Semantic Web (SW) technologies. The SW enables scientists to share content beyond the boundaries of applications and websites, resulting in a web of data that is meaningful and understandable to any computer. In this review, we provide insight into how SW technologies can be used to build open, standardized and interoperable solutions for interdisciplinary integration on a global basis. We present a rich set of case studies in systems biology, integrative neuroscience, bio-pharmaceutics and translational medicine to highlight the technical features and benefits of SW applications in IB.
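
    The data- and ontology-integration problems can be illustrated with a toy triple store. The identifiers and sameAs links below are hand-picked examples; a real SW solution would use RDF, OWL, and SPARQL rather than Python sets.

```python
# Toy triple stores from two sources, using source-local identifiers.
source_a = {("hgnc:TP53", "associated_with", "omim:151623")}
source_b = {("drugbank:DB00997", "targets", "uniprot:P04637")}

# sameAs-style links reconcile identifiers across sources (hand-curated here).
same_as = {"uniprot:P04637": "hgnc:TP53"}

def canonical(term):
    return same_as.get(term, term)

def integrate(*stores):
    """Merge triple stores, rewriting identifiers to canonical form."""
    merged = set()
    for store in stores:
        for s, p, o in store:
            merged.add((canonical(s), p, canonical(o)))
    return merged

graph = integrate(source_a, source_b)
# Cross-source query: which drugs target a gene that has a disease link?
hits = [(s, o) for (s, p, o) in graph if p == "targets"]
print(hits)  # [('drugbank:DB00997', 'hgnc:TP53')]
```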

  11. A Comparison of the neural correlates that underlie rule-based and information-integration category learning.

    PubMed

    Carpenter, Kathryn L; Wills, Andy J; Benattayallah, Abdelmalek; Milton, Fraser

    2016-10-01

    The influential competition between verbal and implicit systems (COVIS) model proposes that category learning is driven by two competing neural systems-an explicit, verbal, system, and a procedural-based, implicit, system. In the current fMRI study, participants learned either a conjunctive, rule-based (RB), category structure that is believed to engage the explicit system, or an information-integration category structure that is thought to preferentially recruit the implicit system. The RB and information-integration category structures were matched for participant error rate, the number of relevant stimulus dimensions, and category separation. Under these conditions, considerable overlap in brain activation, including the prefrontal cortex, basal ganglia, and the hippocampus, was found between the RB and information-integration category structures. Contrary to the predictions of COVIS, the medial temporal lobes and in particular the hippocampus, key regions for explicit memory, were found to be more active in the information-integration condition than in the RB condition. No regions were more activated in RB than information-integration category learning. The implications of these results for theories of category learning are discussed. Hum Brain Mapp 37:3557-3574, 2016. © 2016 Wiley Periodicals, Inc. PMID:27199090
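
    The two hypothesized strategies are often formalized as decision-bound models. The sketch below is a generic illustration with invented stimulus dimensions and parameters, not the authors' fitted models.

```python
def rb_rule(orientation, spatial_freq, criterion=45.0):
    """Rule-based: a verbalizable criterion on a single stimulus dimension."""
    return "A" if orientation < criterion else "B"

def ii_rule(orientation, spatial_freq, w=0.5, bias=50.0):
    """Information-integration: both dimensions combined pre-decisionally,
    yielding a linear decision bound that is hard to verbalize."""
    return "A" if w * orientation + (1 - w) * spatial_freq < bias else "B"

stim = (30.0, 80.0)  # (orientation, spatial frequency) in arbitrary units
print(rb_rule(*stim), ii_rule(*stim))  # the two strategies disagree: A B
```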

  13. Integrated Estimation of Seismic Physical Vulnerability of Tehran Using Rule Based Granular Computing

    NASA Astrophysics Data System (ADS)

    Sheikhian, H.; Delavar, M. R.; Stein, A.

    2015-08-01

    Tehran, the capital of Iran, is surrounded by the North Tehran fault, the Mosha fault and the Rey fault. This exposes the city to possibly huge earthquakes followed by dramatic human loss and physical damage, particularly as it contains a large number of non-standard constructions and aged buildings. Estimation of the likely consequences of an earthquake facilitates mitigation of these losses. Mitigation of earthquake fatalities may be achieved by promoting awareness of earthquake vulnerability and implementation of seismic vulnerability reduction measures. In this research, granular computing using generality and absolute support for rule extraction is applied, with coverage and entropy used for rule prioritization. These rules are combined to form a granule tree that shows the order and relation of the extracted rules. In this way the seismic physical vulnerability is assessed, integrating the effects of the three major known faults. Effective parameters considered in the physical seismic vulnerability assessment are slope, seismic intensity, and the height and age of the buildings. Experts were asked to predict seismic vulnerability for 100 randomly selected samples among more than 3000 statistical units in Tehran. The integrated experts' points of view serve as input into granular computing. Non-redundant covering rules preserve the consistency of the model, which resulted in 84% accuracy in the seismic vulnerability assessment based on validation of the predicted test data against the expected vulnerability degrees. The study concluded that granular computing is a useful method to assess the effects of earthquakes in an earthquake-prone area.
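
    Rule statistics of the kind used to extract and prioritize granules can be computed as follows. The samples and attribute names are invented, and the paper's actual measures (generality, absolute support, coverage, entropy) differ in detail from the support/confidence pair sketched here.

```python
# Hypothetical labelled samples: (slope, building_age, vulnerability).
samples = [
    ("steep", "old", "high"), ("steep", "old", "high"),
    ("steep", "new", "medium"), ("flat", "old", "medium"),
    ("flat", "new", "low"), ("flat", "new", "low"),
]

def rule_stats(condition, decision):
    """Support = P(condition and decision); confidence = P(decision | condition)."""
    covered = [s for s in samples if condition(s)]
    hits = [s for s in covered if s[2] == decision]
    support = len(hits) / len(samples)
    confidence = len(hits) / len(covered) if covered else 0.0
    return support, confidence

# Candidate rule: IF slope is steep AND building is old THEN vulnerability is high.
sup, cf = rule_stats(lambda s: s[0] == "steep" and s[1] == "old", "high")
print(round(sup, 3), cf)  # 0.333 1.0
```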

  14. Rule-based integration of RNA-Seq analyses tools for identification of novel transcripts.

    PubMed

    Inamdar, Harshal; Datta, Avik; Manjari, K Sunitha; Joshi, Rajendra

    2014-10-01

    Recent evidence suggests that a substantially larger portion of the genome is transcribed than was anticipated, giving rise to a large number of unknown or novel transcripts. Identification of novel transcripts can provide key insights into important cellular functions as well as the molecular mechanisms underlying complex diseases like cancer. RNA-Seq has emerged as a powerful tool to detect novel transcripts that previous profiling techniques failed to identify. A number of tools enable the identification of novel transcripts at different levels. Read mappers such as TopHat, MapSplice, and SOAPsplice predict novel junctions, which are indicators of novel transcripts; Cufflinks assembles novel transcripts based on alignment information; and Oases performs de novo construction of transcripts. A common limitation of all these tools is the prediction of a sizable number of spurious or false positive (FP) novel transcripts. An approach is proposed that integrates information from all of the above sources and simultaneously scrutinizes FPs to correctly identify authentic novel transcripts with high confidence. PMID:25245144
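
    The core integration idea, suppressing false positives by requiring agreement among tools, can be sketched with junction call sets. The coordinates below are invented, and the paper's rules go beyond simple vote counting.

```python
from collections import Counter

# Hypothetical novel-junction calls from three tools, as (chrom, start, end).
tophat = {("chr1", 1000, 2000), ("chr1", 5000, 6000), ("chr2", 300, 900)}
mapsplice = {("chr1", 1000, 2000), ("chr2", 300, 900), ("chr3", 10, 50)}
soapsplice = {("chr1", 1000, 2000), ("chr2", 300, 900)}

def consensus_junctions(call_sets, min_votes=2):
    """Keep junctions reported by at least min_votes tools to suppress FPs."""
    votes = Counter(j for calls in call_sets for j in calls)
    return {j for j, n in votes.items() if n >= min_votes}

kept = consensus_junctions([tophat, mapsplice, soapsplice])
print(sorted(kept))  # [('chr1', 1000, 2000), ('chr2', 300, 900)]
```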

  15. Semantic search integration to climate data

    SciTech Connect

    Devarakonda, Ranjeet; Palanisamy, Giri; Pouchard, Line Catherine; Shrestha, Biva

    2014-01-01

    In this paper we present how research projects at Oak Ridge National Laboratory are using Semantic Search capabilities to help scientists perform their research. We will discuss how the Mercury metadata search system, with the help of the semantic search capability, is being used to find, retrieve, and link climate change data. DOI: 10.1109/CTS.2014.6867639

  16. Differential impact of relevant and irrelevant dimension primes on rule-based and information-integration category learning.

    PubMed

    Grimm, Lisa R; Maddox, W Todd

    2013-11-01

    Research has identified multiple category-learning systems with each being "tuned" for learning categories with different task demands and each governed by different neurobiological systems. Rule-based (RB) classification involves testing verbalizable rules for category membership while information-integration (II) classification requires the implicit learning of stimulus-response mappings. In the first study to directly test rule priming with RB and II category learning, we investigated the influence of the availability of information presented at the beginning of the task. Participants viewed lines that varied in length, orientation, and position on the screen, and were primed to focus on stimulus dimensions that were relevant or irrelevant to the correct classification rule. In Experiment 1, we used an RB category structure, and in Experiment 2, we used an II category structure. Accuracy and model-based analyses suggested that a focus on relevant dimensions improves RB task performance later in learning while a focus on an irrelevant dimension improves II task performance early in learning.

  17. Category Number Impacts Rule-Based "and" Information-Integration Category Learning: A Reassessment of Evidence for Dissociable Category-Learning Systems

    ERIC Educational Resources Information Center

    Stanton, Roger D.; Nosofsky, Robert M.

    2013-01-01

    Researchers have proposed that an explicit reasoning system is responsible for learning rule-based category structures and that a separate implicit, procedural-learning system is responsible for learning information-integration category structures. As evidence for this multiple-system hypothesis, researchers report a dissociation based on…

  18. Semantic integration for mapping the underworld

    NASA Astrophysics Data System (ADS)

    Fu, Gaihua; Cohn, Anthony G.

    2008-10-01

    Utility infrastructure is vital to the daily life of modern society. As the vast majority of urban utility assets are buried underneath public roads, the need to install or repair utility assets often requires opening the ground amid busy traffic. Unfortunately, at present most excavation works are carried out without knowing exactly what is where, which causes far more street breakings than necessary. This research studies how maximum benefit can be gained from the existing knowledge of buried assets. The key challenge is that utility data are heterogeneous, a heterogeneity that arises from different domain perceptions and varying data-modelling practices. This research investigates factors which prevent utility knowledge from being fully exploited and suggests that integration techniques can be applied to reconcile semantic heterogeneity within the utility domain. In this paper we discuss the feasibility of a common utility ontology to describe underground assets, and present techniques for constructing a basic utility ontology in the form of a thesaurus. The paper also demonstrates how the utility thesaurus developed is employed as a shared ontology for mapping utility data. Experiments have been performed to evaluate the proposed techniques, and feedback from industrial partners is encouraging, showing that the techniques work effectively with real-world utility data.
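
    Thesaurus-based reconciliation of vendor vocabularies can be sketched as a term-rewriting step. The terms below are invented examples, not the project's actual utility thesaurus.

```python
# Toy shared thesaurus: vendor-specific terms -> preferred utility-domain term.
THESAURUS = {
    "water main": "water pipe",
    "gas main": "gas pipe",
    "duct": "cable duct",
    "conduit": "cable duct",
}

def reconcile(record):
    """Rewrite a utility record's asset type to the shared preferred term."""
    term = record["type"].lower()
    return {**record, "type": THESAURUS.get(term, term)}

a = reconcile({"owner": "Water Co", "type": "Water main", "depth_m": 1.2})
b = reconcile({"owner": "Telecom Co", "type": "conduit", "depth_m": 0.6})
print(a["type"], b["type"])  # water pipe cable duct
```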

  19. The Balance-Scale Task Revisited: A Comparison of Statistical Models for Rule-Based and Information-Integration Theories of Proportional Reasoning.

    PubMed

    Hofman, Abe D; Visser, Ingmar; Jansen, Brenda R J; van der Maas, Han L J

    2015-01-01

    We propose and test three statistical models for the analysis of children's responses to the balance scale task, a seminal task to study proportional reasoning. We use a latent class modelling approach to formulate a rule-based latent class model (RB LCM) following from a rule-based perspective on proportional reasoning and a new statistical model, the Weighted Sum Model, following from an information-integration approach. Moreover, a hybrid LCM using item covariates is proposed, combining aspects of both a rule-based and information-integration perspective. These models are applied to two different datasets, a standard paper-and-pencil test dataset (N = 779), and a dataset collected within an online learning environment that included direct feedback, time-pressure, and a reward system (N = 808). For the paper-and-pencil dataset the RB LCM resulted in the best fit, whereas for the online dataset the hybrid LCM provided the best fit. The standard paper-and-pencil dataset yielded more evidence for distinct solution rules than the online data set in which quantitative item characteristics are more prominent in determining responses. These results shed new light on the discussion on sequential rule-based and information-integration perspectives of cognitive development. PMID:26505905
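
    The contrast between the two perspectives can be illustrated with toy response rules for the balance-scale task. The additive model below is a simplified stand-in for the paper's Weighted Sum Model, and Siegler's Rule I (judging by weight alone) stands in for the rule-based account.

```python
def rule_I(w_left, d_left, w_right, d_right):
    """Siegler's Rule I: consider weight only (a rule-based account)."""
    if w_left != w_right:
        return "left" if w_left > w_right else "right"
    return "balance"

def weighted_sum(w_left, d_left, w_right, d_right, a=1.0, b=1.0):
    """Additive integration of weight and distance differences; the sign
    of the combined score predicts which side goes down."""
    score = a * (w_left - w_right) + b * (d_left - d_right)
    return "left" if score > 0 else "right" if score < 0 else "balance"

# Conflict trial: more weight on the left, more distance on the right.
print(rule_I(3, 2, 2, 4), weighted_sum(3, 2, 2, 4))  # left right
```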

  21. Selective Audiovisual Semantic Integration Enabled by Feature-Selective Attention.

    PubMed

    Li, Yuanqing; Long, Jinyi; Huang, Biao; Yu, Tianyou; Wu, Wei; Li, Peijun; Fang, Fang; Sun, Pei

    2016-01-13

    An audiovisual object may contain multiple semantic features, such as the gender and emotional features of the speaker. Feature-selective attention and audiovisual semantic integration are two brain functions involved in the recognition of audiovisual objects. Humans often selectively attend to one or several features while ignoring the other features of an audiovisual object. Meanwhile, the human brain integrates semantic information from the visual and auditory modalities. However, how these two brain functions correlate with each other remains to be elucidated. In this functional magnetic resonance imaging (fMRI) study, we explored the neural mechanism by which feature-selective attention modulates audiovisual semantic integration. During the fMRI experiment, the subjects were presented with visual-only, auditory-only, or audiovisual dynamic facial stimuli and performed several feature-selective attention tasks. Our results revealed that a distributed set of areas, including heteromodal areas and brain areas encoding attended features, may be involved in audiovisual semantic integration. Through feature-selective attention, the human brain may selectively integrate audiovisual semantic information from attended features by enhancing functional connectivity and thus regulating information flows from heteromodal areas to brain areas encoding the attended features.

  22. Two theories of consciousness: Semantic pointer competition vs. information integration.

    PubMed

    Thagard, Paul; Stewart, Terrence C

    2014-11-01

    Consciousness results from three mechanisms: representation by firing patterns in neural populations, binding of representations into more complex representations called semantic pointers, and competition among semantic pointers to capture the most important aspects of an organism's current state. We contrast the semantic pointer competition (SPC) theory of consciousness with the hypothesis that consciousness is the capacity of a system to integrate information (IIT). We describe computer simulations to show that SPC surpasses IIT in providing better explanations of key aspects of consciousness: qualitative features, onset and cessation, shifts in experiences, differences in kinds across different organisms, unity and diversity, and storage and retrieval.

  23. Chemical Entity Semantic Specification: Knowledge representation for efficient semantic cheminformatics and facile data integration

    PubMed Central

    2011-01-01

    Background Over the past several centuries, chemistry has permeated virtually every facet of human lifestyle, enriching fields as diverse as medicine, agriculture, manufacturing, warfare, and electronics, among numerous others. Unfortunately, application-specific, incompatible chemical information formats and representation strategies have emerged as a result of such diverse adoption of chemistry. Although a number of efforts have been dedicated to unifying the computational representation of chemical information, disparities between the various chemical databases still persist and stand in the way of cross-domain, interdisciplinary investigations. Through a common syntax and formal semantics, Semantic Web technology offers the ability to accurately represent, integrate, reason about and query across diverse chemical information. Results Here we specify and implement the Chemical Entity Semantic Specification (CHESS) for the representation of polyatomic chemical entities, their substructures, bonds, atoms, and reactions using Semantic Web technologies. CHESS provides means to capture aspects of their corresponding chemical descriptors, connectivity, functional composition, and geometric structure while specifying mechanisms for data provenance. We demonstrate that using our readily extensible specification, it is possible to efficiently integrate multiple disparate chemical data sources, while retaining appropriate correspondence of chemical descriptors, with very little additional effort. We demonstrate the impact of some of our representational decisions on the performance of chemically-aware knowledgebase searching and rudimentary reaction candidate selection. Finally, we provide access to the tools necessary to carry out chemical entity encoding in CHESS, along with a sample knowledgebase. 
Conclusions By harnessing the power of Semantic Web technologies with CHESS, it is possible to provide a means of facile cross-domain chemical knowledge integration with full
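The representational move described above — making a chemical entity's atoms and bonds individually addressable resources — can be sketched with plain subject-predicate-object triples. A minimal illustration, assuming invented "chess:" predicate names rather than the actual CHESS vocabulary:

```python
# Sketch: a polyatomic entity (water) with its atoms and bonds represented as
# RDF-style triples, each part individually addressable. The "chess:"
# predicates are illustrative placeholders, not the real CHESS vocabulary.
triples = {
    ("mol:water", "chess:hasAtom", "atom:O1"),
    ("mol:water", "chess:hasAtom", "atom:H1"),
    ("mol:water", "chess:hasAtom", "atom:H2"),
    ("atom:O1", "chess:element", "element:O"),
    ("atom:H1", "chess:element", "element:H"),
    ("atom:H2", "chess:element", "element:H"),
    ("mol:water", "chess:hasBond", "bond:b1"),
    ("bond:b1", "chess:connects", "atom:O1"),
    ("bond:b1", "chess:connects", "atom:H1"),
}

def match(s=None, p=None, o=None):
    """Triple-pattern query (None is a wildcard) -- the primitive that
    SPARQL-style querying over an integrated knowledgebase builds on."""
    return [t for t in triples
            if s in (None, t[0]) and p in (None, t[1]) and o in (None, t[2])]

print(len(match("mol:water", "chess:hasAtom", None)))  # 3 atoms
```

Because every atom and bond is its own resource, triples from different data sources about the same entities can simply be unioned into one queryable set, which is the integration benefit the abstract claims.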

  4. Specification and Enforcement of Semantic Integrity Constraints in Microsoft Access

    ERIC Educational Resources Information Center

    Dadashzadeh, Mohammad

    2007-01-01

    Semantic integrity constraints are business-specific rules that limit the permissible values in a database. For example, a university rule dictating that an "incomplete" grade cannot be changed to an A constrains the possible states of the database. To maintain database integrity, business rules should be identified in the course of database…
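The grade-change rule mentioned in the abstract can be made concrete as a database-level constraint. A minimal sketch using a SQLite trigger (Access itself would use validation rules or VBA; the table and names here are invented for illustration):

```python
# Sketch: enforce "an Incomplete grade cannot be changed to an A" at the
# database level, so no application code path can violate it.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE enrollment (student TEXT, course TEXT, grade TEXT)")
con.execute("""
    CREATE TRIGGER no_incomplete_to_a
    BEFORE UPDATE OF grade ON enrollment
    WHEN OLD.grade = 'I' AND NEW.grade = 'A'
    BEGIN
        SELECT RAISE(ABORT, 'incomplete grade cannot be changed to an A');
    END
""")
con.execute("INSERT INTO enrollment VALUES ('jane', 'CS101', 'I')")

try:
    con.execute("UPDATE enrollment SET grade = 'A' WHERE student = 'jane'")
except sqlite3.IntegrityError as e:
    print(e)  # incomplete grade cannot be changed to an A
```

The key design point is the one the abstract makes: the rule constrains permissible *state transitions* of the database, not just permissible values, so it must be checked on update rather than by a simple CHECK constraint.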

  5. Ontology Alignment Architecture for Semantic Sensor Web Integration

    PubMed Central

    Fernandez, Susel; Marsa-Maestre, Ivan; Velasco, Juan R.; Alarcos, Bernardo

    2013-01-01

    Sensor networks have become very popular for data acquisition and processing in fields as diverse as industry, medicine, home automation, and environmental monitoring. Today, with the proliferation of small communication devices with sensors that collect environmental data, semantic Web technologies are becoming closely related with sensor networks. The linking of elements from Semantic Web technologies with sensor networks has been called the Semantic Sensor Web, and it has among its main features the use of ontologies. One of the key challenges of using ontologies in sensor networks is to provide mechanisms to integrate and exchange knowledge from heterogeneous sources (that is, dealing with semantic heterogeneity). Ontology alignment is the process of bringing ontologies into mutual agreement by the automatic discovery of mappings between related concepts. This paper presents a system for ontology alignment in the Semantic Sensor Web which uses fuzzy logic techniques to combine similarity measures between entities of different ontologies. The proposed approach focuses on two key elements: terminological similarity, which takes into account the linguistic and semantic information of the context of the entities' names, and structural similarity, based on both the internal and relational structure of the concepts. This work has been validated using sensor network ontologies and the Ontology Alignment Evaluation Initiative (OAEI) tests. The results show that the proposed techniques outperform previous approaches in terms of precision and recall. PMID:24051523
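The core idea — combining a terminological and a structural similarity score into one alignment score — can be sketched in a few lines. The measures below (token Jaccard) and the fixed weight stand in for the paper's fuzzy-logic aggregation and are purely illustrative:

```python
# Sketch: score a candidate mapping between two ontology entities by combining
# name-based and neighbourhood-based similarity. Measures and weights are
# illustrative, not those used in the paper.
def terminological_sim(a, b):
    """Token-overlap (Jaccard) similarity between entity names."""
    ta, tb = set(a.lower().split("_")), set(b.lower().split("_"))
    return len(ta & tb) / len(ta | tb)

def structural_sim(neigh_a, neigh_b):
    """Jaccard similarity between the entities' neighbour concepts."""
    sa, sb = set(neigh_a), set(neigh_b)
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0

def alignment_score(name_a, name_b, neigh_a, neigh_b, w_term=0.6):
    return w_term * terminological_sim(name_a, name_b) + \
           (1 - w_term) * structural_sim(neigh_a, neigh_b)

score = alignment_score("temperature_sensor", "air_temperature_sensor",
                        ["observation", "platform"],
                        ["observation", "deployment"])
print(round(score, 3))  # 0.533
```

A real aligner would replace the fixed weight with fuzzy membership functions and rules, but the two-signal structure (names plus graph context) is the same.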

  6. Ontology alignment architecture for semantic sensor Web integration.

    PubMed

    Fernandez, Susel; Marsa-Maestre, Ivan; Velasco, Juan R; Alarcos, Bernardo

    2013-09-18

    Sensor networks have become very popular for data acquisition and processing in fields as diverse as industry, medicine, home automation, and environmental monitoring. Today, with the proliferation of small communication devices with sensors that collect environmental data, semantic Web technologies are becoming closely related with sensor networks. The linking of elements from Semantic Web technologies with sensor networks has been called the Semantic Sensor Web, and it has among its main features the use of ontologies. One of the key challenges of using ontologies in sensor networks is to provide mechanisms to integrate and exchange knowledge from heterogeneous sources (that is, dealing with semantic heterogeneity). Ontology alignment is the process of bringing ontologies into mutual agreement by the automatic discovery of mappings between related concepts. This paper presents a system for ontology alignment in the Semantic Sensor Web which uses fuzzy logic techniques to combine similarity measures between entities of different ontologies. The proposed approach focuses on two key elements: terminological similarity, which takes into account the linguistic and semantic information of the context of the entities' names, and structural similarity, based on both the internal and relational structure of the concepts. This work has been validated using sensor network ontologies and the Ontology Alignment Evaluation Initiative (OAEI) tests. The results show that the proposed techniques outperform previous approaches in terms of precision and recall.

  7. Semantic Integration and Syntactic Planning in Language Production

    ERIC Educational Resources Information Center

    Solomon, Eric S.; Pearlmutter, Neal J.

    2004-01-01

    Five experiments, using a subject-verb agreement error elicitation procedure, investigated syntactic planning processes in production. The experiments examined the influence of semantic integration--the degree to which phrases are tightly linked at the conceptual level--and contrasted two accounts of planning: serial stack-based systems and…

  8. Morphological Decomposition and Semantic Integration in Word Processing

    ERIC Educational Resources Information Center

    Meunier, Fanny; Longtin, Catherine-Marie

    2007-01-01

    In the present study, we looked at cross-modal priming effects produced by auditory presentation of morphologically complex pseudowords in order to investigate semantic integration during the processing of French morphologically complex items. In Experiment 1, we used as primes pseudowords consisting of a non-interpretable combination of roots and…

  9. Mining integrated semantic networks for drug repositioning opportunities

    PubMed Central

    Mullen, Joseph; Tipney, Hannah

    2016-01-01

    Current research and development approaches to drug discovery have become less fruitful and more costly. One alternative paradigm is that of drug repositioning. Many marketed examples of repositioned drugs have been identified through serendipitous or rational observations, highlighting the need for more systematic methodologies to tackle the problem. Systems-level approaches have the potential to enable the development of novel methods to understand the action of therapeutic compounds, but require an integrative approach to biological data. Integrated networks can facilitate systems-level analyses by combining multiple sources of evidence to provide a rich description of drugs, their targets and their interactions. Classically, such networks can be mined manually: a skilled person identifies portions of the graph (semantic subgraphs) that are indicative of relationships between drugs and highlights possible repositioning opportunities. However, this approach is not scalable. Automated approaches are required to systematically mine integrated networks for these subgraphs and bring them to the attention of the user. We introduce a formal framework for the definition of integrated networks and their associated semantic subgraphs for drug interaction analysis and describe DReSMin, an algorithm for mining semantically-rich networks for occurrences of a given semantic subgraph. This algorithm allows instances of complex semantic subgraphs that contain data about putative drug repositioning opportunities to be identified in a computationally tractable fashion, scaling close to linearly with network data. We demonstrate the utility of our approach by mining an integrated drug interaction network built from 11 sources. This work identified and ranked 9,643,061 putative drug-target interactions, showing a strong correlation between highly scored associations and those supported by literature.
We discuss the 20 top ranked associations in more detail, of which
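The kind of query DReSMin automates — finding occurrences of a typed "semantic subgraph" in a typed, integrated network — can be illustrated with a brute-force matcher over toy data. Node types, edge labels, and the tiny network below are invented; a real miner needs far better search-space pruning to scale:

```python
# Sketch: enumerate occurrences of a semantic subgraph (a Drug targeting a
# Protein associated with a Disease) in a small typed network. All data are
# illustrative placeholders.
from itertools import product

node_type = {"d1": "Drug", "d2": "Drug", "p1": "Protein", "dis1": "Disease"}
edges = {("d1", "targets", "p1"), ("d2", "targets", "p1"),
         ("p1", "associated_with", "dis1")}

pattern_nodes = {"X": "Drug", "Y": "Protein", "Z": "Disease"}
pattern_edges = [("X", "targets", "Y"), ("Y", "associated_with", "Z")]

def matches():
    """Brute-force: try every type-compatible binding of pattern variables."""
    vars_ = list(pattern_nodes)
    candidates = [[n for n, t in node_type.items() if t == pattern_nodes[v]]
                  for v in vars_]
    hits = []
    for combo in product(*candidates):
        binding = dict(zip(vars_, combo))
        if len(set(combo)) == len(combo) and all(
                (binding[s], lbl, binding[o]) in edges
                for s, lbl, o in pattern_edges):
            hits.append(binding)
    return hits

print(matches())  # two hits: d1 and d2 both target p1, which links to dis1
```

Each returned binding is a candidate repositioning lead: a drug reachable from a disease through the semantic pattern, which can then be scored and ranked.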

  10. Mining integrated semantic networks for drug repositioning opportunities.

    PubMed

    Mullen, Joseph; Cockell, Simon J; Tipney, Hannah; Woollard, Peter M; Wipat, Anil

    2016-01-01

    Current research and development approaches to drug discovery have become less fruitful and more costly. One alternative paradigm is that of drug repositioning. Many marketed examples of repositioned drugs have been identified through serendipitous or rational observations, highlighting the need for more systematic methodologies to tackle the problem. Systems-level approaches have the potential to enable the development of novel methods to understand the action of therapeutic compounds, but require an integrative approach to biological data. Integrated networks can facilitate systems-level analyses by combining multiple sources of evidence to provide a rich description of drugs, their targets and their interactions. Classically, such networks can be mined manually: a skilled person identifies portions of the graph (semantic subgraphs) that are indicative of relationships between drugs and highlights possible repositioning opportunities. However, this approach is not scalable. Automated approaches are required to systematically mine integrated networks for these subgraphs and bring them to the attention of the user. We introduce a formal framework for the definition of integrated networks and their associated semantic subgraphs for drug interaction analysis and describe DReSMin, an algorithm for mining semantically-rich networks for occurrences of a given semantic subgraph. This algorithm allows instances of complex semantic subgraphs that contain data about putative drug repositioning opportunities to be identified in a computationally tractable fashion, scaling close to linearly with network data. We demonstrate the utility of our approach by mining an integrated drug interaction network built from 11 sources. This work identified and ranked 9,643,061 putative drug-target interactions, showing a strong correlation between highly scored associations and those supported by literature.
We discuss the 20 top ranked associations in more detail, of which

  11. Semantic Web-based integration of cancer pathways and allele frequency data.

    PubMed

    Holford, Matthew E; Rajeevan, Haseena; Zhao, Hongyu; Kidd, Kenneth K; Cheung, Kei-Hoi

    2009-01-01

    We demonstrate the use of Semantic Web technology to integrate the ALFRED allele frequency database and the Starpath pathway resource. The linking of population-specific genotype data with cancer-related pathway data is potentially useful given the growing interest in personalized medicine and the exploitation of pathway knowledge for cancer drug discovery. We model our data using the Web Ontology Language (OWL), drawing upon ideas from existing standard formats BioPAX for pathway data and PML for allele frequency data. We store our data within an Oracle database, using Oracle Semantic Technologies. We then query the data using Oracle's rule-based inference engine and SPARQL-like RDF query language. The ability to perform queries across the domains of population genetics and pathways offers the potential to answer a number of cancer-related research questions. Among the possibilities is the ability to identify genetic variants which are associated with cancer pathways and whose frequency varies significantly between ethnic groups. This sort of information could be useful for designing clinical studies and for providing background data in personalized medicine. It could also assist with the interpretation of genetic analysis results such as those from genome-wide association studies.
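The cross-domain question the abstract poses — variants in cancer-pathway genes whose frequency differs markedly between populations — reduces to a join across the two integrated datasets. A minimal sketch with invented data values (the paper itself models this in OWL and queries it with Oracle's rule-based inference engine):

```python
# Sketch: join pathway membership with population allele frequencies to find
# variants of interest. All identifiers and frequencies are invented.
pathway_genes = {"p53 signaling": {"TP53", "MDM2"},
                 "glycolysis": {"HK1"}}

# variant -> (gene, {population: allele frequency})
variants = {
    "rs0001": ("TP53", {"pop_A": 0.42, "pop_B": 0.05}),
    "rs0002": ("HK1",  {"pop_A": 0.30, "pop_B": 0.28}),
    "rs0003": ("MDM2", {"pop_A": 0.10, "pop_B": 0.12}),
}

def interesting_variants(pathway, min_gap=0.2):
    """Yield variants in the pathway whose frequency spread across
    populations is at least min_gap."""
    genes = pathway_genes[pathway]
    for rsid, (gene, freqs) in variants.items():
        if gene in genes and max(freqs.values()) - min(freqs.values()) >= min_gap:
            yield rsid, gene

print(list(interesting_variants("p53 signaling")))  # [('rs0001', 'TP53')]
```

Expressing both datasets as RDF lets the same join be written declaratively as a graph-pattern query instead of hand-coded loops, which is the point of the Semantic Web approach.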

  12. Towards A Topological Framework for Integrating Semantic Information Sources

    SciTech Connect

    Joslyn, Cliff A.; Hogan, Emilie A.; Robinson, Michael

    2014-09-07

    In this position paper we argue for the role that topological modeling principles can play in providing a framework for sensor integration. While this methodology has been used successfully with standard (quantitative) sensors, we are developing it in new directions to make it appropriate specifically for semantic information sources, including keyterms, ontology terms, and other general Boolean, categorical, ordinal, and partially-ordered data types. We illustrate the basics of the methodology in an extended use case/example, and discuss the path forward.

  13. Project Integration Architecture: Formulation of Dimensionality in Semantic Parameters Outline

    NASA Technical Reports Server (NTRS)

    Jones, William Henry

    2005-01-01

    One of several key elements of the Project Integration Architecture (PIA) is the formulation of parameter objects which convey meaningful semantic information. The infusion of measurement dimensionality into such objects is an important part of that effort since it promises to automate the conversion of units between cooperating applications and, thereby, eliminate the mistakes that have occasionally beset other systems of information transport. This paper discusses the conceptualization of dimensionality developed as a result of that effort.
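The benefit described — automatic unit conversion between cooperating applications — follows from storing a parameter's dimensionality alongside its value. A minimal sketch, with an illustrative conversion table rather than PIA's actual parameter-object design:

```python
# Sketch: a parameter that carries its measurement dimension, normalizes to an
# SI base internally, and refuses conversions across dimensions.
TO_SI = {"m": ("length", 1.0), "ft": ("length", 0.3048),
         "s": ("time", 1.0),  "min": ("time", 60.0)}

class Parameter:
    def __init__(self, value, unit):
        self.dimension, factor = TO_SI[unit]
        self.si_value = value * factor   # store in SI internally

    def in_unit(self, unit):
        dimension, factor = TO_SI[unit]
        if dimension != self.dimension:
            raise ValueError("dimension mismatch: cannot convert "
                             f"{self.dimension} to {dimension}")
        return self.si_value / factor

span = Parameter(10.0, "ft")         # one application works in feet
print(round(span.in_unit("m"), 3))   # 3.048; another application reads metres
```

Because the dimension travels with the value, the class of mistake the paper mentions (silently mixing feet and metres, or lengths and times) is caught at the interface rather than propagated.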

  14. Semantic integration during metaphor comprehension in Asperger syndrome.

    PubMed

    Gold, Rinat; Faust, Miriam; Goldstein, Abraham

    2010-06-01

    Previous research indicates severe disabilities in processing figurative language in people diagnosed with autism spectrum disorders. However, this aspect of language comprehension has rarely been the subject of formal study in Asperger syndrome (AS) specifically. The present study aimed to examine the possibility that, in addition to their pragmatic deficits, the difficulties in the comprehension of metaphors in AS may be explained by deficient linguistic information processing. Specifically, we aimed to examine whether a deficient semantic integration process underlies the difficulties in metaphor comprehension frequently experienced by persons with AS. The semantic integration process of sixteen AS participants and sixteen matched controls was examined using event-related potentials (ERPs). N400 amplitude served as an index of the degree of effort invested in the semantic integration of two-word expressions denoting literal, conventional metaphoric, and novel metaphoric meaning, as well as unrelated word pairs. Larger N400 amplitudes for both novel and conventional metaphors demonstrated the greater difficulty of metaphor comprehension for the AS participants as compared to controls. Findings suggest that differences in linguistic information processing cause difficulties in metaphor comprehension in AS.

  15. A computational modeling of semantic knowledge in reading comprehension: Integrating the landscape model with latent semantic analysis.

    PubMed

    Yeari, Menahem; van den Broek, Paul

    2016-09-01

    It is a well-accepted view that the prior semantic (general) knowledge that readers possess plays a central role in reading comprehension. Nevertheless, computational models of reading comprehension have not integrated the simulation of semantic knowledge and online comprehension processes under a unified mathematical algorithm. The present article introduces a computational model that integrates the landscape model of comprehension processes with latent semantic analysis representation of semantic knowledge. In three sets of simulations of previous behavioral findings, the integrated model successfully simulated the activation and attenuation of predictive and bridging inferences during reading, as well as centrality estimations and recall of textual information after reading. Analyses of the computational results revealed new theoretical insights regarding the underlying mechanisms of the various comprehension phenomena.
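The integration described — landscape-model activation dynamics driven by similarity over semantic-knowledge vectors — can be caricatured in a few lines. The vectors, parameters, and update rule below are illustrative stand-ins, not the model's actual equations:

```python
# Sketch: one reading cycle activates the concepts just read, carries over
# decayed activation, and spreads activation to semantically similar concepts
# via cosine similarity over toy "LSA" vectors.
import math

lsa = {"doctor": [0.9, 0.1], "nurse": [0.8, 0.3], "bread": [0.1, 0.9]}

def cos(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

def cycle(prev_act, read_now, decay=0.5, spread=0.3):
    act = {c: decay * a for c, a in prev_act.items()}          # carryover
    for c in read_now:
        act[c] = 1.0                                           # direct activation
        for other, vec in lsa.items():                         # semantic spread
            if other not in read_now:
                act[other] = max(act.get(other, 0.0),
                                 spread * cos(lsa[c], vec))
    return act

act = cycle({}, ["doctor"])
print(act["nurse"] > act["bread"])  # True: "nurse" receives more spread
```

The semantic spread term is what lets the simulation produce bridging and predictive inferences: concepts never mentioned in the text still gain activation in proportion to their similarity to what was read.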

  16. A computational modeling of semantic knowledge in reading comprehension: Integrating the landscape model with latent semantic analysis.

    PubMed

    Yeari, Menahem; van den Broek, Paul

    2016-09-01

    It is a well-accepted view that the prior semantic (general) knowledge that readers possess plays a central role in reading comprehension. Nevertheless, computational models of reading comprehension have not integrated the simulation of semantic knowledge and online comprehension processes under a unified mathematical algorithm. The present article introduces a computational model that integrates the landscape model of comprehension processes with latent semantic analysis representation of semantic knowledge. In three sets of simulations of previous behavioral findings, the integrated model successfully simulated the activation and attenuation of predictive and bridging inferences during reading, as well as centrality estimations and recall of textual information after reading. Analyses of the computational results revealed new theoretical insights regarding the underlying mechanisms of the various comprehension phenomena. PMID:27383752

  17. Semantic Integrative Digital Pathology: Insights into Microsemiological Semantics and Image Analysis Scalability.

    PubMed

    Racoceanu, Daniel; Capron, Frédérique

    2016-01-01

    Being able to provide a traceable and dynamic second opinion has become an ethical priority for patients and health care professionals in modern computer-aided medicine. In this perspective, a semantic cognitive virtual microscopy approach, the MICO project, has recently been initiated, focusing on cognitive digital pathology. This approach supports the elaboration of pathology-compliant daily protocols dedicated to breast cancer grading, in particular mitotic counts and nuclear atypia. A proof of concept has thus been elaborated, and an extension of these approaches is now underway in a collaborative digital pathology framework, the FlexMIm project. As important milestones on the way to routine digital pathology, a series of pioneering international benchmarking initiatives have been launched for mitosis detection (MITOS), nuclear atypia grading (MITOS-ATYPIA) and glandular structure detection (GlaS), some of the fundamental grading components in diagnosis and prognosis. These initiatives allow us to envisage a consolidated validation referential database for digital pathology in the very near future. This reference database will need coordinated efforts from all major teams working in this area worldwide, and it will certainly represent a critical bottleneck for the acceptance of all future imaging modules in clinical practice. In line with recent advances in molecular imaging and genetics, keeping the microscopic modality at the core of future digital systems in pathology is fundamental to ensure the acceptance of these new technologies, as well as for a deeper systemic, structured comprehension of the pathologies. After all, at the scale of routine whole-slide imaging (WSI; ∼0.22 µm/pixel), the microscopic image represents a structured 'genomic cluster', enabling a naturally structured support for integrative digital pathology approaches. In order to accelerate and structure the integration of this heterogeneous information, a major effort is and will continue to

  19. Simulation of operating rules and discretional decisions using a fuzzy rule-based system integrated into a water resources management model

    NASA Astrophysics Data System (ADS)

    Macian-Sorribes, Hector; Pulido-Velazquez, Manuel

    2013-04-01

    Water resources systems are mostly operated according to a set of pre-defined rules that usually respond not to optimal allocation in terms of water use or economic benefit, but to historical and institutional reasons. These operating policies are commonly reproduced as hedging rules, pack rules or zone-based operations, and simulation models can be used to test their performance under a wide range of hydrological and/or socio-economic hypotheses. Despite the high degree of acceptance and testing that these models have achieved, the actual operation of water resources systems rarely follows the pre-defined rules at all times, with consequent uncertainty about system performance. Real-world reservoir operation is very complex: it is affected by input uncertainty (imprecision in forecast inflows, seepage and evaporation losses, etc.), filtered by the reservoir operator's experience and natural risk-aversion, and subject to the physical and legal/institutional constraints that must be respected in order to meet the different demands and system requirements. The aim of this work is to present a fuzzy logic approach to derive and assess the historical operation of a system. This framework uses a fuzzy rule-based system to reproduce pre-defined rules and also to match as closely as possible the actual decisions made by managers. Once built, the fuzzy rule-based system can be integrated into a water resources management model, making it possible to assess the system performance at the basin scale. The case study of the Mijares basin (eastern Spain) is used to illustrate the method. A reservoir operating curve regulates the two main reservoir releases (operated conjunctively) with the purpose of guaranteeing a high reliability of supply to the traditional irrigation districts with higher priority (more senior demands that funded the reservoir construction).
A fuzzy rule-based system has been created to reproduce the operating curve's performance, defining the system state (total
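The mechanics of such a fuzzy rule-based system can be sketched with two rules mapping reservoir storage to a release decision. The membership breakpoints and rule outputs below are invented for illustration and are not the Mijares basin rules:

```python
# Sketch: a two-rule fuzzy system with triangular membership functions and
# weighted-average defuzzification. All numbers are illustrative.
def tri(x, a, b, c):
    """Triangular membership with feet at a and c and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def release(storage_pct):
    low = tri(storage_pct, 0.0, 0.25, 0.6)     # "storage is LOW"
    high = tri(storage_pct, 0.4, 0.75, 1.01)   # "storage is HIGH"
    # Rule 1: IF storage LOW  THEN release 10 hm3/month (hedging)
    # Rule 2: IF storage HIGH THEN release 40 hm3/month (full supply)
    weights = [(low, 10.0), (high, 40.0)]
    num = sum(w * r for w, r in weights)
    den = sum(w for w, _ in weights)
    return num / den if den else 0.0

print(round(release(0.5), 2))  # 25.0: both rules fire equally at mid storage
```

Because the rules fire partially and blend, the system interpolates smoothly between operating zones, which is what lets it approximate both the official operating curve and the softer judgment calls operators actually make.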

  20. Semantic Integration for Marine Science Interoperability Using Web Technologies

    NASA Astrophysics Data System (ADS)

    Rueda, C.; Bermudez, L.; Graybeal, J.; Isenor, A. W.

    2008-12-01

    The Marine Metadata Interoperability Project, MMI (http://marinemetadata.org), promotes the exchange, integration, and use of marine data through enhanced data publishing, discovery, documentation, and accessibility. A key effort is the definition of an Architectural Framework and Operational Concept for Semantic Interoperability (http://marinemetadata.org/sfc), which is complemented by the development of tools that realize critical use cases in semantic interoperability. In this presentation, we describe a set of such Semantic Web tools that allow performing important interoperability tasks, ranging from the creation of controlled vocabularies and the mapping of terms across multiple ontologies, to the online registration, storage, and search services needed to work with the ontologies (http://mmisw.org). This set of services uses Web standards and technologies, including the Resource Description Framework (RDF), the Web Ontology Language (OWL), Web services, and toolkits for Rich Internet Application development. We will describe the following components: MMI Ontology Registry: The MMI Ontology Registry and Repository provides registry and storage services for ontologies. Entries in the registry are associated with projects defined by the registered users. Sophisticated search functions, for example according to metadata items and vocabulary terms, are also provided. Client applications can submit search requests using the W3C SPARQL Query Language for RDF. Voc2RDF: This component converts an ASCII comma-delimited set of terms and definitions into an RDF file. Voc2RDF facilitates the creation of controlled vocabularies by using a simple form-based user interface. Created vocabularies and their descriptive metadata can be submitted to the MMI Ontology Registry for versioning and community access. VINE: The Vocabulary Integration Environment component allows the user to map vocabulary terms across multiple ontologies.
Various relationships can be established, for example
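The Voc2RDF step described above — turning comma-delimited term/definition rows into RDF — is straightforward to sketch. The namespace URI below is a placeholder, not MMI's actual one, and the output is simplified N-Triples-style text rather than the tool's real serialization:

```python
# Sketch: convert a comma-delimited vocabulary into RDF-style triples, one
# rdfs:label and one rdfs:comment per term. Namespace and data are invented.
import csv, io

NS = "http://example.org/vocab#"   # placeholder namespace

raw = """term,definition
salinity,Amount of dissolved salt in water
turbidity,Cloudiness of a fluid caused by suspended particles
"""

def voc2rdf(text):
    lines = []
    for row in csv.DictReader(io.StringIO(text)):
        subject = f"<{NS}{row['term']}>"
        lines.append(f'{subject} <http://www.w3.org/2000/01/rdf-schema#label> "{row["term"]}" .')
        lines.append(f'{subject} <http://www.w3.org/2000/01/rdf-schema#comment> "{row["definition"]}" .')
    return "\n".join(lines)

print(voc2rdf(raw))
```

Once in RDF, each term has a stable URI, so a mapping tool like VINE can assert relationships between terms from different vocabularies simply by adding more triples.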

  1. Electrophysiological Evidence for Incremental Lexical-Semantic Integration in Auditory Compound Comprehension

    ERIC Educational Resources Information Center

    Koester, Dirk; Holle, Henning; Gunter, Thomas C.

    2009-01-01

    The present study investigated the time-course of semantic integration in auditory compound word processing. Compounding is a productive mechanism of word formation that is used frequently in many languages. Specifically, we examined whether semantic integration is incremental or is delayed until the head, the last constituent in German, is…

  2. Neural correlates of audiovisual integration of semantic category information.

    PubMed

    Hu, Zhonghua; Zhang, Ruiling; Zhang, Qinglin; Liu, Qiang; Li, Hong

    2012-04-01

    Previous studies have found a late frontal-central audiovisual interaction during the time period of about 150-220 ms post-stimulus. However, it is unclear to which process this audiovisual interaction is related: the processing of acoustic features, or the classification of stimuli? To investigate this question, event-related potentials were recorded during a word-categorization task with stimuli presented in the auditory-visual modality. In the experiment, the congruency of the visual and auditory stimuli was manipulated. Results showed that within the window of about 180-210 ms post-stimulus, more positive values were elicited by category-congruent audiovisual stimuli than by category-incongruent audiovisual stimuli. This indicates that the late frontal-central audiovisual interaction is related to audiovisual integration of semantic category information.

  3. Integrated semantics service platform for the Internet of Things: a case study of a smart office.

    PubMed

    Ryu, Minwoo; Kim, Jaeho; Yun, Jaeseok

    2015-01-19

    The Internet of Things (IoT) allows machines and devices in the world to connect with each other and generate a huge amount of data, which has a great potential to provide useful knowledge across service domains. Combining the context of IoT with semantic technologies, we can build integrated semantic systems to support semantic interoperability. In this paper, we propose an integrated semantic service platform (ISSP) to support ontological models in various IoT-based service domains of a smart city. In particular, we address three main problems for providing integrated semantic services together with IoT systems: semantic discovery, dynamic semantic representation, and semantic data repository for IoT resources. To show the feasibility of the ISSP, we develop a prototype service for a smart office using the ISSP, which can provide a preset, personalized office environment by interpreting user text input via a smartphone. We also discuss a scenario to show how the ISSP-based method would help build a smart city, where services in each service domain can discover and exploit IoT resources that are wanted across domains. We expect that our method could eventually contribute to providing people in a smart city with more integrated, comprehensive services based on semantic interoperability.
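The "semantic discovery" problem the abstract names — finding IoT resources by what they are, across service domains — can be sketched with a tiny type hierarchy and a registry. The resource names and class hierarchy below are invented placeholders, not the ISSP's actual ontology:

```python
# Sketch: a minimal semantic repository for IoT resources with subsumption-
# aware discovery: asking for a general class finds all of its subclasses.
subclass_of = {"TemperatureSensor": "Sensor", "HueLight": "Actuator",
               "Sensor": "IoTResource", "Actuator": "IoTResource"}

resources = {"office/temp1": "TemperatureSensor",
             "office/lamp1": "HueLight"}

def is_a(rtype, wanted):
    """Walk up the class hierarchy to test subsumption."""
    while rtype is not None:
        if rtype == wanted:
            return True
        rtype = subclass_of.get(rtype)
    return False

def discover(wanted):
    """Find every registered resource whose type is (a subclass of) wanted."""
    return [rid for rid, rtype in resources.items() if is_a(rtype, wanted)]

print(discover("Sensor"))       # ['office/temp1']
print(discover("IoTResource"))  # both registered resources
```

This subsumption step is what a shared ontology buys: a smart-office service can ask for any "Sensor" and still find devices registered by other domains under more specific classes.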

  4. Integrated semantics service platform for the Internet of Things: a case study of a smart office.

    PubMed

    Ryu, Minwoo; Kim, Jaeho; Yun, Jaeseok

    2015-01-01

    The Internet of Things (IoT) allows machines and devices in the world to connect with each other and generate a huge amount of data, which has a great potential to provide useful knowledge across service domains. Combining the context of IoT with semantic technologies, we can build integrated semantic systems to support semantic interoperability. In this paper, we propose an integrated semantic service platform (ISSP) to support ontological models in various IoT-based service domains of a smart city. In particular, we address three main problems for providing integrated semantic services together with IoT systems: semantic discovery, dynamic semantic representation, and semantic data repository for IoT resources. To show the feasibility of the ISSP, we develop a prototype service for a smart office using the ISSP, which can provide a preset, personalized office environment by interpreting user text input via a smartphone. We also discuss a scenario to show how the ISSP-based method would help build a smart city, where services in each service domain can discover and exploit IoT resources that are wanted across domains. We expect that our method could eventually contribute to providing people in a smart city with more integrated, comprehensive services based on semantic interoperability. PMID:25608216

  5. A Semantic Web Management Model for Integrative Biomedical Informatics

    PubMed Central

    Deus, Helena F.; Stanislaus, Romesh; Veiga, Diogo F.; Behrens, Carmen; Wistuba, Ignacio I.; Minna, John D.; Garner, Harold R.; Swisher, Stephen G.; Roth, Jack A.; Correa, Arlene M.; Broom, Bradley; Coombes, Kevin; Chang, Allen; Vogel, Lynn H.; Almeida, Jonas S.

    2008-01-01

    Background Data, data everywhere. The diversity and magnitude of the data generated in the Life Sciences defies automated articulation among complementary efforts. The additional need in this field for managing property and access permissions compounds the difficulty very significantly. This is particularly the case when the integration involves multiple domains and disciplines, even more so when it includes clinical and high-throughput molecular data. Methodology/Principal Findings The emergence of Semantic Web technologies brings the promise of meaningful interoperation between data and analysis resources. In this report we identify a core model for biomedical Knowledge Engineering applications and demonstrate how this new technology can be used to weave a management model where multiple intertwined data structures can be hosted and managed by multiple authorities in a distributed management infrastructure. Specifically, the demonstration is performed by linking data sources associated with the Lung Cancer SPORE awarded to The University of Texas MD Anderson Cancer Center at Houston and the Southwestern Medical Center at Dallas. A software prototype, available with open source at www.s3db.org, was developed and its proposed design has been made publicly available as an open source instrument for shared, distributed data management. Conclusions/Significance The Semantic Web technologies have the potential to address the need for distributed and evolvable representations that are critical for systems biology and translational biomedical research. As this technology is incorporated into application development we can expect that both general purpose productivity software and domain specific software installed on our personal computers will become increasingly integrated with the relevant remote resources. In this scenario, the acquisition of a new dataset should automatically trigger the delegation of its analysis. PMID:18698353

  6. Combining Semantic Web technologies with Multi-Agent Systems for integrated access to biological resources.

    PubMed

    García-Sánchez, Francisco; Fernández-Breis, Jesualdo Tomás; Valencia-García, Rafael; Gómez, Juan Miguel; Martínez-Béjar, Rodrigo

    2008-10-01

    The increasing volume and diversity of information in biomedical research is demanding new approaches for data integration in this domain. Semantic Web technologies and applications can leverage the potential of biomedical information integration and discovery, facing the problem of semantic heterogeneity of biomedical information sources. In such an environment, agent technology can assist users in discovering and invoking the services available on the Internet. In this paper we present SEMMAS, an ontology-based, domain-independent framework for seamlessly integrating Intelligent Agents and Semantic Web Services. Our approach is backed with a proof-of-concept implementation where the breakthrough and efficiency of integrating disparate biomedical information sources have been tested.

  7. Disease Ontology: a backbone for disease semantic integration.

    PubMed

    Schriml, Lynn Marie; Arze, Cesar; Nadendla, Suvarna; Chang, Yu-Wei Wayne; Mazaitis, Mark; Felix, Victor; Feng, Gang; Kibbe, Warren Alden

    2012-01-01

    The Disease Ontology (DO) database (http://disease-ontology.org) represents a comprehensive knowledge base of 8043 inherited, developmental and acquired human diseases (DO version 3, revision 2510). The DO web browser has been designed for speed, efficiency and robustness through the use of a graph database. Full-text contextual searching functionality using Lucene allows the querying of name, synonym, definition, DOID and cross-reference (xrefs) with complex Boolean search strings. The DO semantically integrates disease and medical vocabularies through extensive cross mapping and integration of MeSH, ICD, NCI's thesaurus, SNOMED CT and OMIM disease-specific terms and identifiers. The DO is utilized for disease annotation by major biomedical databases (e.g. Array Express, NIF, IEDB), as a standard representation of human disease in biomedical ontologies (e.g. IDO, Cell line ontology, NIFSTD ontology, Experimental Factor Ontology, Influenza Ontology), and as an ontological cross mappings resource between DO, MeSH and OMIM (e.g. GeneWiki). The DO project (http://diseaseontology.sf.net) has been incorporated into open source tools (e.g. Gene Answers, FunDO) to connect gene and disease biomedical data through the lens of human disease. The next iteration of the DO web browser will integrate DO's extended relations and logical definition representation along with these biomedical resource cross-mappings.

  8. Disease Ontology: a backbone for disease semantic integration

    PubMed Central

    Schriml, Lynn Marie; Arze, Cesar; Nadendla, Suvarna; Chang, Yu-Wei Wayne; Mazaitis, Mark; Felix, Victor; Feng, Gang; Kibbe, Warren Alden

    2012-01-01

    The Disease Ontology (DO) database (http://disease-ontology.org) represents a comprehensive knowledge base of 8043 inherited, developmental and acquired human diseases (DO version 3, revision 2510). The DO web browser has been designed for speed, efficiency and robustness through the use of a graph database. Full-text contextual searching functionality using Lucene allows the querying of name, synonym, definition, DOID and cross-reference (xrefs) with complex Boolean search strings. The DO semantically integrates disease and medical vocabularies through extensive cross mapping and integration of MeSH, ICD, NCI's thesaurus, SNOMED CT and OMIM disease-specific terms and identifiers. The DO is utilized for disease annotation by major biomedical databases (e.g. Array Express, NIF, IEDB), as a standard representation of human disease in biomedical ontologies (e.g. IDO, Cell line ontology, NIFSTD ontology, Experimental Factor Ontology, Influenza Ontology), and as an ontological cross mappings resource between DO, MeSH and OMIM (e.g. GeneWiki). The DO project (http://diseaseontology.sf.net) has been incorporated into open source tools (e.g. Gene Answers, FunDO) to connect gene and disease biomedical data through the lens of human disease. The next iteration of the DO web browser will integrate DO's extended relations and logical definition representation along with these biomedical resource cross-mappings. PMID:22080554

  9. Distributed semantic networks and CLIPS

    NASA Technical Reports Server (NTRS)

    Snyder, James; Rodriguez, Tony

    1991-01-01

    Semantic networks of frames are commonly used as a method of reasoning in many problems. In most of these applications the semantic network exists as a single entity in a single process environment. Advances in workstation hardware provide support for more sophisticated applications involving multiple processes, interacting in a distributed environment. In these applications the semantic network may well be distributed over several concurrently executing tasks. This paper describes the design and implementation of a frame based, distributed semantic network in which frames are accessed both through C Language Integrated Production System (CLIPS) expert systems and procedural C++ language programs. The application area is a knowledge based, cooperative decision making model utilizing both rule based and procedural experts.

  10. Semantic-JSON: a lightweight web service interface for Semantic Web contents integrating multiple life science databases.

    PubMed

    Kobayashi, Norio; Ishii, Manabu; Takahashi, Satoshi; Mochizuki, Yoshiki; Matsushima, Akihiro; Toyoda, Tetsuro

    2011-07-01

    Global cloud frameworks for bioinformatics research databases have become huge and heterogeneous; solutions face various diametric challenges comprising cross-integration, retrieval, security and openness. To address this, as of March 2011 organizations including RIKEN published 192 mammalian, plant and protein life sciences databases having 8.2 million data records, integrated as Linked Open or Private Data (LOD/LPD) using SciNetS.org, the Scientists' Networking System. The huge quantity of linked data this database integration framework covers is based on the Semantic Web, where researchers collaborate by managing metadata across public and private databases in a secured data space. This outstripped the data query capacity of existing interface tools like SPARQL. Actual research also requires specialized tools for data analysis using raw original data. To solve these challenges, in December 2009 we developed the lightweight Semantic-JSON interface to access each fragment of linked and raw life sciences data securely under the control of programming languages popularly used by bioinformaticians such as Perl and Ruby. Researchers successfully used the interface across 28 million semantic relationships for biological applications including genome design, sequence processing, inference over phenotype databases, full-text search indexing and human-readable contents like ontology and LOD tree viewers. Semantic-JSON services of SciNetS.org are provided at http://semanticjson.org.
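
    The access pattern described here, linked data fragments delivered as plain JSON so that scripting languages can walk semantic relationships without a SPARQL stack, can be pictured with a small sketch. The response shape and predicate names below are illustrative assumptions, not the actual SciNetS.org Semantic-JSON schema.

```python
# Hypothetical sketch of consuming a Semantic-JSON-style response: linked
# data arrives as ordinary JSON, so semantic relationships can be followed
# with plain dict/list traversal instead of a SPARQL query.
import json

response = json.loads("""
{
  "item": "record:gene_0001",
  "properties": {
    "rdfs:label": "BRCA1",
    "sio:is-related-to": ["record:disease_0042", "record:pathway_0007"]
  }
}
""")

# Following a semantic relationship is ordinary data-structure traversal.
related = response["properties"]["sio:is-related-to"]
print(related[0])  # record:disease_0042
```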

  11. Semantic-JSON: a lightweight web service interface for Semantic Web contents integrating multiple life science databases

    PubMed Central

    Kobayashi, Norio; Ishii, Manabu; Takahashi, Satoshi; Mochizuki, Yoshiki; Matsushima, Akihiro; Toyoda, Tetsuro

    2011-01-01

    Global cloud frameworks for bioinformatics research databases have become huge and heterogeneous; solutions face various diametric challenges comprising cross-integration, retrieval, security and openness. To address this, as of March 2011 organizations including RIKEN published 192 mammalian, plant and protein life sciences databases having 8.2 million data records, integrated as Linked Open or Private Data (LOD/LPD) using SciNetS.org, the Scientists' Networking System. The huge quantity of linked data this database integration framework covers is based on the Semantic Web, where researchers collaborate by managing metadata across public and private databases in a secured data space. This outstripped the data query capacity of existing interface tools like SPARQL. Actual research also requires specialized tools for data analysis using raw original data. To solve these challenges, in December 2009 we developed the lightweight Semantic-JSON interface to access each fragment of linked and raw life sciences data securely under the control of programming languages popularly used by bioinformaticians such as Perl and Ruby. Researchers successfully used the interface across 28 million semantic relationships for biological applications including genome design, sequence processing, inference over phenotype databases, full-text search indexing and human-readable contents like ontology and LOD tree viewers. Semantic-JSON services of SciNetS.org are provided at http://semanticjson.org. PMID:21632604

  12. Semantic Integration and Age of Acquisition Effects in Code-Blend Comprehension.

    PubMed

    Giezen, Marcel R; Emmorey, Karen

    2016-04-01

    Semantic and lexical decision tasks were used to investigate the mechanisms underlying code-blend facilitation: the finding that hearing bimodal bilinguals comprehend signs in American Sign Language (ASL) and spoken English words more quickly when they are presented together simultaneously than when each is presented alone. More robust facilitation effects were observed for semantic decision than for lexical decision, suggesting that lexical integration of signs and words within a code-blend occurs primarily at the semantic level, rather than at the level of form. Early bilinguals exhibited greater facilitation effects than late bilinguals for English (the dominant language) in the semantic decision task, possibly because early bilinguals are better able to process early visual cues from ASL signs and use these to constrain English word recognition. Comprehension facilitation via semantic integration of words and signs is consistent with co-speech gesture research demonstrating facilitative effects of gesture integration on language comprehension. PMID:26657077

  13. Linked Metadata - lightweight semantics for data integration (Invited)

    NASA Astrophysics Data System (ADS)

    Hendler, J. A.

    2013-12-01

    The "Linked Open Data" cloud (http://linkeddata.org) is currently used to show how the linking of datasets, supported by SPARQL endpoints, is creating a growing set of linked data assets. This linked data space has been growing rapidly, and the last version collected is estimated to have had over 35 billion 'triples.' As impressive as this may sound, there is an inherent flaw in the way the linked data story is conceived. The idea is that all of the data is represented in a linked format (generally RDF) and applications will essentially query this cloud and provide mashup capabilities between the various kinds of data that are found. The view of linking in the cloud is fairly simple: links are provided by either shared URIs or by URIs that are asserted to be owl:sameAs. This view of the linking, which primarily focuses on shared objects and subjects in RDF's subject-predicate-object representation, misses a critical aspect of Semantic Web technology. Given triples such as "A:person1 foaf:knows A:person2", "B:person3 foaf:knows B:person4", and "C:person5 foaf:name 'John Doe'", this view would not consider them linked (barring other assertions) even though they share a common vocabulary. In fact, we get significant clues that there are commonalities in these data items from the shared namespaces and predicates, even if the traditional 'graph' view of RDF doesn't appear to join on these. Thus, it is the linking of the data descriptions, whether as metadata or other vocabularies, that provides the linking in these cases. This observation is crucial to scientific data integration where the size of the datasets, or even the individual relationships within them, can be quite large. (Note that this is not restricted to scientific data - search engines, social networks, and massive multiuser games also create huge amounts of data.) To convert all the triples into RDF and provide individual links is often unnecessary, and is both time and space intensive.
Those looking to do on the
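
    The vocabulary-level linking argued for in this abstract can be illustrated with a toy sketch: plain string tuples stand in for RDF triples, and the A:, B:, C: namespaces are the hypothetical ones from the example above.

```python
# Three RDF triples, represented as plain (subject, predicate, object)
# tuples, that share no subject or object URIs yet are linked through a
# common vocabulary (foaf:).
from collections import Counter

triples = [
    ("A:person1", "foaf:knows", "A:person2"),
    ("B:person3", "foaf:knows", "B:person4"),
    ("C:person5", "foaf:name", "John Doe"),
]

# Traditional graph view: a join on shared subjects/objects finds nothing.
subjects = {s for s, _, _ in triples}
objects_ = {o for _, _, o in triples}
print(subjects & objects_)  # set() -- the triples look unlinked

# Vocabulary view: grouping by predicate namespace exposes the commonality.
by_vocab = Counter(p.split(":")[0] for _, p, _ in triples)
print(by_vocab["foaf"])  # 3 -- all three statements share the foaf vocabulary
```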

  14. The Relationship of Semantic Inferencing to General Ability on a Sentence Integration Task.

    ERIC Educational Resources Information Center

    Sorensen, H. Barbara

    The purpose of this research was to examine the relationship between semantic memory or integration and general ability. The main hypotheses are as follows: (1) the ability to memorize meaningless materials will not correlate with general ability; (2) the ability to abstract meaning from semantically related propositions will correlate with…

  15. SemantEco: a semantically powered modular architecture for integrating distributed environmental and ecological data

    USGS Publications Warehouse

    Patton, Evan W.; Seyed, Patrice; Wang, Ping; Fu, Linyun; Dein, F. Joshua; Bristol, R. Sky; McGuinness, Deborah L.

    2014-01-01

    We aim to inform the development of decision support tools for resource managers who need to examine large complex ecosystems and make recommendations in the face of many tradeoffs and conflicting drivers. We take a semantic technology approach, leveraging background ontologies and the growing body of linked open data. In previous work, we designed and implemented a semantically enabled environmental monitoring framework called SemantEco and used it to build a water quality portal named SemantAqua. Our previous system included foundational ontologies to support environmental regulation violations and relevant human health effects. In this work, we discuss SemantEco’s new architecture that supports modular extensions and makes it easier to support additional domains. Our enhanced framework includes foundational ontologies to support modeling of wildlife observation and wildlife health impacts, thereby enabling deeper and broader support for more holistically examining the effects of environmental pollution on ecosystems. We conclude with a discussion of how, through the application of semantic technologies, modular designs will make it easier for resource managers to bring in new sources of data to support more complex use cases.

  16. Electrophysiological evidence for incremental lexical-semantic integration in auditory compound comprehension.

    PubMed

    Koester, Dirk; Holle, Henning; Gunter, Thomas C

    2009-07-01

    The present study investigated the time-course of semantic integration in auditory compound word processing. Compounding is a productive mechanism of word formation that is used frequently in many languages. Specifically, we examined whether semantic integration is incremental or is delayed until the head, the last constituent in German, is available. Stimuli were compounds consisting of three nouns, and the semantic plausibility of the second and the third constituent was manipulated independently (high vs. low). Participants' task was to listen to the compounds and evaluate them semantically. Event-related brain potentials in response to the head constituents showed an increased N400 for less plausible head constituents, reflecting the lexical-semantic integration of all three compound constituents. In response to the second (less plausible) constituents, an increased N400 with a central-left scalp distribution was observed followed by a parietal positivity. The occurrence of this N400 effect during the presentation of the second constituents suggests that the initial two non-head constituents are immediately integrated. The subsequent positivity might be an instance of a P600 and is suggested to reflect the structural change of the initially constructed compound structure. The results suggest that lexical-semantic integration of compound constituents is an incremental process and, thus, challenge a recent proposal on the time-course of semantic processing in auditory compound comprehension. PMID:19428417

  17. A Bayesian framework for knowledge attribution: evidence from semantic integration.

    PubMed

    Powell, Derek; Horne, Zachary; Pinillos, N Ángel; Holyoak, Keith J

    2015-06-01

    We propose a Bayesian framework for the attribution of knowledge, and apply this framework to generate novel predictions about knowledge attribution for different types of "Gettier cases", in which an agent is led to a justified true belief yet has made erroneous assumptions. We tested these predictions using a paradigm based on semantic integration. We coded the frequencies with which participants falsely recalled the word "thought" as "knew" (or a near synonym), yielding an implicit measure of conceptual activation. Our experiments confirmed the predictions of our Bayesian account of knowledge attribution across three experiments. We found that Gettier cases due to counterfeit objects were not treated as knowledge (Experiment 1), but those due to intentionally-replaced evidence were (Experiment 2). Our findings are not well explained by an alternative account focused only on luck, because accidentally-replaced evidence activated the knowledge concept more strongly than did similar false belief cases (Experiment 3). We observed a consistent pattern of results across a number of different vignettes that varied the quality and type of evidence available to agents, the relative stakes involved, and surface details of content. Accordingly, the present findings establish basic phenomena surrounding people's knowledge attributions in Gettier cases, and provide explanations of these phenomena within a Bayesian framework.

  18. The Effects of Semantic Integration Training on Memory for Pictograph Sentences.

    ERIC Educational Resources Information Center

    Ledger, George W.; Ryan, Ellen Bouchard

    1982-01-01

    The effectiveness of training a semantic integration strategy for recall of pictograph sequences and the generalization of the strategy to a related oral sentence task were examined in 60 kindergarten prereaders. (Author/MP)

  19. Integration of Sentence-Level Semantic Information in Parafovea: Evidence from the RSVP-Flanker Paradigm

    PubMed Central

    Zhang, Wenjia; Li, Nan; Wang, Xiaoyue; Wang, Suiping

    2015-01-01

    During text reading, the parafoveal word was usually presented between 2° and 5° from the point of fixation. Whether semantic information of parafoveal words can be processed during sentence reading is a critical and long-standing issue. Recently, studies using the RSVP-flanker paradigm have shown that the incongruent parafoveal word, presented as right flanker, elicited a more negative N400 compared with the congruent parafoveal word. This suggests that the semantic information of parafoveal words can be extracted and integrated during sentence reading, because the N400 effect is a classical index of semantic integration. However, as most previous studies did not control the word-pair congruency of the parafoveal and the foveal words that were presented in the critical triad, it is still unclear whether such integration happened at the sentence level or just at the word-pair level. The present study addressed this question by manipulating verbs in Chinese sentences to yield either a semantically congruent or semantically incongruent context for the critical noun. In particular, the interval between the critical nouns and verbs was controlled to be 4 or 5 characters. Thus, to detect the incongruence of the parafoveal noun, participants had to integrate it with the global sentential context. The results revealed that the N400 time-locked to the critical triads was more negative in incongruent than in congruent sentences, suggesting that parafoveal semantic information can be integrated at the sentence level during Chinese reading. PMID:26418230

  20. Integration of Sentence-Level Semantic Information in Parafovea: Evidence from the RSVP-Flanker Paradigm.

    PubMed

    Zhang, Wenjia; Li, Nan; Wang, Xiaoyue; Wang, Suiping

    2015-01-01

    During text reading, the parafoveal word was usually presented between 2° and 5° from the point of fixation. Whether semantic information of parafoveal words can be processed during sentence reading is a critical and long-standing issue. Recently, studies using the RSVP-flanker paradigm have shown that the incongruent parafoveal word, presented as right flanker, elicited a more negative N400 compared with the congruent parafoveal word. This suggests that the semantic information of parafoveal words can be extracted and integrated during sentence reading, because the N400 effect is a classical index of semantic integration. However, as most previous studies did not control the word-pair congruency of the parafoveal and the foveal words that were presented in the critical triad, it is still unclear whether such integration happened at the sentence level or just at the word-pair level. The present study addressed this question by manipulating verbs in Chinese sentences to yield either a semantically congruent or semantically incongruent context for the critical noun. In particular, the interval between the critical nouns and verbs was controlled to be 4 or 5 characters. Thus, to detect the incongruence of the parafoveal noun, participants had to integrate it with the global sentential context. The results revealed that the N400 time-locked to the critical triads was more negative in incongruent than in congruent sentences, suggesting that parafoveal semantic information can be integrated at the sentence level during Chinese reading.

  1. A semantic web framework to integrate cancer omics data with biological knowledge

    PubMed Central

    2012-01-01

    Background The RDF triple provides a simple linguistic means of describing limitless types of information. Triples can be flexibly combined into a unified data source we call a semantic model. Semantic models open new possibilities for the integration of variegated biological data. We use Semantic Web technology to explicate high throughput clinical data in the context of fundamental biological knowledge. We have extended Corvus, a data warehouse which provides a uniform interface to various forms of Omics data, by providing a SPARQL endpoint. With the querying and reasoning tools made possible by the Semantic Web, we were able to explore quantitative semantic models retrieved from Corvus in the light of systematic biological knowledge. Results For this paper, we merged semantic models containing genomic, transcriptomic and epigenomic data from melanoma samples with two semantic models of functional data - one containing Gene Ontology (GO) data, the other, regulatory networks constructed from transcription factor binding information. These two semantic models were created in an ad hoc manner but support a common interface for integration with the quantitative semantic models. Such combined semantic models allow us to pose significant translational medicine questions. Here, we study the interplay between a cell's molecular state and its response to anti-cancer therapy by exploring the resistance of cancer cells to Decitabine, a demethylating agent. Conclusions We were able to generate a testable hypothesis to explain how Decitabine fights cancer - namely, that it targets apoptosis-related gene promoters predominantly in Decitabine-sensitive cell lines, thus conveying its cytotoxic effect by activating the apoptosis pathway. Our research provides a framework whereby similar hypotheses can be developed easily. PMID:22373303
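
    The model-merging idea described here can be sketched minimally: triples from a quantitative model and a functional (GO) model compose by simple union, after which one query can span both. The predicate names and identifiers below are hypothetical, not Corvus's actual vocabulary.

```python
# Merging two semantic models represented as sets of triples: because
# triples are uniform (subject, predicate, object) statements, models
# compose by set union without schema alignment.
quantitative = {
    ("gene:BCL2", "ex:expressionIn", "sample:melanoma_01"),
    ("gene:TP53", "ex:expressionIn", "sample:melanoma_01"),
}
functional = {
    ("gene:BCL2", "go:annotatedWith", "GO:0006915"),  # apoptotic process
}

model = quantitative | functional

# A query spanning both models: genes expressed in the sample AND
# annotated with the apoptosis GO term.
expressed = {s for s, p, o in model if p == "ex:expressionIn"}
apoptotic = {s for s, p, o in model
             if p == "go:annotatedWith" and o == "GO:0006915"}
print(expressed & apoptotic)  # {'gene:BCL2'}
```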

  2. Precedency control and other semantic integrity issues in a workbench database

    NASA Technical Reports Server (NTRS)

    Dampney, C. N. G.

    1983-01-01

    Most database systems model the current state of a system of real world discrete and simple entities together with their relationships. By examining instead a database system that is a workbench and models more complicated entities, a fresh perspective is gained. Specifically, semantic integrity is analysed. Four aspects distinct from physical integrity are identified, namely access, failure, concurrency and precedency. Access control is shown to be the consequence of semantic interdependency between data and its matching semantic routines. Failure, concurrency and precedency controls are concerned with preventing processes interfering with each other. Precedency is a new concept in the database context. It expresses a constraint between processes that act on the database. As processes create, update and delete entities they in general obey a partial ordering imposed by the semantics of their actions. Precedency control ensures that data remains consistent with respect to this partial order.
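
    Precedency control as described here can be sketched in a few lines; the precedence relation below (create before update before delete) is an illustrative assumption, not the paper's formalism.

```python
# Toy sketch of precedency control: processes acting on an entity must
# respect a partial order on their actions, checked before execution.
precedes = {("create", "update"), ("update", "delete"), ("create", "delete")}

def allowed(history, action, entity):
    """Permit an action only if every action that must precede it has
    already been applied to the same entity."""
    done = {a for a, e in history if e == entity}
    required = {before for before, after in precedes if after == action}
    return required <= done

history = [("create", "doc1")]
print(allowed(history, "update", "doc1"))  # True
print(allowed(history, "delete", "doc1"))  # False (update has not occurred)
```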

  3. The semantic metadatabase (SEMEDA): ontology based integration of federated molecular biological data sources.

    PubMed

    Köhler, Jacob; Schulze-Kremer, Steffen

    2002-01-01

    A system for "intelligent" semantic integration and querying of federated databases is being implemented by using three main components: a component which enables SQL access to integrated databases by database federation (MARGBench), an ontology based semantic metadatabase (SEMEDA) and an ontology based query interface (SEMEDA-query). In this publication we explain and demonstrate the principles, architecture and the use of SEMEDA. Since SEMEDA is implemented as a three-tiered web application, database providers can enter all relevant semantic and technical information about their databases by themselves via a web browser. SEMEDA's collaborative ontology editing feature is not restricted to database integration, and might also be useful for ongoing ontology developments, such as the "Gene Ontology" [2]. SEMEDA can be found at http://www-bm.cs.uni-magdeburg.de/semeda/. We explain how this ontologically structured information can be used for semantic database integration. In addition, requirements to ontologies for molecular biological database integration are discussed and relevant existing ontologies are evaluated. We further discuss how ontologies and structured knowledge sources can be used in SEMEDA and whether they can be merged, supplemented or updated to meet the requirements for semantic database integration.

  4. Towards virtual knowledge broker services for semantic integration of life science literature and data sources.

    PubMed

    Harrow, Ian; Filsell, Wendy; Woollard, Peter; Dix, Ian; Braxenthaler, Michael; Gedye, Richard; Hoole, David; Kidd, Richard; Wilson, Jabe; Rebholz-Schuhmann, Dietrich

    2013-05-01

    Research in the life sciences requires ready access to primary data, derived information and relevant knowledge from a multitude of sources. Integration and interoperability of such resources are crucial for sharing content across research domains relevant to the life sciences. In this article we present a perspective review of data integration with emphasis on a semantics driven approach to data integration that pushes content into a shared infrastructure, reduces data redundancy and clarifies any inconsistencies. This enables much improved access to life science data from numerous primary sources. The Semantic Enrichment of the Scientific Literature (SESL) pilot project demonstrates feasibility for using already available open semantic web standards and technologies to integrate public and proprietary data resources, which span structured and unstructured content. This has been accomplished through a precompetitive consortium, which provides a cost effective approach for numerous stakeholders to work together to solve common problems.

  5. Towards virtual knowledge broker services for semantic integration of life science literature and data sources.

    PubMed

    Harrow, Ian; Filsell, Wendy; Woollard, Peter; Dix, Ian; Braxenthaler, Michael; Gedye, Richard; Hoole, David; Kidd, Richard; Wilson, Jabe; Rebholz-Schuhmann, Dietrich

    2013-05-01

    Research in the life sciences requires ready access to primary data, derived information and relevant knowledge from a multitude of sources. Integration and interoperability of such resources are crucial for sharing content across research domains relevant to the life sciences. In this article we present a perspective review of data integration with emphasis on a semantics driven approach to data integration that pushes content into a shared infrastructure, reduces data redundancy and clarifies any inconsistencies. This enables much improved access to life science data from numerous primary sources. The Semantic Enrichment of the Scientific Literature (SESL) pilot project demonstrates feasibility for using already available open semantic web standards and technologies to integrate public and proprietary data resources, which span structured and unstructured content. This has been accomplished through a precompetitive consortium, which provides a cost effective approach for numerous stakeholders to work together to solve common problems. PMID:23247259

  6. Electrophysiological correlates of cross-linguistic semantic integration in hearing signers: N400 and LPC.

    PubMed

    Zachau, Swantje; Korpilahti, Pirjo; Hämäläinen, Jarmo A; Ervast, Leena; Heinänen, Kaisu; Suominen, Kalervo; Lehtihalmes, Matti; Leppänen, Paavo H T

    2014-07-01

    We explored semantic integration mechanisms in native and non-native hearing users of sign language and non-signing controls. Event-related brain potentials (ERPs) were recorded while participants performed a semantic decision task for priming lexeme pairs. Pairs were presented either within speech or across speech and sign language. Target-related ERP responses were subjected to principal component analyses (PCA), and the neurocognitive basis of semantic integration processes was assessed by analyzing the N400 and the late positive complex (LPC) components in response to spoken (auditory) and signed (visual) antonymic and unrelated targets. Semantically-related effects triggered across modalities would indicate a similar tight interconnection between the signers' two languages like that described for spoken language bilinguals. Remarkable structural similarity of the N400 and LPC components with varying group differences between the spoken and signed targets was found. The LPC was the dominant response. The controls' LPC differed from the LPC of the two signing groups. It was reduced to the auditory unrelated targets and was less frontal for all the visual targets. The visual LPC was more broadly distributed in native than non-native signers and was left-lateralized for the unrelated targets in the native hearing signers only. Semantic priming effects were found for the auditory N400 in all groups, but only native hearing signers revealed a clear N400 effect to the visual targets. Surprisingly, the non-native signers revealed no semantically-related processing effect to the visual targets reflected in the N400 or the LPC; instead they appeared to rely more on visual post-lexical analyzing stages than native signers. We conclude that native and non-native signers employed different processing strategies to integrate signed and spoken semantic content. It appeared that the signers' semantic processing system was affected by group-specific factors like language

  7. Rule-based simulation models

    NASA Technical Reports Server (NTRS)

    Nieten, Joseph L.; Seraphine, Kathleen M.

    1991-01-01

    Procedural modeling systems, rule based modeling systems, and a method for converting a procedural model to a rule based model are described. Simulation models are used to represent real time engineering systems. A real time system can be represented by a set of equations or functions connected so that they perform in the same manner as the actual system. Most modeling system languages are based on FORTRAN or some other procedural language. Therefore, they must be enhanced with a reaction capability. Rule based systems are reactive by definition. Once the engineering system has been decomposed into a set of calculations using only basic algebraic unary operations, a knowledge network of calculations and functions can be constructed. The knowledge network required by a rule based system can be generated by a knowledge acquisition tool or a source level compiler. The compiler would take an existing model source file, a syntax template, and a symbol table and generate the knowledge network. Thus, existing procedural models can be translated and executed by a rule based system. Neural models can provide the high capacity data manipulation required by the most complex real time models.
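
    The conversion idea can be sketched generically: each equation of the procedural model becomes a rule that re-fires whenever its inputs change, forming a reactive knowledge network. This is a toy Python illustration, not the CLIPS-based implementation the paper describes.

```python
# Toy reactive knowledge network: each equation of a procedural model is
# stored as a rule (inputs -> function) and re-fired when a fact changes.
# Rules are listed in dependency order, so a single pass suffices here.
rules = {
    "area": (("width", "height"), lambda w, h: w * h),
    "cost": (("area", "price"), lambda a, p: a * p),
}

facts = {"height": 3.0, "price": 10.0}

def assert_fact(name, value):
    """Update a fact and re-fire every rule whose inputs are all known."""
    facts[name] = value
    for out, (inputs, fn) in rules.items():
        if all(i in facts for i in inputs):
            facts[out] = fn(*(facts[i] for i in inputs))

assert_fact("width", 2.0)  # fires "area" (2*3), then "cost" (6*10)
print(facts["cost"])       # 60.0
```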

  8. Semantic Elaboration through Integration: Hints Both Facilitate and Inform the Process

    ERIC Educational Resources Information Center

    Bauer, Patricia J.; Varga, Nicole L.; King, Jessica E.; Nolen, Ayla M.; White, Elizabeth A.

    2015-01-01

    Semantic knowledge can be extended in a variety of ways, including self-generation of new facts through integration of separate yet related episodes. We sought to promote integration and self-generation by providing "hints" to help 6-year-olds (Experiment 1) and 4-year-olds (Experiment 2) see the relevance of separate episodes to one…

  9. Multimodal Feature Integration in the Angular Gyrus during Episodic and Semantic Retrieval

    PubMed Central

    Bonnici, Heidi M.; Richter, Franziska R.; Yazar, Yasemin

    2016-01-01

    Much evidence from distinct lines of investigation indicates the involvement of angular gyrus (AnG) in the retrieval of both episodic and semantic information, but the region's precise function and whether that function differs across episodic and semantic retrieval have yet to be determined. We used univariate and multivariate fMRI analysis methods to examine the role of AnG in multimodal feature integration during episodic and semantic retrieval. Human participants completed episodic and semantic memory tasks involving unimodal (auditory or visual) and multimodal (audio-visual) stimuli. Univariate analyses revealed the recruitment of functionally distinct AnG subregions during the retrieval of episodic and semantic information. Consistent with a role in multimodal feature integration during episodic retrieval, significantly greater AnG activity was observed during retrieval of integrated multimodal episodic memories compared with unimodal episodic memories. Multivariate classification analyses revealed that individual multimodal episodic memories could be differentiated in AnG, with classification accuracy tracking the vividness of participants' reported recollections, whereas distinct unimodal memories were represented in sensory association areas only. In contrast to episodic retrieval, AnG was engaged to a statistically equivalent degree during retrieval of unimodal and multimodal semantic memories, suggesting a distinct role for AnG during semantic retrieval. Modality-specific sensory association areas exhibited corresponding activity during both episodic and semantic retrieval, which mirrored the functional specialization of these regions during perception. The results offer new insights into the integrative processes subserved by AnG and its contribution to our subjective experience of remembering. SIGNIFICANCE STATEMENT Using univariate and multivariate fMRI analyses, we provide evidence that functionally distinct subregions of angular gyrus (An

  10. Exploration of SWRL Rule Bases through Visualization, Paraphrasing, and Categorization of Rules

    NASA Astrophysics Data System (ADS)

    Hassanpour, Saeed; O'Connor, Martin J.; Das, Amar K.

    Rule bases are increasingly being used as repositories of knowledge content on the Semantic Web. As the size and complexity of these rule bases increases, developers and end users need methods of rule abstraction to facilitate rule management. In this paper, we describe a rule abstraction method for Semantic Web Rule Language (SWRL) rules that is based on lexical analysis and a set of heuristics. Our method results in a tree data structure that we exploit in creating techniques to visualize, paraphrase, and categorize SWRL rules. We evaluate our approach by applying it to several biomedical ontologies that contain SWRL rules, and show how the results reveal rule patterns within the rule base. We have implemented our method as a plug-in tool for Protégé-OWL, the most widely used ontology modeling software for the Semantic Web. Our tool can allow users to rapidly explore content and patterns in SWRL rule bases, enabling their acquisition and management.
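
    The lexical-analysis step the abstract mentions can be sketched as follows (a hypothetical simplification, not the Protégé-OWL plug-in): a SWRL rule string is split into antecedent and consequent, and each conjunct into predicate and argument nodes, yielding a small tree suitable for display and grouping. The rule text is an invented example.

```python
import re

# One SWRL atom looks like Predicate(arg1, arg2, ...); prefixed
# built-ins such as swrlb:greaterThan are allowed by [\w:]+.
ATOM = re.compile(r"([\w:]+)\(([^)]*)\)")

def parse_swrl(rule):
    """Tokenize a SWRL rule into an antecedent/consequent atom tree."""
    antecedent, consequent = (part.strip() for part in rule.split("->"))
    def atoms(side):
        return [{"predicate": m.group(1),
                 "args": [a.strip() for a in m.group(2).split(",")]}
                for m in ATOM.finditer(side)]
    return {"antecedent": atoms(antecedent), "consequent": atoms(consequent)}

rule = "Person(?p) ^ hasAge(?p, ?a) ^ swrlb:greaterThan(?a, 17) -> Adult(?p)"
tree = parse_swrl(rule)
print(tree["antecedent"][2]["predicate"])  # swrlb:greaterThan
```

    A tree like this is enough to drive the kinds of abstraction the paper describes, e.g. grouping rules that share a predicate in their antecedents, or paraphrasing each atom back into natural language.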

  11. Organizational Knowledge Transfer Using Ontologies and a Rule-Based System

    NASA Astrophysics Data System (ADS)

    Okabe, Masao; Yoshioka, Akiko; Kobayashi, Keido; Yamaguchi, Takahira

    In recent automated and integrated manufacturing, so-called intelligence skill is becoming more and more important, and its efficient transfer to next-generation engineers is one of the most urgent issues. In this paper, we propose a new approach without costly OJT (on-the-job training), that is, combinational usage of a domain ontology, a rule ontology and a rule-based system. Intelligence skill can be decomposed into pieces of simple engineering rules. A rule ontology consists of these engineering rules as primitives and the semantic relations among them. A domain ontology consists of technical terms in the engineering rules and the semantic relations among them. A rule ontology helps novices get the total picture of the intelligence skill and a domain ontology helps them understand the exact meanings of the engineering rules. A rule-based system helps domain experts externalize their tacit intelligence skill to ontologies and also helps novices internalize them. As a case study, we applied our proposal to an actual job at a remote control and maintenance office of hydroelectric power stations in Tokyo Electric Power Co., Inc. We also conducted an evaluation experiment for this case study, and the result supports our proposal.
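
    The combination described above can be sketched in miniature (all rules, facts, and terms are invented for illustration, not taken from the TEPCO case study): engineering rules as primitives, a domain ontology defining the terms they use, and a small forward-chaining engine that applies the rules.

```python
# Domain ontology: technical terms and their meanings.
domain_ontology = {
    "oil_temp_high": "oil temperature above threshold",
    "check_oil_level": "verify oil level against the gauge",
    "inspect_cooler": "inspect the oil cooler",
}

# Engineering rules as primitives: (conditions, conclusion).
rules = [({"oil_temp_high"}, "check_oil_level"),
         ({"oil_temp_high", "oil_level_ok"}, "inspect_cooler")]

def forward_chain(facts, rules):
    """Fire rules until no new conclusions can be derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

derived = forward_chain({"oil_temp_high", "oil_level_ok"}, rules)
for fact in sorted(derived - {"oil_temp_high", "oil_level_ok"}):
    print(fact, "-", domain_ontology.get(fact, "no definition"))
```

    In the paper's terms, the rule list externalizes the expert's tacit skill, while the domain ontology lets a novice look up what each derived conclusion actually means.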

  12. The Semantic Automated Discovery and Integration (SADI) Web service Design-Pattern, API and Reference Implementation

    PubMed Central

    2011-01-01

    Background The complexity and inter-related nature of biological data poses a difficult challenge for data and tool integration. There has been a proliferation of interoperability standards and projects over the past decade, none of which has been widely adopted by the bioinformatics community. Recent attempts have focused on the use of semantics to assist integration, and Semantic Web technologies are being welcomed by this community. Description SADI - Semantic Automated Discovery and Integration - is a lightweight set of fully standards-compliant Semantic Web service design patterns that simplify the publication of services of the type commonly found in bioinformatics and other scientific domains. Using Semantic Web technologies at every level of the Web services "stack", SADI services consume and produce instances of OWL Classes following a small number of very straightforward best-practices. In addition, we provide codebases that support these best-practices, and plug-in tools to popular developer and client software that dramatically simplify deployment of services by providers, and the discovery and utilization of those services by their consumers. Conclusions SADI Services are fully compliant with, and utilize only foundational Web standards; are simple to create and maintain for service providers; and can be discovered and utilized in a very intuitive way by biologist end-users. In addition, the SADI design patterns significantly improve the ability of software to automatically discover appropriate services based on user-needs, and automatically chain these into complex analytical workflows. We show that, when resources are exposed through SADI, data compliant with a given ontological model can be automatically gathered, or generated, from these distributed, non-coordinating resources - a behaviour we have not observed in any other Semantic system. 
Finally, we show that, using SADI, data dynamically generated from Web services can be explored in a manner
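
    The core SADI design pattern, a service consumes RDF instances of a declared input OWL class and returns the same instances decorated with new properties, can be sketched without any framework code. This is an illustrative toy (hypothetical namespace and data, triples modeled as plain tuples), not the SADI codebase or API.

```python
EX = "http://example.org/"   # hypothetical namespace
RDF_TYPE = "rdf:type"

def gene_name_service(input_graph):
    """Consume instances of ex:Gene; return them with ex:hasName attached."""
    names = {EX + "brca1": "BRCA1"}   # stand-in for a real lookup
    output = set(input_graph)         # SADI echoes the input instances...
    for s, p, o in input_graph:
        if p == RDF_TYPE and o == EX + "Gene" and s in names:
            # ...decorated with the service's output property.
            output.add((s, EX + "hasName", names[s]))
    return output

request = {(EX + "brca1", RDF_TYPE, EX + "Gene")}
response = gene_name_service(request)
print((EX + "brca1", EX + "hasName", "BRCA1") in response)  # True
```

    Because the output describes the very same resources as the input, a client can discover and chain such services automatically: any service whose input class matches properties already present in the data can be invoked next, which is the workflow-composition behavior the abstract highlights.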

  13. Enhancing Vocabulary Intervention for Kindergarten Students: Strategic Integration of Semantically Related and Embedded Word Review

    ERIC Educational Resources Information Center

    Zipoli, Richard P., Jr.; Coyne, Michael D.; McCoach, D. Betsy

    2011-01-01

    Two approaches to systematic word review were integrated into an 18-week program of extended vocabulary instruction with kindergarten students from three high-need urban schools. Words in the embedded and semantically related review conditions received systematic and distributed review. In the embedded review condition, brief word definitions were…

  14. An Approach to Formalizing Ontology Driven Semantic Integration: Concepts, Dimensions and Framework

    ERIC Educational Resources Information Center

    Gao, Wenlong

    2012-01-01

    The ontology approach has been accepted as a very promising approach to semantic integration today. However, because of the diversity of focuses and its various connections to other research domains, the core concepts, theoretical and technical approaches, and research areas of this domain still remain unclear. Such ambiguity makes it difficult to…

  15. Rule-Based Runtime Verification

    NASA Technical Reports Server (NTRS)

    Barringer, Howard; Goldberg, Allen; Havelund, Klaus; Sen, Koushik

    2003-01-01

    We present a rule-based framework for defining and implementing finite trace monitoring logics, including future and past time temporal logic, extended regular expressions, real-time logics, interval logics, forms of quantified temporal logics, and so on. Our logic, EAGLE, is implemented as a Java library and involves novel techniques for rule definition, manipulation and execution. Monitoring is done on a state-by-state basis, without storing the execution trace.
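
    The state-by-state monitoring idea can be illustrated with a minimal sketch in the spirit of the framework (this is not the EAGLE library, which is a Java rule system): the past-time property "every 'grant' must be preceded by a 'request'" is checked with constant-size summary state, so the execution trace is never stored.

```python
class PrecededByMonitor:
    """Checks the past-time property: effect implies a prior cause."""

    def __init__(self, cause, effect):
        self.cause, self.effect = cause, effect
        self.cause_seen = False   # O(1) summary of the whole past
        self.ok = True

    def step(self, event):
        """Advance the monitor by one trace state; return the verdict so far."""
        if event == self.cause:
            self.cause_seen = True
        elif event == self.effect and not self.cause_seen:
            self.ok = False
        return self.ok

monitor = PrecededByMonitor("request", "grant")
for event in ["request", "grant", "grant"]:
    monitor.step(event)
print(monitor.ok)  # True: every grant was preceded by a request
```

    The essential property, shared with rule-based monitoring logics, is that each observed state updates a small set of derived facts in place; nothing about earlier states needs to be retained beyond that summary.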

  16. A case study of integrating protein interaction data using semantic web technology.

    PubMed

    Dhanapalan, Lavanya; Chen, Jake Yue

    2007-01-01

    We describe a new ontology-driven semantic data integration approach for post-genome biology studies. Here, a view-based global schema can be automatically generated by merging RDF schemas from local databases. The semantic inconsistency of the merged schema is resolved by the creation of 'RDF ontology maps'. Data querying capability is accomplished with a virtual data repository, in which a D2RQ-based 'relational-to-RDF' map is developed to link schema to the relational database backend. With sample RDQL queries, we demonstrate that our approach significantly simplifies the retrieval of human protein interaction data from different databases containing hundreds of thousands of records.
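
    The role of the ontology maps can be pictured with a toy sketch (invented predicates and data; not the paper's D2RQ/RDQL stack): two local sources describe protein interactions with different predicates, a map rewrites them into one merged graph, and a single pattern query then searches both.

```python
# "Ontology map": local predicate -> global predicate.
ONTOLOGY_MAP = {
    "hprd:interacts": "global:interactsWith",
    "biogrid:binds": "global:interactsWith",
}

source_a = [("P53", "hprd:interacts", "MDM2")]
source_b = [("P53", "biogrid:binds", "BRCA1")]

# Merge the sources, normalizing predicates through the map.
merged = [(s, ONTOLOGY_MAP.get(p, p), o) for s, p, o in source_a + source_b]

def query(graph, pattern):
    """Match a (s, p, o) pattern against the graph; None is a wildcard."""
    return [t for t in graph
            if all(q is None or q == v for q, v in zip(pattern, t))]

partners = query(merged, ("P53", "global:interactsWith", None))
print([o for _, _, o in partners])  # ['MDM2', 'BRCA1']
```

    One query over the merged view reaches records from both databases, which is the simplification the abstract claims: the caller never needs to know which source used which vocabulary.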

  17. An architecture for rule based system explanation

    NASA Technical Reports Server (NTRS)

    Fennel, T. R.; Johannes, James D.

    1990-01-01

    A system architecture is presented that incorporates both graphics and text into explanations provided by rule based expert systems. This architecture facilitates explanation of the knowledge base content, the control strategies employed by the system, and the conclusions made by the system. The suggested approach combines hypermedia and inference engine capabilities. Advantages include: closer integration of user interface, explanation system, and knowledge base; the ability to embed links to deeper knowledge underlying the compiled knowledge used in the knowledge base; and allowing for more direct control of explanation depth and duration by the user. User models are suggested to control the type, amount, and order of information presented.

  18. Large scale healthcare data integration and analysis using the semantic web.

    PubMed

    Timm, John; Renly, Sondra; Farkash, Ariel

    2011-01-01

    Healthcare data interoperability can only be achieved when the semantics of the content is well defined and consistently implemented across heterogeneous data sources. Achieving these objectives of interoperability requires the collaboration of experts from several domains. This paper describes tooling that integrates Semantic Web technologies with common tools to facilitate cross-domain collaborative development for the purposes of data interoperability. Our approach is divided into stages of data harmonization and representation, model transformation, and instance generation. We applied our approach on Hypergenes, an EU funded project, where we applied our method to the Essential Hypertension disease model using a CDA template. Our domain expert partners include clinical providers, clinical domain researchers, healthcare information technology experts, and a variety of clinical data consumers. We show that bringing Semantic Web technologies into the healthcare interoperability toolkit increases opportunities for beneficial collaboration thus improving patient care and clinical research outcomes.

  19. Sharing human-generated observations by integrating HMI and the Semantic Sensor Web.

    PubMed

    Sigüenza, Alvaro; Díaz-Pardo, David; Bernat, Jesús; Vancea, Vasile; Blanco, José Luis; Conejero, David; Gómez, Luis Hernández

    2012-01-01

    Current "Internet of Things" concepts point to a future where connected objects gather meaningful information about their environment and share it with other objects and people. In particular, objects embedding Human Machine Interaction (HMI), such as mobile devices and, increasingly, connected vehicles, home appliances, urban interactive infrastructures, etc., may not only be conceived as sources of sensor information, but, through interaction with their users, they can also produce highly valuable context-aware human-generated observations. We believe that the great promise offered by combining and sharing all of the different sources of information available can be realized through the integration of HMI and Semantic Sensor Web technologies. This paper presents a technological framework that harmonizes two of the most influential HMI and Sensor Web initiatives: the W3C's Multimodal Architecture and Interfaces (MMI) and the Open Geospatial Consortium (OGC) Sensor Web Enablement (SWE) with its semantic extension, respectively. Although the proposed framework is general enough to be applied in a variety of connected objects integrating HMI, a particular development is presented for a connected car scenario where drivers' observations about the traffic or their environment are shared across the Semantic Sensor Web. For implementation and evaluation purposes an on-board OSGi (Open Services Gateway Initiative) architecture was built, integrating several available HMI, Sensor Web and Semantic Web technologies. A technical performance test and a conceptual validation of the scenario with potential users are reported, with results suggesting the approach is sound.

  20. Sharing Human-Generated Observations by Integrating HMI and the Semantic Sensor Web

    PubMed Central

    Sigüenza, Álvaro; Díaz-Pardo, David; Bernat, Jesús; Vancea, Vasile; Blanco, José Luis; Conejero, David; Gómez, Luis Hernández

    2012-01-01

    Current “Internet of Things” concepts point to a future where connected objects gather meaningful information about their environment and share it with other objects and people. In particular, objects embedding Human Machine Interaction (HMI), such as mobile devices and, increasingly, connected vehicles, home appliances, urban interactive infrastructures, etc., may not only be conceived as sources of sensor information, but, through interaction with their users, they can also produce highly valuable context-aware human-generated observations. We believe that the great promise offered by combining and sharing all of the different sources of information available can be realized through the integration of HMI and Semantic Sensor Web technologies. This paper presents a technological framework that harmonizes two of the most influential HMI and Sensor Web initiatives: the W3C's Multimodal Architecture and Interfaces (MMI) and the Open Geospatial Consortium (OGC) Sensor Web Enablement (SWE) with its semantic extension, respectively. Although the proposed framework is general enough to be applied in a variety of connected objects integrating HMI, a particular development is presented for a connected car scenario where drivers' observations about the traffic or their environment are shared across the Semantic Sensor Web. For implementation and evaluation purposes an on-board OSGi (Open Services Gateway Initiative) architecture was built, integrating several available HMI, Sensor Web and Semantic Web technologies. A technical performance test and a conceptual validation of the scenario with potential users are reported, with results suggesting the approach is sound. PMID:22778643

  1. Clinical evaluation of using semantic searching engine for radiological imaging services in RIS-integrated PACS

    NASA Astrophysics Data System (ADS)

    Ling, Tonghui; Zhang, Kai; Yang, Yuanyuan; Hua, Yanqing; Zhang, Jianguo

    2015-03-01

    We designed a semantic searching engine (SSE) for radiological imaging to search both reports and images in a RIS-integrated PACS environment. In this presentation, we present evaluation results of this SSE, showing how it affects radiologists' reporting behavior for different kinds of examinations, and how it improves the performance of retrieval and usage of historical images in RIS-integrated PACS.

  2. Addressing the Challenges of Multi-Domain Data Integration with the SemantEco Framework

    NASA Astrophysics Data System (ADS)

    Patton, E. W.; Seyed, P.; McGuinness, D. L.

    2013-12-01

    Data integration across multiple domains will continue to be a challenge with the proliferation of big data in the sciences. Data origination issues and how data are manipulated are critical to enable scientists to understand and consume disparate datasets as research becomes more multidisciplinary. We present the SemantEco framework as an exemplar for designing an integrative portal for data discovery, exploration, and interpretation that uses best practice W3C Recommendations. We use the Resource Description Framework (RDF) with extensible ontologies described in the Web Ontology Language (OWL) to provide graph-based data representation. Furthermore, SemantEco ingests data via the software package csv2rdf4lod, which generates data provenance using the W3C provenance recommendation (PROV). Our presentation will discuss benefits and challenges of semantic integration, their effect on runtime performance, and how the SemantEco framework assisted in identifying performance issues and improved query performance across multiple domains by an order of magnitude. SemantEco benefits from a semantic approach that provides an 'open world', which allows data to incrementally change just as it does in the real world. SemantEco modules may load new ontologies and data using the W3C's SPARQL Protocol and RDF Query Language via HTTP. Modules may also provide user interface elements for applications and query capabilities to support new use cases. Modules can associate with domains, which are first-class objects in SemantEco. This enables SemantEco to perform integration and reasoning both within and across domains on module-provided data. The SemantEco framework has been used to construct a web portal for environmental and ecological data. The portal includes water and air quality data from the U.S. 
Geological Survey (USGS) and Environmental Protection Agency (EPA) and species observation counts for birds and fish from the Avian Knowledge Network and the Santa Barbara Long Term

  3. Applying Semantic Web Services and Wireless Sensor Networks for System Integration

    NASA Astrophysics Data System (ADS)

    Berkenbrock, Gian Ricardo; Hirata, Celso Massaki; de Oliveira Júnior, Frederico Guilherme Álvares; de Oliveira, José Maria Parente

    In environments like factories, buildings, and homes, automation services tend to change often during their lifetime. Changes concern business rules, process optimization, cost reduction, and so on. It is important to provide a smooth and straightforward way to deal with these changes so that they can be handled faster and at lower cost. Some prominent solutions use the flexibility of Wireless Sensor Networks and the meaningful description of Semantic Web Services to provide service integration. In this work, we give an overview of current solutions for machinery integration that combine both technologies as well as a discussion about some perspectives and open issues when applying Wireless Sensor Networks and Semantic Web Services for automation services integration.

  4. Exploring the role of the posterior middle temporal gyrus in semantic cognition: Integration of anterior temporal lobe with executive processes.

    PubMed

    Davey, James; Thompson, Hannah E; Hallam, Glyn; Karapanagiotidis, Theodoros; Murphy, Charlotte; De Caso, Irene; Krieger-Redwood, Katya; Bernhardt, Boris C; Smallwood, Jonathan; Jefferies, Elizabeth

    2016-08-15

    Making sense of the world around us depends upon selectively retrieving information relevant to our current goal or context. However, it is unclear whether selective semantic retrieval relies exclusively on general control mechanisms recruited in demanding non-semantic tasks, or instead on systems specialised for the control of meaning. One hypothesis is that the left posterior middle temporal gyrus (pMTG) is important in the controlled retrieval of semantic (not non-semantic) information; however this view remains controversial since a parallel literature links this site to event and relational semantics. In a functional neuroimaging study, we demonstrated that an area of pMTG implicated in semantic control by a recent meta-analysis was activated in a conjunction of (i) semantic association over size judgements and (ii) action over colour feature matching. Under these circumstances the same region showed functional coupling with the inferior frontal gyrus - another crucial site for semantic control. Structural and functional connectivity analyses demonstrated that this site is at the nexus of networks recruited in automatic semantic processing (the default mode network) and executively demanding tasks (the multiple-demand network). Moreover, in both task and task-free contexts, pMTG exhibited functional properties that were more similar to ventral parts of inferior frontal cortex, implicated in controlled semantic retrieval, than more dorsal inferior frontal sulcus, implicated in domain-general control. Finally, the pMTG region was functionally correlated at rest with other regions implicated in control-demanding semantic tasks, including inferior frontal gyrus and intraparietal sulcus. We suggest that pMTG may play a crucial role within a large-scale network that allows the integration of automatic retrieval in the default mode network with executively-demanding goal-oriented cognition, and that this could support our ability to understand actions and non

  5. Famous face identification in temporal lobe epilepsy: Support for a multimodal integration model of semantic memory

    PubMed Central

    Drane, Daniel L.; Ojemann, Jeffrey G.; Phatak, Vaishali; Loring, David W.; Gross, Robert E.; Hebb, Adam O.; Silbergeld, Daniel L.; Miller, John W.; Voets, Natalie L.; Saindane, Amit M.; Barsalou, Lawrence; Meador, Kimford J.; Ojemann, George A.; Tranel, Daniel

    2012-01-01

    This study aims to demonstrate that the left and right anterior temporal lobes (ATLs) perform critical but unique roles in famous face identification, with damage to either leading to differing deficit patterns reflecting decreased access to lexical or semantic concepts but not their degradation. Famous face identification was studied in 22 presurgical and 14 postsurgical temporal lobe epilepsy (TLE) patients and 20 healthy comparison subjects using free recall and multiple choice (MC) paradigms. Right TLE patients exhibited presurgical deficits in famous face recognition, and postsurgical deficits in both famous face recognition and familiarity judgments. However, they did not exhibit any problems with naming before or after surgery. In contrast, left TLE patients demonstrated both pre-and postsurgical deficits in famous face naming but no significant deficits in recognition or familiarity. Double dissociations in performance between groups were alleviated by altering task demands. Postsurgical right TLE patients provided with MC options correctly identified greater than 70% of famous faces they initially rated as unfamiliar. Left TLE patients accurately chose the name for nearly all famous faces they recognized (based on their verbal description) but initially failed to name, although they tended to rapidly lose access to this name. We believe alterations in task demands activate alternative routes to semantic and lexical networks, demonstrating that unique pathways to such stored information exist, and suggesting a different role for each ATL in identifying visually presented famous faces. The right ATL appears to play a fundamental role in accessing semantic information from a visual route, with the left ATL serving to link semantic information to the language system to produce a specific name. 
These findings challenge several assumptions underlying amodal models of semantic memory, and provide support for the integrated multimodal theories of semantic memory

  6. Famous face identification in temporal lobe epilepsy: support for a multimodal integration model of semantic memory.

    PubMed

    Drane, Daniel L; Ojemann, Jeffrey G; Phatak, Vaishali; Loring, David W; Gross, Robert E; Hebb, Adam O; Silbergeld, Daniel L; Miller, John W; Voets, Natalie L; Saindane, Amit M; Barsalou, Lawrence; Meador, Kimford J; Ojemann, George A; Tranel, Daniel

    2013-06-01

    This study aims to demonstrate that the left and right anterior temporal lobes (ATLs) perform critical but unique roles in famous face identification, with damage to either leading to differing deficit patterns reflecting decreased access to lexical or semantic concepts but not their degradation. Famous face identification was studied in 22 presurgical and 14 postsurgical temporal lobe epilepsy (TLE) patients and 20 healthy comparison subjects using free recall and multiple choice (MC) paradigms. Right TLE patients exhibited presurgical deficits in famous face recognition, and postsurgical deficits in both famous face recognition and familiarity judgments. However, they did not exhibit any problems with naming before or after surgery. In contrast, left TLE patients demonstrated both pre- and postsurgical deficits in famous face naming but no significant deficits in recognition or familiarity. Double dissociations in performance between groups were alleviated by altering task demands. Postsurgical right TLE patients provided with MC options correctly identified greater than 70% of famous faces they initially rated as unfamiliar. Left TLE patients accurately chose the name for nearly all famous faces they recognized (based on their verbal description) but initially failed to name, although they tended to rapidly lose access to this name. We believe alterations in task demands activate alternative routes to semantic and lexical networks, demonstrating that unique pathways to such stored information exist, and suggesting a different role for each ATL in identifying visually presented famous faces. The right ATL appears to play a fundamental role in accessing semantic information from a visual route, with the left ATL serving to link semantic information to the language system to produce a specific name. 
These findings challenge several assumptions underlying amodal models of semantic memory, and provide support for the integrated multimodal theories of semantic memory


  8. The Role of Sleep Spindles and Slow-Wave Activity in Integrating New Information in Semantic Memory

    PubMed Central

    Lambon Ralph, Matthew A.; Lewis, Penelope A.

    2013-01-01

    Assimilating new information into existing knowledge is a fundamental part of consolidating new memories and allowing them to guide behavior optimally and is vital for conceptual knowledge (semantic memory), which is accrued over many years. Sleep is important for memory consolidation, but its impact upon assimilation of new information into existing semantic knowledge has received minimal examination. Here, we examined the integration process by training human participants on novel words with meanings that fell into densely or sparsely populated areas of semantic memory in two separate sessions. Overnight sleep was polysomnographically monitored after each training session and recall was tested immediately after training, after a night of sleep, and 1 week later. Results showed that participants learned equal numbers of both word types, thus equating amount and difficulty of learning across the conditions. Measures of word recognition speed showed a disadvantage for novel words in dense semantic neighborhoods, presumably due to interference from many semantically related concepts, suggesting that the novel words had been successfully integrated into semantic memory. Most critically, semantic neighborhood density influenced sleep architecture, with participants exhibiting more sleep spindles and slow-wave activity after learning the sparse compared with the dense neighborhood words. These findings provide the first evidence that spindles and slow-wave activity mediate integration of new information into existing semantic networks. PMID:24068804

  9. Construction of an Ortholog Database Using the Semantic Web Technology for Integrative Analysis of Genomic Data

    PubMed Central

    Chiba, Hirokazu; Nishide, Hiroyo; Uchiyama, Ikuo

    2015-01-01

    Recently, various types of biological data, including genomic sequences, have been rapidly accumulating. To discover biological knowledge from such growing heterogeneous data, a flexible framework for data integration is necessary. Ortholog information is a central resource for interlinking corresponding genes among different organisms, and the Semantic Web provides a key technology for the flexible integration of heterogeneous data. We have constructed an ortholog database using the Semantic Web technology, aiming at the integration of numerous genomic data and various types of biological information. To formalize the structure of the ortholog information in the Semantic Web, we have constructed the Ortholog Ontology (OrthO). While the OrthO is a compact ontology for general use, it is designed to be extended to the description of database-specific concepts. On the basis of OrthO, we described the ortholog information from our Microbial Genome Database for Comparative Analysis (MBGD) in the form of Resource Description Framework (RDF) and made it available through the SPARQL endpoint, which accepts arbitrary queries specified by users. In this framework based on the OrthO, the biological data of different organisms can be integrated using the ortholog information as a hub. Besides, the ortholog information from different data sources can be compared with each other using the OrthO as a shared ontology. Here we show some examples demonstrating that the ortholog information described in RDF can be used to link various biological data such as taxonomy information and Gene Ontology. Thus, the ortholog database using the Semantic Web technology can contribute to biological knowledge discovery through integrative data analysis. PMID:25875762
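
    The "ortholog information as a hub" idea can be sketched with a toy example (hypothetical gene identifiers and annotations, not MBGD data): an ortholog group joins per-organism gene records, so an annotation known in one organism can be carried over to its orthologs in another.

```python
# Ortholog group id -> member gene per organism (invented data).
ortholog_groups = {
    "OG0001": {"E. coli": "recA", "B. subtilis": "recA_Bs"},
}
annotations = {"recA": "DNA repair"}   # known only for E. coli here

def transfer_annotations(groups, known):
    """Propagate annotations to unannotated orthologs via the group hub."""
    inferred = {}
    for members in groups.values():
        # Collect any annotation already known within the group...
        labels = {known[g] for g in members.values() if g in known}
        # ...and propagate it to the unannotated orthologs.
        for gene in members.values():
            if gene not in known and labels:
                inferred[gene] = sorted(labels)[0]
    return inferred

result = transfer_annotations(ortholog_groups, annotations)
print(result)  # {'recA_Bs': 'DNA repair'}
```

    In the RDF setting the abstract describes, the same join is expressed as a SPARQL query whose graph pattern passes through the ortholog-group resource, linking, for example, Gene Ontology terms attached to one member with the taxonomy of another.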

  11. BIM-GIS Integrated Geospatial Information Model Using Semantic Web and RDF Graphs

    NASA Astrophysics Data System (ADS)

    Hor, A.-H.; Jadidi, A.; Sohn, G.

    2016-06-01

    In recent years, 3D virtual indoor/outdoor urban modelling has become a key spatial information framework for many civil and engineering applications such as evacuation planning, emergency response and facility management. Accomplishing such sophisticated decision tasks creates a large demand for multi-scale, multi-source 3D urban models. Currently, Building Information Modelling (BIM) and Geographical Information Systems (GIS) are broadly used as the modelling sources. However, sharing and exchanging information between the two modelling domains remains a major challenge; existing syntactic and semantic approaches do not fully support the exchange of rich semantic and geometric information from BIM into GIS or vice versa. This paper proposes a novel approach for integrating BIM and GIS using Semantic Web technologies and Resource Description Framework (RDF) graphs. The novelty of the proposed solution comes from the benefits of integrating BIM and GIS technologies into one unified model, the Integrated Geospatial Information Model (IGIM). The proposed approach consists of three main modules: construction of BIM-RDF and GIS-RDF graphs, integration of the two RDF graphs, and querying of information through the IGIM-RDF graph using SPARQL. The IGIM answers queries over both the BIM and GIS RDF graphs, resulting in a semantically integrated model whose entities represent both BIM classes and GIS feature objects with respect to the target client application. The linkage between BIM-RDF and GIS-RDF is achieved through SPARQL endpoints and defined by queries over a set of datasets and entity classes with complementary properties, relationships and geometries. To validate the proposed approach and its performance, a case study was conducted using the IGIM system design.
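    The BIM-GIS linkage step can be illustrated with a minimal plain-Python sketch. All identifiers (`bim:Door_12`, `gis:Feature_7`, the `ifc:`/`geo:` terms) are hypothetical, and the merge/link mechanics only gesture at what an RDF store plus SPARQL would do:

```python
# Two source graphs as sets of (subject, predicate, object) triples.
bim = {
    ("bim:Door_12", "rdf:type", "ifc:IfcDoor"),
    ("bim:Door_12", "bim:width", "0.9m"),
}
gis = {
    ("gis:Feature_7", "rdf:type", "geo:Entrance"),
    ("gis:Feature_7", "geo:lat", "43.77"),
}
# A linkage statement asserting the BIM element and GIS feature are the same thing.
links = {("bim:Door_12", "owl:sameAs", "gis:Feature_7")}
igim = bim | gis | links  # the unified graph

def properties(entity, graph):
    """All (predicate, object) pairs for an entity, following owl:sameAs links."""
    same = {entity} | {o for s, p, o in graph if s == entity and p == "owl:sameAs"}
    return {(p, o) for s, p, o in graph if s in same and p != "owl:sameAs"}

props = properties("bim:Door_12", igim)
```

    Querying the BIM entity now also returns its GIS-side properties, which is the cross-domain integration the unified IGIM graph provides.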

  13. Toward Open Science at the European Scale: Geospatial Semantic Array Programming for Integrated Environmental Modelling

    NASA Astrophysics Data System (ADS)

    de Rigo, Daniele; Corti, Paolo; Caudullo, Giovanni; McInerney, Daniel; Di Leo, Margherita; San-Miguel-Ayanz, Jesús

    2013-04-01

    of the science-policy interface, INRMM should be able to provide citizens and policy-makers with a clear, accurate understanding of the implications of the technical apparatus for collective environmental decision-making [1]. Complexity, of course, should not be taken as an excuse for obscurity [27-29]. Geospatial Semantic Array Programming. Concise array-based mathematical formulation and implementation (with array programming tools, see (b)) have proved helpful in supporting and mitigating the complexity of WSTMe [40-47] when complemented with generalized modularization and terse array-oriented semantic constraints. This defines the paradigm of Semantic Array Programming (SemAP) [35,36], where semantic transparency also implies free software use (although black boxes [12] - e.g. legacy code - might easily be semantically interfaced). A new approach for WSTMe has emerged by formalizing unorganized best practices and experience-driven informal patterns. The approach introduces a lightweight (non-intrusive) integration of SemAP and geospatial tools (c), called Geospatial Semantic Array Programming (GeoSemAP). GeoSemAP (d) exploits the joint semantics provided by SemAP and geospatial tools to split a complex D-TM into logical blocks that are easier to check by means of mathematical array-based and geospatial constraints. Those constraints take the form of precondition, invariant and postcondition semantic checks. This way, even complex WSTMe may be described as the composition of simpler GeoSemAP blocks, each structured as in (d). GeoSemAP allows intermediate data and information layers to be more easily and formally semantically described, so as to increase the fault-tolerance [17], transparency and reproducibility of WSTMe. This might also help to better communicate part of the policy-relevant knowledge, which is often difficult to transfer from technical WSTMe to the science-policy interface [1,15]. References de Rigo, D., 2013. Behind the horizon of reproducible

  14. A standard ontology for the semantic integration of components in healthcare organizations.

    PubMed

    Román, I; Roa, L M; Madinabeitia, G; Reina, L J

    2006-01-01

    In this paper we introduce an ontology that covers all the terminology involved in the ODP standard. This ontology has been extended with concepts taken from prEN12967 in order to apply it in the healthcare domain. By describing components formally using this ontology, their semantic integration can be eased, along with the benefits derived from assisting the automatic discovery, selection, invocation and composition of components.

  15. Scaling the walls of discovery: using semantic metadata for integrative problem solving.

    PubMed

    Manning, Maurice; Aggarwal, Amit; Gao, Kevin; Tucker-Kellogg, Greg

    2009-03-01

    Current data integration approaches by bioinformaticians frequently involve extracting data from a wide variety of public and private data repositories, each with a unique vocabulary and schema, via scripts. These separate data sets must then be normalized through the tedious and lengthy process of resolving naming differences and collecting information into a single view. Attempts to consolidate such diverse data using data warehouses or federated queries add significant complexity and have shown limitations in flexibility. The alternative of complete semantic integration of data requires a massive, sustained effort in mapping data types and maintaining ontologies. We focused instead on creating a data architecture that leverages semantic mapping of experimental metadata, to support the rapid prototyping of scientific discovery applications with the twin goals of reducing architectural complexity while still leveraging semantic technologies to provide flexibility, efficiency and more fully characterized data relationships. A metadata ontology was developed to describe our discovery process. A metadata repository was then created by mapping metadata from existing data sources into this ontology, generating RDF triples to describe the entities. Finally, an interface to the repository was designed which provided not only search and browse capabilities but complex query templates that aggregate data from both RDF and RDBMS sources. We describe how this approach (i) allows scientists to discover and link relevant data across diverse data sources and (ii) provides a platform for development of integrative informatics applications.
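    The "map source metadata into a shared ontology as triples" step can be sketched in plain Python. The source names, field names, and `meta:` terms below are hypothetical, not the paper's actual ontology:

```python
# A field map normalizes each source's local field names onto shared ontology terms.
FIELD_MAP = {  # (source, local field) -> shared ontology term
    ("lims", "smpl_id"): "meta:sampleId",
    ("assay_db", "sample"): "meta:sampleId",
    ("assay_db", "readout"): "meta:measurement",
}

def to_triples(source, record, subject):
    """Emit RDF-style triples for the mapped fields of one source record."""
    triples = []
    for field, value in record.items():
        term = FIELD_MAP.get((source, field))
        if term:  # unmapped fields are simply skipped in this sketch
            triples.append((subject, term, value))
    return triples

t1 = to_triples("lims", {"smpl_id": "S-001"}, "ex:rec1")
t2 = to_triples("assay_db", {"sample": "S-001", "readout": "4.2"}, "ex:rec2")
```

    Because both sources now describe their samples with the same `meta:sampleId` term, records about the same sample can be joined without per-script name resolution.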

  16. A case study of data integration for aquatic resources using semantic web technologies

    USGS Publications Warehouse

    Gordon, Janice M.; Chkhenkeli, Nina; Govoni, David L.; Lightsom, Frances L.; Ostroff, Andrea; Schweitzer, Peter N.; Thongsavanh, Phethala; Varanka, Dalia E.; Zednik, Stephan

    2015-01-01

    Use cases, information modeling, and linked data techniques are Semantic Web technologies used to develop a prototype system that integrates scientific observations from four independent USGS and cooperator data systems. The techniques were tested with a use case goal of creating a data set for use in exploring potential relationships among freshwater fish populations and environmental factors. The resulting prototype extracts data from the BioData Retrieval System, the Multistate Aquatic Resource Information System, the National Geochemical Survey, and the National Hydrography Dataset. A prototype user interface allows a scientist to select observations from these data systems and combine them into a single data set in RDF format that includes explicitly defined relationships and data definitions. The project was funded by the USGS Community for Data Integration and undertaken by the Community for Data Integration Semantic Web Working Group in order to demonstrate use of Semantic Web technologies by scientists. This allows scientists to simultaneously explore data that are available in multiple, disparate systems beyond those they traditionally have used.

  17. Integrating The Stereotype Content Model (Warmth And Competence) And The Osgood Semantic Differential (Evaluation, Potency, And Activity)

    PubMed Central

    Fiske, Susan T.; Yzerbyt, Vincent Y.

    2015-01-01

    We integrate two prominent models of social perception dimensionality. In three studies, we demonstrate how the well-established semantic differential dimensions of evaluation and potency relate to the stereotype content model dimensions of warmth and competence. Specifically, using a correlational design (Study 1) and experimental designs (Studies 2 and 3), we found that semantic differential dimensions run diagonally across stereotype content model quadrants. Implications of integrating classic and modern approaches to social perception are discussed. PMID:26120217

  18. A semantic data dictionary method for database schema integration in CIESIN

    NASA Astrophysics Data System (ADS)

    Hinds, N.; Huang, Y.; Ravishankar, C.

    1993-08-01

    CIESIN (Consortium for International Earth Science Information Network) is funded by NASA to investigate the technology necessary to integrate and facilitate the interdisciplinary use of Global Change information. A clear part of this mission is providing a link between the various global change data sets, in particular between the physical sciences and the human (social) sciences. The typical scientist using the CIESIN system will want to know how phenomena in an outside field affect his/her work. For example, a medical researcher might ask: how does air quality affect emphysema? This and many similar questions will require sophisticated semantic data integration. The researcher who raised the question may be familiar with medical data sets containing emphysema occurrences, but may know little, if anything, about the existence or location of air-quality data. It is easy to envision a system that would allow that investigator to locate and perform a ``join'' on two data sets, one containing emphysema cases and the other containing air-quality levels. No such system exists today. One major obstacle to providing such a system is heterogeneity, which falls into two broad categories: ``database system'' heterogeneity involves differences in data models and packages, while ``data semantic'' heterogeneity involves differences in terminology between disciplines and varying levels of data refinement, from raw to summary. Our work investigates a global data dictionary mechanism to facilitate a merged data service. Specifically, we propose using a semantic tree during schema definition to aid in locating and integrating heterogeneous databases.
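    The envisioned cross-disciplinary ``join'' can be sketched in plain Python. The data dictionary maps each source's local column name to a shared concept, which then serves as the join key; the source names, columns, and values below are invented for illustration:

```python
# Global data dictionary: (source, local column) -> shared semantic concept.
dictionary = {
    ("emphysema_db", "county_fips"): "concept:region",
    ("air_quality_db", "fips"): "concept:region",
}

emphysema = [{"county_fips": "06037", "cases": 120}]
air = [{"fips": "06037", "pm25": 14.1}]

def join(left, right, lsrc, rsrc):
    """Join two heterogeneous tables on whichever local columns map to concept:region."""
    lkey = next(c for (s, c), v in dictionary.items() if s == lsrc and v == "concept:region")
    rkey = next(c for (s, c), v in dictionary.items() if s == rsrc and v == "concept:region")
    return [dict(l, **r) for l in left for r in right if l[lkey] == r[rkey]]

rows = join(emphysema, air, "emphysema_db", "air_quality_db")
```

    The investigator never needs to know that one source calls the region `county_fips` and the other `fips`; the dictionary resolves that "data semantic" heterogeneity.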

  19. Cognitive integration of language and memory in bilinguals: semantic representation.

    PubMed

    Francis, W S

    1999-03-01

    Understanding cognitive research on the integration of 2 languages in bilingual memory is difficult because of the different terminology, methodology, analysis, and interpretation strategies that scholars with different backgrounds bring to the research. These studies can be usefully categorized on 2 dimensions: memory for verbal experience versus linguistic knowledge, and systemwise versus pairwise issues. Experimental findings in this area converge on the conclusion that at the word meaning/conceptual level, both episodic and linguistic memory can be characterized as shared at the systems level and at least partly shared at the pairwise translation-equivalent level. Interpretation problems that stem from weak hypothesis testing structure and from covert translation can be minimized by using appropriate design and analysis techniques.

  20. Semantic integration of information about orthologs and diseases: the OGO system.

    PubMed

    Miñarro-Gimenez, Jose Antonio; Egaña Aranguren, Mikel; Martínez Béjar, Rodrigo; Fernández-Breis, Jesualdo Tomás; Madrid, Marisa

    2011-12-01

    Semantic Web technologies like RDF and OWL are currently applied in life sciences to improve knowledge management by integrating disparate information. Many of the systems that perform such a task, however, only offer a SPARQL query interface, which is difficult to use for life scientists. We present the OGO system, which consists of a knowledge base that integrates information on orthologous sequences and genetic diseases, providing an easy-to-use, ontology-constraint-driven query interface. This interface allows users to define SPARQL queries through a graphical process, thus requiring no SPARQL expertise.
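    A graphical, ontology-constrained interface like OGO's ultimately has to assemble SPARQL text from user selections. A minimal sketch of that assembly step (the `ogo:`/`omim:` terms are hypothetical placeholders, not OGO's actual vocabulary):

```python
def build_query(cls, constraints):
    """Assemble a SPARQL SELECT from a chosen class and (property, value) constraints."""
    lines = [f"?x rdf:type {cls} ."]
    lines += [f"?x {prop} {value} ." for prop, value in constraints]
    body = "\n  ".join(lines)
    return f"SELECT ?x WHERE {{\n  {body}\n}}"

# The user picks a class and a disease constraint in the GUI; the system emits SPARQL.
query = build_query("ogo:Ortholog", [("ogo:relatedDisease", "omim:143100")])
```

    The user only ever manipulates classes and properties drawn from the ontology; the SPARQL syntax stays hidden behind the builder.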

  1. Entrez Neuron RDFa: a pragmatic semantic web application for data integration in neuroscience research.

    PubMed

    Samwald, Matthias; Lim, Ernest; Masiar, Peter; Marenco, Luis; Chen, Huajun; Morse, Thomas; Mutalik, Pradeep; Shepherd, Gordon; Miller, Perry; Cheung, Kei-Hoi

    2009-01-01

    The amount of biomedical data available in Semantic Web formats has been rapidly growing in recent years. While these formats are machine-friendly, user-friendly web interfaces allowing easy querying of these data are typically lacking. We present "Entrez Neuron", a pilot neuron-centric interface that allows for keyword-based queries against a coherent repository of OWL ontologies. These ontologies describe neuronal structures, physiology, mathematical models and microscopy images. The returned query results are organized hierarchically according to brain architecture. Where possible, the application makes use of entities from the Open Biomedical Ontologies (OBO) and the 'HCLS knowledgebase' developed by the W3C Interest Group for Health Care and Life Science. It makes use of the emerging RDFa standard to embed ontology fragments and semantic annotations within its HTML-based user interface. The application and underlying ontologies demonstrate how Semantic Web technologies can be used for information integration within a curated information repository and between curated information repositories. It also demonstrates how information integration can be accomplished on the client side, through simple copying and pasting of portions of documents that contain RDFa markup.
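    The RDFa mechanism the abstract describes, embedding machine-readable statements inside ordinary HTML, can be sketched in a few lines of Python. The subject and property names are hypothetical, not Entrez Neuron's actual terms:

```python
def rdfa_span(subject, prop, label):
    """Wrap a human-readable label in RDFa attributes so the underlying
    triple (subject, prop, label) survives a copy-paste of the markup."""
    return f'<span about="{subject}" property="{prop}">{label}</span>'

html = rdfa_span("neurondb:PurkinjeCell", "rdfs:label", "Purkinje cell")
```

    A client that pastes this fragment into another RDFa-aware page carries the semantic annotation along with the visible text, which is the client-side integration the paper highlights.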

  2. The effect of discourse structure on depth of semantic integration in reading.

    PubMed

    Yang, Xiaohong; Chen, Lijing; Yang, Yufang

    2014-02-01

    A coherent discourse exhibits structure: its subunits are related to one another in various ways, and subunits that contribute to the same discourse purpose are joined into larger units so as to produce an effect on the reader. To date, this crucial aspect of discourse has been largely neglected in the psycholinguistic literature. In two experiments, we examined whether semantic integration in discourse context is influenced by differences in discourse structure. Readers read discourses in which the last sentence was locally congruent but either semantically congruent or incongruent when interpreted with the preceding sentence. Furthermore, the last sentence was either in the same discourse unit or not in the same discourse unit as the preceding sentence, depending on whether they shared the same discourse purpose. Results from self-paced reading (Experiment 1) and eye tracking (Experiment 2) showed that discourse-incongruous words were read longer than discourse-congruous words only when the critical sentence and the preceding sentence were in the same discourse unit, but not when they belonged to different discourse units. These results establish discourse structure as a new factor in semantic integration and suggest that discourse effects depend both on the content of what is being said and on the way that the contents are organized.

  3. Mixing positive and negative valence: Affective-semantic integration of bivalent words

    PubMed Central

    Kuhlmann, Michael; Hofmann, Markus J.; Briesemeister, Benny B.; Jacobs, Arthur M.

    2016-01-01

    Single words have affective and aesthetic properties that influence their processing. Here we investigated the processing of a special case of word stimuli that are extremely difficult to evaluate, bivalent noun-noun-compounds (NNCs), i.e. novel words that mix a positive and negative noun, e.g. ‘Bombensex’ (bomb-sex). In a functional magnetic resonance imaging (fMRI) experiment we compared their processing with easier-to-evaluate non-bivalent NNCs in a valence decision task (VDT). Bivalent NNCs produced longer reaction times and elicited greater activation in the left inferior frontal gyrus (LIFG) than non-bivalent words, especially in contrast to words of negative valence. We attribute this effect to a LIFG-grounded process of semantic integration that requires greater effort for processing converse information, supporting the notion of a valence representation based on associations in semantic networks. PMID:27491491

  4. Delineating the Effect of Semantic Congruency on Episodic Memory: The Role of Integration and Relatedness

    PubMed Central

    Bein, Oded; Livneh, Neta; Reggev, Niv; Gilead, Michael; Goshen-Gottstein, Yonatan; Maril, Anat

    2015-01-01

    A fundamental challenge in the study of learning and memory is to understand the role of existing knowledge in the encoding and retrieval of new episodic information. The importance of prior knowledge in memory is demonstrated in the congruency effect—the robust finding wherein participants display better memory for items that are compatible, rather than incompatible, with their pre-existing semantic knowledge. Despite its robustness, the mechanism underlying this effect is not well understood. In four studies, we provide evidence that demonstrates the privileged explanatory power of the elaboration-integration account over alternative hypotheses. Furthermore, we question the implicit assumption that the congruency effect pertains to the truthfulness/sensibility of a subject-predicate proposition, and show that congruency is a function of semantic relatedness between item and context words. PMID:25695759

  7. Using XML technology for the ontology-based semantic integration of life science databases.

    PubMed

    Philippi, Stephan; Köhler, Jacob

    2004-06-01

    Several hundred internet accessible life science databases with constantly growing contents and varying areas of specialization are publicly available via the internet. Database integration, consequently, is a fundamental prerequisite to be able to answer complex biological questions. Due to the presence of syntactic, schematic, and semantic heterogeneities, large scale database integration at present takes considerable efforts. As there is a growing apprehension of extensible markup language (XML) as a means for data exchange in the life sciences, this article focuses on the impact of XML technology on database integration in this area. In detail, a general architecture for ontology-driven data integration based on XML technology is introduced, which overcomes some of the traditional problems in this area. As a proof of concept, a prototypical implementation of this architecture based on a native XML database and an expert system shell is described for the realization of a real world integration scenario.
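    The schematic/semantic heterogeneity problem described above can be illustrated with a small stdlib sketch: two XML sources tag the same information differently, and a synonym table (standing in for the ontology) normalizes them. The tag names and records are invented for illustration:

```python
import xml.etree.ElementTree as ET

# Ontology stand-in: source-specific element names mapped to shared terms.
SYNONYMS = {
    "genename": "gene", "gene_symbol": "gene",
    "organism": "species", "taxon": "species",
}

def normalize(xml_text):
    """Parse one XML record and rename its elements onto shared terms."""
    record = {}
    for elem in ET.fromstring(xml_text):
        record[SYNONYMS.get(elem.tag, elem.tag)] = elem.text
    return record

a = normalize("<entry><genename>thrA</genename><organism>E. coli</organism></entry>")
b = normalize("<entry><gene_symbol>thrA</gene_symbol><taxon>B. subtilis</taxon></entry>")
```

    After normalization, both records expose the same `gene` and `species` keys, so a downstream integrator can treat the two sources uniformly.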

  8. How Distance Affects Semantic Integration in Discourse: Evidence from Event-Related Potentials

    PubMed Central

    Yang, Xiaohong; Chen, Shuang; Chen, Xuhai; Yang, Yufang

    2015-01-01

    Event-related potentials were used to investigate whether semantic integration in discourse is influenced by the number of intervening sentences between the endpoints of integration. Readers read discourses in which the last sentence contained a critical word that was either congruent or incongruent with the information introduced in the first sentence. Furthermore, for the short discourses, the first and last sentence were intervened by only one sentence while for the long discourses, they were intervened by three sentences. We found that the incongruent words elicited an N400 effect for both the short and long discourses. However, a P600 effect was only observed for the long discourses, but not for the short ones. These results suggest that although readers can successfully integrate upcoming words into the existing discourse representation, the effort required for this integration process is modulated by the number of intervening sentences. Thus, discourse distance as measured by the number of intervening sentences should be taken as an important factor for semantic integration in discourse. PMID:26569606

  10. Automated revision of CLIPS rule-bases

    NASA Technical Reports Server (NTRS)

    Murphy, Patrick M.; Pazzani, Michael J.

    1994-01-01

    This paper describes CLIPS-R, a theory revision system for the revision of CLIPS rule-bases. CLIPS-R may be used for a variety of knowledge-base revision tasks, such as refining a prototype system, adapting an existing system to slightly different operating conditions, or improving an operational system that makes occasional errors. We present a description of how CLIPS-R revises rule-bases, and an evaluation of the system on three rule-bases.
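    The revision loop of a theory-revision system like CLIPS-R can be caricatured in a few lines of Python (toy rule and data, not CLIPS syntax): a faulty rule condition is adjusted until the rule base stops misclassifying the training examples.

```python
# Labeled examples: (temperature, should the alarm rule fire?)
examples = [(95, False), (101, True), (99, False), (104, True)]

threshold = 90  # initial, faulty rule: fire the alarm if temp > 90

def rule_fires(temp, thr):
    return temp > thr

def errors(thr):
    """Count examples the rule misclassifies at a given threshold."""
    return sum(rule_fires(t, thr) != want for t, want in examples)

# Revision step: search nearby thresholds for one consistent with all examples.
for candidate in range(threshold, 110):
    if errors(candidate) == 0:
        threshold = candidate
        break
```

    Real theory revision searches over richer edits (adding, deleting, and reordering rule conditions), but the feedback loop, propose a revision and re-test against examples, is the same.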

  11. Using the Semantic Web for Rapid Integration of WikiPathways with Other Biological Online Data Resources.

    PubMed

    Waagmeester, Andra; Kutmon, Martina; Riutta, Anders; Miller, Ryan; Willighagen, Egon L; Evelo, Chris T; Pico, Alexander R

    2016-06-01

    The diversity of online resources storing biological data in different formats provides a challenge for bioinformaticians to integrate and analyse their biological data. The semantic web provides a standard to facilitate knowledge integration using statements built as triples describing a relation between two objects. WikiPathways, an online collaborative pathway resource, is now available in the semantic web through a SPARQL endpoint at http://sparql.wikipathways.org. Having biological pathways in the semantic web allows rapid integration with data from other resources that contain information about elements present in pathways using SPARQL queries. In order to convert WikiPathways content into meaningful triples we developed two new vocabularies that capture the graphical representation and the pathway logic, respectively. Each gene, protein, and metabolite in a given pathway is defined with a standard set of identifiers to support linking to several other biological resources in the semantic web. WikiPathways triples were loaded into the Open PHACTS discovery platform and are available through its Web API (https://dev.openphacts.org/docs) to be used in various tools for drug development. We combined various semantic web resources with the newly converted WikiPathways content using a variety of SPARQL query types and third-party resources, such as the Open PHACTS API. The ability to use pathway information to form new links across diverse biological data highlights the utility of integrating WikiPathways in the semantic web. PMID:27336457

  14. HIV-K: an integrative knowledge base for semantic integration of AIDS-related malignancy data and treatment outcomes.

    PubMed

    Tirado-Ramos, A; Saltz, Joel; Lechowicz, Mary Jo

    2010-01-01

    Technological innovations such as web services and collaborative Grid platforms like caGrid can create opportunities to converge the worlds of health care and clinical research, by facilitating access and integration of HIV-related malignancy clinical and outcomes data at more sophisticated, semantic levels. At the same time, large numbers of randomized clinical trial and outcomes data on AIDS-defining malignancies (ADM) and non-AIDS-defining malignancies (nADM) have been produced during the last few years. There is still much work to do, though, on obtaining clear conclusions from the integration of such information. This is a white paper on work in progress from Emory University's HIV/AIDS related malignancy data integrative knowledge base project (HIV-K). We are working to increase the understanding of available clinical trial data and outcomes of ADM such as lymphoma, as well as nADM such as anal cancer, Hodgkin lymphoma, or liver cancer. Our hypothesis is that, by creating prototypes of tools for semantics-enabled integrative knowledge bases for HIV/AIDS-related malignancy data, we will facilitate the identification of patterns and potential new overall evidence, as well as the linking of integrated data and results to registries of interest.

  15. A structure-based model of semantic integrity constraints for relational data bases

    NASA Technical Reports Server (NTRS)

    Rasdorf, William J.; Ulberg, Karen J.; Baugh, John W., Jr.

    1987-01-01

    Data base management systems (DBMSs) are in widespread use because of the ease and flexibility with which users access large volumes of data. Ensuring data accuracy through integrity constraints is a central aspect of DBMS use. However, many DBMSs still lack adequate integrity support. In addition, a comprehensive theoretical basis for such support - the role of a constraint classification system - has yet to be developed. This paper presents a formalism that classifies semantic integrity constraints based on the structure of the relational model. Integrity constraints are characterized by the portion of the data base structure they access, whether one or more relations, attributes, or tuples. Thus, the model is completely general, allowing the arbitrary specification of any constraint. Examples of each type of constraint are illustrated using a small engineering data base, and various implementation issues are discussed.
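    The structural classification the abstract describes can be sketched as constraints tagged by the portion of the database they touch: a single attribute, tuples within one relation, or references spanning relations. The relation and attribute names below are invented for illustration.

```python
# Toy sketch of structure-based constraint classification: each checker
# is grouped by the part of the database it reads. Data are invented.

employees = [
    {"id": 1, "name": "Ada",   "salary": 120_000, "dept": "ENG"},
    {"id": 2, "name": "Grace", "salary":  95_000, "dept": "ENG"},
]
departments = [{"code": "ENG"}, {"code": "OPS"}]
db = {"employees": employees, "departments": departments}

# Attribute-level: inspects one attribute of one tuple at a time.
def positive_salary(db):
    return all(t["salary"] > 0 for t in db["employees"])

# Relation-level: compares tuples within a single relation.
def unique_ids(db):
    ids = [t["id"] for t in db["employees"]]
    return len(ids) == len(set(ids))

# Inter-relation: a referential constraint spanning two relations.
def dept_exists(db):
    codes = {d["code"] for d in db["departments"]}
    return all(t["dept"] in codes for t in db["employees"])

violations = [c.__name__
              for c in (positive_salary, unique_ids, dept_exists)
              if not c(db)]
```

    Because each checker declares only the structural slice it needs, arbitrary constraints of any of the three kinds can be added without changing the framework, which is the generality the paper claims.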

  16. Theta rhythm of navigation: link between path integration and landmark navigation, episodic and semantic memory.

    PubMed

    Buzsáki, György

    2005-01-01

    Five key topics have been reverberating in hippocampal-entorhinal cortex (EC) research over the past five decades: episodic and semantic memory, path integration ("dead reckoning") and landmark ("map") navigation, and theta oscillation. We suggest that the systematic relations between single cell discharge and the activity of neuronal ensembles reflected in local field theta oscillations provide a useful insight into the relationship among these terms. In rats trained to run in direction-guided (1-dimensional) tasks, hippocampal cell assemblies discharge sequentially, with different assemblies active on opposite runs, i.e., place cells are unidirectional. Such tasks do not require map representation and are formally identical with learning sequentially occurring items in an episode. Hebbian plasticity, acting within the temporal window of the theta cycle, converts the travel distances into synaptic strengths between the sequentially activated and unidirectionally connected assemblies. In contrast, place representations by hippocampal neurons in 2-dimensional environments are typically omnidirectional, characteristic of a map. Generation of a map requires exploration, essentially a dead reckoning behavior. We suggest that omnidirectional navigation through the same places (junctions) during exploration gives rise to omnidirectional place cells and, consequently, maps free of temporal context. Analogously, multiple crossings of common junction(s) of episodes convert the common junction(s) into context-free or semantic memory. Theta oscillation can hence be conceived as the navigation rhythm through both physical and mnemonic space, facilitating the formation of maps and episodic/semantic memories.

  17. A ubiquitous sensor network platform for integrating smart devices into the semantic sensor web.

    PubMed

    de Vera, David Díaz Pardo; Izquierdo, Alvaro Sigüenza; Vercher, Jesús Bernat; Hernández Gómez, Luis Alfonso

    2014-01-01

    Ongoing Sensor Web developments make a growing amount of heterogeneous sensor data available to smart devices. This is generating an increasing demand for homogeneous mechanisms to access, publish and share real-world information. This paper discusses, first, an architectural solution based on Next Generation Networks: a pilot Telco Ubiquitous Sensor Network (USN) Platform that embeds several OGC® Sensor Web services. This platform has already been deployed in large scale projects. Second, the USN-Platform is extended to explore a first approach to Semantic Sensor Web principles and technologies, so that smart devices can access Sensor Web data, allowing them also to share richer (semantically interpreted) information. An experimental scenario is presented: a smart car that consumes and produces real-world information which is integrated into the Semantic Sensor Web through a Telco USN-Platform. Performance tests revealed that observation publishing times with our experimental system were well within limits compatible with the adequate operation of smart safety assistance systems in vehicles. On the other hand, response times for complex queries on large repositories may be inappropriate for rapid reaction needs. PMID:24945678

  18. Hands typing what hands do: Action-semantic integration dynamics throughout written verb production.

    PubMed

    García, Adolfo M; Ibáñez, Agustín

    2016-04-01

    Processing action verbs, in general, and manual action verbs, in particular, involves activations in gross and hand-specific motor networks, respectively. While this is well established for receptive language processes, no study has explored action-semantic integration during written production. Moreover, little is known about how such crosstalk unfolds from motor planning to execution. Here we address both issues through our novel "action semantics in typing" paradigm, which allows keystroke operations to be timed during word typing. Specifically, we created a primed-verb-copying task involving manual action verbs, non-manual action verbs, and non-action verbs. Motor planning processes were indexed by first-letter lag (the lapse between target onset and first keystroke), whereas execution dynamics were assessed considering whole-word lag (the lapse between first and last keystroke). Each phase was differently delayed by action verbs. When these were processed for over one second, interference was strong and magnified by effector compatibility during programming, but weak and effector-blind during execution. Instead, when they were processed for less than 900ms, interference was reduced by effector compatibility during programming and it faded during execution. Finally, typing was facilitated by prime-target congruency, irrespective of the verbs' motor content. Thus, action-verb semantics seems to extend beyond its embodied foundations, involving conceptual dynamics not tapped by classical reaction-time measures. These findings are compatible with non-radical models of language embodiment and with predictions of event coding theory. PMID:26803393
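    The two timing measures defined in the abstract reduce to simple differences over keystroke timestamps; a minimal sketch with invented times (milliseconds from trial start):

```python
# First-letter lag  = time from target onset to the first keystroke
#                     (indexes motor planning).
# Whole-word lag    = time from the first to the last keystroke
#                     (indexes execution dynamics).

def typing_lags(onset_ms, keystroke_times_ms):
    first, last = keystroke_times_ms[0], keystroke_times_ms[-1]
    return first - onset_ms, last - first

# Target appears at t=0; four keystrokes follow (invented data).
first_letter_lag, whole_word_lag = typing_lags(0, [450, 580, 690, 820])
```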

  19. A Ubiquitous Sensor Network Platform for Integrating Smart Devices into the Semantic Sensor Web

    PubMed Central

    de Vera, David Díaz Pardo; Izquierdo, Álvaro Sigüenza; Vercher, Jesús Bernat; Gómez, Luis Alfonso Hernández

    2014-01-01

    Ongoing Sensor Web developments make a growing amount of heterogeneous sensor data available to smart devices. This is generating an increasing demand for homogeneous mechanisms to access, publish and share real-world information. This paper discusses, first, an architectural solution based on Next Generation Networks: a pilot Telco Ubiquitous Sensor Network (USN) Platform that embeds several OGC® Sensor Web services. This platform has already been deployed in large scale projects. Second, the USN-Platform is extended to explore a first approach to Semantic Sensor Web principles and technologies, so that smart devices can access Sensor Web data, allowing them also to share richer (semantically interpreted) information. An experimental scenario is presented: a smart car that consumes and produces real-world information which is integrated into the Semantic Sensor Web through a Telco USN-Platform. Performance tests revealed that observation publishing times with our experimental system were well within limits compatible with the adequate operation of smart safety assistance systems in vehicles. On the other hand, response times for complex queries on large repositories may be inappropriate for rapid reaction needs. PMID:24945678

  20. Learning new vocabulary during childhood: effects of semantic training on lexical consolidation and integration.

    PubMed

    Henderson, Lisa; Weighall, Anna; Gaskell, Gareth

    2013-11-01

    Research suggests that word learning is an extended process, with offline consolidation crucial for the strengthening of new lexical representations and their integration with existing lexical knowledge (as measured by engagement in lexical competition). This supports a dual memory systems account, in which new information is initially sparsely encoded separately from existing knowledge and integrated with long-term memory over time. However, previous studies of this type exploited unnatural learning contexts, involving fictitious words in the absence of word meaning. In this study, 5- to 9-year-old children learned real science words (e.g., hippocampus) with or without semantic information. Children in both groups were slower to detect pauses in familiar competitor words (e.g., hippopotamus) relative to control words 24h after training but not immediately, confirming that offline consolidation is required before new words are integrated with the lexicon and engage in lexical competition. Children recalled more new words 24h after training than immediately (with similar improvements shown for the recall and recognition of new word meanings); however, children who were exposed to the meanings during training showed further improvements in recall after 1 week and outperformed children who were not exposed to meanings. These findings support the dual memory systems account of vocabulary acquisition and suggest that the association of a new phonological form with semantic information is critical for the development of stable lexical representations. PMID:23981272

  1. Early Stages of Sensory Processing, but Not Semantic Integration, Are Altered in Dyslexic Adults

    PubMed Central

    Silva, Patrícia B.; Ueki, Karen; Oliveira, Darlene G.; Boggio, Paulo S.; Macedo, Elizeu C.

    2016-01-01

    The aim of this study was to verify which stages of language processing are impaired in individuals with dyslexia. For this, a visual-auditory crossmodal task with semantic judgment was used. The P100 potentials were chosen, related to visual processing and initial integration, and N400 potentials related to semantic processing. Based on visual-auditory crossmodal studies, it is understood that dyslexic individuals present impairments in the integration of these two types of tasks and impairments in processing spoken and musical auditory information. The present study sought to investigate and compare the performance of 32 adult participants (14 individuals with dyslexia), in semantic processing tasks in two situations with auditory stimuli: sentences and music, with integrated visual stimuli (pictures). From the analysis of the accuracy, both the sentence and the music blocks showed significant effects on the congruency variable, with both groups having higher scores for the incongruent items than for the congruent ones. Furthermore, there was also a group effect when the priming was music, with the dyslexic group showing an inferior performance to the control group, demonstrating greater impairments in processing when the priming was music. Regarding the reaction time variable, a group effect in music and sentence priming was found, with the dyslexic group being slower than the control group. The N400 and P100 components were analyzed. In items with judgment and music priming, a group effect was observed for the amplitude of the P100, with higher means produced by individuals with dyslexia, corroborating the literature that individuals with dyslexia have difficulties in early information processing. A congruency effect was observed in the items with music priming, with greater P100 amplitudes found in incongruous situations. Analyses of the N400 component showed the congruency effect for amplitude in both types of priming, with the mean amplitude for incongruent

  2. Early Stages of Sensory Processing, but Not Semantic Integration, Are Altered in Dyslexic Adults.

    PubMed

    Silva, Patrícia B; Ueki, Karen; Oliveira, Darlene G; Boggio, Paulo S; Macedo, Elizeu C

    2016-01-01

    The aim of this study was to verify which stages of language processing are impaired in individuals with dyslexia. For this, a visual-auditory crossmodal task with semantic judgment was used. The P100 potentials were chosen, related to visual processing and initial integration, and N400 potentials related to semantic processing. Based on visual-auditory crossmodal studies, it is understood that dyslexic individuals present impairments in the integration of these two types of tasks and impairments in processing spoken and musical auditory information. The present study sought to investigate and compare the performance of 32 adult participants (14 individuals with dyslexia), in semantic processing tasks in two situations with auditory stimuli: sentences and music, with integrated visual stimuli (pictures). From the analysis of the accuracy, both the sentence and the music blocks showed significant effects on the congruency variable, with both groups having higher scores for the incongruent items than for the congruent ones. Furthermore, there was also a group effect when the priming was music, with the dyslexic group showing an inferior performance to the control group, demonstrating greater impairments in processing when the priming was music. Regarding the reaction time variable, a group effect in music and sentence priming was found, with the dyslexic group being slower than the control group. The N400 and P100 components were analyzed. In items with judgment and music priming, a group effect was observed for the amplitude of the P100, with higher means produced by individuals with dyslexia, corroborating the literature that individuals with dyslexia have difficulties in early information processing. A congruency effect was observed in the items with music priming, with greater P100 amplitudes found in incongruous situations. Analyses of the N400 component showed the congruency effect for amplitude in both types of priming, with the mean amplitude for incongruent

  3. Francisella tularensis novicida proteomic and transcriptomic data integration and annotation based on semantic web technologies

    PubMed Central

    Anwar, Nadia; Hunt, Ela

    2009-01-01

    Background This paper summarises the lessons and experiences gained from a case study of the application of semantic web technologies to the integration of data from the bacterial species Francisella tularensis novicida (Fn). Fn data sources are disparate and heterogeneous, as multiple laboratories across the world, using multiple technologies, perform experiments to understand the mechanism of virulence. It is hard to integrate these data sources in a flexible manner that allows new experimental data to be added and compared when required. Results Public domain data sources were combined in RDF. Using this connected graph of database cross references, we extended the annotations of an experimental data set by superimposing onto it the annotation graph. Identifiers used in the experimental data were automatically resolved, and the data acquired annotations in the rest of the RDF graph. This happened without the expensive manual annotation that would normally be required to produce these links. This graph of resolved identifiers was then used to combine two experimental data sets, a proteomics experiment and a transcriptomics experiment studying the mechanism of virulence through the comparison of wildtype Fn with an avirulent mutant strain. Conclusion We produced a graph of Fn cross references which enabled the combination of two experimental datasets. Through combination of these data we are able to perform queries that compare the results of the two experiments. We found that data are easily combined in RDF and that experimental results are easily compared when the data are integrated. We conclude that semantic data integration offers a convenient, simple and flexible solution to the integration of published and unpublished experimental data. PMID:19796400
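    The identifier-resolution step the abstract describes can be sketched without any RDF machinery: cross-references form a graph, identifiers in the same connected component are treated as the same entity, and the two experiments are joined on that resolved identity. All identifiers and values below are invented.

```python
# Sketch of cross-reference-based identifier resolution: build connected
# components over a graph of database cross-references, then join a
# proteomics and a transcriptomics result set on resolved identity.

from collections import defaultdict

# Invented cross-references linking three identifier schemes.
xrefs = [("uniprot:EXAMPLE1", "gene:locus_0001"),
         ("gene:locus_0001", "refseq:EXAMPLE_R1")]

def components(edges):
    """Map each identifier to the frozenset of its connected component."""
    graph = defaultdict(set)
    for a, b in edges:
        graph[a].add(b)
        graph[b].add(a)
    comps, seen = {}, set()
    for start in graph:
        if start in seen:
            continue
        stack, comp = [start], set()
        while stack:
            node = stack.pop()
            if node in comp:
                continue
            comp.add(node)
            stack.extend(graph[node])
        seen |= comp
        for node in comp:
            comps[node] = frozenset(comp)
    return comps

ids = components(xrefs)

proteomics = {"uniprot:EXAMPLE1": 2.4}        # invented fold change
transcriptomics = {"refseq:EXAMPLE_R1": 1.8}  # invented expression ratio

# Join the two experiments wherever identifiers resolve to one entity.
joined = [(p, t, fold, expr)
          for p, fold in proteomics.items()
          for t, expr in transcriptomics.items()
          if ids.get(p) == ids.get(t)]
```

    In the paper this resolution falls out of the RDF graph for free; the point of the sketch is only that no per-record manual annotation is needed once the cross-reference edges exist.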

  4. Semantic Representation and Scale-Up of Integrated Air Traffic Management Data

    NASA Technical Reports Server (NTRS)

    Keller, Richard M.; Ranjan, Shubha; Wei, Mei Y.; Eshow, Michelle M.

    2016-01-01

    Each day, the global air transportation industry generates a vast amount of heterogeneous data from air carriers, air traffic control providers, and secondary aviation entities handling baggage, ticketing, catering, fuel delivery, and other services. Generally, these data are stored in isolated data systems, separated from each other by significant political, regulatory, economic, and technological divides. These realities aside, integrating aviation data into a single, queryable, big data store could enable insights leading to major efficiency, safety, and cost advantages. In this paper, we describe an implemented system for combining heterogeneous air traffic management data using semantic integration techniques. The system transforms data from its original disparate source formats into a unified semantic representation within an ontology-based triple store. Our initial prototype stores only a small sliver of air traffic data covering one day of operations at a major airport. The paper also describes our analysis of difficulties ahead as we prepare to scale up data storage to accommodate successively larger quantities of data -- eventually covering all US commercial domestic flights over an extended multi-year timeframe. We review several approaches to mitigating scale-up related query performance concerns.

  5. Integrating atmospheric and volcanic gas data in support of climate impact studies using semantic technologies

    NASA Astrophysics Data System (ADS)

    Sinha, K.; Fox, P.; Raskin, R.; McGuinness, D.

    2008-05-01

    In support of a NASA-funded scientific application (SESDI, the Semantically Enabled Science Data Integration Project; http://sesdi.hao.ucar.edu/) that needs to share volcano and climate data to investigate statistical (e.g. height of the tropopause) relationships between volcanism and global climate, we have generated volcano and plate tectonics ontologies and leveraged and re-factored the existing SWEET (Semantic Web for Earth and Environmental Terminology) ontology. To fulfil several goals we have developed a set of packages for integrating the relevant ontologies (which are shared and reused by a broad community of users) to provide access to the key solid-earth (volcano) and atmospheric related databases. We present details on how ontologies are used in this science application setting, the methodologies employed to create the ontologies, register them to the underlying data, and the implementation for use by scientists. SESDI is a NASA/ESTO/ACCESS-funded project involving the High Altitude Observatory at the National Center for Atmospheric Research (NCAR), McGuinness Associates Consulting, NASA/JPL and Virginia Polytechnic University.

  6. SENHANCE: A Semantic Web framework for integrating social and hardware sensors in e-Health.

    PubMed

    Pagkalos, Ioannis; Petrou, Loukas

    2016-09-01

    Self-reported data are very important in Healthcare, especially when combined with data from sensors. Social Networking Sites, such as Facebook, are a promising source of not only self-reported data but also social data, which are otherwise difficult to obtain. Due to their unstructured nature, providing information that is meaningful to health professionals from this source is a daunting task. To this end, we employ Social Network Applications as Social Sensors that gather structured data and use Semantic Web technologies to fuse them with hardware sensor data, effectively integrating both sources. We show that this combination of social and hardware sensor observations creates a novel space that can be used for a variety of feature-rich e-Health applications. We present the design of our prototype framework, SENHANCE, and our findings from its pilot application in the NutriHeAl project, where a Facebook app is integrated with Fitbit digital pedometers for Lifestyle monitoring.

  7. A Case Study in Integrating Multiple E-commerce Standards via Semantic Web Technology

    NASA Astrophysics Data System (ADS)

    Yu, Yang; Hillman, Donald; Setio, Basuki; Heflin, Jeff

    Internet business-to-business transactions present great challenges in merging information from different sources. In this paper we describe a project to integrate four representative commercial classification systems with the Federal Cataloging System (FCS). The FCS is used by the US Defense Logistics Agency to name, describe and classify all items under inventory control by the DoD. Our approach uses the ECCMA Open Technical Dictionary (eOTD) as a common vocabulary to accommodate all different classifications. We create a semantic bridging ontology between each classification and the eOTD to describe their logical relationships in OWL DL. The essential idea is that since each classification has formal definitions in a common vocabulary, we can use subsumption to automatically integrate them, thus mitigating the need for pairwise mappings. Furthermore our system provides an interactive interface to let users choose and browse the results and more importantly it can translate catalogs that commit to these classifications using compiled mapping results.
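    The subsumption idea in this abstract can be sketched with a shared taxonomy standing in for the eOTD: each source classification maps its codes into the common vocabulary, and one class subsumes another when its mapping appears in the other's ancestor chain. All class names and codes below are invented, not actual eOTD or FCS entries.

```python
# Toy sketch of subsumption-based integration via a common vocabulary.
# child -> parent links within the shared taxonomy (invented).
common_taxonomy = {
    "Hex Bolt": "Fastener",
    "Fastener": "Hardware",
    "Hardware": None,
}

# Each source classification maps its codes into the common vocabulary
# (classification name and code are hypothetical).
mapping = {
    ("SystemA", "31-16-15"): "Hex Bolt",
    ("SystemB", "5306"): "Fastener",
}

def ancestors(term):
    """Return term plus all its ancestors in the common taxonomy."""
    chain = []
    while term is not None:
        chain.append(term)
        term = common_taxonomy.get(term)
    return chain

def subsumes(broad, narrow):
    """Does `broad` cover everything that `narrow` classifies?"""
    return mapping[broad] in ancestors(mapping[narrow])

# SystemB's fastener class automatically covers SystemA's hex bolts,
# without any pairwise SystemA-to-SystemB mapping being authored.
covered = subsumes(("SystemB", "5306"), ("SystemA", "31-16-15"))
```

    This is the payoff the paper claims for OWL DL: once every classification is defined against the common vocabulary, a reasoner derives the cross-classification relationships by subsumption instead of requiring N-squared pairwise mappings.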

  8. Semantic Web Ontology and Data Integration: a Case Study in Aiding Psychiatric Drug Repurposing.

    PubMed

    Liang, Chen; Sun, Jingchun; Tao, Cui

    2016-01-01

    Despite ongoing progress towards treating mental illness, there remain significant difficulties in selecting probable candidate drugs from existing databases. We describe an ontology-oriented approach that aims to represent the nexus between genes, drugs, phenotypes, symptoms, and diseases from multiple information sources. Along with this approach, we report a case study in which we attempted to explore candidate drugs that are effective for both bipolar disorder and epilepsy. We constructed an ontology that incorporates the knowledge between the two diseases and performed a semantic reasoning task on the ontology. The reasoning results suggested 48 candidate drugs that hold promise for a further breakthrough. An evaluation was performed and demonstrated the validity of the proposed ontology. The overarching goal of this research is to build a framework of ontology-based data integration underpinning psychiatric drug repurposing. This approach prioritizes candidate drugs that have potential associations among genes, phenotypes and symptoms, and thus facilitates data integration and drug repurposing in psychiatric disorders. PMID:27570661

  9. Semantic Web Ontology and Data Integration: a Case Study in Aiding Psychiatric Drug Repurposing.

    PubMed

    Liang, Chen; Sun, Jingchun; Tao, Cui

    2016-01-01

    Despite ongoing progress towards treating mental illness, there remain significant difficulties in selecting probable candidate drugs from existing databases. We describe an ontology-oriented approach that aims to represent the nexus between genes, drugs, phenotypes, symptoms, and diseases from multiple information sources. Along with this approach, we report a case study in which we attempted to explore candidate drugs that are effective for both bipolar disorder and epilepsy. We constructed an ontology that incorporates the knowledge between the two diseases and performed a semantic reasoning task on the ontology. The reasoning results suggested 48 candidate drugs that hold promise for a further breakthrough. An evaluation was performed and demonstrated the validity of the proposed ontology. The overarching goal of this research is to build a framework of ontology-based data integration underpinning psychiatric drug repurposing. This approach prioritizes candidate drugs that have potential associations among genes, phenotypes and symptoms, and thus facilitates data integration and drug repurposing in psychiatric disorders.

  10. Integration of nursing assessment concepts into the medical entities dictionary using the LOINC semantic structure as a terminology model.

    PubMed

    Cieslowski, B J; Wajngurt, D; Cimino, J J; Bakken, S

    2001-01-01

    Recent investigations have tested the applicability of various terminology models for representing nursing concepts, including those related to nursing diagnoses, nursing interventions, and standardized nursing assessments, as a prerequisite for building a reference terminology that supports the nursing domain. We used the semantic structure of Clinical LOINC (Logical Observations, Identifiers, Names, and Codes) as a reference terminology model to support the integration of standardized assessment terms from two nursing terminologies into the Medical Entities Dictionary (MED), the concept-oriented, metadata dictionary at New York Presbyterian Hospital. Although the LOINC semantic structure was used previously to represent laboratory terms in the MED, selected hierarchies and semantic slots required revisions in order to incorporate the nursing assessment concepts. This project was an initial step in integrating nursing assessment concepts into the MED in a manner consistent with evolving standards for reference terminology models. Moreover, the revisions provide the foundation for adding other types of standardized assessments to the MED.

  11. Once is Enough: N400 Indexes Semantic Integration of Novel Word Meanings from a Single Exposure in Context

    PubMed Central

    Borovsky, Arielle; Elman, Jeffrey L.; Kutas, Marta

    2012-01-01

    We investigated the impact of contextual constraint on the integration of novel word meanings into semantic memory. Adults read strongly or weakly constraining sentences ending in known or unknown (novel) words as scalp-recorded electrical brain activity was recorded. Word knowledge was assessed via a lexical decision task in which recently seen known and unknown word sentence endings served as primes for semantically related, unrelated, and synonym/identical target words. As expected, N400 amplitudes to target words preceded by known word primes were reduced by prime-target relatedness. Critically, N400 amplitudes to targets preceded by novel primes also varied with prime-target relatedness, but only when they had initially appeared in highly constraining sentences. This demonstrates for the first time that fast-mapped word representations can develop strong associations with semantically related word meanings and reveals a rapid neural process that can integrate information about word meanings into the mental lexicon of young adults. PMID:23125559

  12. A Semantic Big Data Platform for Integrating Heterogeneous Wearable Data in Healthcare.

    PubMed

    Mezghani, Emna; Exposito, Ernesto; Drira, Khalil; Da Silveira, Marcos; Pruski, Cédric

    2015-12-01

    Advances supported by emerging wearable technologies in healthcare promise patients a provision of high quality of care. Wearable computing systems represent one of the key thrust areas used to transform traditional healthcare systems into active systems able to continuously monitor and control the patients' health in order to manage their care at an early stage. However, their proliferation creates challenges related to data management and integration. The diversity and variety of wearable data related to healthcare, their huge volume and their distribution make data processing and analytics more difficult. In this paper, we propose a generic semantic big data architecture based on the "Knowledge as a Service" approach to cope with heterogeneity and scalability challenges. Our main contribution focuses on enriching the NIST Big Data model with semantics in order to smartly understand the collected data, and generate more accurate and valuable information by correlating scattered medical data stemming from multiple wearable devices or/and from other distributed data sources. We have implemented and evaluated a Wearable KaaS platform to smartly manage heterogeneous data coming from wearable devices in order to assist the physicians in supervising the patient health evolution and keep the patient up-to-date about his/her status.

  13. Semantic integration of cervical cancer data repositories to facilitate multicenter association studies: the ASSIST approach.

    PubMed

    Agorastos, Theodoros; Koutkias, Vassilis; Falelakis, Manolis; Lekka, Irini; Mikos, Themistoklis; Delopoulos, Anastasios; Mitkas, Pericles A; Tantsis, Antonios; Weyers, Steven; Coorevits, Pascal; Kaufmann, Andreas M; Kurzeja, Roberto; Maglaveras, Nicos

    2009-01-01

    The current work addresses the unification of Electronic Health Records related to cervical cancer into a single medical knowledge source, in the context of the EU-funded ASSIST research project. The project aims to facilitate research on cervical precancer and cancer through a system that virtually unifies multiple patient record repositories physically located in different medical centers/hospitals, thus increasing flexibility by allowing the formation of study groups "on demand" and by recycling patient records in new studies. To this end, ASSIST uses semantic technologies to translate all medical entities (such as patient examination results, history, habits, and genetic profile) and represent them in a common form, encoded in the ASSIST Cervical Cancer Ontology. The current paper presents the knowledge elicitation approach followed towards the definition and representation of the disease's medical concepts and rules that constitute the basis for the ASSIST Cervical Cancer Ontology. The proposed approach constitutes a paradigm for semantic integration of heterogeneous clinical data that may be applicable to other biomedical application domains. PMID:19458792
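
    The kind of translation ASSIST performs can be illustrated, very loosely, as a mapping from center-specific fields and codings to common ontology concepts, after which a study group can be formed "on demand" with a single query. All field names, codes, and mappings below are hypothetical:

```python
# Illustrative sketch (not the ASSIST implementation): records from two
# centers use different local field names and codings; a mapping layer
# rewrites them into a shared concept vocabulary.
MAPPINGS = {
    "center_a": {"smoker": ("tobacco_use", {1: "yes", 0: "no"})},
    "center_b": {"tobacco": ("tobacco_use", {"Y": "yes", "N": "no"})},
}

def to_common(center, record):
    """Translate one local record into the common concept vocabulary."""
    common = {}
    for field, value in record.items():
        concept, codes = MAPPINGS[center].get(field, (field, None))
        common[concept] = codes.get(value, value) if codes else value
    return common

cohort = [to_common("center_a", {"age": 44, "smoker": 1}),
          to_common("center_b", {"age": 51, "tobacco": "N"})]
# Form a study group "on demand" with one query over the unified records.
study_group = [r for r in cohort if r["tobacco_use"] == "yes"]
print(len(study_group))  # 1
```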

  14. Software Uncertainty in Integrated Environmental Modelling: the role of Semantics and Open Science

    NASA Astrophysics Data System (ADS)

    de Rigo, Daniele

    2013-04-01

    Computational aspects increasingly shape environmental sciences [1]. Indeed, transdisciplinary modelling of complex and uncertain environmental systems is challenging computational science (CS) and also the science-policy interface [2-7]. Large spatial-scale problems falling within this category - i.e. wide-scale transdisciplinary modelling for environment (WSTMe) [8-10] - often deal with factors (a) for which deep-uncertainty [2,11-13] may prevent the usual statistical analysis of modelled quantities, and need different ways of providing policy-making with science-based support. Here, practical recommendations are proposed for tempering a peculiar - not infrequently underestimated - source of uncertainty. Software errors in complex WSTMe may subtly affect the outcomes, with possible consequences even for collective environmental decision-making. Semantic transparency in CS [2,8,10,14,15] and free software [16,17] are discussed as possible mitigations (b). Software uncertainty, black-boxes and free software. Integrated natural resources modelling and management (INRMM) [29] frequently exploits chains of nontrivial data-transformation models (D-TM), each of them affected by uncertainties and errors. Those D-TM chains may be packaged as monolithic specialized models, perhaps only accessible as black-box executables (if accessible at all) [50]. For end-users, black-boxes merely transform inputs into the final outputs, relying on classical peer-reviewed publications to describe the internal mechanism. While software tautologically plays a vital role in CS, it is often neglected in favour of more theoretical aspects. This paradox has been provocatively described as "the invisibility of software in published science. Almost all published papers required some coding, but almost none mention software, let alone include or link to source code" [51]. Recently, this primacy of theory over reality [52-54] has been challenged by new emerging hybrid approaches [55] and by the

  15. Towards an open-source semantic data infrastructure for integrating clinical and scientific data in cognition-guided surgery

    NASA Astrophysics Data System (ADS)

    Fetzer, Andreas; Metzger, Jasmin; Katic, Darko; März, Keno; Wagner, Martin; Philipp, Patrick; Engelhardt, Sandy; Weller, Tobias; Zelzer, Sascha; Franz, Alfred M.; Schoch, Nicolai; Heuveline, Vincent; Maleshkova, Maria; Rettinger, Achim; Speidel, Stefanie; Wolf, Ivo; Kenngott, Hannes; Mehrabi, Arianeb; Müller-Stich, Beat P.; Maier-Hein, Lena; Meinzer, Hans-Peter; Nolden, Marco

    2016-03-01

    In the surgical domain, individual clinical experience, which is derived in large part from past clinical cases, plays an important role in the treatment decision process. Simultaneously, the surgeon has to keep track of a large amount of clinical data emerging from a number of heterogeneous systems during all phases of surgical treatment. This is complemented by the constantly growing knowledge derived from clinical studies and literature. Recalling this vast amount of information at the right moment poses a growing challenge that should be supported by adequate technology. While many tools and projects aim at sharing or integrating data from various sources, or even provide knowledge-based decision support, to our knowledge no concept has been proposed that addresses the entire surgical pathway by accessing all of this information in order to provide context-aware cognitive assistance. A semantic representation and central storage of data and knowledge is therefore a fundamental requirement. We present a semantic data infrastructure for integrating heterogeneous surgical data sources based on a common knowledge representation. A combination of the Extensible Neuroimaging Archive Toolkit (XNAT) with semantic web technologies, standardized interfaces, and a common application platform enables applications to access and semantically annotate data, perform semantic reasoning, and eventually create individual context-aware surgical assistance. The infrastructure meets the requirements of a cognitive surgical assistant system and has been successfully applied in various use cases. The system is based entirely on free technologies and is available to the community as an open-source package.

  16. HUNTER-GATHERER: Three search techniques integrated for natural language semantics

    SciTech Connect

    Beale, S.; Nirenburg, S.; Mahesh, K.

    1996-12-31

    This work integrates three related AI search techniques - constraint satisfaction, branch-and-bound, and solution synthesis - and applies the result to semantic processing in natural language (NL). We summarize the approach as "Hunter-Gatherer": (1) branch-and-bound and constraint satisfaction allow us to "hunt down" non-optimal and impossible solutions and prune them from the search space; (2) solution synthesis methods then "gather" all optimal solutions while avoiding exponential complexity. Each of the three techniques is briefly described, as well as their extensions and combinations used in our system. We focus on the combination of solution synthesis and branch-and-bound methods, which has enabled near-linear-time processing in our applications. Finally, we illustrate how the use of our technique in a large-scale MT project allowed a drastic reduction in search space.
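
    A toy sketch of the "hunt" phase: impossible sense combinations are pruned by a constraint, and the cheapest surviving combination is kept. The words, senses, costs, and constraint are invented for illustration; the actual system's constraint satisfaction and solution synthesis are far richer:

```python
# Enumerate word-sense assignments, prune those violating a constraint
# (constraint satisfaction), and keep the lowest-cost survivor (the bound).
from itertools import product

senses = {"bank": ["river_edge", "institution"],
          "deposit": ["sediment", "payment"]}
costs = {"river_edge": 2, "institution": 1, "sediment": 2, "payment": 1}

def compatible(assignment):
    # Invented constraint: financial senses must co-occur (a stand-in for
    # real selectional constraints between words in a sentence).
    financial = {"institution", "payment"}
    chosen = set(assignment.values())
    return not (chosen & financial) or financial <= chosen

best, best_cost = None, float("inf")
for combo in product(*senses.values()):
    assignment = dict(zip(senses, combo))
    if not compatible(assignment):       # "hunt down" impossible solutions
        continue
    cost = sum(costs[s] for s in combo)  # bound: keep only the cheapest
    if cost < best_cost:
        best, best_cost = assignment, cost
print(best)  # {'bank': 'institution', 'deposit': 'payment'}
```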

  17. Semantic Web Ontology and Data Integration: a Case Study in Aiding Psychiatric Drug Repurposing.

    PubMed

    Liang, Chen; Sun, Jingchun; Tao, Cui

    2015-01-01

    There remain significant difficulties in selecting probable candidate drugs from existing databases. We describe an ontology-oriented approach to represent the nexus between genes, drugs, phenotypes, symptoms, and diseases from multiple information sources. We also report a case study in which we attempted to explore candidate drugs effective for bipolar disorder and epilepsy. We constructed an ontology incorporating knowledge of the two diseases and performed semantic reasoning tasks with the ontology. The results suggested 48 candidate drugs that hold promise for further investigation. The evaluation demonstrated the validity of our approach. Our approach prioritizes candidate drugs that have potential associations among genes, phenotypes, and symptoms, and thus facilitates data integration and drug repurposing in psychiatric disorders.
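
    The reasoning task can be caricatured as follows: a drug qualifies as a repurposing candidate only if it has associations with every disease of interest, and candidates are ranked by how many associations they share. All drug and gene names below are fabricated placeholders, not the study's data:

```python
# Hedged sketch of cross-disease candidate selection: keep drugs whose
# target genes overlap the gene sets of BOTH diseases, rank by overlap size.
drug_targets = {"drug_x": {"GENE1"}, "drug_y": {"GENE3"}}
disease_genes = {"bipolar_disorder": {"GENE1", "GENE3"},
                 "epilepsy": {"GENE2", "GENE3"}}

def candidates(drugs, diseases):
    scored = []
    for drug, targets in drugs.items():
        # A drug qualifies only if it touches genes of every disease.
        overlaps = [targets & genes for genes in diseases.values()]
        if all(overlaps):
            scored.append((drug, sum(len(o) for o in overlaps)))
    return sorted(scored, key=lambda t: -t[1])

print(candidates(drug_targets, disease_genes))  # [('drug_y', 2)]
```

drug_x is excluded because GENE1 is linked only to one of the two diseases, while drug_y's GENE3 appears in both gene sets.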

  18. The holistic rhizosphere: integrating zones, processes, and semantics in the soil influenced by roots.

    PubMed

    York, Larry M; Carminati, Andrea; Mooney, Sacha J; Ritz, Karl; Bennett, Malcolm J

    2016-06-01

    Despite often being conceptualized as a thin layer of soil around roots, the rhizosphere is actually a dynamic system of interacting processes. Hiltner originally defined the rhizosphere as the soil influenced by plant roots. However, soil physicists, chemists, microbiologists, and plant physiologists have studied the rhizosphere independently, and therefore conceptualized the rhizosphere in different ways and using contrasting terminology. Rather than research-specific conceptions of the rhizosphere, the authors propose a holistic rhizosphere encapsulating the following components: microbial community gradients, macroorganisms, mucigel, volumes of soil structure modification, and depletion or accumulation zones of nutrients, water, root exudates, volatiles, and gases. These rhizosphere components are the result of dynamic processes and understanding the integration of these processes will be necessary for future contributions to rhizosphere science based upon interdisciplinary collaborations. In this review, current knowledge of the rhizosphere is synthesized using this holistic perspective with a focus on integrating traditionally separated rhizosphere studies. The temporal dynamics of rhizosphere activities will also be considered, from annual fine root turnover to diurnal fluctuations of water and nutrient uptake. The latest empirical and computational methods are discussed in the context of rhizosphere integration. Clarification of rhizosphere semantics, a holistic model of the rhizosphere, examples of integration of rhizosphere studies across disciplines, and review of the latest rhizosphere methods will empower rhizosphere scientists from different disciplines to engage in the interdisciplinary collaborations needed to break new ground in truly understanding the rhizosphere and to apply this knowledge for practical guidance. PMID:26980751

  19. Once Is Enough: N400 Indexes Semantic Integration of Novel Word Meanings from a Single Exposure in Context

    ERIC Educational Resources Information Center

    Borovsky, Arielle; Elman, Jeffrey L.; Kutas, Marta

    2012-01-01

    We investigated the impact of contextual constraint on the integration of novel word meanings into semantic memory. Adults read strongly or weakly constraining sentences ending in known or unknown (novel) words as scalp-recorded electrical brain activity was recorded. Word knowledge was assessed via a lexical decision task in which recently seen…

  20. Semantic Repositories for eGovernment Initiatives: Integrating Knowledge and Services

    NASA Astrophysics Data System (ADS)

    Palmonari, Matteo; Viscusi, Gianluigi

    In recent years, public sector investments in eGovernment initiatives have depended on making existing governmental ICT systems and infrastructures more reliable. Furthermore, we are witnessing a change in the focus of public sector management, from the disaggregation, competition, and performance measurements typical of the New Public Management (NPM), to new models of governance aiming for the reintegration of services under a new perspective on bureaucracy, namely a holistic approach to policy making that exploits the extensive digitalization of administrative operations. In this scenario, major challenges relate to supporting effective access to information both at the front-end level, by means of highly modular and customizable content provision, and at the back-end level, by means of information integration initiatives. Repositories of information about data and services that exploit semantic models and technologies can support these goals by bridging the gap between data-level representations and the human-level knowledge involved in accessing information and in searching for services. Moreover, semantic repository technologies can reach a new level of automation for the different tasks involved in interoperability programs, related both to data integration techniques and to service-oriented computing approaches. In this chapter, we discuss the above topics by referring to techniques and experiences where repositories based on conceptual models and ontologies are used at different levels in eGovernment initiatives: at the back-end level to produce a comprehensive view of the information managed in public administrations' (PA) information systems, and at the front-end level to support effective service delivery.

  1. When zebras become painted donkeys: Grammatical gender and semantic priming interact during picture integration in a spoken Spanish sentence

    PubMed Central

    Wicha, Nicole Y. Y.; Orozco-Figueroa, Araceli; Reyes, Iliana; Hernandez, Arturo; de Barreto, Lourdes Gavaldón; Bates, Elizabeth A.

    2012-01-01

    This study investigates the contribution of grammatical gender to integrating depicted nouns into sentences during on-line comprehension, and whether semantic congruity and gender agreement interact using two tasks: naming and semantic judgement of pictures. Native Spanish speakers comprehended spoken Spanish sentences with an embedded line drawing, which replaced a noun that either made sense or not with the preceding sentence context and either matched or mismatched the gender of the preceding article. In Experiment 1a (picture naming) slower naming times were found for gender mismatching pictures than matches, as well as for semantically incongruous pictures than congruous ones. In addition, the effects of gender agreement and semantic congruity interacted; specifically, pictures that were both semantically incongruous and gender mismatching were named slowest, but not as slow as if adding independent delays from both violations. Compared with a neutral baseline, with pictures embedded in simple command sentences like “Now please say ____”, both facilitative and inhibitory effects were observed. Experiment 1b replicated these results with low-cloze gender-neutral sentences, more similar in structure and processing demands to the experimental sentences. In Experiment 2, participants judged a picture’s semantic fit within a sentence by button-press; gender agreement and semantic congruity again interacted, with gender agreement having an effect on congruous but not incongruous pictures. Two distinct effects of gender are hypothesised: a “global” predictive effect (observed with and without overt noun production), and a “local” inhibitory effect (observed only with production of gender-discordant nouns). PMID:22773871

  2. Lowering the Barriers to Integrative Aquatic Ecosystem Science: Semantic Provenance, Open Linked Data, and Workflows

    NASA Astrophysics Data System (ADS)

    Harmon, T.; Hofmann, A. F.; Utz, R.; Deelman, E.; Hanson, P. C.; Szekely, P.; Villamizar, S. R.; Knoblock, C.; Guo, Q.; Crichton, D. J.; McCann, M. P.; Gil, Y.

    2011-12-01

    Environmental cyber-observatory (ECO) planning and implementation has been ongoing for more than a decade now, and several major efforts have recently come online or will soon. Some investigators in the relevant research communities will use ECO data, traditionally by developing their own client-side services to acquire data and then manually creating custom tools to integrate and analyze it. However, a significant portion of the aquatic ecosystem science community will need more custom services to manage locally collected data. The latter group represents enormous intellectual capacity when one envisions thousands of ecosystem scientists supplementing ECO baseline data by sharing their own locally intensive observational efforts. This poster summarizes the outcomes of the June 2011 Workshop for Aquatic Ecosystem Sustainability (WAES), which focused on the needs of aquatic ecosystem research on inland waters and oceans. Here we advocate new approaches that support scientists in modeling, integrating, and analyzing data based on: 1) a new breed of software tools in which semantic provenance is automatically created and used by the system, 2) the use of open standards based on RDF and Linked Data principles to facilitate sharing of data and provenance annotations, 3) the use of workflows to represent explicitly all data preparation, integration, and processing steps in a way that is automatically repeatable. Aquatic ecosystem workflow exemplars are provided and discussed in terms of their potential to broaden data sharing, analysis, and synthesis, thereby increasing the impact of aquatic ecosystem research.
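
    Point 3 above, workflows whose steps are explicitly represented and automatically repeatable, can be sketched by attaching a provenance record to every transformation. The step names and record shape are invented for illustration:

```python
# Minimal provenance-tracking sketch: each workflow step records hashes of
# its input and output, so the chain of transformations is auditable and
# repeatable.
import hashlib
import json

def run_step(name, func, data, provenance):
    out = func(data)
    provenance.append({
        "step": name,
        "input_sha1": hashlib.sha1(json.dumps(data).encode()).hexdigest(),
        "output_sha1": hashlib.sha1(json.dumps(out).encode()).hexdigest(),
    })
    return out

prov = []
data = [3.0, 4.0, None, 5.0]
data = run_step("drop_missing", lambda d: [x for x in d if x is not None], data, prov)
data = run_step("normalize", lambda d: [x / max(d) for x in d], data, prov)
print([p["step"] for p in prov])  # ['drop_missing', 'normalize']
```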

  3. Closed-Loop Lifecycle Management of Service and Product in the Internet of Things: Semantic Framework for Knowledge Integration.

    PubMed

    Yoo, Min-Jung; Grozel, Clément; Kiritsis, Dimitris

    2016-07-08

    This paper describes our conceptual framework of closed-loop lifecycle information sharing for product-service in the Internet of Things (IoT). The framework is based on the ontology model of product-service and a type of IoT message standard, Open Messaging Interface (O-MI) and Open Data Format (O-DF), which ensures data communication. (1) Background: Based on an existing product lifecycle management (PLM) methodology, we enhanced the ontology model for the purpose of efficiently integrating the newly developed product-service ontology model; (2) Methods: The IoT message transfer layer is vertically integrated into a semantic knowledge framework inside which a Semantic Info-Node Agent (SINA) uses the message format as a common protocol of product-service lifecycle data transfer; (3) Results: The product-service ontology model facilitates information retrieval and knowledge extraction during the product lifecycle, while making more information available for the sake of service business creation. The vertical integration of IoT message transfer, encompassing all semantic layers, helps achieve a more flexible and modular approach to knowledge sharing in an IoT environment; (4) Contribution: A semantic data annotation applied to IoT can contribute to enhancing collected data types, which entails richer knowledge extraction. The ontology-based PLM model also enables the horizontal integration of heterogeneous PLM data while breaking traditional vertical information silos; (5) Conclusion: The framework was applied to a fictive case study with an electric car service. To demonstrate the feasibility of the approach, the semantic model is implemented in Sesame APIs, which play the role of an Internet-connected Resource Description Framework (RDF) database.
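
    The role of a SINA-like agent, lifting a device message into semantic statements that the knowledge layer can query, might be sketched as below. The message layout here is a plain dictionary, not the actual O-MI/O-DF XML, and all names are illustrative:

```python
# Flatten a nested device message into (subject, predicate, object) triples,
# the kind of representation an RDF store could then reason over.
def to_triples(subject, message):
    triples = []
    for key, value in message.items():
        if isinstance(value, dict):
            node = f"{subject}/{key}"
            triples.append((subject, "has", node))
            triples += to_triples(node, value)  # recurse into sub-structures
        else:
            triples.append((subject, key, value))
    return triples

msg = {"battery": {"level": 87}, "odometer_km": 15210}
triples = to_triples("car42", msg)
print(triples)
```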

  4. Closed-Loop Lifecycle Management of Service and Product in the Internet of Things: Semantic Framework for Knowledge Integration

    PubMed Central

    Yoo, Min-Jung; Grozel, Clément; Kiritsis, Dimitris

    2016-01-01

    This paper describes our conceptual framework of closed-loop lifecycle information sharing for product-service in the Internet of Things (IoT). The framework is based on the ontology model of product-service and a type of IoT message standard, Open Messaging Interface (O-MI) and Open Data Format (O-DF), which ensures data communication. (1) Background: Based on an existing product lifecycle management (PLM) methodology, we enhanced the ontology model for the purpose of integrating efficiently the product-service ontology model that was newly developed; (2) Methods: The IoT message transfer layer is vertically integrated into a semantic knowledge framework inside which a Semantic Info-Node Agent (SINA) uses the message format as a common protocol of product-service lifecycle data transfer; (3) Results: The product-service ontology model facilitates information retrieval and knowledge extraction during the product lifecycle, while making more information available for the sake of service business creation. The vertical integration of IoT message transfer, encompassing all semantic layers, helps achieve a more flexible and modular approach to knowledge sharing in an IoT environment; (4) Contribution: A semantic data annotation applied to IoT can contribute to enhancing collected data types, which entails a richer knowledge extraction. The ontology-based PLM model enables as well the horizontal integration of heterogeneous PLM data while breaking traditional vertical information silos; (5) Conclusion: The framework was applied to a fictive case study with an electric car service for the purpose of demonstration. For the purpose of demonstrating the feasibility of the approach, the semantic model is implemented in Sesame APIs, which play the role of an Internet-connected Resource Description Framework (RDF) database. PMID:27399717

  6. Computational approaches for pharmacovigilance signal detection: toward integrated and semantically-enriched frameworks.

    PubMed

    Koutkias, Vassilis G; Jaulent, Marie-Christine

    2015-03-01

    Computational signal detection constitutes a key element of postmarketing drug monitoring and surveillance. Diverse data sources are considered within the 'search space' of pharmacovigilance scientists, and respective data analysis methods are employed, all with their qualities and shortcomings, towards more timely and accurate signal detection. Recent systematic comparative studies highlighted not only event-based and data-source-based differential performance across methods but also their complementarity. These findings reinforce the arguments for exploiting all possible information sources for drug safety and for the parallel use of multiple signal detection methods. Combinatorial signal detection has been pursued in only a few studies to date, employing a rather limited number of methods and data sources but illustrating promising outcomes. However, the large-scale realization of this approach requires systematic frameworks to address the challenges of the concurrent analysis setting. In this paper, we argue that semantic technologies provide the means to address some of these challenges, and we particularly highlight their contribution in (a) annotating data sources and analysis methods with quality attributes to facilitate their selection given the analysis scope; (b) consistently defining study parameters such as health outcomes and drugs of interest, and providing guidance for study setup; (c) expressing analysis outcomes in a common format enabling data sharing and systematic comparisons; and (d) assessing/supporting the novelty of the aggregated outcomes through access to reference knowledge sources related to drug safety. A semantically-enriched framework can facilitate seamless access and use of different data sources and computational methods in an integrated fashion, bringing a new perspective for large-scale, knowledge-intensive signal detection.
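
    Point (c), expressing analysis outcomes in a common format so they can be systematically combined, can be sketched as follows; the methods, scores, and threshold are placeholders, not real pharmacovigilance systems:

```python
# Combinatorial signal detection sketch: outcomes from several methods share
# one record format, so drug-event pairs flagged by multiple independent
# methods can be aggregated by simple voting.
from collections import Counter

# Common format: (method, drug, adverse_event, signal_score).
outcomes = [
    ("disproportionality", "drugA", "rash", 3.1),
    ("regression", "drugA", "rash", 2.4),
    ("disproportionality", "drugB", "nausea", 1.2),
]
THRESHOLD = 2.0  # invented score cut-off
votes = Counter((drug, event)
                for _, drug, event, score in outcomes if score >= THRESHOLD)
flagged = [pair for pair, n in votes.items() if n >= 2]
print(flagged)  # [('drugA', 'rash')]
```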

  7. Using Linked Open Data and Semantic Integration to Search Across Geoscience Repositories

    NASA Astrophysics Data System (ADS)

    Mickle, A.; Raymond, L. M.; Shepherd, A.; Arko, R. A.; Carbotte, S. M.; Chandler, C. L.; Cheatham, M.; Fils, D.; Hitzler, P.; Janowicz, K.; Jones, M.; Krisnadhi, A.; Lehnert, K. A.; Narock, T.; Schildhauer, M.; Wiebe, P. H.

    2014-12-01

    The MBLWHOI Library is a partner in the OceanLink project, an NSF EarthCube Building Block applying semantic technologies to enable knowledge discovery, sharing, and integration. OceanLink is testing ontology design patterns that link together two data repositories, Rolling Deck to Repository (R2R) and the Biological and Chemical Oceanography Data Management Office (BCO-DMO); the MBLWHOI Library Institutional Repository (IR), Woods Hole Open Access Server (WHOAS); National Science Foundation (NSF) funded awards; and American Geophysical Union (AGU) conference presentations. The Library is collaborating with scientific users, data managers, DSpace engineers, experts in ontology design patterns, and user interface developers to make WHOAS, a DSpace repository, linked open data enabled. The goal is to allow searching across repositories without any of the information providers having to change how they manage their collections. The tools developed for DSpace will be made available to the community of users. There are 257 registered DSpace repositories in the United States and over 1700 worldwide. Outcomes include: integration of DSpace with the OpenRDF Sesame triple store to provide a SPARQL endpoint for the storage and query of RDF representations of DSpace resources; mapping of DSpace resources to the OceanLink ontology; and a DSpace "data" add-on providing a resolvable linked open data representation of DSpace resources.
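
    The general idea of exposing repository metadata as linked open data can be sketched as turning a Dublin Core-style record into triples and pattern-matching over them in the spirit of SPARQL. The item URI and field values below are hypothetical, not actual WHOAS records:

```python
# A metadata record becomes RDF-like triples; a tiny pattern match stands in
# for a SPARQL query against the triple store.
record = {"dc.title": "CTD profiles, station 7",
          "dc.creator": "WHOI",
          "dc.type": "Dataset"}
uri = "https://example.org/whoas/item/123"  # hypothetical item URI
triples = [(uri, predicate, value) for predicate, value in record.items()]

def query(triples, predicate=None):
    """Return (subject, object) pairs matching a predicate, SPARQL-style."""
    return [(s, o) for s, p, o in triples if predicate in (None, p)]

print(query(triples, "dc.creator"))
```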

  8. Semantic Web Ontology and Data Integration: a Case Study in Aiding Psychiatric Drug Repurposing

    PubMed Central

    Liang, Chen; Sun, Jingchun; Tao, Cui

    2016-01-01

    Despite ongoing progress towards treating mental illness, there remain significant difficulties in selecting probable candidate drugs from existing databases. We describe an ontology-oriented approach that aims to represent the nexus between genes, drugs, phenotypes, symptoms, and diseases from multiple information sources. Along with this approach, we report a case study in which we attempted to explore candidate drugs effective for both bipolar disorder and epilepsy. We constructed an ontology that incorporates the knowledge between the two diseases and performed semantic reasoning tasks on the ontology. The reasoning results suggested 48 candidate drugs that hold promise for further investigation. The evaluation performed demonstrated the validity of the proposed ontology. The overarching goal of this research is to build a framework of ontology-based data integration underpinning psychiatric drug repurposing. This approach prioritizes candidate drugs that have potential associations among genes, phenotypes, and symptoms, and thus facilitates data integration and drug repurposing in psychiatric disorders. PMID:27570661

  9. An Embedded Rule-Based Diagnostic Expert System in Ada

    NASA Technical Reports Server (NTRS)

    Jones, Robert E.; Liberman, Eugene M.

    1992-01-01

    Ada is becoming an increasingly popular programming language for large Government-funded software projects. Ada, with its portability, transportability, and maintainability, lends itself well to today's complex programming environment. In addition, expert systems have assumed a growing role in providing human-like reasoning capability for computer systems. The integration of expert system technology with the Ada programming language is discussed, in particular a rule-based expert system using the ART-Ada (Automated Reasoning Tool for Ada) system shell. NASA Lewis was chosen as a beta test site for ART-Ada. The test was conducted by implementing the existing Autonomous Power EXpert System (APEX), a Lisp-based power expert system, in ART-Ada. Three components make up SMART-Ada (Systems fault Management with ART-Ada): the rule-based expert system, a graphical user interface, and communications software. The rules were written in the ART-Ada development environment and converted to Ada source code. The graphics interface was developed with the Transportable Application Environment (TAE) Plus, which generates Ada source code to control graphics images. SMART-Ada communicates with a remote host to obtain either simulated or real data. The Ada source code generated with ART-Ada, TAE Plus, and the communications code was incorporated into an Ada expert system that reads data from a power distribution test bed, applies the rules to determine whether a fault exists, and graphically displays it on the screen. The main objective, to conduct a beta test of the ART-Ada rule-based expert system shell, was achieved. The system is operational. New Ada tools such as ART-Ada will assist in future projects; ART-Ada is a viable alternative to straight Ada code when an application requires a rule-based or knowledge-based approach.
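
    The diagnostic loop described above, read telemetry, apply rules, report faults, can be sketched as a minimal rule-based classifier, here in Python rather than Ada/ART-Ada. The rule thresholds and sensor names are invented:

```python
# Tiny rule-based fault diagnosis sketch: each rule pairs a condition on the
# telemetry facts with the fault it asserts when the condition fires.
RULES = [
    (lambda f: f.get("bus_voltage", 28.0) < 24.0, "low_bus_voltage_fault"),
    (lambda f: f.get("current_amps", 0.0) > 50.0, "overcurrent_fault"),
]

def diagnose(facts):
    """Apply every rule to the fact base; collect the faults that fire."""
    return [fault for condition, fault in RULES if condition(facts)]

telemetry = {"bus_voltage": 22.5, "current_amps": 12.0}
print(diagnose(telemetry))  # ['low_bus_voltage_fault']
```

A production shell like ART-Ada adds forward chaining, a working memory, and conflict resolution on top of this basic condition-action pattern.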

  11. Semantic data integration and knowledge management to represent biological network associations.

    PubMed

    Losko, Sascha; Heumann, Klaus

    2009-01-01

    The vast quantities of information generated by academic and industrial research groups are reflected in a rapidly growing body of scientific literature and exponentially expanding resources of formalized data including experimental data from "-omics" platforms, phenotype information, and clinical data. For bioinformatics, several challenges remain: to structure this information as biological networks enabling scientists to identify relevant information; to integrate this information as specific "knowledge bases"; and to formalize this knowledge across multiple scientific domains to facilitate hypothesis generation and validation and, thus, the generation of new knowledge. Risk management in drug discovery and clinical research is used as a typical example to illustrate this approach. In this chapter we will introduce techniques and concepts (such as ontologies, semantic objects, typed relationships, contexts, graphs, and information layers) that are used to represent complex biomedical networks. The BioXM Knowledge Management Environment is used as an example to demonstrate how a domain such as oncology is represented and how this representation is utilized for research.

  12. BSQA: integrated text mining using entity relation semantics extracted from biological literature of insects.

    PubMed

    He, Xin; Li, Yanen; Khetani, Radhika; Sanders, Barry; Lu, Yue; Ling, Xu; Zhai, Chengxiang; Schatz, Bruce

    2010-07-01

    Text mining is one promising way of extracting information automatically from the vast biological literature. To maximize its potential, the knowledge encoded in the text should be translated to some semantic representation such as entities and relations, which could be analyzed by machines. But large-scale practical systems for this purpose are rare. We present the BeeSpace question/answering (BSQA) system, which performs integrated text mining for insect biology, covering diverse aspects from molecular interactions of genes to insect behavior. BSQA recognizes a number of entities and relations in Medline documents about the model insect, Drosophila melanogaster. For any text query, BSQA exploits entity annotation of retrieved documents to identify important concepts in different categories. By utilizing the extracted relations, BSQA is also able to answer many biologically motivated questions, from simple ones, such as which anatomical part a gene is expressed in, to more complex ones involving multiple types of relations. BSQA is freely available at http://www.beespace.uiuc.edu/QuestionAnswer.

  13. Novel semantic similarity measure improves an integrative approach to predicting gene functional associations

    PubMed Central

    2013-01-01

    Background: Elucidation of the direct/indirect protein interactions and gene associations is required to fully understand the workings of the cell. This can be achieved through the use of both low- and high-throughput biological experiments and in silico methods. We present GAP (Gene functional Association Predictor), an integrative method for predicting and characterizing gene functional associations. GAP integrates different biological features using a novel taxonomy-based semantic similarity measure in predicting and prioritizing high-quality putative gene associations. The proposed similarity measure increases information gain from the available gene annotations. The annotation information is incorporated from several public pathway databases, Gene Ontology annotations, as well as drug and disease associations from the scientific literature.

    Results: We evaluated GAP by comparing its prediction performance with several other well-known functional interaction prediction tools over a comprehensive dataset of known direct and indirect interactions, and observed significantly better prediction performance. We also selected a small set of GAP’s highly-scored novel predicted pairs (i.e., currently not found in any known database or dataset), and by manually searching the literature for experimental evidence accessible in the public domain, we confirmed different categories of predicted functional associations with available evidence of interaction. We also provided extra supporting evidence for a subset of the predicted functionally-associated pairs using an expert-curated database of genes associated with autism spectrum disorders.

    Conclusions: GAP’s predicted “functional interactome” contains ≈1M highly-scored predicted functional associations, of which about 90% are novel (i.e., not experimentally validated). GAP’s novel predictions connect disconnected components and singletons to the main connected component of the known interactome. It can, therefore, be
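
    The taxonomy-based similarity idea in this record can be illustrated with a generic measure. The sketch below implements the classic Wu-Palmer similarity over a toy taxonomy; GAP's actual (novel) measure and its annotation sources are not reproduced here, and every term in the toy hierarchy is hypothetical.

```python
# Generic taxonomy-based semantic similarity (Wu-Palmer style).
# Illustrative only: GAP's novel measure is not reproduced; the toy
# taxonomy below is hypothetical.

TAXONOMY = {  # child -> parent (None marks the root)
    "kinase": "enzyme",
    "phosphatase": "enzyme",
    "enzyme": "protein",
    "receptor": "protein",
    "protein": "molecule",
    "molecule": None,
}

def depth_and_ancestors(term):
    """Return (depth, ancestor set incl. the term); the root has depth 1."""
    path = []
    while term is not None:
        path.append(term)
        term = TAXONOMY[term]
    return len(path), set(path)

def wu_palmer(a, b):
    """Similarity = 2 * depth(LCS) / (depth(a) + depth(b))."""
    da, anc_a = depth_and_ancestors(a)
    db, anc_b = depth_and_ancestors(b)
    # Lowest common subsumer = deepest shared ancestor.
    lcs_depth = max(depth_and_ancestors(t)[0] for t in anc_a & anc_b)
    return 2.0 * lcs_depth / (da + db)
```

    Terms sharing a deep subsumer (two enzymes) score higher than terms whose only shared ancestor is near the root, which is the intuition behind prioritizing high-quality putative associations.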

  14. The DebugIT core ontology: semantic integration of antibiotics resistance patterns.

    PubMed

    Schober, Daniel; Boeker, Martin; Bullenkamp, Jessica; Huszka, Csaba; Depraetere, Kristof; Teodoro, Douglas; Nadah, Nadia; Choquet, Remy; Daniel, Christel; Schulz, Stefan

    2010-01-01

    Antibiotics resistance development poses a significant problem in today's hospital care. Massive amounts of clinical data are being collected and stored in proprietary and unconnected systems in heterogeneous formats. The DebugIT EU project promises to make this data geographically and semantically interoperable for case-based knowledge analysis approaches aiming at the discovery of patterns that help to align antibiotics treatment schemes. The semantic glue for this endeavor is DCO, an application ontology that enables data miners to query distributed clinical information systems in a semantically rich and content-driven manner. DCO will hence serve as the core component of the interoperability platform for the DebugIT project. Here we present DCO and an approach that uses the semantic web query language SPARQL to bind and ontologically query hospital database content using DCO and information model mediators. We provide a query example that indicates that ontological querying over heterogeneous information models is feasible via SPARQL construct- and resource-mapping queries.
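
    The construct-and-map querying pattern mentioned in this record can be sketched in miniature: triples expressed in a local information model are rewritten into ontology terms before being queried. All predicate names and triples below are invented for illustration and are not DCO's actual vocabulary.

```python
# Sketch of CONSTRUCT-style mediation: local hospital triples are mapped
# to ontology-level predicates, then queried with a basic graph pattern.
# All names are hypothetical, not DCO terms.

# Hospital data in a local schema, as (subject, predicate, object) triples.
local_triples = {
    ("isolate42", "lab:organism", "E. coli"),
    ("isolate42", "lab:resistant_to", "ampicillin"),
}

# CONSTRUCT-style mapping: local predicate -> ontology predicate.
predicate_map = {
    "lab:organism": "dco:hasPathogen",
    "lab:resistant_to": "dco:hasResistanceAgainst",
}

def construct(triples, mapping):
    """Rewrite local triples into ontology-level triples (drop unmapped)."""
    return {(s, mapping[p], o) for s, p, o in triples if p in mapping}

def query(triples, predicate):
    """Basic graph-pattern match: all (subject, object) pairs for a predicate."""
    return {(s, o) for s, p, o in triples if p == predicate}
```

    In the real system the rewriting is done by SPARQL CONSTRUCT queries against distributed endpoints rather than in-memory sets; the sketch only shows why the mapping step makes heterogeneous sources queryable with one vocabulary.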

  15. Integrating semantic web technologies and geospatial catalog services for geospatial information discovery and processing in cyberinfrastructure

    SciTech Connect

    Yue, Peng; Gong, Jianya; Di, Liping; He, Lianlian; Wei, Yaxing

    2011-04-01

    A geospatial catalogue service provides a network-based meta-information repository and interface for advertising and discovering shared geospatial data and services. Descriptive information (i.e., metadata) for geospatial data and services is structured and organized in catalogue services. The approaches currently available for searching and using that information are often inadequate. Semantic Web technologies show promise for better discovery methods by exploiting the underlying semantics. Such development needs special attention from the Cyberinfrastructure perspective, so that the traditional focus on discovery of and access to geospatial data can be expanded to support the increased demand for processing of geospatial information and discovery of knowledge. Semantic descriptions for geospatial data, services, and geoprocessing service chains are structured, organized, and registered through extending elements in the ebXML Registry Information Model (ebRIM) of a geospatial catalogue service, which follows the interface specifications of the Open Geospatial Consortium (OGC) Catalogue Services for the Web (CSW). The process models for geoprocessing service chains, as a type of geospatial knowledge, are captured, registered, and discoverable. Semantics-enhanced discovery for geospatial data, services/service chains, and process models is described. Semantic search middleware that can support virtual data product materialization is developed for the geospatial catalogue service. The creation of such a semantics-enhanced geospatial catalogue service is important in meeting the demands for geospatial information discovery and analysis in Cyberinfrastructure.

  16. The integration of geophysical and enhanced Moderate Resolution Imaging Spectroradiometer Normalized Difference Vegetation Index data into a rule-based, piecewise regression-tree model to estimate cheatgrass beginning of spring growth

    USGS Publications Warehouse

    Boyte, Stephen P.; Wylie, Bruce K.; Major, Donald J.; Brown, Jesslyn F.

    2015-01-01

    Cheatgrass exhibits spatial and temporal phenological variability across the Great Basin as described by ecological models formed using remote sensing and other spatial data-sets. We developed a rule-based, piecewise regression-tree model trained on 99 points that used three data-sets – latitude, elevation, and start of season time based on remote sensing input data – to estimate cheatgrass beginning of spring growth (BOSG) in the northern Great Basin. The model was then applied to map the location and timing of cheatgrass spring growth for the entire area. The model was strong (R2 = 0.85) and predicted an average cheatgrass BOSG across the study area of 29 March–4 April. Of early cheatgrass BOSG areas, 65% occurred at elevations below 1452 m. The highest proportion of cheatgrass BOSG occurred between mid-April and late May. Predicted cheatgrass BOSG in this study matched well with previous Great Basin cheatgrass green-up studies.
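
    The rule-based, piecewise regression-tree approach described in this record can be sketched as rules that partition the input space, each carrying its own linear model. The thresholds (apart from the 1452 m elevation break mentioned in the abstract) and all coefficients below are hypothetical; the paper's fitted model is not reproduced.

```python
# Minimal sketch of a Cubist-style rule-based piecewise regression:
# each rule is a condition plus a linear model over the predictors
# (latitude, elevation, start-of-season). Coefficients are invented.

def predict_bosg(latitude, elevation, start_of_season):
    """Predict beginning-of-spring-growth (day of year) for one site."""
    rules = [
        # (condition, (intercept, w_lat, w_elev, w_sos))
        (lambda lat, el, sos: el < 1452,
         (5.0, 0.5, 0.010, 0.60)),   # low-elevation rule: earlier growth
        (lambda lat, el, sos: el >= 1452,
         (20.0, 0.8, 0.015, 0.55)),  # high-elevation rule: later growth
    ]
    for condition, (b0, w1, w2, w3) in rules:
        if condition(latitude, elevation, start_of_season):
            return b0 + w1 * latitude + w2 * elevation + w3 * start_of_season
    raise ValueError("no rule matched")
```

    The piecewise structure is what lets one model capture the abstract's finding that most early green-up falls below 1452 m: low- and high-elevation sites get separate regressions rather than one global fit.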

  17. Integration of Neuroimaging and Microarray Datasets through Mapping and Model-Theoretic Semantic Decomposition of Unstructured Phenotypes.

    PubMed

    Pantazatos, Spiro P; Li, Jianrong; Pavlidis, Paul; Lussier, Yves A

    2009-06-01

    An approach towards heterogeneous neuroscience dataset integration is proposed that uses Natural Language Processing (NLP) and a knowledge-based phenotype organizer system (PhenOS) to link ontology-anchored terms to underlying data from each database, and then maps these terms based on a computable model of disease (SNOMED CT(R)). The approach was implemented using sample datasets from fMRIDC, GEO, The Whole Brain Atlas and Neuronames and allowed for complex queries such as "List all disorders with a finding site of brain region X, and then find the semantically related references in all participating databases based on the ontological model of the disease or its anatomical and morphological attributes". Precision of the NLP-derived coding of the unstructured phenotypes in each dataset was 88% (n=50), and precision of the semantic mapping between these terms across datasets was 98% (n=100). To our knowledge, this is the first example of the use of both semantic decomposition of disease relationships and hierarchical information found in ontologies to integrate heterogeneous phenotypes across clinical and molecular datasets.

  18. Integration of Neuroimaging and Microarray Datasets through Mapping and Model-Theoretic Semantic Decomposition of Unstructured Phenotypes.

    PubMed

    Pantazatos, Spiro P; Li, Jianrong; Pavlidis, Paul; Lussier, Yves A

    2009-03-01

    An approach towards heterogeneous neuroscience dataset integration is proposed that uses Natural Language Processing (NLP) and a knowledge-based phenotype organizer system (PhenOS) to link ontology-anchored terms to underlying data from each database, and then maps these terms based on a computable model of disease (SNOMED CT®). The approach was implemented using sample datasets from fMRIDC, GEO and Neuronames and allowed for complex queries such as "List all disorders with a finding site of brain region X, and then find the semantically related references in all participating databases based on the ontological model of the disease or its anatomical and morphological attributes". Precision of the NLP-derived coding of the unstructured phenotypes in each dataset was 88% (n=50), and precision of the semantic mapping between these terms across datasets was 98% (n=100). To our knowledge, this is the first example of the use of both semantic decomposition of disease relationships and hierarchical information found in ontologies to integrate heterogeneous phenotypes across clinical and molecular datasets.

  19. Toward Open Science at the European Scale: Geospatial Semantic Array Programming for Integrated Environmental Modelling

    NASA Astrophysics Data System (ADS)

    de Rigo, Daniele; Corti, Paolo; Caudullo, Giovanni; McInerney, Daniel; Di Leo, Margherita; San-Miguel-Ayanz, Jesús

    2013-04-01

    of the science-policy interface, INRMM should be able to provide citizens and policy-makers with a clear, accurate understanding of the implications of the technical apparatus on collective environmental decision-making [1]. Complexity of course should not be intended as an excuse for obscurity [27-29]. Geospatial Semantic Array Programming. Concise array-based mathematical formulation and implementation (with array programming tools, see (b) ) have proved helpful in supporting and mitigating the complexity of WSTMe [40-47] when complemented with generalized modularization and terse array-oriented semantic constraints. This defines the paradigm of Semantic Array Programming (SemAP) [35,36] where semantic transparency also implies free software use (although black-boxes [12] - e.g. legacy code - might easily be semantically interfaced). A new approach for WSTMe has emerged by formalizing unorganized best practices and experience-driven informal patterns. The approach introduces a lightweight (non-intrusive) integration of SemAP and geospatial tools (c) - called Geospatial Semantic Array Programming (GeoSemAP). GeoSemAP (d) exploits the joint semantics provided by SemAP and geospatial tools to split a complex D-TM into logical blocks which are easier to check by means of mathematical array-based and geospatial constraints. Those constraints take the form of precondition, invariant and postcondition semantic checks. This way, even complex WSTMe may be described as the composition of simpler GeoSemAP blocks, each of them structured as (d). GeoSemAP allows intermediate data and information layers to be more easily and formally described semantically, so as to increase fault-tolerance [17], transparency and reproducibility of WSTMe. This might also help to better communicate part of the policy-relevant knowledge, often difficult to transfer from technical WSTMe to the science-policy interface [1,15]. References de Rigo, D., 2013. Behind the horizon of reproducible
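
    The precondition/invariant/postcondition discipline described in this record can be sketched as a lightweight wrapper around a single data-transformation block. The check predicates and the example transform below are illustrative only; they are not part of the SemAP or GeoSemAP specifications.

```python
# Sketch of SemAP-style semantic checks: a data-transformation (D-TM)
# block is wrapped with precondition and postcondition assertions on its
# array-like arguments. All names and checks are hypothetical.

def semantic_checks(pre, post):
    """Wrap a D-TM block with precondition/postcondition checks."""
    def decorate(func):
        def wrapper(data):
            assert pre(data), "precondition violated"
            result = func(data)
            assert post(result), "postcondition violated"
            return result
        return wrapper
    return decorate

@semantic_checks(
    pre=lambda xs: all(x >= 0 for x in xs),           # e.g. non-negative inputs
    post=lambda ys: all(0.0 <= y <= 1.0 for y in ys)  # e.g. outputs in [0, 1]
)
def normalise(values):
    """Example transform: scale values into the unit interval."""
    peak = max(values)
    return [v / peak for v in values]
```

    Composing complex workflows out of blocks guarded this way is what lets a failure surface at the block boundary where the semantic constraint broke, rather than propagating silently to the final outputs.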

  20. The semantics and acquisition of number words: integrating linguistic and developmental perspectives.

    PubMed

    Musolino, Julien

    2004-08-01

    This article brings together two independent lines of research on numerally quantified expressions, e.g. two girls. One stems from work in linguistic theory and asks what truth conditional contributions such expressions make to the utterances in which they are used--in other words, what do numerals mean? The other comes from the study of language development and asks when and how children learn the meaning of such expressions. My goal is to show that when integrated, these two perspectives can both constrain and enrich each other in ways hitherto not considered. Specifically, work in linguistic theory suggests that in addition to their 'exactly n' interpretation, numerally quantified NPs such as two hoops can also receive an 'at least n' and an 'at most n' interpretation, e.g. you need to put two hoops on the pole to win (i.e. at least two hoops) and you can miss two shots and still win (i.e. at most two shots). I demonstrate here through the results of three sets of experiments that by the age of 5 children have implicit knowledge of the fact that expressions like two N can be interpreted as 'at least two N' and 'at most two N' while they do not yet know the meaning of corresponding expressions such as at least/most two N which convey these senses explicitly. I show that these results have important implications for theories of the semantics of numerals and that they raise new questions for developmental accounts of the number vocabulary.
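
    The three readings discussed in this record ('exactly n', 'at least n', 'at most n') have simple truth-conditional formalizations, sketched here as predicates over a count; this is a generic rendering, not the article's own formal analysis.

```python
# Truth-conditional sketch of the three readings of a numeral n,
# each applied to a count k of relevant objects or events.

def exactly(n):
    return lambda k: k == n

def at_least(n):
    return lambda k: k >= n

def at_most(n):
    return lambda k: k <= n

# "You need to put two hoops on the pole to win" -> at_least(2)
# "You can miss two shots and still win"         -> at_most(2)
```

    Under this rendering, the article's finding is that 5-year-olds can access all three predicates for "two N" while not yet knowing the explicit expressions "at least/at most two N".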

  1. Rule-Based Policy Representations and Reasoning

    NASA Astrophysics Data System (ADS)

    Bonatti, Piero Andrea; de Coi, Juri Luca; Olmedilla, Daniel; Sauro, Luigi

    Trust and policies are going to play a crucial role in enabling the potential of many web applications. Policies are a well-known approach to protecting security and privacy of users in the context of the Semantic Web: in the last years a number of policy languages were proposed to address different application scenarios.

  2. Brain network of semantic integration in sentence reading: insights from independent component analysis and graph theoretical analysis.

    PubMed

    Ye, Zheng; Doñamayor, Nuria; Münte, Thomas F

    2014-02-01

    A set of cortical and sub-cortical brain structures has been linked with sentence-level semantic processes. However, it remains unclear how these brain regions are organized to support the semantic integration of a word into sentential context. To look into this issue, we conducted a functional magnetic resonance imaging (fMRI) study that required participants to silently read sentences with semantically congruent or incongruent endings and analyzed the network properties of the brain with two approaches, independent component analysis (ICA) and graph theoretical analysis (GTA). The GTA suggested that the whole-brain network is topologically stable across conditions. The ICA revealed a network comprising the supplementary motor area (SMA), left inferior frontal gyrus, left middle temporal gyrus, left caudate nucleus, and left angular gyrus, which was modulated by the incongruity of sentence ending. Furthermore, the GTA specified that the connections between the left SMA and left caudate nucleus as well as that between the left caudate nucleus and right thalamus were stronger in response to incongruent vs. congruent endings.

  3. Audiovisual speech integration in autism spectrum disorder: ERP evidence for atypicalities in lexical-semantic processing

    PubMed Central

    Megnin, Odette; Flitton, Atlanta; Jones, Catherine; de Haan, Michelle; Baldeweg, Torsten; Charman, Tony

    2013-01-01

    Lay Abstract: Language and communicative impairments are among the primary characteristics of autism spectrum disorders (ASD). Previous studies have examined auditory language processing in ASD. However, during face-to-face conversation, auditory and visual speech inputs provide complementary information, and little is known about audiovisual (AV) speech processing in ASD. It is possible to elucidate the neural correlates of AV integration by examining the effects of seeing the lip movements accompanying the speech (visual speech) on electrophysiological event-related potentials (ERP) to spoken words. Moreover, electrophysiological techniques have a high temporal resolution and thus enable us to track the time-course of spoken word processing in ASD and typical development (TD). The present study examined the ERP correlates of AV effects in three time windows that are indicative of hierarchical stages of word processing. We studied a group of TD adolescent boys (n=14) and a group of high-functioning boys with ASD (n=14). Significant group differences were found in AV integration of spoken words in the 200–300 ms time window, when spoken words start to be processed for meaning. These results suggest that the neural facilitation by visual speech of spoken word processing is reduced in individuals with ASD.

    Scientific Abstract: In typically developing (TD) individuals, behavioural and event-related potential (ERP) studies suggest that audiovisual (AV) integration enables faster and more efficient processing of speech. However, little is known about AV speech processing in individuals with autism spectrum disorder (ASD). The present study examined ERP responses to spoken words to elucidate the effects of visual speech (the lip movements accompanying a spoken word) on the range of auditory speech processing stages from sound onset detection to semantic integration. The study also included an AV condition which paired spoken words with a dynamic scrambled face in order to

  4. Local and global semantic integration in an argument structure: ERP evidence from Korean.

    PubMed

    Nam, Yunju; Hong, Upyong

    2016-07-01

    The neural responses of Korean speakers were recorded while they read sentences that included local semantic mismatch between adjectives (A) and nouns (N) or/and global semantic mismatch between object nouns (N) and verbs (V), as well as the corresponding control sentences without any semantic anomalies. In Experiment 1, using verb-final declarative sentences (Nsubject [A-N]object V), the local A-N incongruence yielded an N400 effect at the object noun and a combination of N400 and a late negativity effect at the sentence-final verb, whereas the global N-V incongruence yielded a biphasic N400 and P600 ERP pattern at the verb, compared with the ERPs of the same words in the control sentences. In Experiment 2, using verb-initial object relative clause constructions ([Nsubject _V]rel [A-N]object …) derived from the materials of Experiment 1, the effect of local incongruence changed notably, such that not only an N400 but also an additional P600 effect was observed at the object noun, whereas the effect of the global incongruence remained largely the same (N400 and P600). Our theoretical interpretation of these results specifically focused on the reason for the P600 effects observed across different experiment conditions, which turned out to be attributable to (i) coordination of a semantic conflict, (ii) prediction disconfirmation, or (iii) argument structure processing breakdown.

  5. The 3rd DBCLS BioHackathon: improving life science data integration with Semantic Web technologies

    PubMed Central

    2013-01-01

    Background: BioHackathon 2010 was the third in a series of meetings hosted by the Database Center for Life Sciences (DBCLS) in Tokyo, Japan. The overall goal of the BioHackathon series is to improve the quality and accessibility of life science research data on the Web by bringing together representatives from public databases, analytical tool providers, and cyber-infrastructure researchers to jointly tackle important challenges in the area of in silico biological research.

    Results: The theme of BioHackathon 2010 was the 'Semantic Web', and all attendees gathered with the shared goal of producing Semantic Web data from their respective resources, and/or consuming or interacting with those data using their tools and interfaces. We discussed topics including guidelines for designing semantic data and interoperability of resources, and consequently developed tools and clients for analysis and visualization.

    Conclusion: We provide a meeting report from BioHackathon 2010, in which we describe the discussions, decisions, and breakthroughs made as we moved towards compliance with Semantic Web technologies - from source provider, through middleware, to the end-consumer. PMID:23398680

  6. A service-oriented distributed semantic mediator: integrating multiscale biomedical information.

    PubMed

    Mora, Oscar; Engelbrecht, Gerhard; Bisbal, Jesus

    2012-11-01

    Biomedical research continuously generates large amounts of heterogeneous and multimodal data spread over multiple data sources. These data, if appropriately shared and exploited, could dramatically improve the research practice itself, and ultimately the quality of health care delivered. This paper presents DISMED (DIstributed Semantic MEDiator), an open source semantic mediator that provides a unified view of a federated environment of multiscale biomedical data sources. DISMED is a Web-based software application to query and retrieve information distributed over a set of registered data sources, using semantic technologies. It also offers a user-friendly interface specifically designed to simplify the usage of these technologies by non-expert users. Although the architecture of the software mediator is generic and domain independent, in the context of this paper, DISMED has been evaluated for managing biomedical environments and facilitating research with respect to the handling of scientific data distributed in multiple heterogeneous data sources. As part of this contribution, a quantitative evaluation framework has been developed. It consists of a benchmarking scenario and the definition of five realistic use-cases. This framework, created entirely with public datasets, has been used to compare the performance of DISMED against other available mediators. It is also available to the scientific community in order to evaluate progress in the domain of semantic mediation, in a systematic and comparable manner. The results show an average improvement in the execution time by DISMED of 55% compared to the second best alternative in four out of the five use-cases of the experimental evaluation.

  7. Semantic Integration and Age of Acquisition Effects in Code-Blend Comprehension

    ERIC Educational Resources Information Center

    Giezen, Marcel R.; Emmorey, Karen

    2016-01-01

    Semantic and lexical decision tasks were used to investigate the mechanisms underlying code-blend facilitation: the finding that hearing bimodal bilinguals comprehend signs in American Sign Language (ASL) and spoken English words more quickly when they are presented together simultaneously than when each is presented alone. More robust…

  8. CI-Miner: A Semantic Methodology to Integrate Scientists, Data and Documents through the Use of Cyber-Infrastructure

    NASA Astrophysics Data System (ADS)

    Pinheiro da Silva, P.; CyberShARE Center of Excellence

    2011-12-01

    Scientists today face the challenge of rethinking the manner in which they document and make available their processes and data in an international cyber-infrastructure of shared resources. Some relevant examples of new scientific practices in the realm of computational and data extraction sciences include: large scale data discovery; data integration; data sharing across distinct scientific domains; systematic management of trust and uncertainty; and comprehensive support for explaining processes and results. This talk introduces CI-Miner - an innovative hands-on, open-source, community-driven methodology to integrate these new scientific practices. It has been developed in collaboration with scientists, with the purpose of capturing, storing and retrieving knowledge about scientific processes and their products, thereby further supporting a new generation of science techniques based on data exploration. CI-Miner uses semantic annotations in the form of W3C Ontology Web Language-based ontologies and Proof Markup Language (PML)-based provenance to represent knowledge. This methodology specializes in general-purpose ontologies, projected into workflow-driven ontologies (WDOs) and into semantic abstract workflows (SAWs). Provenance in PML is CI-Miner's integrative component, which allows scientists to retrieve and reason with the knowledge represented in these new semantic documents. It serves additionally as a platform to share such collected knowledge with the scientific community participating in the international cyber-infrastructure. The integrated semantic documents that are tailored for the use of human epistemic agents may also be utilized by machine epistemic agents, since the documents are based on W3C Resource Description Framework (RDF) notation. This talk is grounded upon interdisciplinary lessons learned through the use of CI-Miner in support of government-funded national and international cyber-infrastructure initiatives in the areas of geo

  9. Semantics-Based Composition of Integrated Cardiomyocyte Models Motivated by Real-World Use Cases.

    PubMed

    Neal, Maxwell L; Carlson, Brian E; Thompson, Christopher T; James, Ryan C; Kim, Karam G; Tran, Kenneth; Crampin, Edmund J; Cook, Daniel L; Gennari, John H

    2015-01-01

    Semantics-based model composition is an approach for generating complex biosimulation models from existing components that relies on capturing the biological meaning of model elements in a machine-readable fashion. This approach allows the user to work at the biological rather than computational level of abstraction and helps minimize the amount of manual effort required for model composition. To support this compositional approach, we have developed the SemGen software, and here report on SemGen's semantics-based merging capabilities using real-world modeling use cases. We successfully reproduced a large, manually-encoded, multi-model merge: the "Pandit-Hinch-Niederer" (PHN) cardiomyocyte excitation-contraction model, previously developed using CellML. We describe our approach for annotating the three component models used in the PHN composition and for merging them at the biological level of abstraction within SemGen. We demonstrate that we were able to reproduce the original PHN model results in a semi-automated, semantics-based fashion and also rapidly generate a second, novel cardiomyocyte model composed using an alternative, independently-developed tension generation component. We discuss the time-saving features of our compositional approach in the context of these merging exercises, the limitations we encountered, and potential solutions for enhancing the approach.

  10. Semantics-Based Composition of Integrated Cardiomyocyte Models Motivated by Real-World Use Cases

    PubMed Central

    Neal, Maxwell L.; Carlson, Brian E.; Thompson, Christopher T.; James, Ryan C.; Kim, Karam G.; Tran, Kenneth; Crampin, Edmund J.; Cook, Daniel L.; Gennari, John H.

    2015-01-01

    Semantics-based model composition is an approach for generating complex biosimulation models from existing components that relies on capturing the biological meaning of model elements in a machine-readable fashion. This approach allows the user to work at the biological rather than computational level of abstraction and helps minimize the amount of manual effort required for model composition. To support this compositional approach, we have developed the SemGen software, and here report on SemGen’s semantics-based merging capabilities using real-world modeling use cases. We successfully reproduced a large, manually-encoded, multi-model merge: the “Pandit-Hinch-Niederer” (PHN) cardiomyocyte excitation-contraction model, previously developed using CellML. We describe our approach for annotating the three component models used in the PHN composition and for merging them at the biological level of abstraction within SemGen. We demonstrate that we were able to reproduce the original PHN model results in a semi-automated, semantics-based fashion and also rapidly generate a second, novel cardiomyocyte model composed using an alternative, independently-developed tension generation component. We discuss the time-saving features of our compositional approach in the context of these merging exercises, the limitations we encountered, and potential solutions for enhancing the approach. PMID:26716837

  11. Ontology Design Patterns: Bridging the Gap Between Local Semantic Use Cases and Large-Scale, Long-Term Data Integration

    NASA Astrophysics Data System (ADS)

    Shepherd, Adam; Arko, Robert; Krisnadhi, Adila; Hitzler, Pascal; Janowicz, Krzysztof; Chandler, Cyndy; Narock, Tom; Cheatham, Michelle; Schildhauer, Mark; Jones, Matt; Raymond, Lisa; Mickle, Audrey; Finin, Tim; Fils, Doug; Carbotte, Suzanne; Lehnert, Kerstin

    2015-04-01

    Integrating datasets for new use cases is one of the common drivers for adopting semantic web technologies. Even though linked data principles enable this type of activity over time, the task of reconciling new ontological commitments for newer use cases can be daunting. This situation was faced by the Biological and Chemical Oceanography Data Management Office (BCO-DMO) as it sought to integrate its existing linked data with other data repositories to address newer scientific use cases as a partner in the GeoLink Project. To achieve a successful integration with other GeoLink partners, BCO-DMO's metadata would need to be described using the new ontologies developed by the GeoLink partners - a situation that could impact semantic inferencing, pre-existing software and external users of BCO-DMO's linked data. This presentation describes the process of how GeoLink is bridging the gap between local, pre-existing ontologies to achieve scientific metadata integration for all its partners through the use of ontology design patterns. GeoLink, an NSF EarthCube Building Block, brings together experts from the geosciences, computer science, and library science in an effort to improve discovery and reuse of data and knowledge. Its participating repositories include content from field expeditions, laboratory analyses, journal publications, conference presentations, theses/reports, and funding awards that span scientific studies from marine geology to marine ecology and biogeochemistry to paleoclimatology. GeoLink's outcomes include a set of reusable ontology design patterns (ODPs) that describe core geoscience concepts, a network of Linked Data published by participating repositories using those ODPs, and tools to facilitate discovery of related content in multiple repositories.

  12. A rule-based software test data generator

    NASA Technical Reports Server (NTRS)

    Deason, William H.; Brown, David B.; Chang, Kai-Hsiung; Cross, James H., II

    1991-01-01

    Rule-based software test data generation is proposed as an alternative to either path/predicate analysis or random data generation. A prototype rule-based test data generator for Ada programs is constructed and compared to a random test data generator. Four Ada procedures are used in the comparison. Approximately 2000 rule-based test cases and 100,000 randomly generated test cases are automatically generated and executed, and the success of the two methods is compared using standard coverage metrics. Simple statistical tests show that even this primitive rule-based test data generation prototype is significantly better than random data generation. This result demonstrates that rule-based test data generation is feasible and shows great promise in assisting test engineers, especially as the rule base is developed further.
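    The advantage of rules over random sampling can be illustrated with a classic boundary-value rule: for a parameter with a declared range, generate the extremes, their neighbors, a midpoint, and just-out-of-range probes. This is a generic sketch, not the prototype's actual Ada rule base.

```python
import random

def rule_based_cases(lo, hi):
    """Boundary-value rule: minimum, maximum, their neighbors, a midpoint,
    and just-outside values that probe range checks."""
    return sorted({lo - 1, lo, lo + 1, (lo + hi) // 2, hi - 1, hi, hi + 1})

def random_cases(lo, hi, n):
    """Baseline: uniform random sampling over the declared range."""
    return [random.randint(lo, hi) for _ in range(n)]

# For a parameter declared in 0..100, the rule hits boundary values that
# uniform random sampling is unlikely to select.
print(rule_based_cases(0, 100))  # [-1, 0, 1, 50, 99, 100, 101]
```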

  13. From ontology selection and semantic web to an integrated information system for food-borne diseases and food safety.

    PubMed

    Yan, Xianghe; Peng, Yun; Meng, Jianghong; Ruzante, Juliana; Fratamico, Pina M; Huang, Lihan; Juneja, Vijay; Needleman, David S

    2011-01-01

    Effective use of information and resources related to food safety has been hindered by inconsistency among semantically heterogeneous data resources, a lack of knowledge on the profiling of food-borne pathogens, and knowledge gaps among research communities, government risk assessors/managers, and end-users of the information. This paper discusses technical aspects of the establishment of a comprehensive food safety information system consisting of the following steps: (a) computational collection and compilation of publicly available information, including published pathogen genomic, proteomic, and metabolomic data; (b) development of ontology libraries on food-borne pathogens and design of automatic algorithms with formal inference and fuzzy and probabilistic reasoning to address the consistency and accuracy of distributed information resources (e.g., PulseNet, FoodNet, OutbreakNet, PubMed, NCBI, EMBL, and other online genetic databases and information); (c) integration of collected pathogen profiling data, Foodrisk.org ( http://www.foodrisk.org ), PMP, Combase, and other relevant information into a user-friendly, searchable, "homogeneous" information system available to scientists in academia, the food industry, and government agencies; and (d) development of a computational model in the semantic web for greater adaptability and robustness.

  14. Integrating Dynamic Data and Sensors with Semantic 3D City Models in the Context of Smart Cities

    NASA Astrophysics Data System (ADS)

    Chaturvedi, K.; Kolbe, T. H.

    2016-10-01

    Smart cities provide effective integration of human, physical and digital systems operating in the built environment. Advancements in city and landscape models, sensor web technologies, and simulation methods play a significant role in city analyses and in improving the quality of life of citizens and the governance of cities. Semantic 3D city models can provide substantial benefits and can become a central information backbone for smart city infrastructures. However, current-generation semantic 3D city models are static in nature and do not support dynamic properties and sensor observations. In this paper, we propose a new concept called Dynamizer that allows representing highly dynamic data and provides a method for injecting dynamic variations of city object properties into the static representation. The approach also provides the capability to model complex patterns based on statistics and general rules, as well as real-time sensor observations. The concept is implemented as an Application Domain Extension for the CityGML standard. However, it could also be applied to other GML-based application schemas, including the European INSPIRE data themes and national standards for topography and cadasters such as the British Ordnance Survey MasterMap or the German cadaster standard ALKIS.
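    The injection idea can be sketched as a small object that overlays a time series of observations on one property of an otherwise static city object. The attribute names and timestamp/value representation below are illustrative; the actual CityGML Dynamizer ADE schema differs.

```python
from bisect import bisect_right

class Dynamizer:
    """Overlays a time series of values on one property of a static
    city object; resolves the value in effect at a query time."""
    def __init__(self, attribute, samples):
        self.attribute = attribute
        self.samples = sorted(samples)  # [(timestamp, value), ...]

    def value_at(self, t):
        times = [ts for ts, _ in self.samples]
        i = bisect_right(times, t) - 1
        if i < 0:
            return None                 # before the first observation
        return self.samples[i][1]

building = {"id": "B42", "roofHeight": 12.5}            # static model
dyn = Dynamizer("energyDemand", [(0, 3.1), (6, 5.7), (12, 8.4)])

# The static property stays in the model; the dynamic one is resolved
# on demand from the injected observations.
print(building["roofHeight"], dyn.value_at(7))  # 12.5 5.7
```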

  15. Applications and methods utilizing the Simple Semantic Web Architecture and Protocol (SSWAP) for bioinformatics resource discovery and disparate data and service integration

    PubMed Central

    2010-01-01

    Background Scientific data integration and computational service discovery are challenges for the bioinformatics community. This process is made more difficult by the separate and independent construction of biological databases, which makes the exchange of data between information resources difficult and labor intensive. A recently described semantic web protocol, the Simple Semantic Web Architecture and Protocol (SSWAP; pronounced "swap"), offers the ability to describe data and services in a semantically meaningful way. We report how three major information resources (Gramene, SoyBase and the Legume Information System [LIS]) used SSWAP to semantically describe selected data and web services. Methods We selected high-priority Quantitative Trait Locus (QTL), genomic mapping, trait, phenotypic, and sequence data and associated services such as BLAST for publication, data retrieval, and service invocation via semantic web services. Data and services were mapped to concepts and categories as implemented in legacy and de novo community ontologies. We used SSWAP to express these offerings in OWL Web Ontology Language (OWL), Resource Description Framework (RDF) and eXtensible Markup Language (XML) documents, which are appropriate for their semantic discovery and retrieval. We implemented SSWAP services to respond to web queries and return data. These services are registered with the SSWAP Discovery Server and are available for semantic discovery at http://sswap.info. Results A total of ten services delivering QTL information from Gramene were created. From SoyBase, we created six services delivering information about soybean QTLs, and seven services delivering genetic locus information. For LIS we constructed three services, two of which allow the retrieval of DNA and RNA FASTA sequences with the third service providing nucleic acid sequence comparison capability (BLAST). Conclusions The need for semantic integration technologies has preceded available solutions. We

  16. A Theory of Conditioning: Inductive Learning within Rule-Based Default Hierarchies.

    ERIC Educational Resources Information Center

    Holyoak, Keith J.; And Others

    1989-01-01

    A theory of classical conditioning is presented, which is based on a parallel, rule-based performance system integrated with mechanisms for inductive learning. A major inferential heuristic incorporated into the theory involves "unusualness," which is focused on novel cues. The theory is implemented via computer simulation. (TJH)

  17. Semantic-Based Framework for Integration and Personalization of Television Related Media

    NASA Astrophysics Data System (ADS)

    Bellekens, Pieter; Aroyo, Lora; Houben, Geert-Jan

    In this paper we identify requirements, opportunities, and problems in home media centers, and we propose an approach to address them by describing an intelligent home media environment. The major issues investigated are coping with the information overflow in the current provision of TV programs and channels, and the need for personalization to specific users by adapting to their age, interests, language abilities, and various context characteristics. The research presented in this paper follows from a collaboration between Eindhoven University of Technology, the Philips Applied Technologies group, and Stoneroos Interactive Television. The work has been partially carried out within the ITEA-funded European project Passepartout, which also includes partners such as Thomson, INRIA, and ETRI. In the following chapter we describe the motivation and research problem in relation to related work, followed by an illustrative use case scenario. We then explain our data model, starting with the TV-Anytime structure and its enrichment with semantic knowledge from various ontologies and vocabularies. The data model description then serves as the background for understanding our proposed system architecture, SenSee. Afterwards we go deeper into the user modeling part and explain how our personalization approach works, elaborating on a design targeting interoperability and on semantic techniques for enabling intelligent context-aware personalization. In the implementation chapter we describe some practical issues as well as our main interface showcase, iFanzy. Future work and conclusions end this chapter.

  18. Rational integration of noisy evidence and prior semantic expectations in sentence interpretation.

    PubMed

    Gibson, Edward; Bergen, Leon; Piantadosi, Steven T

    2013-05-14

    Sentence processing theories typically assume that the input to our language processing mechanisms is an error-free sequence of words. However, this assumption is an oversimplification because noise is present in typical language use (for instance, due to a noisy environment, producer errors, or perceiver errors). A complete theory of human sentence comprehension therefore needs to explain how humans understand language given imperfect input. Indeed, like many cognitive systems, language processing mechanisms may even be "well designed"--in this case for the task of recovering intended meaning from noisy utterances. In particular, comprehension mechanisms may be sensitive to the types of information that an idealized statistical comprehender would be sensitive to. Here, we evaluate four predictions about such a rational (Bayesian) noisy-channel language comprehender in a sentence comprehension task: (i) semantic cues should pull sentence interpretation towards plausible meanings, especially if the wording of the more plausible meaning is close to the observed utterance in terms of the number of edits; (ii) this process should asymmetrically treat insertions and deletions due to the Bayesian "size principle"; such nonliteral interpretation of sentences should (iii) increase with the perceived noise rate of the communicative situation and (iv) decrease if semantically anomalous meanings are more likely to be communicated. These predictions are borne out, strongly suggesting that human language relies on rational statistical inference over a noisy channel. PMID:23637344
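    The noisy-channel account can be sketched as a Bayesian posterior in which the likelihood of an observed utterance decays with the number of edits separating it from a candidate meaning. The priors, noise rates, and example sentences below are illustrative, not fitted values from the paper.

```python
def edit_distance(a, b):
    """Word-level Levenshtein distance between two token lists."""
    d = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i in range(len(a) + 1):
        d[i][0] = i
    for j in range(len(b) + 1):
        d[0][j] = j
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            d[i][j] = min(d[i - 1][j] + 1, d[i][j - 1] + 1,
                          d[i - 1][j - 1] + (a[i - 1] != b[j - 1]))
    return d[len(a)][len(b)]

def posterior(utterance, priors, noise=0.1):
    """P(meaning | utterance) proportional to P(meaning) * noise^edits."""
    scores = {s: p * noise ** edit_distance(utterance.split(), s.split())
              for s, p in priors.items()}
    z = sum(scores.values())
    return {s: v / z for s, v in scores.items()}

priors = {"the mother gave the candle to the daughter": 0.9,  # plausible
          "the mother gave the daughter to the candle": 0.1}  # implausible
heard = "the mother gave the daughter to the candle"

# At a low perceived noise rate the literal (implausible) reading wins;
# raising the noise rate shifts mass to the plausible nonliteral reading.
for noise in (0.1, 0.5):
    post = posterior(heard, priors, noise)
    print(noise, max(post, key=post.get))
```

    This reproduces prediction (iii) in miniature: as the perceived noise rate rises, nonliteral interpretation becomes more likely.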

  19. Rational integration of noisy evidence and prior semantic expectations in sentence interpretation

    PubMed Central

    Gibson, Edward; Bergen, Leon; Piantadosi, Steven T.

    2013-01-01

    Sentence processing theories typically assume that the input to our language processing mechanisms is an error-free sequence of words. However, this assumption is an oversimplification because noise is present in typical language use (for instance, due to a noisy environment, producer errors, or perceiver errors). A complete theory of human sentence comprehension therefore needs to explain how humans understand language given imperfect input. Indeed, like many cognitive systems, language processing mechanisms may even be “well designed”–in this case for the task of recovering intended meaning from noisy utterances. In particular, comprehension mechanisms may be sensitive to the types of information that an idealized statistical comprehender would be sensitive to. Here, we evaluate four predictions about such a rational (Bayesian) noisy-channel language comprehender in a sentence comprehension task: (i) semantic cues should pull sentence interpretation towards plausible meanings, especially if the wording of the more plausible meaning is close to the observed utterance in terms of the number of edits; (ii) this process should asymmetrically treat insertions and deletions due to the Bayesian “size principle”; such nonliteral interpretation of sentences should (iii) increase with the perceived noise rate of the communicative situation and (iv) decrease if semantically anomalous meanings are more likely to be communicated. These predictions are borne out, strongly suggesting that human language relies on rational statistical inference over a noisy channel. PMID:23637344

  20. Integrated Syntactic/Semantic XML Data Validation with a Reusable Software Component

    ERIC Educational Resources Information Center

    Golikov, Steven

    2013-01-01

    Data integration is a critical component of enterprise system integration, and XML data validation is the foundation for sound data integration of XML-based information systems. Since B2B e-commerce relies on data validation as one of the critical components for enterprise integration, it is imperative for financial industries and e-commerce…

  1. Adaptive Rule Based Fetal QRS Complex Detection Using Hilbert Transform

    PubMed Central

    Ulusar, Umit D.; Govindan, R.B.; Wilson, James D.; Lowery, Curtis L.; Preissl, Hubert; Eswaran, Hari

    2010-01-01

    In this paper we introduce an adaptive rule based QRS detection algorithm using the Hilbert transform (adHQRS) for fetal magnetocardiography processing. Hilbert transform is used to combine multiple channel measurements and the adaptive rule based decision process is used to eliminate spurious beats. The algorithm has been tested with a large number of datasets and promising results were obtained. PMID:19964648
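    A minimal single-channel sketch of envelope-based beat detection via the Hilbert transform follows; the adaptive multi-channel combination and rule-based beat rejection of adHQRS are not reproduced, and the threshold and refractory settings are illustrative.

```python
import numpy as np

def envelope(x):
    """Amplitude envelope via the FFT-based analytic signal
    (the quantity scipy.signal.hilbert computes)."""
    n = len(x)
    h = np.zeros(n)
    h[0] = 1.0
    h[1:(n + 1) // 2] = 2.0
    if n % 2 == 0:
        h[n // 2] = 1.0
    return np.abs(np.fft.ifft(np.fft.fft(x) * h))

def detect_beats(x, fs, min_rr=0.25, k=3.0):
    """Threshold the envelope, enforcing a refractory period of
    min_rr seconds between successive detections."""
    env = envelope(x)
    thresh = env.mean() + k * env.std()
    refractory = int(min_rr * fs)
    beats, last = [], -refractory
    for i, v in enumerate(env):
        if v > thresh and i - last >= refractory:
            beats.append(i)
            last = i
    return beats

# Synthetic trace: three sharp "QRS-like" deflections at 250 Hz.
fs, x = 250, np.zeros(1000)
x[[100, 400, 700]] = 10.0
print(detect_beats(x, fs))
```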

  2. Integration of multi-scale biosimulation models via light-weight semantics.

    PubMed

    Gennari, John H; Neal, Maxwell L; Carlson, Brian E; Cook, Daniel L

    2008-01-01

    Currently, biosimulation researchers use a variety of computational environments and languages to model biological processes. Ideally, researchers should be able to semiautomatically merge models to more effectively build larger, multi-scale models. However, current modeling methods do not capture the underlying semantics of these models sufficiently to support this type of model construction. In this paper, we propose a general approach to solve this problem and provide a specific example that demonstrates the benefits of our methodology. In particular, we describe three biosimulation models: (1) a cardiovascular fluid dynamics model, (2) a model of heart rate regulation via baroreceptor control, and (3) a sub-cellular-level model of the arteriolar smooth muscle. Within a light-weight ontological framework, we leverage reference ontologies to match concepts across models. The light-weight ontology then helps us combine our three models into a merged model that can answer questions beyond the scope of any single model.
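    The concept-matching step can be sketched as follows: each model annotates its variables with reference-ontology concepts, and the merger pairs variables that denote the same concept. The variable names and concept labels are invented for illustration and are not the actual OPB/FMA identifiers used in the paper.

```python
# Each model annotates its variables with a reference-ontology concept.
cardio = {"P_art": "Pressure(Aorta)", "Q_co": "FlowRate(Aorta)"}
baro   = {"abp":   "Pressure(Aorta)", "hr":  "Rate(Heartbeat)"}

def match_variables(m1, m2):
    """Pair variables of two models that map to the same concept."""
    inv = {concept: var for var, concept in m2.items()}
    return {v1: inv[c] for v1, c in m1.items() if c in inv}

# P_art in the fluid-dynamics model denotes the same quantity as abp in
# the baroreceptor model, so the merged model can connect them.
print(match_variables(cardio, baro))  # {'P_art': 'abp'}
```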

  3. Dispositional mindfulness and semantic integration of emotional words: Evidence from event-related brain potentials.

    PubMed

    Dorjee, Dusana; Lally, Níall; Darrall-Rew, Jonathan; Thierry, Guillaume

    2015-08-01

    Initial research shows that mindfulness training can enhance attention and modulate the affective response. However, links between mindfulness and language processing remain virtually unexplored despite the prominent role of overt and silent negative ruminative speech in depressive and anxiety-related symptomatology. Here, we measured dispositional mindfulness and recorded participants' event-related brain potential responses to positive and negative target words preceded by words congruent or incongruent with the targets in terms of semantic relatedness and emotional valence. While the low mindfulness group showed a similar N400 effect pattern for positive and negative targets, high dispositional mindfulness was associated with a larger N400 effect for negative targets. This result suggests that negative meanings are less readily accessible in people with high dispositional mindfulness. Furthermore, high dispositional mindfulness was associated with reduced P600 amplitudes to emotional words, suggesting less post-analysis and attentional effort, which possibly relates to a lower inclination to ruminate. Overall, these findings provide initial evidence on associations between modifications in language systems and mindfulness.

  4. A logical model of cooperating rule-based systems

    NASA Technical Reports Server (NTRS)

    Bailin, Sidney C.; Moore, John M.; Hilberg, Robert H.; Murphy, Elizabeth D.; Bahder, Shari A.

    1989-01-01

    A model is developed to assist in the planning, specification, development, and verification of space information systems involving distributed rule-based systems. The model is based on an analysis of possible uses of rule-based systems in control centers. This analysis is summarized as a data-flow model for a hypothetical intelligent control center. From this data-flow model, the logical model of cooperating rule-based systems is extracted. This model consists of four layers of increasing capability: (1) communicating agents, (2) belief-sharing knowledge sources, (3) goal-sharing interest areas, and (4) task-sharing job roles.

  5. Rule-based fault-tolerant flight control

    NASA Technical Reports Server (NTRS)

    Handelman, Dave

    1988-01-01

    Fault tolerance has always been a desirable characteristic of aircraft. The ability to withstand unexpected changes in aircraft configuration has a direct impact on the ability to complete a mission effectively and safely. The possible synergistic effects of combining techniques from modern control theory, statistical hypothesis testing, and artificial intelligence in the attempt to provide failure accommodation for aircraft are investigated. This effort has resulted in the definition of a theory for rule-based control and a system for the development of such a rule-based controller. Although presented here in response to the goal of aircraft fault tolerance, the rule-based control technique is applicable to a wide range of complex control problems.

  6. Generative Semantics.

    ERIC Educational Resources Information Center

    King, Margaret

    The first section of this paper deals with the attempts within the framework of transformational grammar to make semantics a systematic part of linguistic description, and outlines the characteristics of the generative semantics position. The second section takes a critical look at generative semantics in its later manifestations, and makes a case…

  7. Semantic Integration: Effects of Imagery, Enaction, and Sentence Repetition Training on Prereaders' Recall for Pictograph Sentences.

    ERIC Educational Resources Information Center

    Ledger, George W.; Ryan, Ellen Bouchard

    1985-01-01

    Over a two-week period, examined the effectiveness of integrative imagery strategy over concrete enaction and repetition strategies for improving kindergartners' recall of pictograph sentences. (Author/BE)

  8. Techniques and implementation of the embedded rule-based expert system using Ada

    NASA Technical Reports Server (NTRS)

    Liberman, Eugene M.; Jones, Robert E.

    1991-01-01

    Ada is becoming an increasingly popular programming language for large Government-funded software projects. Ada, with its portability, transportability, and maintainability, lends itself well to today's complex programming environment. In addition, expert systems have assumed a growing role in providing human-like reasoning capability and expertise for computer systems. The integration of expert system technology with the Ada programming language, specifically a rule-based expert system using the ART-Ada (Automated Reasoning Tool for Ada) system shell, is discussed. The NASA Lewis Research Center was chosen as a beta test site for ART-Ada. The test was conducted by implementing the existing Autonomous Power EXpert System (APEX), a Lisp-based power expert system, in ART-Ada. Three components (the rule-based expert system, a graphical user interface, and communications software) make up SMART-Ada (Systems fault Management with ART-Ada). The main objective, to conduct a beta test of the ART-Ada rule-based expert system shell, was achieved, and the system is operational. New Ada tools will assist in future successful projects. ART-Ada is one such tool and is a viable alternative to straight Ada code when an application requires a rule-based or knowledge-based approach.

  9. Automated implementation of rule-based expert systems with neural networks for time-critical applications

    NASA Technical Reports Server (NTRS)

    Ramamoorthy, P. A.; Huang, Song; Govind, Girish

    1991-01-01

    In fault diagnosis, control, and real-time monitoring, both timing and accuracy are critical for operators or machines to reach proper solutions or take appropriate actions. Expert systems are becoming more popular in the manufacturing community for dealing with such problems. In recent years, neural networks have seen renewed interest, and their applications have spread to many areas of science and engineering. A method of using neural networks to implement rule-based expert systems for time-critical applications is discussed here. This method can convert a given rule-based system into a neural network with fixed weights and thresholds. The rules governing the translation are presented along with some examples. We also present the results of automated machine implementation of such networks from a given rule base. This significantly simplifies the translation process from conventional rule-based systems to neural network expert systems. Results comparing the performance of the proposed neural-network approach with the classical approach are given. The possibility of very large scale integration (VLSI) realization of such neural network expert systems is also discussed.
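    For conjunctive rules, the core of such a translation can be sketched as follows: each IF-THEN rule becomes a threshold unit with fixed weights of 1 and a threshold just below the number of antecedents, so the unit fires exactly when all antecedents hold. This is a generic illustration of the technique, not the paper's exact translation rules.

```python
def rule_to_unit(antecedents):
    """Translate 'IF a AND b AND ... THEN fire' into a threshold unit
    with fixed weights of 1 and threshold n - 0.5."""
    n = len(antecedents)
    def unit(facts):
        s = sum(1.0 for a in antecedents if facts.get(a))
        return s > n - 0.5      # fires only when all antecedents hold
    return unit

# Hypothetical diagnosis rule: IF high_temp AND low_pressure THEN fault
fault = rule_to_unit(["high_temp", "low_pressure"])
print(fault({"high_temp": True, "low_pressure": True}))   # True
print(fault({"high_temp": True, "low_pressure": False}))  # False
```

    Because every weight and threshold is fixed at translation time, no training is needed, and the units evaluate in parallel, which is the source of the speedup for time-critical applications.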

  10. Semantics by analogy for illustrative volume visualization☆

    PubMed Central

    Gerl, Moritz; Rautek, Peter; Isenberg, Tobias; Gröller, Eduard

    2012-01-01

    We present an interactive graphical approach for the explicit specification of semantics for volume visualization. This explicit and graphical specification of semantics for volumetric features allows us to visually assign meaning to both input and output parameters of the visualization mapping. This is in contrast to the implicit way of specifying semantics using transfer functions. In particular, we demonstrate how to realize a dynamic specification of semantics which allows flexible exploration of a wide range of mappings. Our approach is based on three concepts. First, we use semantic shader augmentation to automatically add rule-based rendering functionality to static visualization mappings in a shader program, while preserving the visual abstraction that the initial shader encodes. With this technique we extend recent developments that define a mapping between data attributes and visual attributes with rules, which are evaluated using fuzzy logic. Second, we let users define the semantics by analogy through brushing on renderings of the data attributes of interest. Third, the rules are specified graphically in an interface that provides visual clues for potential modifications. Together, the presented methods offer a high degree of freedom in the specification and exploration of rule-based mappings and avoid the limitations of a linguistic rule formulation. PMID:23576827
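    A rule of the kind evaluated with fuzzy logic might look as follows: a membership function grades how strongly a data attribute matches a feature, and the rule strength scales the visual attribute. The attribute names and membership breakpoints are invented for illustration.

```python
def tri(x, a, b, c):
    """Triangular fuzzy membership function over [a, c], peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def opacity(density):
    """One fuzzy rule: IF density IS bone-like THEN opacity IS high.
    Output is the rule strength times full opacity."""
    bone_like = tri(density, 0.4, 0.7, 1.0)  # illustrative breakpoints
    return bone_like * 1.0

print(opacity(0.7))   # 1.0 (fully bone-like)
print(opacity(0.55))  # ~0.5 (partially bone-like)
print(opacity(0.2))   # 0.0 (not bone-like)
```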

  11. A case study: semantic integration of gene-disease associations for type 2 diabetes mellitus from literature and biomedical data resources.

    PubMed

    Rebholz-Schuhmann, Dietrich; Grabmüller, Christoph; Kavaliauskas, Silvestras; Croset, Samuel; Woollard, Peter; Backofen, Rolf; Filsell, Wendy; Clark, Dominic

    2014-07-01

    In the Semantic Enrichment of the Scientific Literature (SESL) project, researchers from academia and from life science and publishing companies collaborated in a pre-competitive way to integrate and share information for type 2 diabetes mellitus (T2DM) in adults. This case study exposes benefits from semantic interoperability after integrating the scientific literature with biomedical data resources, such as UniProt Knowledgebase (UniProtKB) and the Gene Expression Atlas (GXA). We annotated scientific documents in a standardized way, by applying public terminological resources for diseases and proteins, and other text-mining approaches. Eventually, we compared the genetic causes of T2DM across the data resources to demonstrate the benefits from the SESL triple store. Our solution enables publishers to distribute their content with little overhead into remote data infrastructures, such as into any Virtual Knowledge Broker. PMID:24201223

  12. A case study: semantic integration of gene-disease associations for type 2 diabetes mellitus from literature and biomedical data resources.

    PubMed

    Rebholz-Schuhmann, Dietrich; Grabmüller, Christoph; Kavaliauskas, Silvestras; Croset, Samuel; Woollard, Peter; Backofen, Rolf; Filsell, Wendy; Clark, Dominic

    2014-07-01

    In the Semantic Enrichment of the Scientific Literature (SESL) project, researchers from academia and from life science and publishing companies collaborated in a pre-competitive way to integrate and share information for type 2 diabetes mellitus (T2DM) in adults. This case study exposes benefits from semantic interoperability after integrating the scientific literature with biomedical data resources, such as UniProt Knowledgebase (UniProtKB) and the Gene Expression Atlas (GXA). We annotated scientific documents in a standardized way, by applying public terminological resources for diseases and proteins, and other text-mining approaches. Eventually, we compared the genetic causes of T2DM across the data resources to demonstrate the benefits from the SESL triple store. Our solution enables publishers to distribute their content with little overhead into remote data infrastructures, such as into any Virtual Knowledge Broker.

  13. A Semantic Analysis of XML Schema Matching for B2B Systems Integration

    ERIC Educational Resources Information Center

    Kim, Jaewook

    2011-01-01

    One of the most critical steps to integrating heterogeneous e-Business applications using different XML schemas is schema matching, which is known to be costly and error-prone. Many automatic schema matching approaches have been proposed, but the challenge is still daunting because of the complexity of schemas and immaturity of technologies in…

  14. The Development of the Ability to Semantically Integrate Information in Speech and Iconic Gesture in Comprehension

    ERIC Educational Resources Information Center

    Sekine, Kazuki; Sowden, Hannah; Kita, Sotaro

    2015-01-01

    We examined whether children's ability to integrate speech and gesture follows the pattern of a broader developmental shift between 3- and 5-year-old children (Ramscar & Gitcho, 2007) regarding the ability to process two pieces of information simultaneously. In Experiment 1, 3-year-olds, 5-year-olds, and adults were presented with either an…

  15. Semantic Domain-Specific Functional Integration for Action-Related vs. Abstract Concepts

    ERIC Educational Resources Information Center

    Ghio, Marta; Tettamanti, Marco

    2010-01-01

    A central topic in cognitive neuroscience concerns the representation of concepts and the specific neural mechanisms that mediate conceptual knowledge. Recently proposed modal theories assert that concepts are grounded on the integration of multimodal, distributed representations. The aim of the present work is to complement the available…

  16. Developing a semantic web model for medical differential diagnosis recommendation.

    PubMed

    Mohammed, Osama; Benlamri, Rachid

    2014-10-01

    In this paper we describe a novel model for differential diagnosis designed to make recommendations by utilizing semantic web technologies. The model is a response to a number of requirements, ranging from incorporating essential clinical diagnostic semantics to the integration of data mining for the process of identifying candidate diseases that best explain a set of clinical features. We introduce two major components, which we find essential to the construction of an integral differential diagnosis recommendation model: the evidence-based recommender component and the proximity-based recommender component. Both approaches are driven by disease diagnosis ontologies designed specifically to enable the process of generating diagnostic recommendations. These ontologies are the disease symptom ontology and the patient ontology. The evidence-based diagnosis process develops dynamic rules based on standardized clinical pathways. The proximity-based component employs data mining to provide clinicians with diagnosis predictions, as well as generates new diagnosis rules from provided training datasets. This article describes the integration between these two components along with the developed diagnosis ontologies to form a novel medical differential diagnosis recommendation model. This article also provides test cases from the implementation of the overall model, which shows quite promising diagnostic recommendation results.
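    The blending of the two components can be sketched as a weighted combination of rule coverage (evidence-based) and nearest-case similarity (proximity-based) per candidate disease. The diseases, symptoms, cases, and weights below are invented for illustration and are not from the paper's ontologies.

```python
def jaccard(a, b):
    """Set similarity in [0, 1]."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b)

# Evidence-based component: hand-curated rules from clinical pathways.
RULES = {"flu":   {"fever", "cough", "aches"},
         "strep": {"fever", "sore_throat"}}

# Proximity-based component: labelled training cases for data mining.
CASES = [({"fever", "cough", "aches", "fatigue"}, "flu"),
         ({"sore_throat", "fever", "swollen_glands"}, "strep")]

def recommend(symptoms, w_rule=0.5):
    """Blend rule coverage and nearest-case similarity per disease."""
    scores = {}
    for disease, required in RULES.items():
        rule_score = len(symptoms & required) / len(required)
        prox_score = max((jaccard(symptoms, s) for s, d in CASES
                          if d == disease), default=0.0)
        scores[disease] = w_rule * rule_score + (1 - w_rule) * prox_score
    return max(scores, key=scores.get)

print(recommend({"fever", "cough"}))  # flu
```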

  17. Developing a semantic web model for medical differential diagnosis recommendation.

    PubMed

    Mohammed, Osama; Benlamri, Rachid

    2014-10-01

    In this paper we describe a novel model for differential diagnosis designed to make recommendations by utilizing semantic web technologies. The model is a response to a number of requirements, ranging from incorporating essential clinical diagnostic semantics to the integration of data mining for the process of identifying candidate diseases that best explain a set of clinical features. We introduce two major components, which we find essential to the construction of an integral differential diagnosis recommendation model: the evidence-based recommender component and the proximity-based recommender component. Both approaches are driven by disease diagnosis ontologies designed specifically to enable the process of generating diagnostic recommendations. These ontologies are the disease symptom ontology and the patient ontology. The evidence-based diagnosis process develops dynamic rules based on standardized clinical pathways. The proximity-based component employs data mining to provide clinicians with diagnosis predictions, as well as generates new diagnosis rules from provided training datasets. This article describes the integration between these two components along with the developed diagnosis ontologies to form a novel medical differential diagnosis recommendation model. This article also provides test cases from the implementation of the overall model, which shows quite promising diagnostic recommendation results. PMID:25178271

  18. Geo-Semantic Framework for Integrating Long-Tail Data and Model Resources for Advancing Earth System Science

    NASA Astrophysics Data System (ADS)

    Elag, M.; Kumar, P.

    2014-12-01

    Often, scientists and small research groups collect data, which target to address issues and have limited geographic or temporal range. A large number of such collections together constitute a large database that is of immense value to Earth Science studies. Complexity of integrating these data include heterogeneity in dimensions, coordinate systems, scales, variables, providers, users and contexts. They have been defined as long-tail data. Similarly, we use "long-tail models" to characterize a heterogeneous collection of models and/or modules developed for targeted problems by individuals and small groups, which together provide a large valuable collection. Complexity of integrating across these models include differing variable names and units for the same concept, model runs at different time steps and spatial resolution, use of differing naming and reference conventions, etc. Ability to "integrate long-tail models and data" will provide an opportunity for the interoperability and reusability of communities' resources, where not only models can be combined in a workflow, but each model will be able to discover and (re)use data in application specific context of space, time and questions. This capability is essential to represent, understand, predict, and manage heterogeneous and interconnected processes and activities by harnessing the complex, heterogeneous, and extensive set of distributed resources. Because of the staggering production rate of long-tail models and data resulting from the advances in computational, sensing, and information technologies, an important challenge arises: how can geoinformatics bring together these resources seamlessly, given the inherent complexity among model and data resources that span across various domains. We will present a semantic-based framework to support integration of "long-tail" models and data. 
This builds on existing technologies including: (i) SEAD (Sustainable Environmental Actionable Data) which supports curation
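    The name and unit mismatches described above are the core mechanical obstacle to long-tail integration. A minimal Python sketch of the idea, with hypothetical alias and unit tables (a real framework would draw these from community ontologies and standard-name vocabularies rather than hard-coding them):

```python
# Hypothetical alias and unit-conversion tables for one concept, "precipitation".
# In a production framework these would come from shared semantic resources.
ALIASES = {"rainfall": "precipitation", "precip": "precipitation", "pr": "precipitation"}
TO_MM_PER_DAY = {"mm/day": 1.0, "in/day": 25.4, "mm/hr": 24.0}

def harmonize(var_name, value, unit):
    """Resolve a provider-specific variable to a shared concept and common unit."""
    concept = ALIASES.get(var_name, var_name)   # fall back to the given name
    return concept, value * TO_MM_PER_DAY[unit]

print(harmonize("rainfall", 0.5, "in/day"))   # ('precipitation', 12.7)
```

    Once every provider's variables resolve to the same concept and unit, models in a workflow can discover and consume each other's outputs without per-pair adapters.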

  19. Case Study for Integration of an Oncology Clinical Site in a Semantic Interoperability Solution based on HL7 v3 and SNOMED-CT: Data Transformation Needs.

    PubMed

    Ibrahim, Ahmed; Bucur, Anca; Perez-Rey, David; Alonso, Enrique; de Hoog, Matthy; Dekker, Andre; Marshall, M Scott

    2015-01-01

    This paper describes the data transformation pipeline defined to support the integration of a new clinical site into a standards-based semantic interoperability environment. The available datasets combined structured and free-text patient data in Dutch, collected in the context of radiation therapy for several cancer types. Our approach aims at both efficiency and data quality. We combine custom-developed scripts, standard tools and manual validation by clinical and knowledge experts. We identified key challenges emerging from the several sources of heterogeneity in our case study (systems, language, data structure, clinical domain) and implemented solutions that we will further generalize for the integration of new sites. We conclude that the required effort for data transformation is manageable, which supports the feasibility of our semantic interoperability solution. The achieved semantic interoperability will be leveraged for the deployment and evaluation, at the clinical site, of applications enabling secondary use of care data for research. This work has been funded by the European Commission through the INTEGRATE (FP7-ICT-2009-6-270253) and EURECA (FP7-ICT-2011-288048) projects.
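    The automatic part of such a pipeline reduces to a term-to-concept lookup with an explicit queue for expert review. A minimal Python sketch; the Dutch terms, SNOMED CT identifiers and field names below are illustrative, not the project's actual mapping tables:

```python
# Hypothetical mapping from local Dutch free-text terms to SNOMED CT concepts;
# the identifiers and labels are illustrative, not a validated terminology.
TERM_MAP = {
    "borstkanker": ("254837009", "Malignant neoplasm of breast"),
    "radiotherapie": ("108290001", "Radiotherapy"),
}

def transform(records):
    """Map known terms automatically; queue everything else for expert review."""
    mapped, review_queue = [], []
    for rec in records:
        key = rec["term"].strip().lower()
        if key in TERM_MAP:
            sctid, label = TERM_MAP[key]
            mapped.append({**rec, "sctid": sctid, "label": label})
        else:
            review_queue.append(rec)   # manual validation by clinical experts
    return mapped, review_queue

mapped, pending = transform([{"term": "Borstkanker"}, {"term": "suikerziekte"}])
print(len(mapped), len(pending))   # 1 1
```

    Keeping the unmapped records in a separate queue mirrors the paper's combination of automated scripts with manual validation: data quality is preserved because nothing unknown is silently coded.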

  20. The Role of Semantics in Open-World, Integrative, Collaborative Science Data Platforms

    NASA Astrophysics Data System (ADS)

    Fox, Peter; Chen, Yanning; Wang, Han; West, Patrick; Erickson, John; Ma, Marshall

    2014-05-01

    As collaborative science spreads into more and more Earth and space science fields, both participants and funders are expressing stronger needs for highly functional data and information capabilities. Desired characteristics include: a) easy to use, b) highly integrated, c) leveraging prior investments, d) accommodating rapid technical change, and e) not incurring undue expense or time to build or maintain - this is not a small set of requirements. Based on our accumulated experience over the last ~decade and several key technical approaches, we adapt, extend, and integrate several open source applications and frameworks to handle major portions of the functionality for these platforms. This includes an object-type repository, collaboration tools, and identity management, all within a portal managing diverse content and applications. In this contribution, we present our methods and results for the information models, adaptation, integration and evolution of a networked data science architecture based on several open source technologies: Drupal, VIVO, the Comprehensive Knowledge Archive Network (CKAN), and the Global Handle System (GHS). In particular we present the Deep Carbon Observatory - a platform for international science collaboration. We present and discuss key functional and non-functional attributes, and discuss the general applicability of the platform.

  1. Integrating the automatic and the controlled: strategies in semantic priming in an attractor network with latching dynamics.

    PubMed

    Lerner, Itamar; Bentin, Shlomo; Shriki, Oren

    2014-01-01

    Semantic priming has long been recognized to reflect, along with automatic semantic mechanisms, the contribution of controlled strategies. However, previous theories of controlled priming were mostly qualitative, lacking common grounds with modern mathematical models of automatic priming based on neural networks. Recently, we introduced a novel attractor network model of automatic semantic priming with latching dynamics. Here, we extend this work to show how the same model can also account for important findings regarding controlled processes. Assuming the rate of semantic transitions in the network can be adapted using simple reinforcement learning, we show how basic findings attributed to controlled processes in priming can be achieved, including their dependency on stimulus onset asynchrony and relatedness proportion and their unique effect on associative, category-exemplar, mediated and backward prime-target relations. We discuss how our mechanism relates to the classic expectancy theory and how it can be further extended in future developments of the model.
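    Stripped of the network dynamics, the adaptation mechanism the abstract describes can be illustrated with a toy reinforcement-learning sketch. This is a deliberate simplification: the single rate p stands in for the model's semantic transition rate, and all numbers are illustrative:

```python
import random

def adapt_transition_rate(relatedness_proportion, trials=2000, lr=0.05, seed=1):
    """Toy abstraction: the tendency to 'latch' from a prime to a related
    concept is a single rate p, nudged by simple reinforcement whenever
    transitioning (or withholding a transition) pays off on that trial."""
    rng = random.Random(seed)
    p = 0.5                                               # initial transition rate
    for _ in range(trials):
        related = rng.random() < relatedness_proportion   # is the target related?
        transitioned = rng.random() < p                   # did the system latch?
        if transitioned == related:                       # strategy was rewarded
            target = 1.0 if transitioned else 0.0
            p += lr * (target - p)                        # reinforce the action taken
            p = min(max(p, 0.01), 0.99)
    return p

# A high relatedness proportion should reinforce a higher transition rate
print(adapt_transition_rate(0.8) > adapt_transition_rate(0.2))   # True
```

    This reproduces, in caricature, the relatedness-proportion effect: when most targets are related, transitioning is rewarded more often than not, so the learned rate drifts upward.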

  2. The Enterprise Data Trust at Mayo Clinic: a semantically integrated warehouse of biomedical data.

    PubMed

    Chute, Christopher G; Beck, Scott A; Fisk, Thomas B; Mohr, David N

    2010-01-01

    Mayo Clinic's Enterprise Data Trust is a collection of data from patient care, education, research, and administrative transactional systems, organized to support information retrieval, business intelligence, and high-level decision making. Structurally it is a top-down, subject-oriented, integrated, time-variant, and non-volatile collection of data in support of Mayo Clinic's analytic and decision-making processes. It is an interconnected piece of Mayo Clinic's Enterprise Information Management initiative, which also includes Data Governance, Enterprise Data Modeling, the Enterprise Vocabulary System, and Metadata Management. These resources enable unprecedented organization of enterprise information about patient, genomic, and research data. While facile access for cohort definition or aggregate retrieval is supported, a high level of security, retrieval audit, and user authentication ensures privacy, confidentiality, and respect for the trust imparted by our patients for the respectful use of information about their conditions.

  3. The Enterprise Data Trust at Mayo Clinic: a semantically integrated warehouse of biomedical data

    PubMed Central

    Beck, Scott A; Fisk, Thomas B; Mohr, David N

    2010-01-01

    Mayo Clinic's Enterprise Data Trust is a collection of data from patient care, education, research, and administrative transactional systems, organized to support information retrieval, business intelligence, and high-level decision making. Structurally it is a top-down, subject-oriented, integrated, time-variant, and non-volatile collection of data in support of Mayo Clinic's analytic and decision-making processes. It is an interconnected piece of Mayo Clinic's Enterprise Information Management initiative, which also includes Data Governance, Enterprise Data Modeling, the Enterprise Vocabulary System, and Metadata Management. These resources enable unprecedented organization of enterprise information about patient, genomic, and research data. While facile access for cohort definition or aggregate retrieval is supported, a high level of security, retrieval audit, and user authentication ensures privacy, confidentiality, and respect for the trust imparted by our patients for the respectful use of information about their conditions. PMID:20190054

  4. Semantic Bim and GIS Modelling for Energy-Efficient Buildings Integrated in a Healthcare District

    NASA Astrophysics Data System (ADS)

    Sebastian, R.; Böhms, H. M.; Bonsma, P.; van den Helm, P. W.

    2013-09-01

    The subject of energy-efficient buildings (EeB) is among the most urgent research priorities in the European Union (EU). In order to achieve the broadest impact, innovative approaches to EeB need to resolve challenges at the neighbourhood level, instead of only focusing on improvements of individual buildings. For this purpose, the design phase of new building projects as well as building retrofitting projects is the crucial moment for integrating multi-scale EeB solutions. In the EeB design process, clients, architects, technical designers, contractors, and end-users all need new methods and tools for designing energy-efficient buildings integrated in their neighbourhoods. Since the scope of designing covers multiple dimensions, the new design methodology relies on the inter-operability between Building Information Modelling (BIM) and Geospatial Information Systems (GIS). Design for EeB optimisation needs to pay attention to the inter-connections between the architectural systems and the MEP/HVAC systems, as well as to the relation of Product Lifecycle Modelling (PLM), Building Management Systems (BMS), BIM and GIS. This descriptive paper presents STREAMER, an ongoing EU FP7 large-scale collaborative research project. The research on the inter-operability between BIM and GIS for holistic design of energy-efficient buildings at the neighbourhood scale is supported by real case studies of mixed-use healthcare districts. The new design methodology encompasses all scales and all lifecycle phases of the built environment, as well as the whole lifecycle of the information models, which comprises: Building Information Model (BIM), Building Assembly Model (BAM), Building Energy Model (BEM), and Building Operation Optimisation Model (BOOM).

  5. Semantic integration of physiology phenotypes with an application to the Cellular Phenotype Ontology

    PubMed Central

    Hoehndorf, Robert; Harris, Midori A.; Herre, Heinrich; Rustici, Gabriella; Gkoutos, Georgios V.

    2012-01-01

    Motivation: The systematic observation of phenotypes has become a crucial tool of functional genomics, and several large international projects are currently underway to identify and characterize the phenotypes that are associated with genotypes in several species. To integrate phenotype descriptions within and across species, phenotype ontologies have been developed. Applying ontologies to unify phenotype descriptions in the domain of physiology has been a particular challenge due to the high complexity of the underlying domain. Results: In this study, we present the outline of a theory and its implementation for an ontology of physiology-related phenotypes. We provide a formal description of process attributes and relate them to the attributes of their temporal parts and participants. We apply our theory to create the Cellular Phenotype Ontology (CPO). The CPO is an ontology of morphological and physiological phenotypic characteristics of cells, cell components and cellular processes. Its prime application is to provide terms and uniform definition patterns for the annotation of cellular phenotypes. The CPO can be used for the annotation of observed abnormalities in domains, such as systems microscopy, in which cellular abnormalities are observed and for which no phenotype ontology has been created. Availability and implementation: The CPO and the source code we generated to create the CPO are freely available on http://cell-phenotype.googlecode.com. Contact: rh497@cam.ac.uk Supplementary information: Supplementary data are available at Bioinformatics online. PMID:22539675

  6. Changes in Knowledge Structures from Building Semantic Net versus Production Rule Representations of Subject Content.

    ERIC Educational Resources Information Center

    Jonassen, David H.

    1993-01-01

    Compares the effects of using two different Mindtools--semantic networks and rule-based expert systems--for representing course content on learners' knowledge structures. Results showed that students in the semantic network class possessed more hierarchical knowledge structures than the other group. (Contains 29 references.) (JLB)

  7. A Semantic Rule-Based Framework for Efficient Retrieval of Educational Materials

    ERIC Educational Resources Information Center

    Mahmoudi, Maryam Tayefeh; Taghiyareh, Fattaneh; Badie, Kambiz

    2013-01-01

    Retrieving resources in an appropriate manner has a promising role in increasing the performance of educational support systems. A variety of works have been done to organize materials for educational purposes using tagging techniques. Despite the effectiveness of these techniques within certain domains, organizing resources in a way being…

  8. Context-Based Semantic Annotations in CoPEs: An Ontological and Rule-Based Approach

    ERIC Educational Resources Information Center

    Boudebza, Souâad; Berkani, Lamia; Azouaou, Faiçal

    2013-01-01

    Knowledge capitalization is one of many problems facing online communities of practice (CoPs). Knowledge accumulated through participation in the community must be capitalized for future reuse. Most proposals are specific and focus on knowledge modeling, disregarding the reuse of that knowledge. In this paper, we are particularly interested…

  9. Semantic Data Integration and Ontology Use within the Global Earth Observation System of Systems (GEOSS) Global Water Cycle Data Integration System

    NASA Astrophysics Data System (ADS)

    Pozzi, W.; Fekete, B.; Piasecki, M.; McGuinness, D.; Fox, P.; Lawford, R.; Vorosmarty, C.; Houser, P.; Imam, B.

    2008-12-01

    The inadequacies of water cycle observations for monitoring long-term changes in the global water system, as well as their feedback into the climate system, pose a major constraint on sustainable development of water resources and improvement of water management practices. Hence, the Group on Earth Observations (GEO) has established Task WA-08-01, "Integration of in situ and satellite data for water cycle monitoring," an integrative initiative combining different types of satellite and in situ observations related to key variables of the water cycle with model outputs for improved accuracy and global coverage. This presentation proposes development of the Rapid, Integrated Monitoring System for the Water Cycle (Global-RIMS)--already employed by the GEO Global Terrestrial Network for Hydrology (GTN-H)--as either one of the main components or linked with the Asian system to constitute the modeling system of GEOSS for water cycle monitoring. We further propose an expanded capability to run multiple grids to embrace some of the heterogeneous methods and formats of the Earth Science, Hydrology, and Hydraulic Engineering communities. Different methodologies are employed by the Earth Science (land surface modeling), Hydrological (GIS), and Hydraulic Engineering communities, with each community employing models that require different input data. Data will be routed as input variables to the models through web services, allowing satellite and in situ data to be integrated within the modeling framework. Semantic data integration will provide the automation to enable this system to operate in near-real time. Multiple data collections for ground water, precipitation, soil moisture satellite data, such as SMAP, and lake data will require multiple low-level ontologies, and an upper-level ontology will permit user-friendly water management knowledge to be synthesized. These ontologies will have to have overlapping terms mapped and linked together.

  10. Rule-Based Category Learning in Down Syndrome

    ERIC Educational Resources Information Center

    Phillips, B. Allyson; Conners, Frances A.; Merrill, Edward; Klinger, Mark R.

    2014-01-01

    Rule-based category learning was examined in youths with Down syndrome (DS), youths with intellectual disability (ID), and typically developing (TD) youths. Two tasks measured category learning: the Modified Card Sort task (MCST) and the Concept Formation test of the Woodcock-Johnson-III (Woodcock, McGrew, & Mather, 2001). In regression-based…

  11. Optimal Test Design with Rule-Based Item Generation

    ERIC Educational Resources Information Center

    Geerlings, Hanneke; van der Linden, Wim J.; Glas, Cees A. W.

    2013-01-01

    Optimal test-design methods are applied to rule-based item generation. Three different cases of automated test design are presented: (a) test assembly from a pool of pregenerated, calibrated items; (b) test generation on the fly from a pool of calibrated item families; and (c) test generation on the fly directly from calibrated features defining…

  12. Rule based fuzzy logic approach for classification of fibromyalgia syndrome.

    PubMed

    Arslan, Evren; Yildiz, Sedat; Albayrak, Yalcin; Koklukaya, Etem

    2016-06-01

    Fibromyalgia syndrome (FMS) is a chronic muscle and skeletal system disease observed generally in women, manifesting itself with widespread pain and impairing the individual's quality of life. FMS diagnosis is made based on the American College of Rheumatology (ACR) criteria. However, recently the applicability and sufficiency of the ACR criteria have come under debate. In this context, several evaluation methods, including clinical evaluation methods, were proposed by researchers. Accordingly, the ACR had to update the criteria it announced in 1990, doing so in 2010 and 2011. The proposed rule-based fuzzy logic method aims to evaluate FMS from a different angle as well. This method contains a rule base derived from the 1990 ACR criteria and the individual experiences of specialists. The study was conducted using data collected from 60 inpatients and 30 healthy volunteers. Several tests and a physical examination were administered to the participants. The fuzzy logic rule base was structured using the parameters of tender point count, chronic widespread pain period, pain severity, fatigue severity and sleep disturbance level, which were deemed important in FMS diagnosis. The fuzzy predictor was generally 95.56% consistent with at least one of the specialists, who were not creators of the fuzzy rule base. Thus, in a diagnostic classification that also graded the severity of FMS, consistent findings were obtained between the interpretations and experiences of specialists and the fuzzy logic approach. The study proposes a rule base which could eliminate the shortcomings of the 1990 ACR criteria during the FMS evaluation process. Furthermore, the proposed method presents a classification of the severity of the disease, which was not available with the ACR criteria. The study was not limited to disease classification alone; the probability of occurrence and severity were classified at the same time. In addition, those who were not suffering from FMS were
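    The rule-base structure the abstract describes can be sketched with just two of the five inputs. The membership functions, rule set and class centres below are illustrative, not the authors' clinically derived rule base:

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fms_severity(tender_points, pain):
    """Return a 0-1 severity score from two inputs (illustrative parameters)."""
    few    = tri(tender_points, -1, 0, 8)    # membership: few tender points (0-18)
    many   = tri(tender_points, 6, 18, 19)   # membership: many tender points
    mild   = tri(pain, -1, 0, 5)             # pain on a 0-10 scale
    severe = tri(pain, 4, 10, 11)
    # Mamdani-style rules: min for AND, max to aggregate per output class
    low      = min(few, mild)
    moderate = max(min(many, mild), min(few, severe))
    high     = min(many, severe)
    total = low + moderate + high
    if total == 0.0:
        return 0.0
    # defuzzify by weighted average of class centres (0, 0.5, 1)
    return (0.0 * low + 0.5 * moderate + 1.0 * high) / total

print(fms_severity(16, 9))   # many tender points + severe pain -> high severity
print(fms_severity(2, 1))    # few tender points + mild pain -> low severity
```

    Unlike a crisp threshold on tender point count, the graded memberships let borderline patients receive an intermediate severity score, which is the classification refinement the paper adds over the 1990 ACR criteria.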

  14. Spatial rule-based modeling: a method and its application to the human mitotic kinetochore.

    PubMed

    Ibrahim, Bashar; Henze, Richard; Gruenert, Gerd; Egbert, Matthew; Huwald, Jan; Dittrich, Peter

    2013-01-01

    A common problem in the analysis of biological systems is the combinatorial explosion that emerges from the complexity of multi-protein assemblies. Conventional formalisms, like differential equations, Boolean networks and Bayesian networks, are unsuitable for dealing with the combinatorial explosion, because they are designed for a restricted state space with fixed dimensionality. To overcome this problem, the rule-based modeling language, BioNetGen, and the spatial extension, SRSim, have been developed. Here, we describe how to apply rule-based modeling to integrate experimental data from different sources into a single spatial simulation model and how to analyze the output of that model. The starting point for this approach can be a combination of molecular interaction data, reaction network data, proximities, binding and diffusion kinetics and molecular geometries at different levels of detail. We describe the technique and then use it to construct a model of the human mitotic inner and outer kinetochore, including the spindle assembly checkpoint signaling pathway. This allows us to demonstrate the utility of the procedure, show how a novel perspective for understanding such complex systems becomes accessible and elaborate on challenges that arise in the formulation, simulation and analysis of spatial rule-based models. PMID:24709796
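    The generate-and-expand step at the heart of rule-based modeling can be sketched in a few lines of Python. The subunit names are illustrative and the rules here only bind (no unbinding, kinetics or geometry), but the sketch shows how a handful of local rules implicitly specify an exponentially larger state space:

```python
from itertools import chain

def expand(seeds, rules, max_rounds=50):
    """Apply local rules to species until no new species appear -- the network
    generation step performed by rule-based tools in the BioNetGen family."""
    species = set(seeds)
    for _ in range(max_rounds):
        new = set(chain.from_iterable(rule(s) for rule in rules for s in species))
        new -= species
        if not new:
            break
        species |= new
    return species

# Toy kinetochore-like scaffold: each rule binds one subunit regardless of the
# occupancy of the others -- local patterns instead of fully enumerated states.
subunits = ("Spc24", "Spc25", "Ndc80")   # illustrative names only

def bind_rule(subunit):
    return lambda state: [state | {subunit}] if subunit not in state else []

rules = [bind_rule(s) for s in subunits]
network = expand({frozenset()}, rules)
print(len(network))   # 3 local rules implicitly specify 2**3 = 8 complexes
```

    With n independent sites the explicit network has 2**n species but still only n rules, which is exactly how the rule-based formalism sidesteps the combinatorial explosion the abstract describes.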

  15. Spatial Rule-Based Modeling: A Method and Its Application to the Human Mitotic Kinetochore

    PubMed Central

    Ibrahim, Bashar; Henze, Richard; Gruenert, Gerd; Egbert, Matthew; Huwald, Jan; Dittrich, Peter

    2013-01-01

    A common problem in the analysis of biological systems is the combinatorial explosion that emerges from the complexity of multi-protein assemblies. Conventional formalisms, like differential equations, Boolean networks and Bayesian networks, are unsuitable for dealing with the combinatorial explosion, because they are designed for a restricted state space with fixed dimensionality. To overcome this problem, the rule-based modeling language, BioNetGen, and the spatial extension, SRSim, have been developed. Here, we describe how to apply rule-based modeling to integrate experimental data from different sources into a single spatial simulation model and how to analyze the output of that model. The starting point for this approach can be a combination of molecular interaction data, reaction network data, proximities, binding and diffusion kinetics and molecular geometries at different levels of detail. We describe the technique and then use it to construct a model of the human mitotic inner and outer kinetochore, including the spindle assembly checkpoint signaling pathway. This allows us to demonstrate the utility of the procedure, show how a novel perspective for understanding such complex systems becomes accessible and elaborate on challenges that arise in the formulation, simulation and analysis of spatial rule-based models. PMID:24709796

  16. Benefits and Costs of Lexical Decomposition and Semantic Integration during the Processing of Transparent and Opaque English Compounds

    ERIC Educational Resources Information Center

    Ji, Hongbo; Gagne, Christina L.; Spalding, Thomas L.

    2011-01-01

    Six lexical decision experiments were conducted to examine the influence of complex structure on the processing speed of English compounds. All experiments revealed that semantically transparent compounds (e.g., "rosebud") were processed more quickly than matched monomorphemic words (e.g., "giraffe"). Opaque compounds (e.g., "hogwash") were also…

  17. Reading Development Electrified: Semantic and Syntactic Integration during Sentence Comprehension in School-Age Children and Young Adults

    ERIC Educational Resources Information Center

    VanDyke, Justine M.

    2011-01-01

    Adults are able to access semantic and syntactic information rapidly as they hear or read in real-time in order to interpret sentences. Young children, on the other hand, tend to rely on syntactically-based parsing routines, adopting the first noun as the agent of a sentence regardless of plausibility, at least during oral comprehension. Little is…

  18. Using Eye Tracking to Investigate Semantic and Spatial Representations of Scientific Diagrams during Text-Diagram Integration

    ERIC Educational Resources Information Center

    Jian, Yu-Cin; Wu, Chao-Jung

    2015-01-01

    We investigated strategies used by readers when reading a science article with a diagram and assessed whether semantic and spatial representations were constructed while reading the diagram. Seventy-one undergraduate participants read a scientific article while tracking their eye movements and then completed a reading comprehension test. Our…

  19. Semantic Desktop

    NASA Astrophysics Data System (ADS)

    Sauermann, Leo; Kiesel, Malte; Schumacher, Kinga; Bernardi, Ansgar

    In this contribution we show what the workplace of the future could look like and where the Semantic Web opens up new possibilities. To this end, approaches from the fields of the Semantic Web, knowledge representation, desktop applications and visualization are presented that allow a user's existing data to be reinterpreted and reused in new ways. Here the combination of the Semantic Web and desktop computers offers particular advantages - a paradigm known as the Semantic Desktop. The possibilities for application integration described here are not limited to the desktop, however, but can equally be used in web applications.

  20. Personal semantics: at the crossroads of semantic and episodic memory.

    PubMed

    Renoult, Louis; Davidson, Patrick S R; Palombo, Daniela J; Moscovitch, Morris; Levine, Brian

    2012-11-01

    Declarative memory is usually described as consisting of two systems: semantic and episodic memory. Between these two poles, however, may lie a third entity: personal semantics (PS). PS concerns knowledge of one's past. Although typically assumed to be an aspect of semantic memory, it is essentially absent from existing models of knowledge. Furthermore, like episodic memory (EM), PS is idiosyncratically personal (i.e., not culturally-shared). We show that, depending on how it is operationalized, the neural correlates of PS can look more similar to semantic memory, more similar to EM, or dissimilar to both. We consider three different perspectives to better integrate PS into existing models of declarative memory and suggest experimental strategies for disentangling PS from semantic and episodic memory.

  1. The semantic priming project.

    PubMed

    Hutchison, Keith A; Balota, David A; Neely, James H; Cortese, Michael J; Cohen-Shikora, Emily R; Tse, Chi-Shing; Yap, Melvin J; Bengson, Jesse J; Niemeyer, Dale; Buchanan, Erin

    2013-12-01

    Speeded naming and lexical decision data for 1,661 target words following related and unrelated primes were collected from 768 subjects across four different universities. These behavioral measures have been integrated with demographic information for each subject and descriptive characteristics for every item. Subjects also completed portions of the Woodcock-Johnson reading battery, three attentional control tasks, and a circadian rhythm measure. These data are available at a user-friendly Internet-based repository ( http://spp.montana.edu ). This Web site includes a search engine designed to generate lists of prime-target pairs with specific characteristics (e.g., length, frequency, associative strength, latent semantic similarity, priming effect in standardized and raw reaction times). We illustrate the types of questions that can be addressed via the Semantic Priming Project. These data represent the largest behavioral database on semantic priming and are available to researchers to aid in selecting stimuli, testing theories, and reducing potential confounds in their studies.
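    The kind of query the repository's search engine answers can be mimicked with a simple in-memory filter. The field names and the three example items below are hypothetical, not actual SPP records:

```python
# Illustrative stand-in for querying prime-target pairs by item characteristics
pairs = [
    {"prime": "doctor", "target": "nurse", "assoc": 0.49, "length": 5, "priming_ms": 28},
    {"prime": "lion",   "target": "tiger", "assoc": 0.33, "length": 5, "priming_ms": 35},
    {"prime": "chair",  "target": "cloud", "assoc": 0.00, "length": 5, "priming_ms": -2},
]

def select(items, min_assoc=0.0, max_length=99, min_priming=-999):
    """Build a stimulus list matching the requested item characteristics."""
    return [it for it in items
            if it["assoc"] >= min_assoc
            and it["length"] <= max_length
            and it["priming_ms"] >= min_priming]

chosen = select(pairs, min_assoc=0.3, min_priming=10)
print([it["target"] for it in chosen])   # ['nurse', 'tiger']
```

    Filtering on several normed characteristics at once is what lets researchers assemble stimulus lists while holding potential confounds (length, frequency, associative strength) constant.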

  2. SSWAP: A Simple Semantic Web Architecture and Protocol for Semantic Web Services

    Technology Transfer Automated Retrieval System (TEKTRAN)

    SSWAP (Simple Semantic Web Architecture and Protocol) is an architecture, protocol, and platform for using reasoning to semantically integrate heterogeneous disparate data and services on the web. SSWAP is the driving technology behind the Virtual Plant Information Network, an NSF-funded semantic w...

  3. Computing probability masses in rule-based systems

    SciTech Connect

    Dillard, R.A.

    1982-09-01

    This report describes a method of computing confidences in rule-based inference systems by using the Dempster-Shafer theory. The theory is applicable to tactical decision problems that can be formulated in terms of sets of exhaustive and mutually exclusive propositions. Dempster's combining procedure, a generalization of Bayesian inference, can be used to combine probability mass assignments supplied by independent bodies of evidence. This report describes the use of Dempster's combining method and Shafer's representation framework in rule-based inference systems. It is shown that many kinds of data fusion problems can be represented in a way such that the constraints are met. Although computational problems remain to be solved, the theory should provide a versatile and consistent way of combining confidences for a large class of inferencing problems.
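    Dempster's combining procedure itself is compact. A sketch in Python, with mass functions represented as dicts over frozenset focal elements; the two-sensor fusion example is illustrative:

```python
def combine(m1, m2):
    """Dempster's rule of combination for two basic probability assignments.

    Masses are dicts mapping frozenset focal elements to weights summing to 1.
    """
    combined, conflict = {}, 0.0
    for a, w1 in m1.items():
        for b, w2 in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + w1 * w2
            else:
                conflict += w1 * w2   # mass assigned to contradictory propositions
    k = 1.0 - conflict
    if k <= 0.0:
        raise ValueError("totally conflicting evidence cannot be combined")
    return {s: w / k for s, w in combined.items()}   # renormalize by 1 - conflict

# Two independent evidence sources over the exhaustive frame {hostile, friendly}
H, F = frozenset({"hostile"}), frozenset({"friendly"})
HF = H | F                       # mass on the whole frame represents ignorance
m1 = {H: 0.6, HF: 0.4}
m2 = {F: 0.5, HF: 0.5}
fused = combine(m1, m2)
# conflict = 0.6 * 0.5 = 0.3, so the surviving masses are renormalized by 0.7
print(fused)
```

    Note how each source can leave residual mass on the whole frame rather than forcing a prior over singletons; this explicit representation of ignorance is what distinguishes the approach from plain Bayesian updating.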

  4. Perspectives on the use of rule-based control

    NASA Technical Reports Server (NTRS)

    Handelman, David A.; Stengel, Robert F.

    1990-01-01

    Issues regarding the application of artificial intelligence techniques to real-time control are discussed. Advantages associated with knowledge-based programming are discussed. A proposed rule-based control technique is summarized and applied to the problem of automated aircraft emergency procedure execution. Although emergency procedures are by definition predominantly procedural, their numerous evaluation and decision points make a declarative representation of the knowledge they encode highly attractive, resulting in an organized and easily maintained software hierarchy. Simulation results demonstrate that real-time performance can be obtained using a microprocessor-based controller. It is concluded that a rule-based control system design approach may prove more useful than conventional methods under certain circumstances, and that declarative rules with embedded procedural code provide a sound basis for the construction of complex, yet economical, control systems.
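    The central design point - declarative rules with embedded procedural code - can be sketched as condition/action pairs scanned by a small engine. The emergency-procedure rules below are illustrative, not the paper's actual rule base:

```python
def run_pass(state, rules):
    """One scan of the rule base: fire every rule whose condition holds."""
    fired = []
    for name, condition, action in rules:
        if condition(state):
            action(state)      # embedded procedural code mutates the state
            fired.append(name)
    return fired

# Hypothetical engine-fire procedure encoded declaratively (illustrative only)
rules = [
    ("cut-fuel",
     lambda s: s["fire_warning"] and not s["fuel_cutoff"],
     lambda s: s.update(fuel_cutoff=True)),
    ("fire-extinguisher",
     lambda s: s["fire_warning"] and s["fuel_cutoff"] and not s["bottle_fired"],
     lambda s: s.update(bottle_fired=True)),
]

state = {"fire_warning": True, "fuel_cutoff": False, "bottle_fired": False}
print(run_pass(state, rules))   # conditions are re-read after each action,
                                # so both procedure steps fire in order
```

    Because each step is a separate rule, reordering or inserting checks means editing the rule list, not rewriting a monolithic procedure - the maintainability argument the paper makes for the declarative representation.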

  5. Dopaminergic Genetic Polymorphisms Predict Rule-Based Category Learning

    PubMed Central

    Byrne, Kaileigh A.; Davis, Tyler; Worthy, Darrell A.

    2016-01-01

    Dopaminergic genes play an important role in cognitive function. DRD2 and DARPP-32 dopamine receptor gene polymorphisms affect striatal dopamine binding potential, while the Val158Met single nucleotide polymorphism of the COMT gene moderates dopamine availability in the prefrontal cortex. Our study assesses the role of these gene polymorphisms on performance in two rule-based category learning tasks. Participants completed unidimensional and conjunctive rule-based tasks. In the unidimensional task, a rule along a single stimulus dimension can be used to distinguish category members. In contrast, a conjunctive rule utilizes a combination of two dimensions to distinguish category members. DRD2 C957T TT homozygotes outperformed C allele carriers on both tasks, and DARPP-32 AA homozygotes outperformed G allele carriers on both tasks. However, we found an interaction between COMT and task-type where Met allele carriers outperformed Val homozygotes in the conjunctive rule task, but both groups performed equally well in the unidimensional task. Thus, striatal dopamine binding may play a critical role in both types of rule-based tasks, while prefrontal dopamine binding is important for learning more complex conjunctive rule tasks. Modeling results suggest that striatal dopaminergic genes influence selective attention processes while cortical genes mediate the ability to update complex rule-representations. PMID:26918585

  6. Rule-based category use in preschool children.

    PubMed

    Mathy, Fabien; Friedman, Ori; Courenq, Brigitte; Laurent, Lucie; Millot, Jean-Louis

    2015-03-01

    We report two experiments suggesting that development of rule use in children can be predicted by applying metrics of complexity from studies of rule-based category learning in adults. In Experiment 1, 124 3- to 5-year-olds completed three new rule-use tasks. The tasks featured similar instructions but varied in the complexity of the rule structures that could be abstracted from the instructions. This measure of complexity predicted children's difficulty with the tasks. Children also completed a version of the Advanced Dimensional Change Card Sorting task. Although this task featured quite different instructions from those in our "complex" task, performance on these two tasks was correlated, as predicted by the rule-based category approach. Experiment 2 predicted findings of the relative difficulty of the three new tasks in 36 5-year-olds and also showed that response times varied with rule structure complexity. Together, these findings suggest that children's rule use depends on processes also involved in rule-based category learning. The findings likewise suggest that the development of rule use during childhood is protracted, and the findings bolster claims that some of children's difficulty in rule use stems from limits in their ability to represent complex rule structures. PMID:25463350

  7. A rule-based system for well log correlation

    SciTech Connect

    Startzman, R.A.; Kuo, T.B.

    1987-09-01

    Computer-assisted approaches to well log correlation are of interest to engineers and geologists for two reasons. In large field studies, a computer can be used simply to reduce the time required to correlate zones of interest. It is also possible that computer-assisted correlations may suggest zonal matches of interest and originality that might not have been considered. This paper presents a new approach to computer-assisted log correlation. Geologic horizons are correlated between wells by use of an artificial intelligence rule-based technique. Using the symbol-manipulation capabilities of the LISP (LISt Processing) computer language, the authors developed a prototype rule-based system that has a symbolic representation of log data, recognizes log shapes from traces, identifies geologic zones from a sequence of shapes in a log, characterizes the zones, correlates zones from well to well with a set of "if/then" rules, and uses a forward-chaining inference scheme to execute the rule base. The purpose of this paper is to describe the use of the LISP language and the methodology involved in the development of this system.
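    A forward-chaining inference scheme of the kind the abstract describes repeatedly fires if/then rules until no new facts can be derived. The toy below (in Python rather than the original LISP) uses invented geologic facts and rule contents purely to show the control structure.

```python
def forward_chain(facts, rules):
    """Fire if/then rules until no new facts can be derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            # A rule fires when all its premises are known facts.
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

# Invented rules loosely mirroring the pipeline: shape -> zone -> correlation.
rules = [
    ({"bell_shape"}, "fining_upward_zone"),
    ({"fining_upward_zone", "high_gamma"}, "shale_zone_well_A"),
    ({"shale_zone_well_A", "shale_zone_well_B"}, "correlated_zone"),
]

facts = forward_chain({"bell_shape", "high_gamma", "shale_zone_well_B"}, rules)
```

    Note how the correlation conclusion emerges from chaining: the shape rule enables the zone rule, which in turn enables the well-to-well correlation rule.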

  8. Syntactic processing in the absence of awareness and semantics.

    PubMed

    Hung, Shao-Min; Hsieh, Po-Jang

    2015-10-01

    The classical view that multistep rule-based operations require consciousness has recently been challenged by findings that both multiword semantic processing and multistep arithmetic equations can be processed unconsciously. It remains unclear, however, whether pure rule-based cognitive processes can occur unconsciously in the absence of semantics. Here, after presenting 2 words consciously, we suppressed the third with continuous flash suppression. First, we showed that the third word in the subject-verb-verb format (syntactically incongruent) broke suppression significantly faster than the third word in the subject-verb-object format (syntactically congruent). Crucially, the same effect was observed even with sentences composed of pseudowords (pseudo subject-verb-adjective vs. pseudo subject-verb-object) without any semantic information. This is the first study to show that syntactic congruency can be processed unconsciously in the complete absence of semantics. Our findings illustrate how abstract rule-based processing (e.g., syntactic categories) can occur in the absence of visual awareness, even when deprived of semantics.

  9. Individual differences in the joint effects of semantic priming and word frequency: The role of lexical integrity

    PubMed Central

    Yap, Melvin J.; Tse, Chi-Shing; Balota, David A.

    2009-01-01

    Word frequency and semantic priming effects are among the most robust effects in visual word recognition, and it has been generally assumed that these two variables produce interactive effects in lexical decision performance, with larger priming effects for low-frequency targets. The results from four lexical decision experiments indicate that the joint effects of semantic priming and word frequency are critically dependent upon differences in the vocabulary knowledge of the participants. Specifically, across two Universities, additive effects of the two variables were observed in participants with more vocabulary knowledge, while interactive effects were observed in participants with less vocabulary knowledge. These results are discussed with reference to Borowsky and Besner’s (1993) multistage account and Plaut and Booth’s (2000) single-mechanism model. In general, the findings are also consistent with a flexible lexical processing system that optimizes performance based on processing fluency and task demands. PMID:20161653

  10. Live Social Semantics

    NASA Astrophysics Data System (ADS)

    Alani, Harith; Szomszor, Martin; Cattuto, Ciro; van den Broeck, Wouter; Correndo, Gianluca; Barrat, Alain

    Social interactions are one of the key factors in the success of conferences and similar community gatherings. This paper describes a novel application that integrates data from the semantic web, online social networks, and a real-world contact sensing platform. This application was successfully deployed at ESWC09 and actively used by 139 people. Personal profiles of the participants were automatically generated using several Web 2.0 systems and semantic academic data sources, and integrated in real-time with face-to-face contact networks derived from wearable sensors. The integration of all these heterogeneous data layers made it possible to offer conference attendees various services that enhanced their social experience, such as the visualisation of contact data and a site to explore and connect with other participants. This paper describes the architecture of the application, the services we provided, and the results we achieved in this deployment.

  11. Fuzzylot: a novel self-organising fuzzy-neural rule-based pilot system for automated vehicles.

    PubMed

    Pasquier, M; Quek, C; Toh, M

    2001-10-01

    This paper presents part of our research work concerned with the realisation of an Intelligent Vehicle and the technologies required for its routing, navigation, and control. An automated driver prototype has been developed using a self-organising fuzzy rule-based system (POPFNN-CRI(S)) to model and subsequently emulate human driving expertise. The ability of fuzzy logic to represent vague information using linguistic variables makes it a powerful tool to develop rule-based control systems when an exact working model is not available, as is the case of any vehicle-driving task. Designing a fuzzy system, however, is a complex endeavour, due to the need to define the variables and their associated fuzzy sets, and determine a suitable rule base. Many efforts have thus been devoted to automating this process, yielding the development of learning and optimisation techniques. One of them is the family of POP-FNNs, or Pseudo-Outer Product Fuzzy Neural Networks (TVR, AARS(S), AARS(NS), CRI, Yager). These generic self-organising neural networks developed at the Intelligent Systems Laboratory (ISL/NTU) are based on formal fuzzy mathematical theory and are able to objectively extract a fuzzy rule base from training data. In this application, a driving simulator has been developed that integrates a detailed model of the car dynamics, complete with engine characteristics and environmental parameters, and an OpenGL-based 3D-simulation interface coupled with a driving wheel and accelerator/brake pedals. The simulator has been used on various road scenarios to record driving data from a human pilot, consisting of steering and speed-control actions associated with road features. Specifically, the POPFNN-CRI(S) system is used to cluster the data and extract a fuzzy rule base modelling the human driving behaviour. Finally, the effectiveness of the generated rule base has been validated using the simulator in autopilot mode. PMID:11681754
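    A fuzzy rule-based controller of the general kind described here maps crisp inputs to fuzzy memberships, fires linguistic rules, and defuzzifies. The fragment below is not the POPFNN-CRI(S) network; the membership shapes, rule contents, and variable names are all invented for illustration.

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def steer(offset):
    """Toy steering rule base. offset < 0: car left of lane centre; > 0: right."""
    drifted_left  = tri(offset, -2.0, -1.0, 0.0)
    drifted_right = tri(offset,  0.0,  1.0, 2.0)
    # Rules: IF drifted_left THEN steer right (+1); IF drifted_right THEN steer left (-1).
    # Defuzzify by the weighted average of the rule consequents.
    num = drifted_left * 1.0 + drifted_right * -1.0
    den = drifted_left + drifted_right
    return num / den if den else 0.0
```

    In the paper's setting, the fuzzy sets and rules are not hand-written like this but extracted automatically from the recorded human driving data.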

  13. Algorithms and semantic infrastructure for mutation impact extraction and grounding

    PubMed Central

    2010-01-01

    Background Mutation impact extraction is a hitherto unaccomplished task in state-of-the-art mutation extraction systems. Protein mutations and their impacts on protein properties are hidden in scientific literature, making them poorly accessible for protein engineers and inaccessible for phenotype-prediction systems that currently depend on manually curated genomic variation databases. Results We present the first rule-based approach for the extraction of mutation impacts on protein properties, categorizing their directionality as positive, negative or neutral. Furthermore, protein and mutation mentions are grounded to their respective UniProtKB IDs, and selected protein properties, namely protein functions, are grounded to concepts found in the Gene Ontology. The extracted entities populate an OWL-DL Mutation Impact ontology, facilitating complex querying for mutation impacts using SPARQL. We illustrate retrieval of proteins and mutant sequences for a given direction of impact on specific protein properties. Moreover, we provide programmatic access to the data through semantic web services using the SADI (Semantic Automated Discovery and Integration) framework. Conclusion We address the problem of access to legacy mutation data in unstructured form through the creation of novel mutation impact extraction methods, which are evaluated on a corpus of full-text articles on haloalkane dehalogenases, tagged by domain experts. Our approaches show state-of-the-art levels of precision and recall for Mutation Grounding, and a respectable level of precision but lower recall for the task of Mutant-Impact relation extraction. The system is deployed using text mining and semantic web technologies with the goal of publishing to a broad spectrum of consumers. PMID:21143808
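    A rule-based directionality classifier of the kind the abstract describes can be caricatured as pattern matching for a mutation mention plus lexical cues in the same sentence. The regex, cue lists, and function below are invented assumptions, not the paper's actual extraction grammar.

```python
import re

# Matches single-letter amino-acid substitution mentions such as D260N.
MUTATION = re.compile(r"\b[A-Z]\d+[A-Z]\b")
POSITIVE_CUES = ("increase", "enhance", "improve")
NEGATIVE_CUES = ("decrease", "reduce", "abolish", "impair")

def impact_direction(sentence):
    """Return 'positive', 'negative', 'neutral', or None (no mutation found)."""
    if not MUTATION.search(sentence):
        return None
    low = sentence.lower()
    if any(cue in low for cue in POSITIVE_CUES):
        return "positive"
    if any(cue in low for cue in NEGATIVE_CUES):
        return "negative"
    return "neutral"
```

    The real system goes further: it grounds the matched protein and mutation to UniProtKB IDs and links the affected property to a Gene Ontology concept before populating the ontology.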

  14. SemanticOrganizer Brings Teams Together

    NASA Technical Reports Server (NTRS)

    Laufenberg, Lawrence

    2003-01-01

    SemanticOrganizer enables researchers in different locations to share, search for, and integrate data. Its customizable semantic links offer fast access to interrelated information. This knowledge management and information integration tool also supports real-time instrument data collection and collaborative image annotation.

  15. Simulation of large-scale rule-based models

    SciTech Connect

    Hlavacek, William S; Monnie, Michael I; Colvin, Joshua; Faseder, James

    2008-01-01

    Interactions of molecules, such as signaling proteins, with multiple binding sites and/or multiple sites of post-translational covalent modification can be modeled using reaction rules. Rules comprehensively, but implicitly, define the individual chemical species and reactions that molecular interactions can potentially generate. Although rules can be automatically processed to define a biochemical reaction network, the network implied by a set of rules is often too large to generate completely or to simulate using conventional procedures. To address this problem, we present DYNSTOC, a general-purpose tool for simulating rule-based models. DYNSTOC implements a null-event algorithm for simulating chemical reactions in a homogeneous reaction compartment. The simulation method does not require that a reaction network be specified explicitly in advance, but rather takes advantage of the availability of the reaction rules in a rule-based specification of a network to determine if a randomly selected set of molecular components participates in a reaction during a time step. DYNSTOC reads reaction rules written in the BioNetGen language which is useful for modeling protein-protein interactions involved in signal transduction. The method of DYNSTOC is closely related to that of STOCHSIM. DYNSTOC differs from STOCHSIM by allowing for model specification in terms of BNGL, which extends the range of protein complexes that can be considered in a model. DYNSTOC enables the simulation of rule-based models that cannot be simulated by conventional methods. We demonstrate the ability of DYNSTOC to simulate models accounting for multisite phosphorylation and multivalent binding processes that are characterized by large numbers of reactions. DYNSTOC is free for non-commercial use. The C source code, supporting documentation and example input files are available at .
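    The null-event idea can be sketched in miniature: at each step, pick molecules at random and consult the rules; if no rule matches the selection, the step is a "null event" and nothing happens. The one-rule system below (A + B -> AB) and all names are invented; DYNSTOC itself is a C program operating on BioNetGen-language rules.

```python
import random

def simulate(n_a, n_b, p_bind, steps, seed=0):
    """Null-event sketch for the single rule A + B -> AB."""
    rng = random.Random(seed)
    a, b, ab = n_a, n_b, 0
    for _ in range(steps):
        pool = ["A"] * a + ["B"] * b + ["AB"] * ab
        if len(pool) < 2:
            break
        # Randomly select two molecules; the rule fires only if one is a
        # free A and the other a free B. Any other pick is a null event.
        x, y = rng.sample(pool, 2)
        if {x, y} == {"A", "B"} and rng.random() < p_bind:
            a, b, ab = a - 1, b - 1, ab + 1
    return a, b, ab
```

    The point of the method is that no reaction network is ever enumerated: the rule is consulted only for the molecules actually sampled at each step, which is what makes combinatorially large models tractable.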

  16. Using Eye Tracking to Investigate Semantic and Spatial Representations of Scientific Diagrams During Text-Diagram Integration

    NASA Astrophysics Data System (ADS)

    Jian, Yu-Cin; Wu, Chao-Jung

    2015-02-01

    We investigated the strategies readers use when reading a science article with a diagram, and assessed whether semantic and spatial representations were constructed while reading the diagram. Seventy-one undergraduate participants read a scientific article while their eye movements were tracked, and then completed a reading comprehension test. Our results showed that the text-diagram referencing strategy was commonly used. However, some readers adopted other reading strategies, such as reading the diagram or text first. We found that all readers who had referred to the diagram spent roughly the same amount of time reading and performed equally well. However, some participants who ignored the diagram performed more poorly on questions that tested understanding of basic facts. Dual coding theory offers a possible explanation for this result. Eye movement patterns indicated that at least some readers had extracted semantic information from the scientific terms when first looking at the diagram. Readers who read the scientific terms on the diagram first tended to spend less time looking at the same terms in the text, which they read afterwards. In addition, clear diagrams can help readers process both semantic and spatial information, thereby facilitating an overall understanding of the article. Although text-first and diagram-first readers spent similar total reading times on the text and diagram parts of the article, respectively, text-first readers made significantly fewer saccades between the text and the diagram than diagram-first readers did. This result might be explained as text-directed reading.

  17. Classification of Contaminated Sites Using a Fuzzy Rule Based System

    SciTech Connect

    Lemos, F.L. de; Van Velzen, K.; Ross, T.

    2006-07-01

    This paper presents the general framework of a multi-level model for managing contaminated sites that is being developed. A rule-based system, along with a scoring system for ranking sites for phase 1 ESA, is proposed (Level 1). Level 2, which consists of the consultant's recommendation based on their phase 1 ESA, is reasonably straightforward. Level 3, which consists of classifying sites on which a phase 2 ESA has already been conducted, will involve a multi-objective decision-making tool. Fuzzy set theory, which includes the concept of membership functions, was judged the best way to deal with uncertain and non-random information. (authors)

  18. Generative Semantics

    ERIC Educational Resources Information Center

    Bagha, Karim Nazari

    2011-01-01

    Generative semantics is (or perhaps was) a research program within linguistics, initiated by the work of George Lakoff, John R. Ross, Paul Postal and later McCawley. The approach developed out of transformational generative grammar in the mid 1960s, but stood largely in opposition to work by Noam Chomsky and his students. The nature and genesis of…

  19. Meaningful physical changes mediate lexical-semantic integration: top-down and form-based bottom-up information sources interact in the N400.

    PubMed

    Lotze, Netaya; Tune, Sarah; Schlesewsky, Matthias; Bornkessel-Schlesewsky, Ina

    2011-11-01

    Models of how the human brain reconstructs an intended meaning from a linguistic input often draw upon the N400 event-related potential (ERP) component as evidence. Current accounts of the N400 emphasise either the role of contextually induced lexical preactivation of a critical word (Lau, Phillips, & Poeppel, 2008) or the ease of integration into the overall discourse context including a wide variety of influencing factors (Hagoort & van Berkum, 2007). The present ERP study challenges both types of accounts by demonstrating a contextually independent and purely form-based bottom-up influence on the N400: the N400 effect for implausible sentence-endings was attenuated when the critical sentence-final word was capitalised (following a lowercase sentence context). By contrast, no N400 modulation occurred when the critical word involved a change from uppercase (sentence context) to lowercase. Thus, the N400 was only affected by a change to uppercase letters, as is often employed in computer-mediated communication as a sign of emphasis. This result indicates that N400 amplitude is reduced for unexpected words when a bottom-up (orthographic) cue signals that the word is likely to be highly informative. The lexical-semantic N400 thereby reflects the degree to which the semantic informativity of a critical word matches expectations, as determined by an interplay between top-down and bottom-up information sources, including purely form-based bottom-up information. PMID:21939678

  20. A semantic grid infrastructure enabling integrated access and analysis of multilevel biomedical data in support of postgenomic clinical trials on cancer.

    PubMed

    Tsiknakis, Manolis; Brochhausen, Mathias; Nabrzyski, Jarek; Pucacki, Juliusz; Sfakianakis, Stelios G; Potamias, George; Desmedt, Cristine; Kafetzopoulos, Dimitris

    2008-03-01

    This paper reports on original results of the Advancing Clinico-Genomic Trials on Cancer integrated project, focusing on the design and development of a European biomedical grid infrastructure in support of multicentric, postgenomic clinical trials (CTs) on cancer. Postgenomic CTs use multilevel clinical and genomic data and advanced computational analysis and visualization tools to test hypotheses in trying to identify the molecular reasons for a disease and the stratification of patients in terms of treatment. This paper presents the needs of users involved in postgenomic CTs in the form of scenarios, which drive the requirements engineering phase of the project. Subsequently, the initial architecture specified by the project is presented, and its services are classified and discussed. A key set of such services are those used for wrapping heterogeneous clinical trial management systems and other public biological databases. The main technological challenge, i.e., the design and development of semantically rich grid services, is also discussed. In achieving such an objective, extensive use of ontologies and metadata is required. The Master Ontology on Cancer, developed by the project, is presented, together with our approach to developing the required metadata registries, which provide semantically rich information about available data and computational services. Finally, a short discussion of the work lying ahead is included.

  1. Towards computerizing intensive care sedation guidelines: design of a rule-based architecture for automated execution of clinical guidelines

    PubMed Central

    2010-01-01

    Background Computerized ICUs rely on software services to convey the medical condition of their patients as well as assisting the staff in taking treatment decisions. Such services are useful for following clinical guidelines quickly and accurately. However, the development of services is often time-consuming and error-prone. Consequently, many care-related activities are still conducted based on manually constructed guidelines. These are often ambiguous, which leads to unnecessary variations in treatments and costs. The goal of this paper is to present a semi-automatic verification and translation framework capable of turning manually constructed diagrams into ready-to-use programs. This framework combines the strengths of the manual and service-oriented approaches while decreasing their disadvantages. The aim is to close the gap in communication between the IT and the medical domain. This leads to a less time-consuming and error-prone development phase and a shorter clinical evaluation phase. Methods A framework is proposed that semi-automatically translates a clinical guideline, expressed as an XML-based flow chart, into a Drools Rule Flow by employing semantic technologies such as ontologies and SWRL. An overview of the architecture is given and all the technology choices are thoroughly motivated. Finally, it is shown how this framework can be integrated into a service-oriented architecture (SOA). Results The applicability of the Drools Rule language to express clinical guidelines is evaluated by translating an example guideline, namely the sedation protocol used for the anaesthetization of patients, to a Drools Rule Flow and executing and deploying this Rule-based application as a part of a SOA. The results show that the performance of Drools is comparable to other technologies such as Web Services and increases with the number of decision nodes present in the Rule Flow. Most delays are introduced by loading the Rule Flows. Conclusions The framework is an

  2. g.infer: A GRASS GIS module for rule-based data-driven classification and workflow control.

    NASA Astrophysics Data System (ADS)

    Löwe, Peter

    2013-04-01

    This poster describes the internal architecture of the new GRASS GIS module g.infer [1] and demonstrates application scenarios. The new module for GRASS GIS Versions 6.x and 7.x enables rule-based analysis and workflow management via data-driven inference processes based on the C Language Integrated Production System (CLIPS) [2]. g.infer uses the pyClips module [3] to provide a Python-based environment for CLIPS within GRASS GIS for rule-based knowledge engineering. Application scenarios range from rule-based classification tasks and event-driven workflow control to complex simulations for tasks such as soil erosion monitoring and disaster early warning [4]. References: [1] Löwe P.: Introducing the new GRASS module g.infer for data-driven rule-based applications, Vol. 8 2012-08, Geoinformatics FCE CTU, ISSN 1802-2669 [2] http://clipsrules.sourceforge.net/ [3] http://pyclips.sourceforge.net/web/ [4] Löwe P.: A Spatial Decision Support System for Radar-meteorology Data in South Africa, Transactions in GIS 2004, (2): 235-244

  3. Rule-based OPC and MPC interaction for implant layers

    NASA Astrophysics Data System (ADS)

    Fu, Nan; Ning, Guoxiang; Werle, Florian; Roling, Stefan; Hecker, Sandra; Ackmann, Paul; Buergel, Christian

    2015-10-01

    Implant layers must cover both logic and SRAM devices with good fidelity even when feature density and pitch differ greatly. The coverage design rules of implant layers relative to the active layer can vary between SRAM and logic. Lithography targeting can therefore be problematic, since it may cause either over-exposure in the logic area or under-exposure in the SRAM area. Rule-based (RB) re-targeting of the problematic SRAM features compensates for the under-exposure in the SRAM area; however, global sizing in SRAM may introduce bridge issues, so selective targeting that communicates with the active layer is necessary. Another method is to achieve a different mean-to-nominal (MTN) in selected areas during the reticle process. Such implant wafer issues can also be resolved during the lithography and mask-optimized data-preparation flow, referred to as lithography-tolerance mask process correction (MPC). In this manuscript, this conventional issue, either over-exposure in the logic area or under-exposure in the bitcell area, is demonstrated. Selective rule-based re-targeting with respect to the active layer is also discussed, together with the improved wafer CDSEM data. The alternative method, achieving different mean-to-nominal values in different reticle areas, can be realized by lithography-tolerance MPC during the reticle process. Both methods are investigated, along with the trade-off between them, to improve the wafer uniformity and process margin of implant layers.
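    Selective rule-based re-targeting amounts to applying a sizing bias only to features that satisfy a rule (here: tagged as SRAM and narrower than a threshold), rather than globally. The function, field names, threshold, and bias below are invented for illustration and are not the paper's actual recipe.

```python
def retarget(features, sram_bias=2.0, width_limit=50.0):
    """Apply a sizing bias only to narrow SRAM features (toy rule)."""
    out = []
    for f in features:
        cd = f["cd"]  # critical dimension, arbitrary units
        if f["region"] == "sram" and cd < width_limit:
            cd += sram_bias  # compensate under-exposure in the dense SRAM area
        out.append({**f, "cd": cd})
    return out

result = retarget([
    {"region": "sram",  "cd": 45.0},
    {"region": "logic", "cd": 45.0},
])
```

    The selectivity is the whole point: a global bias would fix the SRAM under-exposure but over-expose the logic area, which is the trade-off the abstract describes.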

  4. ALC: automated reduction of rule-based models

    PubMed Central

    Koschorreck, Markus; Gilles, Ernst Dieter

    2008-01-01

    Background Combinatorial complexity is a challenging problem for the modeling of cellular signal transduction since the association of a few proteins can give rise to an enormous amount of feasible protein complexes. The layer-based approach is an approximative, but accurate method for the mathematical modeling of signaling systems with inherent combinatorial complexity. The number of variables in the simulation equations is highly reduced and the resulting dynamic models show a pronounced modularity. Layer-based modeling allows for the modeling of systems not accessible previously. Results ALC (Automated Layer Construction) is a computer program that highly simplifies the building of reduced modular models, according to the layer-based approach. The model is defined using a simple but powerful rule-based syntax that supports the concepts of modularity and macrostates. ALC performs consistency checks on the model definition and provides the model output in different formats (C MEX, MATLAB, Mathematica and SBML) as ready-to-run simulation files. ALC also provides additional documentation files that simplify the publication or presentation of the models. The tool can be used offline or via a form on the ALC website. Conclusion ALC allows for a simple rule-based generation of layer-based reduced models. The model files are given in different formats as ready-to-run simulation files. PMID:18973705

  5. Dynamic composition of semantic pathways for medical computational problem solving by means of semantic rules.

    PubMed

    Bratsas, Charalampos; Bamidis, Panagiotis; Kehagias, Dionisis D; Kaimakamis, Evangelos; Maglaveras, Nicos

    2011-03-01

    This paper presents a semantic rule-based system for the composition of successful algorithmic pathways capable of solving medical computational problems (MCPs). A subset of medical algorithms referring to MCP solving concerns well-known medical problems and their computational algorithmic solutions. These solutions result from computations within mathematical models aiming to enhance healthcare quality via support for diagnosis and treatment automation, especially useful for educational purposes. Currently, there is a plethora of computational algorithms on the web, which pertain to MCPs and provide all computational facilities required to solve a medical problem. An inherent requirement for the successful construction of algorithmic pathways for managing real medical cases is the composition of a sequence of computational algorithms. The aim of this paper is to approach the composition of such pathways via the design of appropriate finite-state machines (FSMs), the use of ontologies, and SWRL semantic rules. The goal of semantic rules is to automatically associate different algorithms that are represented as different states of the FSM in order to result in a successful pathway. The rule-based approach is herein implemented on top of Knowledge-Based System for Intelligent Computational Search in Medicine (KnowBaSICS-M), an ontology-based system for MCP semantic management. Preliminary results have shown that the proposed system adequately produces algorithmic pathways in agreement with current international medical guidelines. PMID:21335316
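    The FSM-based composition idea, treating each algorithm as a state and allowing a transition when one algorithm's output type matches the next one's input type, can be sketched as follows. The algorithm names and type labels are invented examples, and the greedy chaining below stands in for the paper's SWRL-driven association.

```python
# Invented registry: each algorithm declares its input and output types.
ALGORITHMS = {
    "bmi":        {"in": "height_weight", "out": "bmi_value"},
    "dose_calc":  {"in": "bmi_value",     "out": "drug_dose"},
    "risk_score": {"in": "drug_dose",     "out": "risk"},
}

def compose_pathway(start_input, goal_output):
    """Chain type-compatible algorithms from start_input to goal_output."""
    pathway, current = [], start_input
    while current != goal_output:
        step = next(
            (name for name, alg in ALGORITHMS.items() if alg["in"] == current),
            None,
        )
        if step is None:
            return None  # no successful pathway exists
        pathway.append(step)
        current = ALGORITHMS[step]["out"]
    return pathway
```

    In KnowBaSICS-M the matching is done semantically, via ontology concepts and SWRL rules rather than string-equal type labels, but the control flow is the same: each satisfied rule advances the FSM to the next algorithmic state.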

  7. Local Rule-Based Theory of Virus Shell Assembly

    NASA Astrophysics Data System (ADS)

    Berger, Bonnie; Shor, Peter W.; Tucker-Kellogg, Lisa; King, Jonathan

    1994-08-01

    A local rule-based theory is developed which shows that the self-assembly of icosahedral virus shells may depend only on the lower-level interactions of a protein subunit with its neighbors, i.e., on local rules rather than on larger structural building blocks. The local rule theory provides a framework for understanding the assembly of icosahedral viruses. These include both viruses that fall in the quasi-equivalence theory of Caspar and Klug and the polyoma virus structure, which violates quasi-equivalence and has puzzled researchers since it was first observed. Local rules are essentially templates for energetically favorable arrangements. The tolerance margins for these rules are investigated through computer simulations. When these tolerance margins are exceeded in a particular way, the result is a "spiraling" malformation that has been observed in nature.

  8. A high-level language for rule-based modelling.

    PubMed

    Pedersen, Michael; Phillips, Andrew; Plotkin, Gordon D

    2015-01-01

    Rule-based languages such as Kappa excel in their support for handling the combinatorial complexities prevalent in many biological systems, including signalling pathways. But Kappa provides little structure for organising rules, and large models can therefore be hard to read and maintain. This paper introduces a high-level, modular extension of Kappa called LBS-κ. We demonstrate the constructs of the language through examples and three case studies: a chemotaxis switch ring, a MAPK cascade, and an insulin signalling pathway. We then provide a formal definition of LBS-κ through an abstract syntax and a translation to plain Kappa. The translation is implemented in a compiler tool which is available as a web application. We finally demonstrate how to increase the expressivity of LBS-κ through embedded scripts in a general-purpose programming language, a technique which we view as generally applicable to other domain specific languages. PMID:26043208
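
    Kappa itself rewrites site-graphs of protein agents; as a far simpler stand-in, the rule-based modelling idea can be illustrated with a multiset rewrite in Python. This is not LBS-κ or Kappa syntax, only an analogy.

```python
# Toy illustration of rule-based modelling: a rule consumes reactant
# species and produces product species when it applies. Real Kappa rules
# act on site-graphs of agents, not plain multisets.
from collections import Counter

def apply_rule(mixture, reactants, products):
    """Fire a rule once if every reactant species is present."""
    mix = Counter(mixture)
    need = Counter(reactants)
    if all(mix[s] >= n for s, n in need.items()):
        mix.subtract(need)
        mix.update(Counter(products))
    return +mix  # unary + drops zero/negative counts

# E + S -> ES applied to a mixture with one enzyme and two substrates
result = apply_rule(["E", "S", "S"], ["E", "S"], ["ES"])
print(sorted(result.elements()))  # → ['ES', 'S']
```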

  9. Rule-based extrapolation: a continuing challenge for exemplar models.

    PubMed

    Denton, Stephen E; Kruschke, John K; Erickson, Michael A

    2008-08-01

    Erickson and Kruschke (1998, 2002) demonstrated that in rule-plus-exception categorization, people generalize category knowledge by extrapolating in a rule-like fashion, even when they are presented with a novel stimulus that is most similar to a known exception. Although exemplar models have been found to be deficient in explaining rule-based extrapolation, Rodrigues and Murre (2007) offered a variation of an exemplar model that was better able to account for such performance. Here, we present the results of a new rule-plus-exception experiment that yields rule-like extrapolation similar to that of previous experiments, and yet the data are not accounted for by Rodrigues and Murre's augmented exemplar model. Further, a hybrid rule-and-exemplar model is shown to better describe the data. Thus, we maintain that rule-plus-exception categorization continues to be a challenge for exemplar-only models. PMID:18792504
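
    The hybrid rule-and-exemplar idea can be caricatured in a few lines. This is a hedged sketch, not the authors' quantitative model; the boundary, exception radius, and stimuli are invented.

```python
# Sketch of a hybrid classifier: a 1-D boundary rule decides by default,
# but stored exception exemplars override it for nearby stimuli.

def hybrid_classify(x, boundary=0.5, exceptions=(), radius=0.1):
    """Rule: label 'A' left of the boundary, 'B' right of it.
    Exception exemplars (position, label) capture nearby stimuli."""
    for pos, label in exceptions:
        if abs(x - pos) <= radius:
            return label                      # exemplar route: exception wins
    return "A" if x < boundary else "B"       # rule route: extrapolate

exceptions = [(0.2, "B")]                     # an exception inside the 'A' region
print(hybrid_classify(0.21, exceptions=exceptions))  # → B  (exemplar)
print(hybrid_classify(0.9, exceptions=exceptions))   # → B  (rule)
print(hybrid_classify(0.05, exceptions=exceptions))  # → A  (rule extrapolates)
```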

  11. A Rule-Based Industrial Boiler Selection System

    NASA Astrophysics Data System (ADS)

    Tan, C. F.; Khalil, S. N.; Karjanto, J.; Tee, B. T.; Wahidin, L. S.; Chen, W.; Rauterberg, G. W. M.; Sivarao, S.; Lim, T. L.

    2015-09-01

    Boiler is a device used for generating steam for power generation, process use, or heating, and hot water for heating purposes. A steam boiler consists of the containing vessel and convection heating surfaces only, whereas a steam generator covers the whole unit, encompassing water wall tubes, superheaters, air heaters, and economizers. The selection of the boiler is very important for the industry to conduct its operations successfully. The selection criteria are based on a rule-based expert system and a multi-criteria weighted-average method. The developed system consists of a Knowledge Acquisition Module, a Boiler Selection Module, a User Interface Module, and a Help Module. The system is capable of selecting a suitable boiler based on the weighted criteria. The main benefit of using the system is that it reduces the complexity of the decision making involved in selecting the most appropriate boiler for a palm oil process plant.
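
    The multi-criteria weighted-average step can be sketched as follows; the criteria, weights, boiler types, and ratings below are invented for illustration, and the rule base itself is not reproduced.

```python
# Sketch: score each candidate boiler as the weighted average of its
# criterion ratings and pick the best. All names and numbers hypothetical.

def select_boiler(candidates, weights):
    def score(ratings):
        total_w = sum(weights.values())
        return sum(weights[c] * ratings[c] for c in weights) / total_w
    return max(candidates, key=lambda name: score(candidates[name]))

weights = {"capacity": 0.5, "efficiency": 0.3, "cost": 0.2}
candidates = {
    "water_tube": {"capacity": 9, "efficiency": 8, "cost": 4},  # score 7.7
    "fire_tube":  {"capacity": 5, "efficiency": 6, "cost": 9},  # score 6.1
}
print(select_boiler(candidates, weights))  # → water_tube
```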

  12. Rule-based category learning in Down syndrome.

    PubMed

    Phillips, B Allyson; Conners, Frances A; Merrill, Edward; Klinger, Mark R

    2014-05-01

    Rule-based category learning was examined in youths with Down syndrome (DS), youths with intellectual disability (ID), and typically developing (TD) youths. Two tasks measured category learning: the Modified Card Sort task (MCST) and the Concept Formation test of the Woodcock-Johnson III (Woodcock, McGrew, & Mather, 2001). In regression-based analyses, DS and ID groups performed below the level expected for their nonverbal ability. In cross-sectional developmental trajectory analyses, results depended on the task. On the MCST, the DS and ID groups were similar to the TD group. On the Concept Formation test, the DS group had slower cross-sectional change than the other 2 groups. Category learning may be an area of difficulty for those with ID, but task-related factors may affect trajectories for youths with DS.

  13. Genetic learning in rule-based and neural systems

    NASA Technical Reports Server (NTRS)

    Smith, Robert E.

    1993-01-01

    The design of neural networks and fuzzy systems can involve complex, nonlinear, and ill-conditioned optimization problems. Often, traditional optimization schemes are inadequate or inapplicable for such tasks. Genetic Algorithms (GA's) are a class of optimization procedures whose mechanics are based on those of natural genetics. Mathematical arguments show how GAs bring substantial computational leverage to search problems, without requiring the mathematical characteristics often necessary for traditional optimization schemes (e.g., modality, continuity, availability of derivative information, etc.). GA's have proven effective in a variety of search tasks that arise in neural networks and fuzzy systems. This presentation begins by introducing the mechanics and theoretical underpinnings of GA's. GA's are then related to a class of rule-based machine learning systems called learning classifier systems (LCS's). An LCS implements a low-level production-system that uses a GA as its primary rule discovery mechanism. This presentation illustrates how, despite its rule-based framework, an LCS can be thought of as a competitive neural network. Neural network simulator code for an LCS is presented. In this context, the GA is doing more than optimizing an objective function. It is searching for an ecology of hidden nodes with limited connectivity. The GA attempts to evolve this ecology such that effective neural network performance results. The GA is particularly well adapted to this task, given its naturally-inspired basis. The LCS/neural network analogy extends itself to other, more traditional neural networks. Conclusions to the presentation discuss the implications of using GA's in ecological search problems that arise in neural and fuzzy systems.
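
    As a self-contained illustration of the GA mechanics mentioned (selection, crossover, mutation), here is a minimal GA on the standard OneMax toy problem. The parameters are arbitrary, and this is not the LCS simulator code the presentation discusses.

```python
# Minimal GA sketch: tournament selection, one-point crossover, and bit-flip
# mutation evolve bit strings toward all ones (OneMax).
import random

def genetic_algorithm(n_bits=20, pop_size=30, generations=60, seed=1):
    rng = random.Random(seed)
    fitness = sum                                   # OneMax: count the 1 bits
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(generations):
        def pick():                                 # tournament of size 3
            return max(rng.sample(pop, 3), key=fitness)
        nxt = []
        for _ in range(pop_size):
            a, b = pick(), pick()
            cut = rng.randrange(1, n_bits)          # one-point crossover
            child = a[:cut] + b[cut:]
            if rng.random() < 0.05:                 # occasional mutation
                i = rng.randrange(n_bits)
                child[i] ^= 1
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)

best = genetic_algorithm()
print(sum(best), "of 20 bits set")
```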

  14. Fuzzy-rule-based image reconstruction for positron emission tomography

    NASA Astrophysics Data System (ADS)

    Mondal, Partha P.; Rajan, K.

    2005-09-01

    Positron emission tomography (PET) and single-photon emission computed tomography have revolutionized the field of medicine and biology. Penalized iterative algorithms based on maximum a posteriori (MAP) estimation eliminate noisy artifacts by utilizing available prior information in the reconstruction process but often result in a blurring effect. MAP-based algorithms fail to determine the density class in the reconstructed image and hence penalize the pixels irrespective of the density class. Reconstruction with better edge information is often difficult because prior knowledge is not taken into account. The recently introduced median-root-prior (MRP)-based algorithm preserves the edges, but a steplike streaking effect is observed in the reconstructed image, which is undesirable. A fuzzy approach is proposed for modeling the nature of interpixel interaction in order to build an artifact-free edge-preserving reconstruction. The proposed algorithm consists of two elementary steps: (1) edge detection, in which fuzzy-rule-based derivatives are used for the detection of edges in the nearest neighborhood window (which is equivalent to recognizing nearby density classes), and (2) fuzzy smoothing, in which penalization is performed only for those pixels for which no edge is detected in the nearest neighborhood. Both of these operations are carried out iteratively until the image converges. Analysis shows that the proposed fuzzy-rule-based reconstruction algorithm is capable of producing qualitatively better reconstructed images than those reconstructed by MAP and MRP algorithms. The reconstructed images are sharper, with small features being better resolved owing to the nature of the fuzzy potential function.
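
    The two-step idea (detect edges, then smooth only away from them) can be shown on a toy 1-D signal. This is not the PET reconstruction algorithm itself; the edge threshold and the simple moving-average smoother are assumptions for illustration.

```python
# Sketch: neighbour differences flag edges; pixels with no nearby edge are
# smoothed, so the step between the two plateaus is preserved.

def fuzzy_smooth(pixels, edge_threshold=5.0):
    out = list(pixels)
    for i in range(1, len(pixels) - 1):
        left = pixels[i] - pixels[i - 1]
        right = pixels[i + 1] - pixels[i]
        edge = abs(left) > edge_threshold or abs(right) > edge_threshold
        if not edge:  # penalize/smooth only pixels away from detected edges
            out[i] = (pixels[i - 1] + pixels[i] + pixels[i + 1]) / 3.0
    return out

flat_then_step = [10.0, 10.0, 11.0, 30.0, 30.0, 31.0]
print(fuzzy_smooth(flat_then_step))  # edge at index 2-3 is left untouched
```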

  15. Timescale analysis of rule-based biochemical reaction networks

    PubMed Central

    Klinke, David J.; Finley, Stacey D.

    2012-01-01

    The flow of information within a cell is governed by a series of protein-protein interactions that can be described as a reaction network. Mathematical models of biochemical reaction networks can be constructed by repetitively applying specific rules that define how reactants interact and what new species are formed upon reaction. To aid in understanding the underlying biochemistry, timescale analysis is one method developed to prune the size of the reaction network. In this work, we extend the methods associated with timescale analysis to reaction rules instead of the species contained within the network. To illustrate this approach, we applied timescale analysis to a simple receptor-ligand binding model and a rule-based model of Interleukin-12 (IL-12) signaling in naïve CD4+ T cells. The IL-12 signaling pathway includes multiple protein-protein interactions that collectively transmit information; however, the level of mechanistic detail sufficient to capture the observed dynamics has not been justified based upon the available data. The analysis correctly predicted that reactions associated with JAK2 and TYK2 binding to their corresponding receptor exist at a pseudo-equilibrium. In contrast, reactions associated with ligand binding and receptor turnover regulate cellular response to IL-12. An empirical Bayesian approach was used to estimate the uncertainty in the timescales. This approach complements existing rank- and flux-based methods that can be used to interrogate complex reaction networks. Ultimately, timescale analysis of rule-based models is a computational tool that can be used to reveal the biochemical steps that regulate signaling dynamics. PMID:21954150
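
    The core pruning idea can be sketched by comparing each rule's characteristic timescale 1/k with the observation window. The rate constants below are invented for illustration, not the paper's IL-12 model parameters.

```python
# Sketch: a rule whose timescale 1/k is much shorter than the observation
# window is flagged as pseudo-equilibrated; slower rules stay dynamic.

def classify_timescales(rates, t_obs, separation=100.0):
    """rates: {rule: first-order rate constant in 1/s}.
    A rule is at pseudo-equilibrium if its timescale 1/k is at least
    `separation` times shorter than t_obs."""
    return {
        rule: "pseudo-equilibrium" if (1.0 / k) * separation <= t_obs else "dynamic"
        for rule, k in rates.items()
    }

rates = {"JAK2_binding": 10.0, "TYK2_binding": 5.0, "ligand_binding": 0.001}
print(classify_timescales(rates, t_obs=600.0))
```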

  16. A Programme for Semantics; Semantics and Its Critics; Semantics Shamantics.

    ERIC Educational Resources Information Center

    Goldstein, Laurence; Harris, Roy

    1990-01-01

    In a statement-response-reply format, a proposition concerning the study of semantics is made and debated in three papers by two authors. In the first paper, it is proposed that semantics is not the study of the concept of meaning, but rather a neurolinguistic issue, despite the fact that semantics is linked to context. It is argued that semantic…

  17. Semantically aided interpretation and querying of Jefferson Project data using the SemantEco framework

    NASA Astrophysics Data System (ADS)

    Patton, E. W.; Pinheiro, P.; McGuinness, D. L.

    2014-12-01

    We will describe the benefits we realized using semantic technologies to address the often challenging and resource intensive task of ontology alignment in service of data integration. Ontology alignment became relatively simple as we reused our existing semantic data integration framework, SemantEco. We work in the context of the Jefferson Project (JP), an effort to monitor and predict the health of Lake George in NY by deploying a large-scale sensor network in the lake, and analyzing the high-resolution sensor data. SemantEco is an open-source framework for building semantically-aware applications to assist users, particularly non-experts, in exploration and interpretation of integrated scientific data. SemantEco applications are composed of a set of modules that incorporate new datasets, extend the semantic capabilities of the system to integrate and reason about data, and provide facets for extending or controlling semantic queries. Whereas earlier SemantEco work focused on integration of water, air, and species data from government sources, we focus on redeploying it to provide a provenance-aware, semantic query and interpretation interface for JP's sensor data. By employing a minor alignment between SemantEco's ontology and the Human-Aware Sensor Network Ontology used to model the JP's sensor deployments, we were able to bring SemantEco's capabilities to bear on the JP sensor data and metadata. This alignment enabled SemantEco to perform the following tasks: (1) select JP datasets related to water quality; (2) understand how the JP's notion of water quality relates to water quality concepts in previous work; and (3) reuse existing SemantEco interactive data facets, e.g. maps and time series visualizations, and modules, e.g. the regulation module that interprets water quality data through the lens of various federal and state regulations. Semantic technologies, both as the engine driving SemantEco and the means of modeling the JP data, enabled us to rapidly

  18. On Decision-Making Among Multiple Rule-Bases in Fuzzy Control Systems

    NASA Technical Reports Server (NTRS)

    Tunstel, Edward; Jamshidi, Mo

    1997-01-01

    Intelligent control of complex multi-variable systems can be a challenge for single fuzzy rule-based controllers. This class of problems can often be managed with less difficulty by distributing intelligent decision-making amongst a collection of rule-bases. Such an approach requires that a mechanism be chosen to ensure goal-oriented interaction between the multiple rule-bases. In this paper, a hierarchical rule-based approach is described. Decision-making mechanisms based on generalized concepts from single-rule-based fuzzy control are described. Finally, the effects of different aggregation operators on multi-rule-base decision-making are examined in a navigation control problem for mobile robots.
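
    The effect of the aggregation operator can be seen in a toy decision between steering commands. The support values are invented for illustration; this is not the paper's robot controller.

```python
# Sketch: several rule-bases each emit a degree of support for a command;
# an aggregation operator combines them before the final decision.

def aggregate(supports, operator="min"):
    ops = {
        "min": min,                        # conservative (fuzzy AND / t-norm)
        "max": max,                        # optimistic (fuzzy OR / t-conorm)
        "avg": lambda v: sum(v) / len(v),  # compromise
    }
    return ops[operator](supports)

def decide(rule_base_outputs, operator="min"):
    """rule_base_outputs: {command: [support from each rule-base]}."""
    return max(rule_base_outputs,
               key=lambda c: aggregate(rule_base_outputs[c], operator))

outputs = {"turn_left": [0.8, 0.3], "go_straight": [0.6, 0.7]}
print(decide(outputs, "min"))   # → go_straight (0.6 beats 0.3)
print(decide(outputs, "max"))   # → turn_left   (0.8 beats 0.7)
```

The choice of operator changes the winner even though the rule-base outputs are identical, which is the point the abstract makes.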

  19. Rule-based deduplication of article records from bibliographic databases.

    PubMed

    Jiang, Yu; Lin, Can; Meng, Weiyi; Yu, Clement; Cohen, Aaron M; Smalheiser, Neil R

    2014-01-01

    We recently designed and deployed a metasearch engine, Metta, that sends queries and retrieves search results from five leading biomedical databases: PubMed, EMBASE, CINAHL, PsycINFO and the Cochrane Central Register of Controlled Trials. Because many articles are indexed in more than one of these databases, it is desirable to deduplicate the retrieved article records. This is not a trivial problem because data fields contain a lot of missing and erroneous entries, and because certain types of information are recorded differently (and inconsistently) in the different databases. The present report describes our rule-based method for deduplicating article records across databases and includes an open-source script module that can be deployed freely. Metta was designed to satisfy the particular needs of people who are writing systematic reviews in evidence-based medicine. These users want the highest possible recall in retrieval, so it is important to err on the side of not deduplicating any records that refer to distinct articles, and it is important to perform deduplication online in real time. Our deduplication module is designed with these constraints in mind. Articles that share the same publication year are compared sequentially on parameters including PubMed ID number, digital object identifier, journal name, article title and author list, using text approximation techniques. In a review of Metta searches carried out by public users, we found that the deduplication module was more effective at identifying duplicates than EndNote without making any erroneous assignments.
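
    The rule cascade can be approximated as follows. The field names and the title-similarity threshold are assumptions for illustration, not Metta's actual rules.

```python
# Sketch: records sharing a publication year are compared on exact
# identifiers first, then on approximate title match, erring on the side
# of NOT merging (high recall for systematic reviews).
from difflib import SequenceMatcher

def same_article(a, b, title_threshold=0.9):
    if a.get("year") != b.get("year"):
        return False                      # only same-year records compared
    for key in ("pmid", "doi"):           # exact identifier agreement decides
        if a.get(key) and b.get(key):
            return a[key] == b[key]
    sim = SequenceMatcher(None, a["title"].lower(), b["title"].lower()).ratio()
    return sim >= title_threshold

r1 = {"year": 2014, "doi": "10.1/x", "title": "Rule-based deduplication of records"}
r2 = {"year": 2014, "doi": "10.1/x", "title": "Rule based deduplication of records."}
r3 = {"year": 2014, "title": "Fuzzy control of mobile robots"}
print(same_article(r1, r2), same_article(r1, r3))  # → True False
```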

  20. A Novel Rules Based Approach for Estimating Software Birthmark

    PubMed Central

    Nazir, Shah; Shahzad, Sara; Khan, Sher Afzal; Alias, Norma Binti; Anwar, Sajid

    2015-01-01

    Software birthmark is a unique quality of software to detect software theft. Comparing birthmarks of software can tell us whether a program or software is a copy of another. Software theft and piracy are rapidly increasing problems of copying, stealing, and misusing the software without proper permission, as mentioned in the desired license agreement. The estimation of birthmark can play a key role in understanding the effectiveness of a birthmark. In this paper, a new technique is presented to evaluate and estimate software birthmark based on the two most sought-after properties of birthmarks, that is, credibility and resilience. For this purpose, the concept of soft computing such as probabilistic and fuzzy computing has been taken into account and fuzzy logic is used to estimate properties of birthmark. The proposed fuzzy rule based technique is validated through a case study and the results show that the technique is successful in assessing the specified properties of the birthmark, its resilience and credibility. This, in turn, shows how much effort will be required to detect the originality of the software based on its birthmark. PMID:25945363
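
    The fuzzy estimation idea can be sketched with a single rule combining the two properties via a fuzzy AND. The membership functions and their parameters are assumptions, not the paper's rule base.

```python
# Sketch: credibility and resilience scores in [0, 1] are mapped through a
# triangular membership function; quality is high only when BOTH are high.

def triangular(x, a, b, c):
    """Triangular membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def birthmark_quality(credibility, resilience):
    high_c = triangular(credibility, 0.5, 1.0, 1.5)
    high_r = triangular(resilience, 0.5, 1.0, 1.5)
    return min(high_c, high_r)            # min = fuzzy AND

print(round(birthmark_quality(0.9, 0.7), 2))  # → 0.4
```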

  2. A fuzzy rule based framework for noise annoyance modeling.

    PubMed

    Botteldooren, Dick; Verkeyn, Andy; Lercher, Peter

    2003-09-01

    Predicting the effect of noise on individual people and small groups is an extremely difficult task due to the influence of a multitude of factors that vary from person to person and from context to context. Moreover, noise annoyance is inherently a vague concept. That is why, in this paper, it is argued that noise annoyance models should identify a fuzzy set of possible effects rather than seek a very accurate crisp prediction. Fuzzy rule based models seem ideal candidates for this task. This paper provides the theoretical background for building these models. Existing empirical knowledge is used to extract a few typical rules that allow making the model more specific for small groups of individuals. The resulting model is tested on two large-scale social surveys augmented with exposure simulations. The testing demonstrates how this new way of thinking about noise effect modeling can be used in practice both in management support as a "noise annoyance adviser" and in social science for testing hypotheses such as the effect of noise sensitivity or the degree of urbanization.

  3. Semantic search among heterogeneous biological databases based on gene ontology.

    PubMed

    Cao, Shun-Liang; Qin, Lei; He, Wei-Zhong; Zhong, Yang; Zhu, Yang-Yong; Li, Yi-Xue

    2004-05-01

    Semantic search is a key issue in integration of heterogeneous biological databases. In this paper, we present a methodology for implementing semantic search in BioDW, an integrated biological data warehouse. Two tables are presented: the DB2GO table to correlate Gene Ontology (GO) annotated entries from BioDW data sources with GO, and the semantic similarity table to record similarity scores derived from any pair of GO terms. Based on the two tables, multifarious ways for semantic search are provided and the corresponding entries in heterogeneous biological databases in semantic terms can be expediently searched.
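
    The role of the two tables can be illustrated with a toy lookup. The GO term IDs below are real, but the similarity scores and database entries are invented for illustration.

```python
# Sketch: a semantic-similarity table plus a DB2GO-style annotation table
# let a query term retrieve entries annotated with semantically close terms.

SIMILARITY = {                                # symmetric GO-term similarity
    ("GO:0006915", "GO:0012501"): 0.9,        # apoptosis ~ programmed cell death
    ("GO:0006915", "GO:0007049"): 0.2,        # apoptosis ~ cell cycle
}
DB2GO = {"entryA": "GO:0012501", "entryB": "GO:0007049"}

def sim(t1, t2):
    if t1 == t2:
        return 1.0
    return SIMILARITY.get((t1, t2)) or SIMILARITY.get((t2, t1), 0.0)

def semantic_search(query_term, threshold=0.8):
    return sorted(e for e, t in DB2GO.items() if sim(query_term, t) >= threshold)

print(semantic_search("GO:0006915"))  # → ['entryA']
```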

  4. Rule-based Cross-matching of Very Large Catalogs

    NASA Astrophysics Data System (ADS)

    Ogle, P. M.; Mazzarella, J.; Ebert, R.; Fadda, D.; Lo, T.; Terek, S.; Schmitz, M.; NED Team

    2015-09-01

    The NASA Extragalactic Database (NED) has deployed a new rule-based cross-matching algorithm called Match Expert (MatchEx), capable of cross-matching very large catalogs (VLCs) with >10 million objects. MatchEx goes beyond traditional position-based cross-matching algorithms by using other available data together with expert logic to determine which candidate match is the best. Furthermore, the local background density of sources is used to determine and minimize the false-positive match rate and to estimate match completeness. The logical outcome and statistical probability of each match decision is stored in the database and may be used to tune the algorithm and adjust match parameter thresholds. For our first production run, we cross-matched the GALEX All Sky Survey Catalog (GASC), containing nearly 40 million NUV-detected sources, against a directory of 180 million objects in NED. Candidate matches were identified for each GASC source within a 7''.5 radius. These candidates were filtered on position-based matching probability and on other criteria including object type and object name. We estimate a match completeness of 97.6% and a match accuracy of 99.75%. Over the next year, we will be cross-matching over 2 billion catalog sources to NED, including the Spitzer Source List, the 2MASS point-source catalog, AllWISE, and SDSS DR 10. We expect to add new capabilities to filter candidate matches based on photometry, redshifts, and refined object classifications. We will also extend MatchEx to handle more heterogeneous datasets federated from smaller catalogs through NED's literature pipeline.
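
    A greatly simplified version of position-plus-rule filtering can be sketched as follows. This is not MatchEx itself; the veto rule, fields, and small-angle separation approximation are assumptions for illustration.

```python
# Sketch: candidates inside the search radius are ranked by angular
# separation, after expert-style rules veto implausible object types.
import math

def ang_sep_arcsec(ra1, dec1, ra2, dec2):
    """Small-angle separation in arcsec for nearby coordinates (degrees)."""
    dra = (ra1 - ra2) * math.cos(math.radians((dec1 + dec2) / 2))
    ddec = dec1 - dec2
    return math.hypot(dra, ddec) * 3600.0

def best_match(source, candidates, radius=7.5, vetoed_types=("artifact",)):
    def sep(c):
        return ang_sep_arcsec(source["ra"], source["dec"], c["ra"], c["dec"])
    ok = [c for c in candidates
          if c["type"] not in vetoed_types and sep(c) <= radius]
    return min(ok, key=sep, default=None)

src = {"ra": 150.000, "dec": 2.200}
cands = [
    {"name": "G1", "ra": 150.0005, "dec": 2.2002, "type": "galaxy"},
    {"name": "A1", "ra": 150.0001, "dec": 2.2000, "type": "artifact"},
]
print(best_match(src, cands)["name"])  # → G1 (closer A1 is vetoed by type)
```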

  5. Neural substrates of similarity and rule-based strategies in judgment

    PubMed Central

    von Helversen, Bettina; Karlsson, Linnea; Rasch, Björn; Rieskamp, Jörg

    2014-01-01

    Making accurate judgments is a core human competence and a prerequisite for success in many areas of life. Plenty of evidence exists that people can employ different judgment strategies to solve identical judgment problems. In categorization, it has been demonstrated that similarity-based and rule-based strategies are associated with activity in different brain regions. Building on this research, the present work tests whether solving two identical judgment problems recruits different neural substrates depending on people's judgment strategies. Combining cognitive modeling of judgment strategies at the behavioral level with functional magnetic resonance imaging (fMRI), we compare brain activity when using two archetypal judgment strategies: a similarity-based exemplar strategy and a rule-based heuristic strategy. Using an exemplar-based strategy should recruit areas involved in long-term memory processes to a larger extent than a heuristic strategy. In contrast, using a heuristic strategy should recruit areas involved in the application of rules to a larger extent than an exemplar-based strategy. Largely consistent with our hypotheses, we found that using an exemplar-based strategy led to relatively higher BOLD activity in the anterior prefrontal and inferior parietal cortex, presumably related to retrieval and selective attention processes. In contrast, using a heuristic strategy led to relatively higher activity in areas in the dorsolateral prefrontal and the temporal-parietal cortex associated with cognitive control and information integration. Thus, even when people solve identical judgment problems, different neural substrates can be recruited depending on the judgment strategy involved. PMID:25360099

  7. The development of co-speech gesture and its semantic integration with speech in 6- to 12-year-old children with autism spectrum disorders.

    PubMed

    So, Wing-Chee; Wong, Miranda Kit-Yi; Lui, Ming; Yip, Virginia

    2015-11-01

    Previous work leaves open the question of whether children with autism spectrum disorders aged 6-12 years have delay in producing gestures compared to their typically developing peers. This study examined gestural production among school-aged children in a naturalistic context and how their gestures are semantically related to the accompanying speech. Delay in gestural production was found in children with autism spectrum disorders through their middle to late childhood. Compared to their typically developing counterparts, children with autism spectrum disorders gestured less often and used fewer types of gestures, in particular markers, which carry culture-specific meaning. Typically developing children's gestural production was related to language and cognitive skills, but among children with autism spectrum disorders, gestural production was more strongly related to the severity of socio-communicative impairment. Gesture impairment also included the failure to integrate speech with gesture: in particular, supplementary gestures are absent in children with autism spectrum disorders. The findings extend our understanding of gestural production in school-aged children with autism spectrum disorders during spontaneous interaction. The results can help guide new therapies for gestural production for children with autism spectrum disorders in middle and late childhood.

  8. A Semantic Approach with Decision Support for Safety Service in Smart Home Management.

    PubMed

    Huang, Xiaoci; Yi, Jianjun; Zhu, Xiaomin; Chen, Shaoli

    2016-01-01

    Research on smart homes (SHs) has increased significantly in recent years because of the convenience provided by having an assisted living environment. The functions of SHs as mentioned in previous studies, particularly safety services, are seldom discussed or mentioned. Thus, this study proposes a semantic approach with decision support for safety service in SH management. The focus of this contribution is to explore a context awareness and reasoning approach for risk recognition in SH that enables the proper decision support for flexible safety service provision. The framework of SH based on a wireless sensor network is described from the perspective of neighbourhood management. This approach is based on the integration of semantic knowledge in which a reasoner can make decisions about risk recognition and safety service. We present a management ontology for a SH and relevant monitoring contextual information, which considers its suitability in a pervasive computing environment and is service-oriented. We also propose a rule-based reasoning method to provide decision support through reasoning techniques and context-awareness. A system prototype is developed to evaluate the feasibility, time response and extendibility of the approach. The evaluation of our approach shows that it is more effective in daily risk event recognition. The decisions for service provision are shown to be accurate. PMID:27527170
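
    The rule-based risk-recognition step can be sketched as plain if-then rules over a sensor context. The rules and thresholds below are invented for illustration, not taken from the paper's ontology.

```python
# Sketch: each rule pairs a condition over current sensor readings with a
# risk label; every rule whose condition fires contributes a recognized risk.

def recognize_risks(context):
    rules = [
        (lambda c: c["smoke"] and c["temperature_c"] > 60, "fire"),
        (lambda c: c["gas_ppm"] > 1000, "gas_leak"),
        (lambda c: c["door_open"] and not c["occupant_home"], "intrusion"),
    ]
    return [risk for condition, risk in rules if condition(context)]

context = {"smoke": True, "temperature_c": 75, "gas_ppm": 120,
           "door_open": True, "occupant_home": False}
print(recognize_risks(context))  # → ['fire', 'intrusion']
```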

  9. A Semantic Approach with Decision Support for Safety Service in Smart Home Management

    PubMed Central

    Huang, Xiaoci; Yi, Jianjun; Zhu, Xiaomin; Chen, Shaoli

    2016-01-01

    Research on smart homes (SHs) has increased significantly in recent years because of the convenience provided by having an assisted living environment. The functions of SHs as mentioned in previous studies, particularly safety services, are seldom discussed or mentioned. Thus, this study proposes a semantic approach with decision support for safety service in SH management. The focus of this contribution is to explore a context awareness and reasoning approach for risk recognition in SH that enables the proper decision support for flexible safety service provision. The framework of SH based on a wireless sensor network is described from the perspective of neighbourhood management. This approach is based on the integration of semantic knowledge in which a reasoner can make decisions about risk recognition and safety service. We present a management ontology for a SH and relevant monitoring contextual information, which considers its suitability in a pervasive computing environment and is service-oriented. We also propose a rule-based reasoning method to provide decision support through reasoning techniques and context-awareness. A system prototype is developed to evaluate the feasibility, time response and extendibility of the approach. The evaluation of our approach shows that it is more effective in daily risk event recognition. The decisions for service provision are shown to be accurate. PMID:27527170

  10. A Semantic Approach with Decision Support for Safety Service in Smart Home Management.

    PubMed

    Huang, Xiaoci; Yi, Jianjun; Zhu, Xiaomin; Chen, Shaoli

    2016-01-01

    Research on smart homes (SHs) has increased significantly in recent years because of the convenience provided by an assisted living environment. However, the functions of SHs, particularly safety services, are seldom discussed in previous studies. This study therefore proposes a semantic approach with decision support for safety service in SH management. The focus of this contribution is a context-awareness and reasoning approach for risk recognition in SHs that enables proper decision support for flexible safety service provision. The framework of an SH based on a wireless sensor network is described from the perspective of neighbourhood management. The approach is based on the integration of semantic knowledge, over which a reasoner makes decisions about risk recognition and safety service. We present a service-oriented management ontology for SHs and the relevant monitoring context information, designed for suitability in a pervasive computing environment. We also propose a rule-based reasoning method that provides decision support through reasoning techniques and context awareness. A system prototype was developed to evaluate the feasibility, response time and extensibility of the approach. The evaluation shows that the approach is effective in daily risk event recognition, and the decisions for service provision are accurate. PMID:27527170
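
The reasoning step the abstract describes, recognizing risk events from sensed context via rules, can be sketched as follows. The rule names, thresholds and `Context` fields are invented for illustration; the actual system uses an OWL ontology and a semantic reasoner rather than plain Python predicates.

```python
# Hypothetical sketch of rule-based risk recognition over sensed context.
# All rules and thresholds are illustrative assumptions, not the paper's.
from dataclasses import dataclass

@dataclass
class Context:
    room: str
    temperature_c: float   # ambient temperature reading
    gas_ppm: float         # combustible-gas sensor reading
    occupant_moving: bool  # motion detected in the last interval

# Each rule maps a context condition to a recognized risk event.
RULES = [
    ("fire_risk", lambda c: c.temperature_c > 55 or c.gas_ppm > 400),
    ("fall_risk", lambda c: not c.occupant_moving and c.room == "bathroom"),
]

def recognize_risks(ctx: Context) -> list[str]:
    """Return the names of all risk rules whose condition holds."""
    return [name for name, cond in RULES if cond(ctx)]

print(recognize_risks(Context("kitchen", 62.0, 120.0, True)))   # ['fire_risk']
print(recognize_risks(Context("bathroom", 21.0, 5.0, False)))   # ['fall_risk']
```

A recognized risk event would then be matched to a safety service (alarm, notification to a neighbour, and so on) by a second rule layer.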

  11. Cross border semantic interoperability for clinical research: the EHR4CR semantic resources and services.

    PubMed

    Daniel, Christel; Ouagne, David; Sadou, Eric; Forsberg, Kerstin; Gilchrist, Mark Mc; Zapletal, Eric; Paris, Nicolas; Hussain, Sajjad; Jaulent, Marie-Christine; Md, Dipka Kalra

    2016-01-01

    With the development of platforms enabling the use of routinely collected clinical data in international clinical research, scalable solutions for cross-border semantic interoperability need to be developed. Within the IMI EHR4CR project, we first defined the requirements and evaluation criteria of the EHR4CR semantic interoperability platform and then developed the semantic resources, supportive services and tooling to assist hospital sites in standardizing their data so that the project use cases could be executed. The experience gained from evaluating the EHR4CR platform, which accesses semantically equivalent data elements across 11 participating European EHR systems from 5 countries, demonstrated how far the mediation model and mapping efforts met the expected requirements of the project. Developers of semantic interoperability platforms are beginning to address a core set of requirements in order to reach the goal of cross-border semantic integration of data. PMID:27570649

  12. Cross border semantic interoperability for clinical research: the EHR4CR semantic resources and services

    PubMed Central

    Daniel, Christel; Ouagne, David; Sadou, Eric; Forsberg, Kerstin; Gilchrist, Mark Mc; Zapletal, Eric; Paris, Nicolas; Hussain, Sajjad; Jaulent, Marie-Christine; MD, Dipka Kalra

    2016-01-01

    With the development of platforms enabling the use of routinely collected clinical data in international clinical research, scalable solutions for cross-border semantic interoperability need to be developed. Within the IMI EHR4CR project, we first defined the requirements and evaluation criteria of the EHR4CR semantic interoperability platform and then developed the semantic resources, supportive services and tooling to assist hospital sites in standardizing their data so that the project use cases could be executed. The experience gained from evaluating the EHR4CR platform, which accesses semantically equivalent data elements across 11 participating European EHR systems from 5 countries, demonstrated how far the mediation model and mapping efforts met the expected requirements of the project. Developers of semantic interoperability platforms are beginning to address a core set of requirements in order to reach the goal of cross-border semantic integration of data. PMID:27570649

  13. Models of Quantitative Estimations: Rule-Based and Exemplar-Based Processes Compared

    ERIC Educational Resources Information Center

    von Helversen, Bettina; Rieskamp, Jorg

    2009-01-01

    The cognitive processes underlying quantitative estimations vary. Past research has identified task-contingent changes between rule-based and exemplar-based processes (P. Juslin, L. Karlsson, & H. Olsson, 2008). B. von Helversen and J. Rieskamp (2008), however, proposed a simple rule-based model--the mapping model--that outperformed the exemplar…

  14. Research of Expended Production Rule Based on Fuzzy Conceptual Graphs*

    NASA Astrophysics Data System (ADS)

    Liu, Peiqi; Li, Longji; Zhang, Linye; Li, Zengzhi

    In knowledge engineering, fuzzy conceptual graphs and production rules are two important knowledge representation methods. Because confidence information cannot be represented in fuzzy conceptual graphs and fuzzy knowledge cannot be represented in production rules, the representational power of each is seriously insufficient. This paper presents the extended production rule, a new knowledge representation method. In an extended production rule, the antecedent and consequent are represented by fuzzy conceptual graphs, and the sustaining relation between antecedent and consequent is the confidence. The rule effectively combines fuzzy knowledge with confidence: it retains the semantic richness of facts and propositions, and it makes reasoning results more effective. Based on the extended production rule, an uncertain reasoning algorithm over fuzzy conceptual graphs is designed. Experimental tests and analysis show that the reasoning results of the extended production rule are more reasonable. The research results have been applied in the design of the uncertain inference engine of a fuzzy expert system.
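
The core idea, a rule whose antecedent matches fuzzy facts and whose consequent carries a confidence, can be sketched in miniature. The facts, the rule, and the min/product combination below are illustrative assumptions; the paper's algorithm operates on full fuzzy conceptual graphs, not flat fact names.

```python
# Minimal sketch of an "extended production rule": fuzzy facts hold to a
# degree in [0, 1], and a fired rule asserts its consequent with a confidence
# combining the antecedent's match degree and the rule's own confidence.
facts = {"temperature_high": 0.8, "pressure_rising": 0.6}  # fuzzy fact base

# (antecedent concepts, consequent concept, rule confidence)
rules = [
    (("temperature_high", "pressure_rising"), "valve_fault", 0.9),
]

def fire(rules, facts):
    derived = {}
    for antecedents, consequent, conf in rules:
        # Degree to which the whole antecedent holds: min over its concepts.
        degree = min(facts.get(a, 0.0) for a in antecedents)
        if degree > 0.0:
            # Consequent confidence = antecedent degree x rule confidence.
            derived[consequent] = max(derived.get(consequent, 0.0), degree * conf)
    return derived

print(fire(rules, facts))  # {'valve_fault': 0.54}
```

The min for conjunction and the product for confidence propagation are common choices in fuzzy inference, but other t-norms would fit the same skeleton.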

  15. Fuzzy rule base design using tabu search algorithm for nonlinear system modeling.

    PubMed

    Bagis, Aytekin

    2008-01-01

    This paper presents an approach to fuzzy rule base design using the tabu search algorithm (TSA) for nonlinear system modeling. The TSA is used to evolve the structure and the parameters of the fuzzy rule base. The use of the TSA, in conjunction with a systematic neighbourhood structure for determining the fuzzy rule base parameters, leads to a significant improvement in model performance. To demonstrate the effectiveness of the presented method, several numerical examples from the literature are examined. The results obtained with the identified fuzzy rule bases are compared with those of other modeling approaches in the literature. The simulation results indicate that the TSA-based method provides an effective procedure for fuzzy rule base design in the modeling of nonlinear or complex systems. PMID:17945233
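
The tabu search loop the abstract relies on can be sketched generically. Here the "rule base" is reduced to a plain parameter vector and the model error to a toy objective; the neighbourhood move, tabu tenure and step size are illustrative assumptions, not the paper's configuration.

```python
# Bare-bones tabu search in the spirit of tuning fuzzy rule-base parameters:
# explore a neighbourhood each iteration, forbid recently visited solutions
# (the tabu list), but allow a tabu move if it beats the best so far
# (aspiration criterion).
import random

def tabu_search(objective, start, step=0.5, iters=200, tenure=10, seed=0):
    rng = random.Random(seed)
    current = best = tuple(start)
    tabu = []  # recently visited solutions, forbidden while on the list
    for _ in range(iters):
        # Neighbourhood: perturb one coordinate by +/- step.
        neighbours = [
            tuple(c + d if i == j else c for j, c in enumerate(current))
            for i in range(len(current)) for d in (-step, step)
        ]
        rng.shuffle(neighbours)
        admissible = [n for n in neighbours
                      if n not in tabu or objective(n) < objective(best)]
        admissible = admissible or neighbours  # never get stuck with no move
        current = min(admissible, key=objective)  # best admissible move
        tabu.append(current)
        if len(tabu) > tenure:
            tabu.pop(0)
        if objective(current) < objective(best):
            best = current
    return best

# Toy objective: squared distance of two "membership-function centres"
# from their ideal positions (1.5, -2.0).
err = lambda p: (p[0] - 1.5) ** 2 + (p[1] + 2.0) ** 2
print(tabu_search(err, (0.0, 0.0)))  # converges to (1.5, -2.0)
```

In the paper's setting the objective would be the model's output error over the identification data, and a solution would encode both the rule structure and its membership-function parameters.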

  16. LORD: a phenotype-genotype semantically integrated biomedical data tool to support rare disease diagnosis coding in health information systems

    PubMed Central

    Choquet, Remy; Maaroufi, Meriem; Fonjallaz, Yannick; de Carrara, Albane; Vandenbussche, Pierre-Yves; Dhombres, Ferdinand; Landais, Paul

    2015-01-01

    Characterizing a rare disease diagnosis for a given patient is often done through expert networks. It is a complex task that can evolve over time depending on the natural history of the disease and the evolution of scientific knowledge. Most rare diseases have genetic causes, and recent improvements in sequencing techniques contribute to the discovery of many new diseases every year. Diagnosis coding in the rare disease field requires data from multiple knowledge bases to be aggregated in order to offer the clinician a global information space linking possible diagnoses to clinical signs (phenotypes) and known genetic mutations (genotypes). Nowadays, the major barrier to the coding activity is the lack of consolidation of such information, which is scattered across different thesauri such as Orphanet, OMIM and HPO. The Linking Open Data for Rare Diseases (LORD) web portal we developed stands as the first attempt to fill this gap by offering an integrated view of 8,400 rare diseases linked to more than 14,500 signs and 3,270 genes. The application provides a browsing feature for navigating the relationships between diseases, signs and genes, and Application Programming Interfaces to help integrate it into routine health information systems. PMID:26958175

  17. LORD: a phenotype-genotype semantically integrated biomedical data tool to support rare disease diagnosis coding in health information systems.

    PubMed

    Choquet, Remy; Maaroufi, Meriem; Fonjallaz, Yannick; de Carrara, Albane; Vandenbussche, Pierre-Yves; Dhombres, Ferdinand; Landais, Paul

    2015-01-01

    Characterizing a rare disease diagnosis for a given patient is often done through expert networks. It is a complex task that can evolve over time depending on the natural history of the disease and the evolution of scientific knowledge. Most rare diseases have genetic causes, and recent improvements in sequencing techniques contribute to the discovery of many new diseases every year. Diagnosis coding in the rare disease field requires data from multiple knowledge bases to be aggregated in order to offer the clinician a global information space linking possible diagnoses to clinical signs (phenotypes) and known genetic mutations (genotypes). Nowadays, the major barrier to the coding activity is the lack of consolidation of such information, which is scattered across different thesauri such as Orphanet, OMIM and HPO. The Linking Open Data for Rare Diseases (LORD) web portal we developed stands as the first attempt to fill this gap by offering an integrated view of 8,400 rare diseases linked to more than 14,500 signs and 3,270 genes. The application provides a browsing feature for navigating the relationships between diseases, signs and genes, and Application Programming Interfaces to help integrate it into routine health information systems. PMID:26958175

  18. LORD: a phenotype-genotype semantically integrated biomedical data tool to support rare disease diagnosis coding in health information systems.

    PubMed

    Choquet, Remy; Maaroufi, Meriem; Fonjallaz, Yannick; de Carrara, Albane; Vandenbussche, Pierre-Yves; Dhombres, Ferdinand; Landais, Paul

    2015-01-01

    Characterizing a rare disease diagnosis for a given patient is often done through expert networks. It is a complex task that can evolve over time depending on the natural history of the disease and the evolution of scientific knowledge. Most rare diseases have genetic causes, and recent improvements in sequencing techniques contribute to the discovery of many new diseases every year. Diagnosis coding in the rare disease field requires data from multiple knowledge bases to be aggregated in order to offer the clinician a global information space linking possible diagnoses to clinical signs (phenotypes) and known genetic mutations (genotypes). Nowadays, the major barrier to the coding activity is the lack of consolidation of such information, which is scattered across different thesauri such as Orphanet, OMIM and HPO. The Linking Open Data for Rare Diseases (LORD) web portal we developed stands as the first attempt to fill this gap by offering an integrated view of 8,400 rare diseases linked to more than 14,500 signs and 3,270 genes. The application provides a browsing feature for navigating the relationships between diseases, signs and genes, and Application Programming Interfaces to help integrate it into routine health information systems.
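
The kind of aggregation described, joining disease, sign and gene records from separate sources into one navigable view, can be sketched as a simple join. The records and identifiers below are invented examples, not real Orphanet/OMIM/HPO content.

```python
# Illustrative sketch of phenotype-genotype aggregation around a disease
# identifier, in the spirit of the LORD portal's integrated view.
disease_signs = {"ORPHA:0001": ["HP:0000377", "HP:0001250"]}  # disease -> signs
disease_genes = {"ORPHA:0001": ["GENE:ABC1"]}                 # disease -> genes
sign_labels = {"HP:0000377": "Abnormal ear shape",
               "HP:0001250": "Seizure"}

def disease_view(disease_id):
    """Aggregate everything known about one disease into a single record."""
    return {
        "disease": disease_id,
        "signs": [{"id": s, "label": sign_labels.get(s, "?")}
                  for s in disease_signs.get(disease_id, [])],
        "genes": disease_genes.get(disease_id, []),
    }

print(disease_view("ORPHA:0001"))
```

In the real portal the three dictionaries would be linked-data sources, and `disease_view` would back both the browsing interface and the coding APIs.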

  19. Application of a rule-based knowledge system using CLIPS for the taxonomy of selected Opuntia species

    NASA Technical Reports Server (NTRS)

    Heymans, Bart C.; Onema, Joel P.; Kuti, Joseph O.

    1991-01-01

    A rule-based knowledge system was developed in CLIPS (C Language Integrated Production System) for identifying Opuntia species in the family Cactaceae, which contains approximately 1,500 species. This botanist's expert tool can identify selected Opuntia plants from the family level down to the species level when given some basic characteristics of the plants. Many of these plants are of increasing importance because of their nutritional and human-health potential, especially in the treatment of diabetes mellitus. The expert tool described can be extremely useful for the unequivocal identification of many useful Opuntia species.
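
The identification logic can be sketched as matching observed characteristics against a small knowledge base, narrowing the candidate set as traits are supplied. The species traits below are invented placeholders, and plain Python stands in for CLIPS's pattern-matching rules.

```python
# Rough analogue of the CLIPS identification system: keep only the species
# whose recorded traits are consistent with every observation given so far.
KNOWLEDGE_BASE = {
    "Opuntia ficus-indica": {"spines": "few", "pad_shape": "oblong", "fruit": "red"},
    "Opuntia robusta":      {"spines": "many", "pad_shape": "round", "fruit": "purple"},
    "Opuntia microdasys":   {"spines": "none", "pad_shape": "oblong", "fruit": "red"},
}

def identify(observations):
    """Return species whose recorded traits all match the observations."""
    return [species for species, traits in KNOWLEDGE_BASE.items()
            if all(traits.get(k) == v for k, v in observations.items())]

print(identify({"pad_shape": "oblong", "spines": "few"}))
# narrows to ['Opuntia ficus-indica']
```

In CLIPS the same effect is achieved with `defrule` patterns that fire as facts about the specimen are asserted, retracting candidates that conflict.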

  20. The Semantic eScience Framework

    NASA Astrophysics Data System (ADS)

    Fox, P. A.; McGuinness, D. L.

    2009-12-01

    The goal of this effort is to design and implement a configurable and extensible semantic eScience framework (SESF). Configuration requires research into accommodating different levels of semantic expressivity and user requirements from use cases. Extensibility is being achieved through a modular approach to the semantic encodings (i.e., ontologies) performed in community settings, i.e., an ontology framework into which specific applications, all the way up to communities, can extend the semantics for their needs. We report on how we are accommodating the rapid advances in semantic technologies and tools and the sustainable software path for the future (certain) technical advances. In addition to a generalization of the current data science interface, we will present plans for an upper-level interface suitable for use by clearinghouses, educational portals, digital libraries, and other disciplines. SESF builds upon previous work in the Virtual Solar-Terrestrial Observatory (VSTO). The VSTO utilizes leading-edge knowledge representation, query and reasoning techniques to support knowledge-enhanced search, data access, integration, and manipulation. It encodes term meanings and their inter-relationships in ontologies and uses these ontologies and associated inference engines to semantically enable the data services. The Semantically-Enabled Science Data Integration (SESDI) project implemented data integration capabilities among three sub-disciplines (solar radiation, volcanic outgassing and atmospheric structure) using extensions to existing modular ontologies, and used the VSTO data framework while adding smart faceted search and semantic data registration tools. The Semantic Provenance Capture in Data Ingest Systems (SPCDIS) project has added explanation provenance capabilities to an observational data ingest pipeline for images of the Sun, providing a set of tools to answer diverse end-user questions such as "Why does this image look bad?"

  1. The Semantic eScience Framework

    NASA Astrophysics Data System (ADS)

    McGuinness, Deborah; Fox, Peter; Hendler, James

    2010-05-01

    The goal of this effort is to design and implement a configurable and extensible semantic eScience framework (SESF). Configuration requires research into accommodating different levels of semantic expressivity and user requirements from use cases. Extensibility is being achieved through a modular approach to the semantic encodings (i.e., ontologies) performed in community settings, i.e., an ontology framework into which specific applications, all the way up to communities, can extend the semantics for their needs. We report on how we are accommodating the rapid advances in semantic technologies and tools and the sustainable software path for the future (certain) technical advances. In addition to a generalization of the current data science interface, we will present plans for an upper-level interface suitable for use by clearinghouses, educational portals, digital libraries, and other disciplines. SESF builds upon previous work in the Virtual Solar-Terrestrial Observatory (VSTO). The VSTO utilizes leading-edge knowledge representation, query and reasoning techniques to support knowledge-enhanced search, data access, integration, and manipulation. It encodes term meanings and their inter-relationships in ontologies and uses these ontologies and associated inference engines to semantically enable the data services. The Semantically-Enabled Science Data Integration (SESDI) project implemented data integration capabilities among three sub-disciplines (solar radiation, volcanic outgassing and atmospheric structure) using extensions to existing modular ontologies, and used the VSTO data framework while adding smart faceted search and semantic data registration tools. The Semantic Provenance Capture in Data Ingest Systems (SPCDIS) project has added explanation provenance capabilities to an observational data ingest pipeline for images of the Sun, providing a set of tools to answer diverse end-user questions such as "Why does this image look bad?" http://tw.rpi.edu/portal/SESF

  2. An ontology-based hierarchical semantic modeling approach to clinical pathway workflows.

    PubMed

    Ye, Yan; Jiang, Zhibin; Diao, Xiaodi; Yang, Dong; Du, Gang

    2009-08-01

    This paper proposes an ontology-based approach to modeling clinical pathway workflows at the semantic level, facilitating computerized clinical pathway implementation and efficient delivery of high-quality healthcare services. A clinical pathway ontology (CPO) is formally defined in the Web Ontology Language (OWL) to provide a common semantic foundation for meaningful representation and exchange of pathway-related knowledge. A CPO-based semantic modeling method is then presented that describes clinical pathways as interconnected hierarchical models comprising a top-level outcome flow and an intervention workflow level along a care timeline. Furthermore, relevant temporal knowledge can be fully represented by combining temporal entities in the CPO with temporal rules based on the Semantic Web Rule Language (SWRL). An illustrative example of a clinical pathway for cesarean section shows the applicability of the proposed methodology in enabling structured semantic descriptions of any real clinical pathway.
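
The temporal rules layered on the ontology express constraints of the form "intervention B must start within some window of event A". A minimal procedural sketch of one such check follows; the field names and the 24-hour window are illustrative assumptions, not taken from the paper's pathway.

```python
# Sketch of a temporal pathway constraint of the kind CPO + SWRL rules
# encode declaratively: flag an intervention that starts too long after
# admission. The 24 h window is an invented example value.
from datetime import datetime, timedelta

def violates_start_window(admission: datetime, intervention_start: datetime,
                          window: timedelta = timedelta(hours=24)) -> bool:
    """True if the intervention starts later than `window` after admission."""
    return intervention_start - admission > window

admit = datetime(2009, 8, 1, 8, 0)
print(violates_start_window(admit, datetime(2009, 8, 1, 20, 0)))  # False
print(violates_start_window(admit, datetime(2009, 8, 2, 10, 0)))  # True
```

In the SWRL formulation the same constraint would be written over temporal entities of the ontology, so a reasoner can flag violations across a whole pathway instance rather than one pair of timestamps.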

  3. Semantic-Web Technology: Applications at NASA

    NASA Technical Reports Server (NTRS)

    Ashish, Naveen

    2004-01-01

    We provide a description of work at the National Aeronautics and Space Administration (NASA) on building systems based on semantic-web concepts and technologies. NASA has been one of the early adopters of semantic-web technologies for practical applications. Indeed, there are several ongoing endeavors to build semantics-based systems for use in diverse NASA domains, ranging from collaborative scientific activity, accident and mishap investigation, and enterprise search to scientific information gathering and integration and aviation safety decision support. We provide a brief overview of many of these applications and of ongoing work, with the goal of informing the external community of these NASA endeavors.

  4. Semantics via Machine Translation

    ERIC Educational Resources Information Center

    Culhane, P. T.

    1977-01-01

    Recent experiments in machine translation have given the semantic elements of collocation in Russian more objective criteria. Soviet linguists in search of semantic relationships have attempted to devise a semantic synthesis for construction of a basic language for machine translation. One such effort is summarized. (CHK)

  5. SEMANTICS AND CRITICAL READING.

    ERIC Educational Resources Information Center

    FLANIGAN, MICHAEL C.

    Proficiency in critical reading can be accelerated by making students aware of various semantic devices that help clarify meanings and purposes. Excerpts from the article "Teen-Age Corruption" from the ninth-grade semantics unit written by the Project English Demonstration Center at Euclid, Ohio, are used to illustrate how semantics relate to…

  6. Semantic SenseLab: implementing the vision of the Semantic Web in neuroscience

    PubMed Central

    Samwald, Matthias; Chen, Huajun; Ruttenberg, Alan; Lim, Ernest; Marenco, Luis; Miller, Perry; Shepherd, Gordon; Cheung, Kei-Hoi

    2011-01-01

    Summary Objective Integrative neuroscience research needs a scalable informatics framework that enables semantic integration of diverse types of neuroscience data. This paper describes the use of the Web Ontology Language (OWL) and other Semantic Web technologies for the representation and integration of molecular-level data provided by several databases of the SenseLab suite of neuroscience databases. Methods Based on the original database structure, we semi-automatically translated the databases into OWL ontologies with manual addition of semantic enrichment. The SenseLab ontologies are extensively linked to other biomedical Semantic Web resources, including the Subcellular Anatomy Ontology, the Brain Architecture Management System, the Gene Ontology, BIRNLex and UniProt. The SenseLab ontologies have also been mapped to the Basic Formal Ontology and the Relation Ontology, which helps ease interoperability with many other existing and future biomedical ontologies for the Semantic Web. In addition, approaches to representing contradictory research statements are described. The SenseLab ontologies are designed for use on the Semantic Web, which enables their integration into a growing collection of biomedical information resources. Conclusion We demonstrate that our approach can yield significant potential benefits and that the Semantic Web is rapidly becoming mature enough to realize its anticipated promises. The ontologies are available online at http://neuroweb.med.yale.edu/senselab/ PMID:20006477

  7. Semantic networks of English.

    PubMed

    Miller, G A; Fellbaum, C

    1991-12-01

    Principles of lexical semantics developed in the course of building an on-line lexical database are discussed. The approach is relational rather than componential. The fundamental semantic relation is synonymy, which is required in order to define the lexicalized concepts that words can be used to express. Other semantic relations between these concepts are then described. No single set of semantic relations or organizational structure is adequate for the entire lexicon: nouns, adjectives, and verbs each have their own semantic relations and their own organization determined by the role they must play in the construction of linguistic messages.

  8. Significance testing of rules in rule-based models of human problem solving

    NASA Technical Reports Server (NTRS)

    Lewis, C. M.; Hammer, J. M.

    1986-01-01

    Rule-based models of human problem solving have typically not been tested for statistical significance. Three methods of testing rules - analysis of variance, randomization, and contingency tables - are presented. Advantages and disadvantages of the methods are also described.
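
Of the three methods named, randomization is the simplest to sketch: compare the model's agreement with human data with and without a candidate rule, then shuffle the pooled outcomes to estimate how often a difference that large arises by chance. The trial data below are invented for illustration.

```python
# Randomization (permutation) test of whether adding a rule changes the
# model's agreement with human problem-solving data. Values are invented.
import random

def randomization_p(with_rule, without_rule, n_perm=10000, seed=1):
    """Estimate the p-value of the observed difference in agreement rates."""
    observed = abs(sum(with_rule) / len(with_rule)
                   - sum(without_rule) / len(without_rule))
    pooled = list(with_rule) + list(without_rule)
    rng = random.Random(seed)
    k = len(with_rule)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)  # random reassignment of outcomes to groups
        diff = abs(sum(pooled[:k]) / k - sum(pooled[k:]) / (len(pooled) - k))
        if diff >= observed:
            hits += 1
    return hits / n_perm  # fraction of shuffles at least as extreme

# 1 = model prediction matched the human, 0 = it did not,
# for runs with and without the candidate rule in the rule base.
print(randomization_p([1, 1, 1, 0, 1, 1], [0, 1, 0, 0, 1, 0]))
```

The analysis-of-variance and contingency-table methods test the same question parametrically; the permutation version makes no distributional assumptions, which suits the small samples typical of rule-based modeling studies.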

  9. Traditional versus rule-based programming techniques - Application to the control of optional flight information

    NASA Technical Reports Server (NTRS)

    Ricks, Wendell R.; Abbott, Kathy H.

    1987-01-01

    A traditional programming technique for controlling the display of optional flight information in a civil transport cockpit is compared with a rule-based technique for the same function. This application required complex decision logic and a frequently modified rule base. The techniques are evaluated for execution efficiency and ease of implementation: execution efficiency is measured by the total number of steps required to isolate the hypotheses that were true, and implementability is judged by ease of modification and verification and by explanation capability. The traditional program is more efficient than the rule-based program; however, the rule-based programming technique is better suited to improving programmer productivity.

  10. RFID sensor-tags feeding a context-aware rule-based healthcare monitoring system.

    PubMed

    Catarinucci, Luca; Colella, Riccardo; Esposito, Alessandra; Tarricone, Luciano; Zappatore, Marco

    2012-12-01

    With the growth of the aging population and the need for efficient wellness systems, there is a mounting demand for new technological solutions able to support remote and proactive healthcare. An answer to this need could be provided by the joint use of emerging Radio Frequency Identification (RFID) technologies and advanced software choices. This paper presents a proposal for a context-aware infrastructure for ubiquitous and pervasive monitoring of heterogeneous healthcare-related scenarios, fed by RFID-based wireless sensor nodes. The software framework is based on a general-purpose architecture exploiting three key implementation choices: ontology representation, the multi-agent paradigm and rule-based logic. From the hardware point of view, the sensing and gathering of context data is delegated to a new Enhanced RFID Sensor-Tag. This new device makes possible the easy integration of RFID with generic sensors, guaranteeing flexibility and preserving the benefits of UHF RFID technology in terms of simplicity of use and low cost. The system is very efficient and versatile, and its customization to new scenarios requires very little effort, substantially limited to updating or extending the ontology codification. Its effectiveness is demonstrated by reporting both the customization effort and the performance results obtained from validation in two different healthcare monitoring contexts.

  11. Biomedical semantics in the Semantic Web

    PubMed Central

    2011-01-01

    The Semantic Web offers an ideal platform for representing and linking biomedical information, which is a prerequisite for the development and application of analytical tools to address problems in data-intensive areas such as systems biology and translational medicine. As for any new paradigm, the adoption of the Semantic Web offers opportunities and poses questions and challenges to the life sciences scientific community: which technologies in the Semantic Web stack will be more beneficial for the life sciences? Is biomedical information too complex to benefit from simple interlinked representations? What are the implications of adopting a new paradigm for knowledge representation? What are the incentives for the adoption of the Semantic Web, and who are the facilitators? Is there going to be a Semantic Web revolution in the life sciences? We report here a few reflections on these questions, following discussions at the SWAT4LS (Semantic Web Applications and Tools for Life Sciences) workshop series, of which this Journal of Biomedical Semantics special issue presents selected papers from the 2009 edition, held in Amsterdam on November 20th. PMID:21388570

  12. Biomedical semantics in the Semantic Web.

    PubMed

    Splendiani, Andrea; Burger, Albert; Paschke, Adrian; Romano, Paolo; Marshall, M Scott

    2011-03-07

    The Semantic Web offers an ideal platform for representing and linking biomedical information, which is a prerequisite for the development and application of analytical tools to address problems in data-intensive areas such as systems biology and translational medicine. As for any new paradigm, the adoption of the Semantic Web offers opportunities and poses questions and challenges to the life sciences scientific community: which technologies in the Semantic Web stack will be more beneficial for the life sciences? Is biomedical information too complex to benefit from simple interlinked representations? What are the implications of adopting a new paradigm for knowledge representation? What are the incentives for the adoption of the Semantic Web, and who are the facilitators? Is there going to be a Semantic Web revolution in the life sciences? We report here a few reflections on these questions, following discussions at the SWAT4LS (Semantic Web Applications and Tools for Life Sciences) workshop series, of which this Journal of Biomedical Semantics special issue presents selected papers from the 2009 edition, held in Amsterdam on November 20th.

  13. Requirements engineering of a medical information system using rule-based refinement of Petri nets

    SciTech Connect

    Ermel, C.; Padberg, J.; Ehrig, H.

    1996-12-31

    This paper is concerned with the application of a formal technique to software engineering. In this case study we have used rule-based refinement of algebraic high-level nets for the requirements engineering of a medical information system. We outline the basic ideas of rule-based refinement and discuss how this technique is applied to the development from actual state analysis to functional essence.

  14. Episodic memory, semantic memory, and amnesia.

    PubMed

    Squire, L R; Zola, S M

    1998-01-01

    Episodic memory and semantic memory are two types of declarative memory. There have been two principal views about how this distinction might be reflected in the organization of memory functions in the brain. One view, that episodic memory and semantic memory are both dependent on the integrity of medial temporal lobe and midline diencephalic structures, predicts that amnesic patients with medial temporal lobe/diencephalic damage should be proportionately impaired in both episodic and semantic memory. An alternative view is that the capacity for semantic memory is spared, or partially spared, in amnesia relative to episodic memory ability. This article reviews two kinds of relevant data: 1) case studies where amnesia has occurred early in childhood, before much of an individual's semantic knowledge has been acquired, and 2) experimental studies with amnesic patients of fact and event learning, remembering and knowing, and remote memory. The data provide no compelling support for the view that episodic and semantic memory are affected differently in medial temporal lobe/diencephalic amnesia. However, episodic and semantic memory may be dissociable in those amnesic patients who additionally have severe frontal lobe damage.

  15. Semantic framework for mapping object-oriented model to semantic web languages

    PubMed Central

    Ježek, Petr; Mouček, Roman

    2015-01-01

    The article discusses two main approaches to building semantic structures for electrophysiological metadata: the use of conventional data structures, repositories, and programming languages on the one hand, and the use of formal ontology representations known from knowledge representation, such as description logics or Semantic Web languages, on the other. Although knowledge engineering offers languages with richer semantic expressiveness and technologically advanced approaches, conventional data structures and repositories remain popular among developers, administrators and users because of their simplicity, overall intelligibility, and lower demands on technical equipment. The choice of conventional data resources and repositories, however, raises the question of how and where to add semantics that cannot be naturally expressed with them. As one possible solution, this semantics can be added to the structures of the programming language that accesses and processes the underlying data. To support this idea we introduced a software prototype that enables its users to add semantically richer expressions to Java object-oriented code. This approach does not burden users with additional demands on the programming environment, since reflective Java annotations serve as the entry point for these expressions. Moreover, the additional semantics need not be written by the programmer directly in the code; it can be collected from non-programmers through a graphical user interface. A mapping that transforms the semantically enriched Java code into the Semantic Web language OWL was proposed and implemented in a library named the Semantic Framework. The approach was validated by integrating the Semantic Framework into the EEG/ERP Portal and by the subsequent registration of the EEG/ERP Portal in the Neuroscience Information Framework. PMID:25762923
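
The annotate-then-transform idea can be sketched in Python, with a decorator standing in for Java's reflective annotations and plain tuples standing in for OWL axioms. The vocabulary URI and predicate names below are invented placeholders, not the Semantic Framework's actual mapping.

```python
# Python analogue of the Semantic Framework approach: attach semantic
# metadata to a class via a decorator, then mechanically emit RDF-style
# triples from the annotated class. URIs and predicates are illustrative.
def semantic_class(uri):
    """Decorator recording the ontology URI a class should map to."""
    def wrap(cls):
        cls._semantic_uri = uri
        return cls
    return wrap

@semantic_class("http://example.org/onto#EegExperiment")
class Experiment:
    pass

def to_triples(cls):
    """Emit (subject, predicate, object) triples for an annotated class."""
    return [(cls._semantic_uri, "rdf:type", "owl:Class"),
            (cls._semantic_uri, "rdfs:label", cls.__name__)]

for t in to_triples(Experiment):
    print(t)
```

The key design point mirrors the paper's: the domain class stays ordinary application code, and the semantic layer is carried in metadata that a separate transformation reads, so non-programmers can supply it without touching the logic.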

  16. Semantic framework for mapping object-oriented model to semantic web languages.

    PubMed

    Ježek, Petr; Mouček, Roman

    2015-01-01

    The article deals with and discusses two main approaches to building semantic structures for electrophysiological metadata: the use of conventional data structures, repositories, and programming languages on the one hand, and the use of formal representations of ontologies known from knowledge representation, such as description logics or semantic web languages, on the other. Although knowledge engineering offers languages with richer semantic means of expression and technologically advanced approaches, conventional data structures and repositories remain popular among developers, administrators, and users because of their simplicity, overall intelligibility, and lower demands on technical equipment. The choice of conventional data resources and repositories, however, raises the question of how and where to add semantics that cannot be naturally expressed in them. As one possible solution, this semantics can be added into the structures of the programming language that accesses and processes the underlying data. To support this idea we introduced a software prototype that enables its users to add semantically richer expressions to Java object-oriented code. This approach does not burden users with additional demands on the programming environment, since reflective Java annotations serve as the entry point for these expressions. Moreover, the additional semantics need not be written by the programmer directly in the code; it can be collected from non-programmers through a graphical user interface. A mapping that transforms the semantically enriched Java code into the Semantic Web language OWL was proposed and implemented in a library named the Semantic Framework. The approach was validated by integrating the Semantic Framework into the EEG/ERP Portal and by the subsequent registration of the EEG/ERP Portal in the Neuroscience Information Framework.
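    The annotation-driven mapping described above can be pictured with a small sketch. The following Python analogue uses decorators to stand in for Java's reflective annotations; every name here (`owl_class`, `to_owl`, the example URIs) is illustrative only, not the Semantic Framework's actual API.

```python
def owl_class(uri):
    """Attach an OWL class URI to a plain data class (stand-in for a Java annotation)."""
    def wrap(cls):
        cls._owl_uri = uri
        return cls
    return wrap

def to_owl(cls):
    """Emit Turtle-like triples for an annotated class and its declared fields."""
    triples = [f"<{cls._owl_uri}> rdf:type owl:Class ."]
    for field, rng in getattr(cls, "_owl_props", {}).items():
        triples.append(
            f"<{cls._owl_uri}#{field}> rdf:type owl:DatatypeProperty ; "
            f"rdfs:domain <{cls._owl_uri}> ; rdfs:range {rng} ."
        )
    return triples

@owl_class("http://example.org/eeg#Experiment")
class Experiment:
    _owl_props = {"samplingRate": "xsd:integer", "subject": "xsd:string"}

triples = to_owl(Experiment)
print("\n".join(triples))
```

    A real implementation would serialize through an RDF library and also cover object properties, class hierarchies, and individuals; the point of the sketch is only that the semantic layer rides on ordinary language metadata supplied outside the program logic.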

  18. The Development of Co-Speech Gesture and Its Semantic Integration with Speech in 6- to 12-Year-Old Children with Autism Spectrum Disorders

    ERIC Educational Resources Information Center

    So, Wing-Chee; Wong, Miranda Kit-Yi; Lui, Ming; Yip, Virginia

    2015-01-01

    Previous work leaves open the question of whether children with autism spectrum disorders aged 6-12 years have delay in producing gestures compared to their typically developing peers. This study examined gestural production among school-aged children in a naturalistic context and how their gestures are semantically related to the accompanying…

  19. SSWAP: A Simple Semantic Web Architecture and Protocol for semantic web services

    PubMed Central

    Gessler, Damian DG; Schiltz, Gary S; May, Greg D; Avraham, Shulamit; Town, Christopher D; Grant, David; Nelson, Rex T

    2009-01-01

    Background SSWAP (Simple Semantic Web Architecture and Protocol; pronounced "swap") is an architecture, protocol, and platform for using reasoning to semantically integrate heterogeneous disparate data and services on the web. SSWAP was developed as a hybrid semantic web services technology to overcome limitations found in both pure web service technologies and pure semantic web technologies. Results There are currently over 2400 resources published in SSWAP. Approximately two dozen are custom-written services for QTL (Quantitative Trait Loci) and mapping data for legumes and grasses (grains). The remaining resources are wrappers for Nucleic Acids Research Database and Web Server entries. As an architecture, SSWAP establishes how clients (users of data, services, and ontologies), providers (suppliers of data, services, and ontologies), and discovery servers (semantic search engines) interact to allow for the description, querying, discovery, invocation, and response of semantic web services. As a protocol, SSWAP provides the vocabulary and semantics to allow clients, providers, and discovery servers to engage in semantic web services. The protocol is based on the W3C-sanctioned first-order description logic language OWL DL. As an open source platform, a discovery server running at (as in to "swap info") uses the description logic reasoner Pellet to integrate semantic resources. The platform hosts an interactive guide to the protocol at , developer tools at , and a portal to third-party ontologies at (a "swap meet"). Conclusion SSWAP addresses the three basic requirements of a semantic web services architecture (i.e., a common syntax, shared semantics, and semantic discovery) while addressing three technology limitations common in distributed service systems: i.e., i) the fatal mutability of traditional interfaces, ii) the rigidity and fragility of static subsumption hierarchies, and iii) the confounding of content, structure, and presentation. SSWAP is novel by establishing

  20. Semantic Networks and Social Networks

    ERIC Educational Resources Information Center

    Downes, Stephen

    2005-01-01

    Purpose: To illustrate the need for social network metadata within semantic metadata. Design/methodology/approach: Surveys properties of social networks and the semantic web, suggests that social network analysis applies to semantic content, argues that semantic content is more searchable if social network metadata is merged with semantic web…

  1. Models of quantitative estimations: rule-based and exemplar-based processes compared.

    PubMed

    von Helversen, Bettina; Rieskamp, Jörg

    2009-07-01

    The cognitive processes underlying quantitative estimations vary. Past research has identified task-contingent changes between rule-based and exemplar-based processes (P. Juslin, L. Karlsson, & H. Olsson, 2008). B. von Helversen and J. Rieskamp (2008), however, proposed a simple rule-based model-the mapping model-that outperformed the exemplar model in a task thought to promote exemplar-based processing. This raised questions about the assumptions of rule-based versus exemplar-based models that underlie the notion of task contingency of cognitive processes. Rule-based models, such as the mapping model, assume the abstraction of explicit task knowledge. In contrast, exemplar models should profit if storage and activation of the exemplars is facilitated. Two studies tested the importance of the two models' assumptions. When knowledge about cues existed, the rule-based mapping model predicted quantitative estimations best. In contrast, when knowledge about the cues was difficult to gain, participants' estimations were best described by an exemplar model. The results emphasize the task contingency of cognitive processes. PMID:19586258
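    The contrast between the two model families in this abstract can be made concrete with a toy sketch: a rule-based (cue-abstraction) estimator combines cue values through learned weights, while an exemplar estimator averages the criterion values of stored instances weighted by similarity. The cue profiles, weights, and the specific functional forms below are fabricated for illustration; the actual mapping model differs in its details.

```python
import math

def rule_based_estimate(cues, weights, intercept=0.0):
    """Linear cue-weighting rule: estimate = intercept + sum(w_i * c_i)."""
    return intercept + sum(w * c for w, c in zip(weights, cues))

def exemplar_estimate(cues, exemplars, sensitivity=1.0):
    """Similarity-weighted average of stored exemplar criterion values."""
    sims = [math.exp(-sensitivity * sum(abs(a - b) for a, b in zip(cues, ex_cues)))
            for ex_cues, _ in exemplars]
    return sum(s * crit for s, (_, crit) in zip(sims, exemplars)) / sum(sims)

# Stored exemplars: (cue profile, criterion value) -- invented numbers.
exemplars = [([1, 0, 1], 12.0), ([0, 1, 0], 5.0), ([1, 1, 1], 15.0)]
probe = [1, 0, 0]
print(rule_based_estimate(probe, weights=[6.0, 2.0, 4.0], intercept=2.0))  # 8.0
print(round(exemplar_estimate(probe, exemplars), 2))
```

    The empirical question the paper addresses is which of these two computations better fits participants' estimations under different knowledge conditions.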

  2. Anticipating Words and Their Gender: An Event-related Brain Potential Study of Semantic Integration, Gender Expectancy, and Gender Agreement in Spanish Sentence Reading

    PubMed Central

    Wicha, Nicole Y. Y.; Moreno, Eva M.; Kutas, Marta

    2012-01-01

    Recent studies indicate that the human brain attends to and uses grammatical gender cues during sentence comprehension. Here, we examine the nature and time course of the effect of gender on word-by-word sentence reading. Event-related brain potentials were recorded to an article and noun, while native Spanish speakers read medium- to high-constraint Spanish sentences for comprehension. The noun either fit the sentence meaning or not, and matched the preceding article in gender or not; in addition, the preceding article was either expected or unexpected based on prior sentence context. Semantically anomalous nouns elicited an N400. Gender-disagreeing nouns elicited a posterior late positivity (P600), replicating previous findings for words. Gender agreement and semantic congruity interacted in both the N400 window—with a larger negativity frontally for double violations—and the P600 window—with a larger positivity for semantic anomalies, relative to the prestimulus baseline. Finally, unexpected articles elicited an enhanced positivity (500–700 msec post onset) relative to expected articles. Overall, our data indicate that readers anticipate and attend to the gender of both articles and nouns, and use gender in real time to maintain agreement and to build sentence meaning. PMID:15453979

  3. LEARNING SEMANTICS-ENHANCED LANGUAGE MODELS APPLIED TO UNSUPERVISED WSD

    SciTech Connect

    VERSPOOR, KARIN; LIN, SHOU-DE

    2007-01-29

    An N-gram language model aims at capturing statistical syntactic word order information from corpora. Although the concept of language models has been applied extensively to handle a variety of NLP problems with reasonable success, the standard model does not incorporate semantic information, and consequently limits its applicability to semantic problems such as word sense disambiguation. We propose a framework that integrates semantic information into the language model schema, allowing a system to exploit both syntactic and semantic information to address NLP problems. Furthermore, acknowledging the limited availability of semantically annotated data, we discuss how the proposed model can be learned without annotated training examples. Finally, we report on a case study showing how the semantics-enhanced language model can be applied to unsupervised word sense disambiguation with promising results.
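    The core idea of folding sense information into an n-gram model can be sketched in miniature: keep bigram counts over (word, sense) pairs and disambiguate by choosing the sense assignment with the highest score. The counts, sense labels, and scoring scheme below are fabricated for illustration and are far simpler than the paper's framework.

```python
from collections import Counter
from itertools import product

# Toy sense-tagged bigram counts (invented).
BIGRAMS = Counter({
    (("river", "WATER"), ("bank", "SHORE")): 8,
    (("river", "WATER"), ("bank", "FINANCE")): 1,
    (("the", "DET"), ("bank", "FINANCE")): 5,
    (("the", "DET"), ("bank", "SHORE")): 5,
})
SENSES = {"river": ["WATER"], "bank": ["SHORE", "FINANCE"], "the": ["DET"]}

def disambiguate(words):
    """Pick the sense assignment whose tagged bigrams score highest."""
    best, best_score = None, -1
    for senses in product(*(SENSES[w] for w in words)):
        tagged = list(zip(words, senses))
        score = sum(BIGRAMS[(a, b)] for a, b in zip(tagged, tagged[1:]))
        if score > best_score:
            best, best_score = senses, score
    return best

print(disambiguate(["river", "bank"]))  # ('WATER', 'SHORE')
```

    In the unsupervised setting the paper discusses, such counts would have to be estimated without sense-annotated text, which is the hard part the framework addresses.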

  4. A Rule-Based System Implementing a Method for Translating FOL Formulas into NL Sentences

    NASA Astrophysics Data System (ADS)

    Mpagouli, Aikaterini; Hatzilygeroudis, Ioannis

    In this paper, we mainly present the implementation of a system that translates first order logic (FOL) formulas into natural language (NL) sentences. The motivation comes from an intelligent tutoring system teaching logic as a knowledge representation language, where the translation is used as a means of feedback to the student users. FOL to NL conversion is achieved by using a rule-based approach, where we exploit the pattern matching capabilities of rules. So, the system consists of rule-based modules corresponding to the phases of our translation methodology. Facts are used in a lexicon providing lexical and grammatical information that helps in producing the NL sentences. The whole system is implemented in Jess, a Java-implemented rule-based programming tool. Experimental results confirm the success of our choices.
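    The pattern-matching flavor of such a translation can be shown with a couple of rules that turn simple FOL formulas into English. The actual system is written in Jess and is far more complete; the regular-expression rules below are only a minimal Python sketch.

```python
import re

RULES = [
    # forall x (P(x) -> Q(x))  =>  "Every P is Q."
    (re.compile(r"forall (\w+) \((\w+)\(\1\) -> (\w+)\(\1\)\)"),
     lambda m: f"Every {m.group(2).lower()} is {m.group(3).lower()}."),
    # exists x (P(x) & Q(x))   =>  "Some P is Q."
    (re.compile(r"exists (\w+) \((\w+)\(\1\) & (\w+)\(\1\)\)"),
     lambda m: f"Some {m.group(2).lower()} is {m.group(3).lower()}."),
]

def fol_to_nl(formula):
    """Apply the first rule whose pattern matches the whole formula."""
    for pattern, render in RULES:
        m = pattern.fullmatch(formula)
        if m:
            return render(m)
    return None  # no rule matched

print(fol_to_nl("forall x (Man(x) -> Mortal(x))"))  # Every man is mortal.
print(fol_to_nl("exists y (Dog(y) & Brown(y))"))    # Some dog is brown.
```

    A lexicon, as in the paper, would replace the crude lowercasing with proper word forms and agreement.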

  5. Evidence for rule-based processes in the inverse base-rate effect.

    PubMed

    Winman, Anders; Wennerholm, Pia; Juslin, Peter; Shanks, David R

    2005-07-01

    Three studies provide convergent evidence that the inverse base-rate effect (Medin & Edelson, 1988) is mediated by rule-based cognitive processes. Experiment 1 shows that, in contrast to adults, prior to the formal operational stage most children do not exhibit the inverse base-rate effect. Experiments 2 and 3 demonstrate that an adult sample is a mix of participants relying on associative processes who categorize according to the base-rate and participants relying on rule-based processes who exhibit a strong inverse base-rate effect. The distribution of the effect is bimodal, and removing participants independently classified as prone to rule-based processing effectively eliminates the inverse base-rate effect. The implications for current explanations of the inverse base-rate effect are discussed. PMID:16194936

  6. e-Science and biological pathway semantics

    PubMed Central

    Luciano, Joanne S; Stevens, Robert D

    2007-01-01

    Background The development of e-Science presents a major set of opportunities and challenges for the future progress of biological and life scientific research. Major new tools are required and corresponding demands are placed on the high-throughput data generated and used in these processes. Nowhere is the demand greater than in the semantic integration of these data. Semantic Web tools and technologies afford the chance to achieve this semantic integration. Since pathway knowledge is central to much of the scientific research today it is a good test-bed for semantic integration. Within the context of biological pathways, the BioPAX initiative, part of a broader movement towards the standardization and integration of life science databases, forms a necessary prerequisite for the successful application of e-Science in health care and life science research. This paper examines whether BioPAX, an effort to overcome the barrier of disparate and heterogeneous pathway data sources, addresses the needs of e-Science. Results We demonstrate how BioPAX pathway data can be used to ask and answer some useful biological questions. We find that BioPAX comes close to meeting a broad range of e-Science needs, but certain semantic weaknesses mean that these goals are missed. We make a series of recommendations for re-modeling some aspects of BioPAX to better meet these needs. Conclusion Once these semantic weaknesses are addressed, it will be possible to integrate pathway information in a manner that would be useful in e-Science. PMID:17493286

  7. Anomia as a Marker of Distinct Semantic Memory Impairments in Alzheimer’s Disease and Semantic Dementia

    PubMed Central

    Reilly, Jamie; Peelle, Jonathan E.; Antonucci, Sharon M.; Grossman, Murray

    2011-01-01

    Objective Many neurologically-constrained models of semantic memory have been informed by two primary temporal lobe pathologies: Alzheimer’s Disease (AD) and Semantic Dementia (SD). However, controversy persists regarding the nature of the semantic impairment associated with these patient populations. Some argue that AD presents as a disconnection syndrome in which linguistic impairment reflects difficulties in lexical or perceptual means of semantic access. In contrast, there is a wider consensus that SD reflects loss of core knowledge that underlies word and object meaning. Object naming provides a window into the integrity of semantic knowledge in these two populations. Method We examined naming accuracy, errors and the correlation of naming ability with neuropsychological measures (semantic ability, executive functioning, and working memory) in a large sample of patients with AD (n=36) and SD (n=21). Results Naming ability and naming errors differed between groups, as did neuropsychological predictors of naming ability. Despite a similar extent of baseline cognitive impairment, SD patients were more anomic than AD patients. Conclusions These results add to a growing body of literature supporting a dual impairment to semantic content and active semantic processing in AD, and confirm the fundamental deficit in semantic content in SD. We interpret these findings as supporting a model of semantic memory premised upon dynamic interactivity between the process and content of conceptual knowledge. PMID:21443339

  8. Hybrid neural network and rule-based pattern recognition system capable of self-modification

    SciTech Connect

    Glover, C.W.; Silliman, M.; Walker, M.; Spelt, P.F.; Rao, N.S.V. (Dept. of Computer Science)

    1990-01-01

    This paper describes a hybrid neural network and rule-based pattern recognition system architecture which is capable of self-modification or learning. The central research issue to be addressed for a multiclassifier hybrid system is whether such a system can perform better than the two classifiers taken by themselves. The hybrid system employs a hierarchical architecture, and it can be interfaced with one or more sensors. Feature extraction routines operating on raw sensor data produce feature vectors which serve as inputs to neural network classifiers at the next level in the hierarchy. These low-level neural networks are trained to provide further discrimination of the sensor data. A set of feature vectors is formed from a concatenation of information from the feature extraction routines and the low-level neural network results. A rule-based classifier system uses this feature set to determine if certain expected environmental states, conditions, or objects are present in the sensors' current data stream. The rule-based system has been given an a priori set of models of the expected environmental states, conditions, or objects which it is expected to identify. The rule-based system forms many candidate directed graphs of various combinations of incoming features vectors, and it uses a suitably chosen metric to measure the similarity between candidate and model directed graphs. The rule-based system must decide if there is a match between one of the candidate graphs and a model graph. If a match is found, then the rule-based system invokes a routine to create and train a new high-level neural network from the appropriate feature vector data to recognize when this model state is present in future sensor data streams. 12 refs., 3 figs.
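    The rule-based layer above scores candidate directed graphs against stored model graphs with "a suitably chosen metric". One simple choice, shown here purely for illustration (the paper does not specify this metric), is Jaccard similarity averaged over node and edge sets.

```python
def graph_similarity(g1, g2):
    """g1, g2: (nodes, edges) pairs, with edges as (src, dst) tuples.
    Returns the mean of node-set and edge-set Jaccard similarities."""
    n1, e1 = set(g1[0]), set(g1[1])
    n2, e2 = set(g2[0]), set(g2[1])
    node_sim = len(n1 & n2) / len(n1 | n2)
    edge_sim = len(e1 & e2) / max(len(e1 | e2), 1)
    return 0.5 * (node_sim + edge_sim)

# A model graph of an expected object and a partial candidate built from
# feature vectors (labels invented for the sketch).
model = (["wheel", "hull", "turret"], [("hull", "wheel"), ("hull", "turret")])
candidate = (["wheel", "hull"], [("hull", "wheel")])
score = graph_similarity(candidate, model)
print(round(score, 3))  # nodes 2/3, edges 1/2 -> 0.583
```

    A match decision would then threshold this score before spawning and training a new high-level network, as the architecture describes.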

  9. GetBonNie for building, analyzing and sharing rule-based models

    SciTech Connect

    Hu, Bin

    2008-01-01

    GetBonNie is a suite of web-based services for building, analyzing, and sharing rule-based models specified according to the conventions of the BioNetGen language (BNGL). Services include (1) an applet for drawing, editing, and viewing graphs of BNGL; (2) a network-generation engine for translating a set of rules into a chemical reaction network; (3) simulation engines that implement generate-first, on-the-fly, and network-free methods for simulating rule-based models; and (4) a database for sharing models, parameter values, annotations, simulation tasks and results.
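    The "generate-first" strategy mentioned in service (3) can be shown in miniature: repeatedly apply rewrite rules to a seed set of species until no new species appear. The string species and the single rule below are toys, not real BNGL.

```python
def generate_network(seed_species, rules, max_iter=10):
    """Apply (pattern, product) string-rewrite rules until closure,
    collecting the reaction network as (reactant, product) pairs."""
    species = set(seed_species)
    reactions = set()
    for _ in range(max_iter):
        new = set()
        for s in species:
            for match, product in rules:
                if match in s:
                    p = s.replace(match, product, 1)  # one site per application
                    reactions.add((s, p))
                    if p not in species:
                        new.add(p)
        if not new:
            break
        species |= new
    return species, reactions

# Rule: a free binding site 'b' becomes bound 'B'.
species, reactions = generate_network({"A(b,b)"}, [("b", "B")])
print(sorted(species))  # ['A(B,B)', 'A(B,b)', 'A(b,b)']
```

    Real engines match structured graph patterns rather than substrings, which is exactly why on-the-fly and network-free simulation become necessary when the generated network is large or infinite.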

  10. Traditional versus rule-based programming techniques: Application to the control of optional flight information

    NASA Technical Reports Server (NTRS)

    Ricks, Wendell R.; Abbott, Kathy H.

    1987-01-01

    The software design community is greatly concerned with the costs associated with a program's execution time and implementation. It is always desirable, and sometimes imperative, to choose the programming technique that minimizes these costs for a given application or type of application. A study is described that compared cost-related factors associated with traditional programming techniques to rule-based programming techniques for a specific application. The results of this study favored the traditional approach regarding execution efficiency, but favored the rule-based approach regarding programmer productivity (implementation ease). Although this study examined a specific application, the results should be widely applicable.

  11. The Semantic Learning Organization

    ERIC Educational Resources Information Center

    Sicilia, Miguel-Angel; Lytras, Miltiadis D.

    2005-01-01

    Purpose: The aim of this paper is to introduce the concept of a "semantic learning organization" (SLO) as an extension of the concept of "learning organization" in the technological domain. Design/methodology/approach: The paper takes existing definitions and conceptualizations of both learning organizations and Semantic Web technology to develop…

  12. Communication: General Semantics Perspectives.

    ERIC Educational Resources Information Center

    Thayer, Lee, Ed.

    This book contains the edited papers from the eleventh International Conference on General Semantics, titled "A Search for Relevance." The conference questioned, as a central theme, the relevance of general semantics in a world of wars and human misery. Reacting to a fundamental Korzybskian principle that man's view of reality is distorted by…

  13. Ontology Reuse in Geoscience Semantic Applications

    NASA Astrophysics Data System (ADS)

    Mayernik, M. S.; Gross, M. B.; Daniels, M. D.; Rowan, L. R.; Stott, D.; Maull, K. E.; Khan, H.; Corson-Rikert, J.

    2015-12-01

    The tension between local ontology development and wider ontology connections is fundamental to the Semantic Web. It is often unclear, however, what the key decision points should be for new semantic web applications in deciding when to reuse existing ontologies and when to develop original ontologies. In addition, with the growth of semantic web ontologies and applications, new semantic web applications can struggle to efficiently and effectively identify and select ontologies to reuse. This presentation will describe the ontology comparison, selection, and consolidation effort within the EarthCollab project. UCAR, Cornell University, and UNAVCO are collaborating on the EarthCollab project to use semantic web technologies to enable the discovery of the research output from a diverse array of projects. The EarthCollab project is using the VIVO Semantic web software suite to increase discoverability of research information and data related to the following two geoscience-based communities: (1) the Bering Sea Project, an interdisciplinary field program whose data archive is hosted by NCAR's Earth Observing Laboratory (EOL), and (2) diverse research projects informed by geodesy through the UNAVCO geodetic facility and consortium. This presentation will outline EarthCollab use cases and provide an overview of key ontologies being used, including the VIVO-Integrated Semantic Framework (VIVO-ISF), Global Change Information System (GCIS), and Data Catalog (DCAT) ontologies. We will discuss issues related to bringing these ontologies together to provide a robust ontological structure to support the EarthCollab use cases. It is rare that a single pre-existing ontology meets all of a new application's needs. New projects need to stitch ontologies together in ways that fit into the broader semantic web ecosystem.

  14. Evaluation of semantic-based information retrieval methods in the autism phenotype domain.

    PubMed

    Hassanpour, Saeed; O'Connor, Martin J; Das, Amar K

    2011-01-01

    Biomedical ontologies are increasingly being used to improve information retrieval methods. In this paper, we present a novel information retrieval approach that exploits knowledge specified by the Semantic Web ontology and rule languages OWL and SWRL. We evaluate our approach using an autism ontology that has 156 SWRL rules defining 145 autism phenotypes. Our approach uses a vector space model to correlate how well these phenotypes relate to the publications used to define them. We compare a vector space phenotype representation using class hierarchies with one that extends this method to incorporate additional semantics encoded in SWRL rules. From a PubMed-extracted corpus of 75 articles, we show that the average rank of a related paper using the class hierarchy method is 4.6, whereas the average rank using the extended rule-based method is 3.3. Our results indicate that incorporating rule-based definitions in information retrieval methods can improve search for relevant publications.
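    The vector-space comparison in this abstract can be sketched as follows: represent a phenotype as a bag of terms (drawn from its class hierarchy, optionally extended with terms pulled from its SWRL rule definition) and rank documents by cosine similarity. The terms and documents below are invented for illustration.

```python
import math
from collections import Counter

def cosine(a, b):
    """Cosine similarity between two term-frequency Counters."""
    num = sum(a[t] * b[t] for t in a.keys() & b.keys())
    den = math.sqrt(sum(v * v for v in a.values())) * \
          math.sqrt(sum(v * v for v in b.values()))
    return num / den if den else 0.0

def rank(query_terms, docs):
    """Return document names ordered by similarity to the query terms."""
    q = Counter(query_terms)
    scored = sorted(docs.items(),
                    key=lambda kv: cosine(q, Counter(kv[1])), reverse=True)
    return [name for name, _ in scored]

docs = {
    "paper_a": "social interaction deficit language delay".split(),
    "paper_b": "motor skills gait balance".split(),
}
hierarchy_only = ["language", "deficit"]
rule_extended = hierarchy_only + ["social", "interaction"]  # terms from a SWRL body
print(rank(rule_extended, docs))  # paper_a ranks first
```

    The paper's finding is that enriching the query vector with rule-derived terms (the `rule_extended` case) moves the truly related publication up the ranking.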

  15. Enhancing medical database semantics.

    PubMed Central

    Leão, B. de F.; Pavan, A.

    1995-01-01

    Medical databases deal with dynamic, heterogeneous and fuzzy data. Modeling such a complex domain demands powerful semantic data modeling methodologies. This paper describes GSM-Explorer, a CASE tool that allows the creation of relational databases using semantic data modeling techniques. GSM-Explorer fully incorporates the Generic Semantic Data Model (GSM), enabling knowledge engineers to model the application domain with the abstraction mechanisms of generalization/specialization, association and aggregation. The tool generates a structure that implements persistent database objects through the automatic generation of customized ANSI SQL scripts that sustain the semantics defined at the higher level. This paper emphasizes the system architecture and the mapping of the semantic model into relational tables. The present status of the project and its further developments are discussed in the Conclusions. PMID:8563288
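    The mapping of a semantic model onto relational tables can be illustrated with a minimal generator. One common way to map generalization/specialization onto SQL, used in the sketch below, is to give the specialized table a primary key that is also a foreign key to its parent; the entity definitions and naming scheme are invented, not GSM-Explorer's.

```python
def generate_ddl(entities):
    """entities: name -> {'attrs': {column: sql_type}, 'isa': parent or None}.
    Emits one CREATE TABLE per entity; specializations reference the parent."""
    ddl = []
    for name, spec in entities.items():
        cols = ["id INTEGER PRIMARY KEY"]
        cols += [f"{col} {typ}" for col, typ in spec["attrs"].items()]
        if spec["isa"]:
            cols.append(f"FOREIGN KEY (id) REFERENCES {spec['isa']}(id)")
        ddl.append(f"CREATE TABLE {name} (\n  " + ",\n  ".join(cols) + "\n);")
    return "\n".join(ddl)

model = {
    "person": {"attrs": {"name": "VARCHAR(80)"}, "isa": None},
    "patient": {"attrs": {"record_no": "INTEGER"}, "isa": "person"},
}
ddl = generate_ddl(model)
print(ddl)
```

    Association and aggregation would map to further foreign keys and link tables in the same spirit.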

  16. Order Theoretical Semantic Recommendation

    SciTech Connect

    Joslyn, Cliff A.; Hogan, Emilie A.; Paulson, Patrick R.; Peterson, Elena S.; Stephan, Eric G.; Thomas, Dennis G.

    2013-07-23

    Mathematical concepts of order and ordering relations play multiple roles in semantic technologies. Discrete totally ordered data characterize both input streams and top-k rank-ordered recommendations and query output, while temporal attributes establish numerical total orders, either over time points or in the more complex case of start-end temporal intervals. But also of note are the fully partially ordered data, including both lattices and non-lattices, which actually dominate the semantic structure of ontological systems. Scalar semantic similarities over partially-ordered semantic data are traditionally used to return rank-ordered recommendations, but these require complementation with true metrics available over partially ordered sets. In this paper we report on our work in the foundations of partial order measurement in ontologies, with application to top-k semantic recommendation in workflows.
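    One concrete example of a true metric on a partially ordered set, as opposed to a scalar similarity, is shortest-path distance in the undirected Hasse (cover) diagram of a connected ontology. The toy hierarchy and BFS below are illustrative only and are not the measures developed in the paper.

```python
from collections import deque

COVERS = {  # child -> parents (toy is-a hierarchy)
    "thing": [], "agent": ["thing"], "document": ["thing"],
    "person": ["agent"], "organization": ["agent"],
}

def neighbors(node):
    """Undirected neighbors in the cover graph: parents plus children."""
    ups = COVERS[node]
    downs = [c for c, ps in COVERS.items() if node in ps]
    return ups + downs

def poset_distance(a, b):
    """BFS shortest-path distance in the undirected cover graph (a metric)."""
    frontier, seen, dist = deque([a]), {a}, {a: 0}
    while frontier:
        x = frontier.popleft()
        if x == b:
            return dist[x]
        for y in neighbors(x):
            if y not in seen:
                seen.add(y)
                dist[y] = dist[x] + 1
                frontier.append(y)
    return None  # disconnected

print(poset_distance("person", "organization"))  # 2 (via 'agent')
print(poset_distance("person", "document"))      # 3 (via 'agent' and 'thing')
```

    Unlike a similarity score, this distance satisfies symmetry and the triangle inequality, which is what makes principled top-k ranking over ontological structure possible.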

  17. Semantics, Pragmatics, and the Nature of Semantic Theories

    ERIC Educational Resources Information Center

    Spewak, David Charles, Jr.

    2013-01-01

    The primary concern of this dissertation is determining the distinction between semantics and pragmatics and how context sensitivity should be accommodated within a semantic theory. I approach the question over how to distinguish semantics from pragmatics from a new angle by investigating what the objects of a semantic theory are, namely…

  18. A Relation Routing Scheme for Distributed Semantic Media Query

    PubMed Central

    Liao, Zhuhua; Zhang, Guoqiang; Yi, Aiping; Zhang, Guoqing; Liang, Wei

    2013-01-01

    Performing complex semantic queries over large-scale distributed media contents is a challenging task for rich media applications. The dynamics and openness of data sources make it difficult to realize a query scheme that simultaneously achieves precision, scalability, and reliability. In this paper, a novel relation routing scheme (RRS) is proposed by renovating the routing model of Content Centric Network (CCN) for directly querying large-scale semantic media content. By using a proper query model and routing mechanism, semantic queries with complex relation constraints from users can be guided towards potential media sources through semantic guider nodes. The scattered and fragmented query results can be integrated on their way back for semantic needs or to avoid duplication. Several new techniques, such as semantic-based naming, incomplete response avoidance, timeout checking, and semantic integration, are developed in this paper to improve the accuracy, efficiency, and practicality of the proposed approach. Both analytical and experimental results show that the proposed scheme is a promising and effective solution for complex semantic queries and integration over large-scale networks. PMID:24319383

  19. Semantic processing of EHR data for clinical research.

    PubMed

    Sun, Hong; Depraetere, Kristof; De Roo, Jos; Mels, Giovanni; De Vloed, Boris; Twagirumukiza, Marc; Colaert, Dirk

    2015-12-01

    There is a growing need to semantically process and integrate clinical data from different sources for clinical research. This paper presents an approach to integrate EHRs from heterogeneous resources and generate integrated data in different data formats or semantics to support various clinical research applications. The proposed approach builds semantic data virtualization layers on top of data sources, which generate data in the requested semantics or formats on demand. This approach avoids upfront dumping to and synchronizing of the data with various representations. Data from different EHR systems are first mapped to RDF data with source semantics, and then converted to representations with harmonized domain semantics where domain ontologies and terminologies are used to improve reusability. It is also possible to further convert data to application semantics and store the converted results in clinical research databases, e.g. i2b2, OMOP, to support different clinical research settings. Semantic conversions between different representations are explicitly expressed using N3 rules and executed by an N3 Reasoner (EYE), which can also generate proofs of the conversion processes. The solution presented in this paper has been applied to real-world applications that process large scale EHR data.
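    The paper expresses conversions between source, domain, and application semantics as N3 rules executed by the EYE reasoner. A minimal stand-in for that idea: treat each rule as a function from one source triple to zero or more harmonized triples. The predicate names (`src:sex`, `domain:gender`, `domain:dateOfBirth`) are invented placeholders, not real terminology bindings.

```python
def harmonize_gender(triple):
    """Map a source-specific sex code to a harmonized gender triple."""
    s, p, o = triple
    if p == "src:sex":
        return [(s, "domain:gender", {"M": "male", "F": "female"}[o])]
    return []

def harmonize_dob(triple):
    """Rename a source birth-date predicate into the domain vocabulary."""
    s, p, o = triple
    if p == "src:birth_date":
        return [(s, "domain:dateOfBirth", o)]
    return []

def apply_rules(triples, rules):
    """Run every rule over every triple, collecting harmonized output."""
    out = []
    for t in triples:
        for rule in rules:
            out.extend(rule(t))
    return out

source = [("patient/1", "src:sex", "F"),
          ("patient/1", "src:birth_date", "1980-04-02")]
harmonized = apply_rules(source, [harmonize_gender, harmonize_dob])
print(harmonized)
```

    The virtualization layers described above amount to composing such rule sets on demand, with the reasoner additionally able to emit a proof of each conversion.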

  1. A rule-based expert system for chemical prioritization using effects-based chemical categories

    EPA Science Inventory

    A rule-based expert system (ES) was developed to predict chemical binding to the estrogen receptor (ER) patterned on the research approaches championed by Gilman Veith to whom this article and journal issue are dedicated. The ERES was built to be mechanistically-transparent and m...

  2. Rule-based approach to operating system selection: RMS vs. UNIX

    SciTech Connect

    Phifer, M.S.; Sadlowe, A.R.; Emrich, M.L.; Gadagkar, H.P.

    1988-10-01

    A rule-based system is under development for choosing computer operating systems. Following a brief historical account, this paper compares and contrasts the essential features of two operating systems, highlighting particular applications. AT&T's UNIX System and Datapoint Corporation's Resource Management System (RMS) are used as illustrative examples. 11 refs., 3 figs.

  3. Haunted by a doppelgänger: irrelevant facial similarity affects rule-based judgments.

    PubMed

    von Helversen, Bettina; Herzog, Stefan M; Rieskamp, Jörg

    2014-01-01

    Judging other people is a common and important task. Every day professionals make decisions that affect the lives of other people when they diagnose medical conditions, grant parole, or hire new employees. To prevent discrimination, professional standards require that decision makers render accurate and unbiased judgments solely based on relevant information. Facial similarity to previously encountered persons can be a potential source of bias. Psychological research suggests that people only rely on similarity-based judgment strategies if the provided information does not allow them to make accurate rule-based judgments. Our study shows, however, that facial similarity to previously encountered persons influences judgment even in situations in which relevant information is available for making accurate rule-based judgments and where similarity is irrelevant for the task and relying on similarity is detrimental. In two experiments in an employment context we show that applicants who looked similar to high-performing former employees were judged as more suitable than applicants who looked similar to low-performing former employees. This similarity effect was found despite the fact that the participants used the relevant résumé information about the applicants by following a rule-based judgment strategy. These findings suggest that similarity-based and rule-based processes simultaneously underlie human judgment. PMID:23895921

  4. Age affects chunk-based, but not rule-based learning in artificial grammar acquisition.

    PubMed

    Kürten, Julia; De Vries, Meinou H; Kowal, Kristina; Zwitserlood, Pienie; Flöel, Agnes

    2012-07-01

    Explicit learning is well known to decline with age, but divergent results have been reported for implicit learning. Here, we assessed the effect of aging on implicit vs. explicit learning within the same task. Fifty-five young (mean 32 years) and 55 elderly (mean 64 years) individuals were exposed to letter strings generated by an artificial grammar. Subsequently, participants classified novel strings as grammatical or nongrammatical. Acquisition of superficial ("chunk-based") and structural ("rule-based") features of the grammar were analyzed separately. We found that overall classification accuracy was diminished in the elderly, driven by decreased performance on items that required chunk-based knowledge. Performance on items requiring rule-based knowledge was comparable between groups. Results indicate that rule-based and chunk-based learning are differentially affected by age: while rule-based learning, reflecting implicit learning, is preserved, chunk-based learning, which contains at least some explicit learning aspects, declines with age. Our findings may explain divergent results on implicit learning tasks in previous studies on aging. They may also help to better understand compensatory mechanisms during the aging process.

  5. A Defense of Semantic Minimalism

    ERIC Educational Resources Information Center

    Kim, Su

    2012-01-01

    Semantic Minimalism is a position about the semantic content of declarative sentences, i.e., the content that is determined entirely by syntax. It is defined by the following two points: "Point 1": The semantic content is a complete/truth-conditional proposition. "Point 2": The semantic content is useful to a theory of…

  6. A Semantic Graph Query Language

    SciTech Connect

    Kaplan, I L

    2006-10-16

    Semantic graphs can be used to organize large amounts of information from a number of sources into one unified structure. A semantic query language provides a foundation for extracting information from the semantic graph. The graph query language described here provides a simple, powerful method for querying semantic graphs.

  7. Semantic Theory: A Linguistic Perspective.

    ERIC Educational Resources Information Center

    Nilsen, Don L. F.; Nilsen, Alleen Pace

    This book attempts to bring linguists and language teachers up to date on the latest developments in semantics. A survey of the role of semantics in linguistics and other academic areas is followed by a historical perspective of semantics in American linguistics. Various semantic models are discussed. Anomaly, ambiguity, and discourse are…

  8. Allergen databases and allergen semantics.

    PubMed

    Gendel, Steven M

    2009-08-01

    The efficacy of any specific bioinformatic analysis of the potential allergenicity of new food proteins depends directly on the nature and content of the databases that are used in the analysis. A number of different allergen-related databases have been developed, each designed to meet a different need. These databases differ in content, organization, and accessibility. These differences create barriers for users and prevent data sharing and integration. The development and application of appropriate semantic web technologies (for example, a food allergen ontology) could help to overcome these barriers and promote the development of more advanced analytic capabilities.

  9. Exact hybrid particle/population simulation of rule-based models of biochemical systems.

    PubMed

    Hogg, Justin S; Harris, Leonard A; Stover, Lori J; Nair, Niketh S; Faeder, James R

    2014-04-01

    Detailed modeling and simulation of biochemical systems is complicated by the problem of combinatorial complexity, an explosion in the number of species and reactions due to myriad protein-protein interactions and post-translational modifications. Rule-based modeling overcomes this problem by representing molecules as structured objects and encoding their interactions as pattern-based rules. This greatly simplifies the process of model specification, avoiding the tedious and error-prone task of manually enumerating all species and reactions that can potentially exist in a system. From a simulation perspective, rule-based models can be expanded algorithmically into fully-enumerated reaction networks and simulated using a variety of network-based simulation methods, such as ordinary differential equations or Gillespie's algorithm, provided that the network is not exceedingly large. Alternatively, rule-based models can be simulated directly using particle-based kinetic Monte Carlo methods. This "network-free" approach produces exact stochastic trajectories with a computational cost that is independent of network size. However, memory and run time costs increase with the number of particles, limiting the size of the system that can be feasibly simulated. Here, we present a hybrid particle/population simulation method that combines the best attributes of both the network-based and network-free approaches. The method takes as input a rule-based model and a user-specified subset of species to treat as population variables rather than as particles. The model is then transformed by a process of "partial network expansion" into a dynamically equivalent form that can be simulated using a population-adapted network-free simulator. The transformation method has been implemented within the open-source rule-based modeling platform BioNetGen, and resulting hybrid models can be simulated using the particle-based simulator NFsim. Performance tests show that significant memory savings
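The combinatorial explosion that motivates rule-based modeling, and the "network expansion" step that turns a rule into a fully enumerated reaction network, can be illustrated with a toy model. This is a conceptual sketch only, not BioNetGen's actual algorithm: a single rule ("phosphorylate any free site" on a hypothetical n-site protein) expands into 2^n species.

```python
# A protein with n modification sites; one rule applied exhaustively
# enumerates every species (subset of phosphorylated sites) and every
# phosphorylation reaction reachable from the unmodified seed species.
def expand_network(n_sites):
    """Fully enumerate species and reactions generated by the single rule."""
    seed = frozenset()                    # no site phosphorylated yet
    species, reactions, frontier = {seed}, [], [seed]
    while frontier:
        s = frontier.pop()
        for site in range(n_sites):
            if site not in s:             # the rule matches each free site
                product = frozenset(s | {site})
                reactions.append((s, product))
                if product not in species:
                    species.add(product)
                    frontier.append(product)
    return species, reactions

species, reactions = expand_network(3)
print(len(species))    # 2**3 = 8 species
print(len(reactions))  # 12 distinct phosphorylation reactions
```

The rule itself stays one line no matter how large n gets, while the expanded network grows exponentially — which is why direct, particle-based ("network-free") simulation of the rules becomes attractive.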

  10. Spatiotemporal convergence of semantic processing in reading and speech perception.

    PubMed

    Vartiainen, Johanna; Parviainen, Tiina; Salmelin, Riitta

    2009-07-22

    Retrieval of word meaning from the semantic system and its integration with context are often assumed to be shared by spoken and written words. How is modality-independent semantic processing manifested in the brain, spatially and temporally? Time-sensitive neuroimaging allows tracking of neural activation sequences. Use of semantically related versus unrelated word pairs or sentences ending with a semantically highly or less plausible word, in separate studies of the auditory and visual modality, has associated lexical-semantic analysis with sustained activation at approximately 200-800 ms. Magnetoencephalography (MEG) studies have further identified the superior temporal cortex as a main locus of the semantic effect. Nevertheless, a direct comparison of the spatiotemporal neural correlates of visual and auditory word comprehension in the same brain is lacking. We used MEG to compare lexical-semantic analysis in the visual and auditory domain in the same individuals, and contrasted it with phonological analysis that, according to models of language perception, should occur at a different time with respect to semantic analysis in reading and speech perception. The stimuli were lists of four words that were either semantically or phonologically related, or with the final word unrelated to the preceding context. Superior temporal activation reflecting semantic processing occurred similarly in the two modalities, left-lateralized at 300-450 ms and thereafter bilaterally, generated in close-by areas. Effect of phonology preceded the semantic effect in speech perception but not in reading. The present data indicate involvement of the middle superior temporal cortex in semantic processing from approximately 300 ms onwards, regardless of input modality.

  11. The MMI Semantic Framework: Rosetta Stones for Earth Sciences

    NASA Astrophysics Data System (ADS)

    Rueda, C.; Bermudez, L. E.; Graybeal, J.; Alexander, P.

    2009-12-01

    Semantic interoperability—the exchange of meaning among computer systems—is needed to successfully share data in Ocean Science and across all Earth sciences. The best approach toward semantic interoperability requires a designed framework, and operationally tested tools and infrastructure within that framework. Currently available technologies make a scientific semantic framework feasible, but its development requires sustainable architectural vision and development processes. This presentation outlines the MMI Semantic Framework, including recent progress on it and its client applications. The MMI Semantic Framework consists of tools, infrastructure, and operational and community procedures and best practices, to meet short-term and long-term semantic interoperability goals. The design and prioritization of the semantic framework capabilities are based on real-world scenarios in Earth observation systems. We describe some key use cases, as well as the associated requirements for building the overall infrastructure, which is realized through the MMI Ontology Registry and Repository. This system includes support for community creation and sharing of semantic content, ontology registration, version management, and seamless integration of user-friendly tools and application programming interfaces. The presentation describes the architectural components for semantic mediation, registry and repository for vocabularies, ontology, and term mappings. We show how the technologies and approaches in the framework can address community needs for managing and exchanging semantic information. We will demonstrate how different types of users and client applications exploit the tools and services for data aggregation, visualization, archiving, and integration. Specific examples from OOSTethys (http://www.oostethys.org) and the Ocean Observatories Initiative Cyberinfrastructure (http://www.oceanobservatories.org) will be cited. Finally, we show how semantic augmentation of web

  12. Trusting Crowdsourced Geospatial Semantics

    NASA Astrophysics Data System (ADS)

    Goodhue, P.; McNair, H.; Reitsma, F.

    2015-08-01

    The degree of trust one can place in information is one of the foremost limitations of crowdsourced geospatial information. As with the development of web technologies, the increased prevalence of semantics associated with geospatial information has increased accessibility and functionality. Semantics also provides an opportunity to extend indicators of trust for crowdsourced geospatial information, which have largely focused on spatio-temporal and social aspects of that information. Comparing a feature's intrinsic and extrinsic properties to associated ontologies provides a means of semantically assessing the trustworthiness of crowdsourced geospatial information. The application of this approach to unconstrained semantic submissions then allows for a detailed assessment of the trust of these features whilst maintaining the descriptive thoroughness this mode of information submission affords. The resulting trust rating then becomes an attribute of the feature, providing not only an indication of the trustworthiness of a specific feature but also a value that can be aggregated across multiple features to illustrate the overall trustworthiness of a dataset.

  13. Algebraic Semantics for Narrative

    ERIC Educational Resources Information Center

    Kahn, E.

    1974-01-01

    This paper uses discussion of Edmund Spenser's "The Faerie Queene" to present a theoretical framework for explaining the semantics of narrative discourse. The algebraic theory of finite automata is used. (CK)

  14. "Pre-semantic" cognition revisited: critical differences between semantic aphasia and semantic dementia.

    PubMed

    Jefferies, Elizabeth; Rogers, Timothy T; Hopper, Samantha; Ralph, Matthew A Lambon

    2010-01-01

    Patients with semantic dementia show a specific pattern of impairment on both verbal and non-verbal "pre-semantic" tasks, e.g., reading aloud, past tense generation, spelling to dictation, lexical decision, object decision, colour decision and delayed picture copying. All seven tasks are characterised by poorer performance for items that are atypical of the domain and "regularization errors" (irregular/atypical items are produced as if they were domain-typical). The emergence of this pattern across diverse tasks in the same patients indicates that semantic memory plays a key role in all of these types of "pre-semantic" processing. However, this claim remains controversial because semantically impaired patients sometimes fail to show an influence of regularity. This study demonstrates that (a) the location of brain damage and (b) the underlying nature of the semantic deficit affect the likelihood of observing the expected relationship between poor comprehension and regularity effects. We compared the effect of multimodal semantic impairment in the context of semantic dementia and stroke aphasia on the seven "pre-semantic" tasks listed above. In all of these tasks, the semantic aphasia patients were less sensitive to typicality than the semantic dementia patients, even though the two groups obtained comparable scores on semantic tests. The semantic aphasia group also made fewer regularization errors and many more unrelated and perseverative responses. We propose that these group differences reflect the different locus for the semantic impairment in the two conditions: patients with semantic dementia have degraded semantic representations, whereas semantic aphasia patients show deregulated semantic cognition with concomitant executive deficits. These findings suggest a reinterpretation of single-case studies of comprehension-impaired aphasic patients who fail to show the expected effect of regularity on "pre-semantic" tasks. Consequently, such cases do not demonstrate

  15. Integrated data management for clinical studies: automatic transformation of data models with semantic annotations for principal investigators, data managers and statisticians.

    PubMed

    Dugas, Martin; Dugas-Breit, Susanne

    2014-01-01

    Design, execution and analysis of clinical studies involves several stakeholders with different professional backgrounds. Typically, principal investigators are familiar with standard office tools, data managers apply electronic data capture (EDC) systems and statisticians work with statistics software. Case report forms (CRFs) specify the data model of study subjects, evolve over time and consist of hundreds to thousands of data items per study. To avoid erroneous manual transformation work, a converting tool for different representations of study data models was designed. It can convert between office format, EDC and statistics format. In addition, it supports semantic annotations, which enable precise definitions for data items. A reference implementation is available as open source package ODMconverter at http://cran.r-project.org. PMID:24587378

  16. Enhanced semantic interpretability by healthcare standards profiling.

    PubMed

    Lopez, Diego M; Blobel, Bernd G M E

    2008-01-01

    Several current healthcare standards support semantic interoperability. These standards are, however, far from being completely adopted in health information system development. The objective of this paper is to provide a method and the necessary tooling for reusing healthcare standards by exploiting the extensibility mechanisms of UML, thereby supporting the development of semantically interoperable systems and components. The method first identifies the models and tasks in the software development process in which healthcare standards can be reused. Then, the selected standard is formalized as a UML profile. Finally, that profile is applied to system models, annotating them with the standard's semantics. The supporting tools are Eclipse-based UML modeling tools. The method is integrated into a comprehensive framework for health information systems development. The feasibility of the approach is exemplified by a scenario reusing HL7 RIM and DIM specifications. The approach presented is also applicable for harmonizing different standard specifications.

  17. Teaching Spelling to Students with Learning Disabilities: A Comparison of Rule-Based Strategies versus Traditional Instruction

    ERIC Educational Resources Information Center

    Darch, Craig; Eaves, Ronald C.; Crowe, D. Alan; Simmons, Kate; Conniff, Alexandra

    2006-01-01

    This study compared two instructional methods for teaching spelling to elementary students with learning disabilities (LD). Forty-two elementary students with LD were randomly assigned to one of two instructional groups to teach spelling words: (a) a rule-based strategy group that focused on teaching students spelling rules (based on the "Spelling…

  18. Semantic querying of relational data for clinical intelligence: a semantic web services-based approach

    PubMed Central

    2013-01-01

    Background Clinical Intelligence, as a research and engineering discipline, is dedicated to the development of tools for data analysis for the purposes of clinical research, surveillance, and effective health care management. Self-service ad hoc querying of clinical data is one desirable type of functionality. Since most of the data are currently stored in relational or similar form, ad hoc querying is problematic as it requires specialised technical skills and the knowledge of particular data schemas. Results A possible solution is semantic querying where the user formulates queries in terms of domain ontologies that are much easier to navigate and comprehend than data schemas. In this article, we are exploring the possibility of using SADI Semantic Web services for semantic querying of clinical data. We have developed a prototype of a semantic querying infrastructure for the surveillance of, and research on, hospital-acquired infections. Conclusions Our results suggest that SADI can support ad-hoc, self-service, semantic queries of relational data in a Clinical Intelligence context. The use of SADI compares favourably with approaches based on declarative semantic mappings from data schemas to ontologies, such as query rewriting and RDFizing by materialisation, because it can easily cope with situations when (i) some computation is required to turn relational data into RDF or OWL, e.g., to implement temporal reasoning, or (ii) integration with external data sources is necessary. PMID:23497556

  19. Clustering and rule-based classifications of chemical structures evaluated in the biological activity space.

    PubMed

    Schuffenhauer, Ansgar; Brown, Nathan; Ertl, Peter; Jenkins, Jeremy L; Selzer, Paul; Hamon, Jacques

    2007-01-01

    Classification methods for data sets of molecules according to their chemical structure were evaluated for their biological relevance, including rule-based, scaffold-oriented classification methods and clustering based on molecular descriptors. Three data sets resulting from uniformly determined in vitro biological profiling experiments were classified according to their chemical structures, and the results were compared in a Pareto analysis with the number of classes and their average spread in the profile space as two concurrent objectives which were to be minimized. It has been found that no classification method is overall superior to all other studied methods, but there is a general trend that rule-based, scaffold-oriented methods are the better choice if classes with homogeneous biological activity are required, but a large number of clusters can be tolerated. On the other hand, clustering based on chemical fingerprints is superior if fewer and larger classes are required, and some loss of homogeneity in biological activity can be accepted.

  20. A self-learning rule base for command following in dynamical systems

    NASA Technical Reports Server (NTRS)

    Tsai, Wei K.; Lee, Hon-Mun; Parlos, Alexander

    1992-01-01

    In this paper, a self-learning Rule Base for command following in dynamical systems is presented. The learning is accomplished through reinforcement learning using an associative memory called SAM. The main advantage of SAM is that it is a function approximator with explicit storage of training samples. A learning algorithm patterned after dynamic programming is proposed. Two artificially created, unstable dynamical systems are used for testing, and the Rule Base was used to generate a feedback control to improve the command following ability of the otherwise uncontrolled systems. The numerical results are very encouraging. The controlled systems exhibit a more stable behavior and a better capability to follow reference commands. The rules resulting from the reinforcement learning are explicitly stored and they can be modified or augmented by human experts. Due to the overlapping storage scheme of SAM, the stored rules are similar to fuzzy rules.

  1. Creating an ontology driven rules base for an expert system for medical diagnosis.

    PubMed

    Bertaud Gounot, Valérie; Donfack, Valéry; Lasbleiz, Jérémy; Bourde, Annabel; Duvauferrier, Régis

    2011-01-01

    Expert systems of the 1980s failed because of the difficulty of maintaining large rule bases. The current work proposes a method to build and maintain rule bases grounded on ontologies (like the NCIT). The process described here for an expert system on plasma cell disorders encompasses extraction of a sub-ontology and automatic, comprehensive generation of production rules. The creation of rules is not based directly on classes, but on individuals (instances). Instances can be considered as prototypes of diseases formally defined by "restrictions" in the ontology. Thus, it is possible to use this process to make diagnoses of diseases. The perspectives of this work are also considered: the process described with an ontology formalized in OWL 1 can be extended by using an ontology in OWL 2, allowing reasoning about numerical data in addition to symbolic data. PMID:21893840
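The instance-to-rule idea described in the abstract — disease prototypes defined by criteria, each compiled into a production rule — can be sketched as follows. The disease names and findings below are invented placeholders for illustration, not clinical definitions or the paper's actual ontology content.

```python
# Hypothetical disease "prototype" instances, each formally defined by a
# set of required findings (standing in for OWL restrictions).
prototypes = {
    "DiseaseA": {"finding:x", "finding:y"},
    "DiseaseB": {"finding:y", "finding:z"},
}

def generate_rules(prototypes):
    """Compile one production rule per instance: criteria -> diagnosis."""
    return [(criteria, name) for name, criteria in sorted(prototypes.items())]

def diagnose(findings, rules):
    """Fire every rule whose criteria are all present in the findings."""
    return [name for criteria, name in rules if criteria <= findings]

rules = generate_rules(prototypes)
print(diagnose({"finding:x", "finding:y", "finding:z"}, rules))  # ['DiseaseA', 'DiseaseB']
print(diagnose({"finding:y"}, rules))                            # []
```

Because the rules are generated from the ontology's instances rather than written by hand, updating the ontology and regenerating keeps the rule base maintainable — the maintenance problem the abstract attributes to 1980s expert systems.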

  2. Spatial Queries Entity Recognition and Disambiguation Using Rule-Based Approach

    NASA Astrophysics Data System (ADS)

    Hamzei, E.; Hakimpour, F.; Forati, A.

    2015-12-01

    In the digital world, search engines have become one of the most challenging research areas. One of the main issues in search engine studies is query processing, whose aim is to understand the user's needs. If an unsuitable spatial query processing approach is employed, the results will be associated with a high degree of ambiguity. To avoid such ambiguity, in this paper we present a new algorithm that relies on rule-based systems to process queries. Our algorithm is implemented in three basic steps: deductively and iteratively splitting the query; finding candidates for the location names, the location types and spatial relationships; and finally checking the relationships logically and conceptually using a rule-based system. As we show in the paper, our proposed method has two major advantages: the search engine can provide spatial analysis capabilities based on this specific process, and, because of its disambiguation technique, the user obtains more desirable results.
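The three steps the abstract describes (split the query, find candidates for place names / location types / spatial relations, then check the combination with rules) can be sketched with toy gazetteers. Every word list and the validity rule below are invented for illustration; they are not the paper's actual lexicons or rule base.

```python
# Toy candidate lists (hypothetical; a real system would use gazetteers
# and an ontology of place types and spatial relations).
LOCATION_TYPES = {"restaurant", "hotel", "park"}
RELATIONS = {"near", "in", "at"}
GAZETTEER = {"tehran", "paris"}

def parse_spatial_query(query):
    """Step 1: split; step 2: find candidates; step 3: rule-check them."""
    tokens = query.lower().split()
    parsed = {
        "type": [t for t in tokens if t in LOCATION_TYPES],
        "relation": [t for t in tokens if t in RELATIONS],
        "place": [t for t in tokens if t in GAZETTEER],
    }
    # Example consistency rule: a spatial relation requires an anchor place
    # ("restaurant near Tehran" is valid; "restaurant near" is not).
    parsed["valid"] = bool(parsed["relation"]) <= bool(parsed["place"])
    return parsed

result = parse_spatial_query("restaurant near Tehran")
print(result["type"], result["relation"], result["place"], result["valid"])
```

Tagging the query into typed parts first is what lets the final rule check disambiguate, e.g. rejecting parses where a relation has no place to attach to.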

  3. Rule-based mechanisms of learning for intelligent adaptive flight control

    NASA Technical Reports Server (NTRS)

    Handelman, David A.; Stengel, Robert F.

    1990-01-01

    How certain aspects of human learning can be used to characterize learning in intelligent adaptive control systems is investigated. Reflexive and declarative memory and learning are described. It is shown that model-based systems-theoretic adaptive control methods exhibit attributes of reflexive learning, whereas the problem-solving capabilities of knowledge-based systems of artificial intelligence are naturally suited for implementing declarative learning. Issues related to learning in knowledge-based control systems are addressed, with particular attention given to rule-based systems. A mechanism for real-time rule-based knowledge acquisition is suggested, and utilization of this mechanism within the context of failure diagnosis for fault-tolerant flight control is demonstrated.

  4. Adventures in semantic publishing: exemplar semantic enhancements of a research article.

    PubMed

    Shotton, David; Portwin, Katie; Klyne, Graham; Miles, Alistair

    2009-04-01

    Scientific innovation depends on finding, integrating, and re-using the products of previous research. Here we explore how recent developments in Web technology, particularly those related to the publication of data and metadata, might assist that process by providing semantic enhancements to journal articles within the mainstream process of scholarly journal publishing. We exemplify this by describing semantic enhancements we have made to a recent biomedical research article taken from PLoS Neglected Tropical Diseases, providing enrichment to its content and increased access to datasets within it. These semantic enhancements include provision of live DOIs and hyperlinks; semantic markup of textual terms, with links to relevant third-party information resources; interactive figures; a re-orderable reference list; a document summary containing a study summary, a tag cloud, and a citation analysis; and two novel types of semantic enrichment: the first, a Supporting Claims Tooltip to permit "Citations in Context", and the second, Tag Trees that bring together semantically related terms. In addition, we have published downloadable spreadsheets containing data from within tables and figures, have enriched these with provenance information, and have demonstrated various types of data fusion (mashups) with results from other research articles and with Google Maps. We have also published machine-readable RDF metadata both about the article and about the references it cites, for which we developed a Citation Typing Ontology, CiTO (http://purl.org/net/cito/). The enhanced article, which is available at http://dx.doi.org/10.1371/journal.pntd.0000228.x001, presents a compelling existence proof of the possibilities of semantic publication. We hope the showcase of examples and ideas it contains, described in this paper, will excite the imaginations of researchers and publishers, stimulating them to explore the possibilities of semantic publishing for their own research articles.

  5. Adventures in Semantic Publishing: Exemplar Semantic Enhancements of a Research Article

    PubMed Central

    Shotton, David; Portwin, Katie; Klyne, Graham; Miles, Alistair

    2009-01-01

    Scientific innovation depends on finding, integrating, and re-using the products of previous research. Here we explore how recent developments in Web technology, particularly those related to the publication of data and metadata, might assist that process by providing semantic enhancements to journal articles within the mainstream process of scholarly journal publishing. We exemplify this by describing semantic enhancements we have made to a recent biomedical research article taken from PLoS Neglected Tropical Diseases, providing enrichment to its content and increased access to datasets within it. These semantic enhancements include provision of live DOIs and hyperlinks; semantic markup of textual terms, with links to relevant third-party information resources; interactive figures; a re-orderable reference list; a document summary containing a study summary, a tag cloud, and a citation analysis; and two novel types of semantic enrichment: the first, a Supporting Claims Tooltip to permit “Citations in Context”, and the second, Tag Trees that bring together semantically related terms. In addition, we have published downloadable spreadsheets containing data from within tables and figures, have enriched these with provenance information, and have demonstrated various types of data fusion (mashups) with results from other research articles and with Google Maps. We have also published machine-readable RDF metadata both about the article and about the references it cites, for which we developed a Citation Typing Ontology, CiTO (http://purl.org/net/cito/). The enhanced article, which is available at http://dx.doi.org/10.1371/journal.pntd.0000228.x001, presents a compelling existence proof of the possibilities of semantic publication. We hope the showcase of examples and ideas it contains, described in this paper, will excite the imaginations of researchers and publishers, stimulating them to explore the possibilities of semantic publishing for their own research

  6. A two-stage evolutionary process for designing TSK fuzzy rule-based systems.

    PubMed

    Cordon, O; Herrera, F

    1999-01-01

    Nowadays, fuzzy rule-based systems are successfully applied to many different real-world problems. Unfortunately, relatively few well-structured methodologies exist for designing them and, in many cases, human experts are not able to express the knowledge needed to solve the problem in the form of fuzzy rules. Takagi-Sugeno-Kang (TSK) fuzzy rule-based systems were proposed to address this design problem because they are usually identified using numerical data. In this paper we present a two-stage evolutionary process for designing TSK fuzzy rule-based systems from examples, combining a generation stage based on a (mu, lambda)-evolution strategy, in which fuzzy rules with different consequents compete among themselves to form part of a preliminary knowledge base, and a refinement stage in which both the antecedent and consequent parts of the fuzzy rules in this preliminary knowledge base are adapted by a hybrid evolutionary process composed of a genetic algorithm and an evolution strategy, to obtain the final knowledge base whose rules cooperate in the best possible way. Several aspects make this process different from others proposed to date: the design problem is addressed in two different stages; an angular coding of the consequent parameters allows us to search across the whole space of possible solutions; and the available knowledge about the system under identification is used to generate the initial populations of the evolutionary algorithms, causing the search process to obtain good solutions more quickly. The performance of the proposed method is shown by solving two different problems: the fuzzy modeling of some three-dimensional surfaces and the computation of the maintenance costs of electrical medium-voltage lines in Spanish towns. The results obtained are compared with other kinds of techniques: evolutionary learning processes to design TSK and Mamdani-type fuzzy rule-based systems in the first case, and classical regression and neural modeling
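
    A TSK system's defining feature is that rule consequents are (linear) functions of the inputs rather than fuzzy sets, so inference reduces to a firing-strength-weighted average of the consequents. The following minimal sketch is not the authors' evolutionary design method; the Gaussian membership functions and the two rules are illustrative assumptions, chosen only to show how a first-order TSK rule base is evaluated:

    ```python
    import math

    def gauss(x, c, s):
        """Gaussian membership degree of x for a fuzzy set centered at c with width s."""
        return math.exp(-((x - c) ** 2) / (2 * s ** 2))

    def tsk_output(x, rules):
        """First-order TSK inference for a single input x.

        Each rule is (center, width, a, b): IF x is Gaussian(center, width)
        THEN y = a * x + b.  The crisp output is the firing-strength-weighted
        average of the linear consequents.
        """
        num = den = 0.0
        for c, s, a, b in rules:
            w = gauss(x, c, s)        # rule firing strength
            num += w * (a * x + b)    # consequent weighted by firing strength
            den += w
        return num / den

    # Two illustrative rules approximating y = x near 0 and y = 2x near 5
    rules = [(0.0, 1.0, 1.0, 0.0), (5.0, 1.0, 2.0, 0.0)]
    print(round(tsk_output(0.0, rules), 3))
    print(round(tsk_output(5.0, rules), 3))
    ```

    Near each rule's center the output is dominated by that rule's linear consequent; between centers the two consequents are smoothly interpolated.
    
    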

  7. Feature- versus rule-based generalization in rats, pigeons and humans.

    PubMed

    Maes, Elisa; De Filippo, Guido; Inkster, Angus B; Lea, Stephen E G; De Houwer, Jan; D'Hooge, Rudi; Beckers, Tom; Wills, Andy J

    2015-11-01

    Humans can spontaneously create rules that allow them to efficiently generalize what they have learned to novel situations. An enduring question is whether rule-based generalization is uniquely human or whether other animals can also abstract rules and apply them to novel situations. In recent years, there have been a number of high-profile claims that animals such as rats can learn rules. Most of those claims are quite weak because it is possible to demonstrate that simple associative systems (which do not learn rules) can account for the behavior in those tasks. Using a procedure that allows us to clearly distinguish feature-based from rule-based generalization (the Shanks-Darby procedure), we demonstrate that adult humans show rule-based generalization in this task, while generalization in rats and pigeons was based on featural overlap between stimuli. In brief, when learning that a stimulus made of two components ("AB") predicts a different outcome than its elements ("A" and "B"), people spontaneously abstract an opposites rule and apply it to new stimuli (e.g., knowing that "C" and "D" predict one outcome, they will predict that "CD" predicts the opposite outcome). Rats and pigeons show the reverse behavior: they generalize what they have learned, but on the basis of similarity (e.g., "CD" is similar to "C" and "D", so the same outcome is predicted for the compound stimulus as for the components). Genuinely rule-based behavior is observed in humans, but not in rats and pigeons, in the current procedure.
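
    The contrast the authors draw can be made concrete in a few lines. In this hypothetical sketch (the outcome encoding is an assumption, not the paper's materials), feature-based generalization predicts for a compound the outcome shared by its elements, while the abstracted "opposites" rule reverses it:

    ```python
    def feature_based(compound, known):
        """Similarity-driven generalization: predict the outcome shared by the
        compound's elements (majority vote over known element outcomes)."""
        votes = [known[e] for e in compound if e in known]
        return round(sum(votes) / len(votes))

    def rule_based(compound, known):
        """The 'opposites' rule humans abstract in the Shanks-Darby task:
        a compound predicts the opposite of what its elements predict."""
        return 1 - feature_based(compound, known)

    # Elements C and D each predict outcome 1; what does the compound CD predict?
    known = {"C": 1, "D": 1}
    print(feature_based("CD", known))  # rats/pigeons: same outcome as elements
    print(rule_based("CD", known))     # humans: the reversed outcome
    ```
    
    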

  8. A Metrics Taxonomy and Reporting Strategy for Rule-Based Alerts.

    PubMed

    Krall, Michael; Gerace, Alexander

    2015-01-01

    The authors developed an action-oriented taxonomy of alerts, organized according to structure, actions, and implicit intended process outcomes, using a set of 333 rule-based alerts at Kaiser Permanente Northwest (KPNW). They identified 9 major and 17 overall classes of alerts and developed a specific metric approach for 5 of these classes, including the 3 most numerous ones at KPNW, which together account for 224 (67%) of the alerts. PMID:26057684

  9. A New Rule-Based System for the Construction and Structural Characterization of Artificial Proteins

    NASA Astrophysics Data System (ADS)

    Štambuk, Nikola; Konjevoda, Paško; Gotovac, Nikola

    In this paper, we present a new rule-based system for artificial protein design incorporating ternary amino acid polarity (polar, nonpolar, and neutral). It may be used to design de novo α and β protein fold structures and mixed class proteins. The targeted molecules are artificial proteins with important industrial and biomedical applications, related to the development of diagnostic-therapeutic peptide pharmaceuticals, antibody mimetics, peptide vaccines, new nanobiomaterials and engineered protein scaffolds.

  11. Semantics-Based Interoperability Framework for the Geosciences

    NASA Astrophysics Data System (ADS)

    Sinha, A.; Malik, Z.; Raskin, R.; Barnes, C.; Fox, P.; McGuinness, D.; Lin, K.

    2008-12-01

    Interoperability between heterogeneous data, tools, and services is required to transform data into knowledge. To meet geoscience-oriented societal challenges such as the forcing of climate change induced by volcanic eruptions, we suggest the need to develop semantic interoperability for data, services, and processes. Because such scientific endeavors require the integration of multiple databases associated with global enterprises, implicit semantics-based integration is impossible. Instead, explicit semantics are needed to facilitate interoperability and integration. Although different types of integration models are available (syntactic or semantic), we suggest that semantic interoperability is likely to be the most successful pathway. Clearly, the geoscience community would benefit from the utilization of existing XML-based data models, such as GeoSciML, WaterML, etc., to rapidly advance semantic interoperability and integration. We recognize that such integration will require a "meanings-based search, reasoning and information brokering", which will be facilitated through inter-ontology relationships (ontologies defined for each discipline). We suggest that markup languages (MLs) and ontologies can be seen as "data integration facilitators", working at different abstraction levels. Therefore, we propose to use an ontology-based data registration and discovery approach to complement markup languages through semantic data enrichment. Ontologies allow the use of formal and descriptive logic statements, which permit expressive query capabilities for data integration through reasoning. We have developed domain ontologies (EPONT) to capture the concepts behind data. EPONT ontologies are associated with existing ontologies such as SUMO, DOLCE and SWEET. Although significant efforts have gone into developing data (object) ontologies, we advance the idea of developing semantic frameworks for additional ontologies that deal with processes and services. This evolutionary step will

  12. A weighted rule based method for predicting malignancy of pulmonary nodules by nodule characteristics.

    PubMed

    Kaya, Aydın; Can, Ahmet Burak

    2015-08-01

    Predicting the malignancy of solitary pulmonary nodules from computed tomography scans is a difficult and important problem in the diagnosis of lung cancer. This paper investigates the contribution of nodule characteristics to the prediction of malignancy. Using data from the Lung Image Database Consortium (LIDC) database, we propose a weighted rule-based classification approach for predicting the malignancy of pulmonary nodules. The LIDC database contains CT scans of nodules and information about nodule characteristics evaluated by multiple annotators. In the first step of our method, votes for nodule characteristics are obtained from ensemble classifiers by using image features. In the second step, the votes and rules obtained from radiologist evaluations are used by a weighted rule-based method to predict malignancy. The rule-based method is constructed from radiologist evaluations of previous cases. Correlations between malignancy and other nodule characteristics, as well as the agreement ratio of radiologists, are considered in rule evaluation. To handle the unbalanced nature of the LIDC data, ensemble classifiers and data-balancing methods are used. The proposed approach is compared with classification methods trained on image features. Classification accuracy, specificity, and sensitivity of the classifiers are measured. The experimental results show that using nodule characteristics for malignancy prediction can improve classification results.
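
    The second step, combining rule outputs by weight, can be sketched as follows. The characteristic names, thresholds, and weights below are invented for illustration and are not taken from the paper, where weights derive from radiologist correlations and agreement ratios:

    ```python
    def weighted_rule_score(characteristics, rules):
        """Score a case by the fraction of total rule weight whose rules fire.
        Each rule is (weight, predicate over the characteristic votes)."""
        total = sum(w for w, _ in rules)
        fired = sum(w for w, pred in rules if pred(characteristics))
        return fired / total

    # Hypothetical weighted rules over characteristic votes on a 1-5 scale
    rules = [
        (0.8, lambda c: c["spiculation"] >= 4),    # strong malignancy indicator
        (0.6, lambda c: c["texture"] >= 4),
        (0.4, lambda c: c["calcification"] <= 2),  # weaker indicator
    ]
    nodule = {"spiculation": 5, "texture": 3, "calcification": 1}
    score = weighted_rule_score(nodule, rules)
    print("malignant" if score >= 0.5 else "benign")
    ```
    
    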

  13. An evaluation and implementation of rule-based Home Energy Management System using the Rete algorithm.

    PubMed

    Kawakami, Tomoya; Fujita, Naotaka; Yoshihisa, Tomoki; Tsukamoto, Masahiko

    2014-01-01

    In recent years, sensors have become popular, and Home Energy Management Systems (HEMSs) play an important role in saving energy without decreasing quality of life (QoL). Many rule-based HEMSs have been proposed, and almost all of them assume "IF-THEN" rules. The Rete algorithm is a typical pattern-matching algorithm for IF-THEN rules. We have proposed a rule-based HEMS using the Rete algorithm, in which rules for managing energy are processed by smart taps in the network, and the loads for processing rules and collecting data are distributed across the smart taps. In addition, the number of processes and the amount of collected data are reduced by processing rules with the Rete algorithm. In this paper, we evaluated the proposed system by simulation. In the simulation environment, each rule is processed by the smart tap that relates to the action part of that rule. We also implemented the proposed system as a HEMS using smart taps. PMID:25136672
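
    The flavor of IF-THEN rule processing can be shown with a naive forward-chaining sketch. This is not the Rete algorithm itself (Rete additionally caches partial matches in a network so unchanged facts are not re-tested on every cycle), and the smart-tap rules are hypothetical:

    ```python
    def forward_chain(facts, rules):
        """Naive forward chaining: repeatedly fire IF-THEN rules whose
        condition holds, recording each fired action as a new fact, until
        no rule adds anything new."""
        actions = []
        changed = True
        while changed:
            changed = False
            for cond, action in rules:
                if cond(facts) and action not in actions:
                    actions.append(action)
                    facts[action] = True   # the action becomes a fact itself
                    changed = True
        return actions

    # Hypothetical smart-tap rules for a home energy scenario
    rules = [
        (lambda f: f.get("room_empty") and f.get("light_on"), "turn_light_off"),
        (lambda f: f.get("temp_c", 0) > 28, "start_fan"),
        (lambda f: f.get("turn_light_off"), "log_saving"),  # chained rule
    ]
    print(forward_chain({"room_empty": True, "light_on": True, "temp_c": 25}, rules))
    ```

    Note how the third rule fires only because the first rule's action was asserted as a fact; this chaining is exactly what Rete's match network accelerates.
    
    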

  15. Strategies for adding adaptive learning mechanisms to rule-based diagnostic expert systems

    NASA Technical Reports Server (NTRS)

    Stclair, D. C.; Sabharwal, C. L.; Bond, W. E.; Hacke, Keith

    1988-01-01

    Rule-based diagnostic expert systems can be used to perform many of the diagnostic chores necessary in today's complex space systems. These expert systems typically take a set of symptoms as input and produce diagnostic advice as output. The primary objective of such expert systems is to provide accurate and comprehensive advice that can be used to help return the space system in question to nominal operation. The development and maintenance of diagnostic expert systems is time- and labor-intensive, since the services of both knowledge engineer(s) and domain expert(s) are required. The use of adaptive learning mechanisms to incrementally evaluate and refine rules promises to reduce both the time and labor costs associated with such systems. This paper describes the basic adaptive learning mechanisms of strengthening, weakening, generalization, discrimination, and discovery. Next, basic strategies are discussed for adding these learning mechanisms to rule-based diagnostic expert systems. These strategies support the incremental evaluation and refinement of rules in the knowledge base by comparing the set of advice given by the expert system (A) with the correct diagnosis (C). Techniques are described for selecting those rules in the knowledge base which should participate in adaptive learning. The strategies presented may be used with a wide variety of learning algorithms. Further, these strategies are applicable to a large number of rule-based diagnostic expert systems. They may be used to provide either immediate or deferred updating of the knowledge base.
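
    The strengthening/weakening pair, driven by comparing the advice set (A) against the correct diagnosis (C), can be sketched as below. The rule names, conclusions, and the fixed delta step are illustrative assumptions, not the paper's algorithm:

    ```python
    def update_strengths(strengths, advice, correct, delta=0.1):
        """Strengthen rules whose conclusion appears in both the advice set A
        and the correct diagnosis C; weaken rules whose conclusion was advised
        but wrong.  Strengths are clamped to [0, 1]."""
        updated = dict(strengths)
        for rule, conclusion in advice.items():
            if conclusion in correct:
                updated[rule] = min(1.0, updated[rule] + delta)   # strengthening
            else:
                updated[rule] = max(0.0, updated[rule] - delta)   # weakening
        return updated

    strengths = {"r1": 0.5, "r2": 0.5}
    advice = {"r1": "pump_fault", "r2": "sensor_fault"}   # advice set A, per rule
    correct = {"pump_fault"}                              # correct diagnosis C
    print(update_strengths(strengths, advice, correct))
    ```

    Deferred updating, as the abstract mentions, would simply accumulate (A, C) pairs and apply these adjustments in a batch.
    
    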

  16. Remote semantic memory is impoverished in hippocampal amnesia.

    PubMed

    Klooster, Nathaniel B; Duff, Melissa C

    2015-12-01

    The necessity of the hippocampus for acquiring new semantic concepts is a topic of considerable debate. However, it is generally accepted that any role the hippocampus plays in semantic memory is time limited and that previously acquired information becomes independent of the hippocampus over time. This view, along with intact naming and word-definition matching performance in amnesia, has led to the notion that remote semantic memory is intact in patients with hippocampal amnesia. Motivated by perspectives of word learning as a protracted process where additional features and senses of a word are added over time, and by recent discoveries about the time course of hippocampal contributions to on-line relational processing, reconsolidation, and the flexible integration of information, we revisit the notion that remote semantic memory is intact in amnesia. Using measures of semantic richness and vocabulary depth from psycholinguistics and first and second language-learning studies, we examined how much information is associated with previously acquired, highly familiar words in a group of patients with bilateral hippocampal damage and amnesia. Relative to healthy demographically matched comparison participants and a group of brain-damaged comparison participants, the patients with hippocampal amnesia performed significantly worse on both productive and receptive measures of vocabulary depth and semantic richness. These findings suggest that remote semantic memory is impoverished in patients with hippocampal amnesia and that the hippocampus may play a role in the maintenance and updating of semantic memory beyond its initial acquisition. PMID:26474741

  18. User-centered semantic harmonization: a case study.

    PubMed

    Weng, Chunhua; Gennari, John H; Fridsma, Douglas B

    2007-06-01

    Semantic interoperability is one of the great challenges in biomedical informatics. Methods such as ontology alignment or use of metadata neither scale nor fundamentally alleviate semantic heterogeneity among information sources. In the context of the Cancer Biomedical Informatics Grid program, the Biomedical Research Integrated Domain Group (BRIDG) has been making an ambitious effort to harmonize existing information models for clinical research from a variety of sources and modeling agreed-upon semantics shared by the technical harmonization committee and the developers of these models. This paper provides some observations on this user-centered semantic harmonization effort and its inherent technical and social challenges. The authors also compare BRIDG with related efforts to achieve semantic interoperability in healthcare, including UMLS, InterMed, the Semantic Web, and the Ontology for Biomedical Investigations initiative. The BRIDG project demonstrates the feasibility of user-centered collaborative domain modeling as an approach to semantic harmonization, but also highlights a number of technology gaps in support of collaborative semantic harmonization that remain to be filled.

  19. The Semantic SPASE

    NASA Astrophysics Data System (ADS)

    Hughes, S.; Crichton, D.; Thieman, J.; Ramirez, P.; King, T.; Weiss, M.

    2005-12-01

    The Semantic SPASE (Space Physics Archive Search and Extract) prototype demonstrates the use of semantic web technologies to capture, document, and manage the SPASE data model, support facet- and text-based search, and provide flexible and intuitive user interfaces. The SPASE data model, under development since late 2003 by a consortium of space physics domain experts, is intended to serve as the basis for interoperability between independent data systems. To develop the Semantic SPASE prototype, the data model was first analyzed to determine the inherent object classes and their attributes. These were entered into Stanford Medical Informatics' Protege ontology tool and annotated using definitions from the SPASE documentation. Further analysis of the data model resulted in the addition of class relationships. Finally, attributes and relationships that support broad-scope interoperability were added from research associated with the Object-Oriented Data Technology task. To validate the ontology and produce a knowledge base, example data products were ingested. The capture of the data model as an ontology results in a more formal specification of the model. The Protege software is also a powerful management tool and supports plug-ins that produce several graphical notations as output. The stated purpose of the semantic web is to support machine understanding of web-based information. Protege provides an export capability to RDF/XML and RDFS/XML for this purpose. Several research efforts use RDF/XML knowledge bases to provide semantic search. MIT's Simile/Longwell project provides both facet- and text-based search using a suite of metadata browsers and the text-based search engine Lucene. Using the Protege-generated RDF knowledge base, a semantic search application was easily built and deployed to run as a web application. Configuration files specify the object attributes and values to be designated as facet (i.e., search) constraints. Semantic web technologies provide

  20. Exact Hybrid Particle/Population Simulation of Rule-Based Models of Biochemical Systems

    PubMed Central

    Stover, Lori J.; Nair, Niketh S.; Faeder, James R.

    2014-01-01

    Detailed modeling and simulation of biochemical systems is complicated by the problem of combinatorial complexity, an explosion in the number of species and reactions due to myriad protein-protein interactions and post-translational modifications. Rule-based modeling overcomes this problem by representing molecules as structured objects and encoding their interactions as pattern-based rules. This greatly simplifies the process of model specification, avoiding the tedious and error-prone task of manually enumerating all species and reactions that can potentially exist in a system. From a simulation perspective, rule-based models can be expanded algorithmically into fully enumerated reaction networks and simulated using a variety of network-based simulation methods, such as ordinary differential equations or Gillespie's algorithm, provided that the network is not exceedingly large. Alternatively, rule-based models can be simulated directly using particle-based kinetic Monte Carlo methods. This “network-free” approach produces exact stochastic trajectories with a computational cost that is independent of network size. However, memory and run time costs increase with the number of particles, limiting the size of system that can be feasibly simulated. Here, we present a hybrid particle/population simulation method that combines the best attributes of both the network-based and network-free approaches. The method takes as input a rule-based model and a user-specified subset of species to treat as population variables rather than as particles. The model is then transformed by a process of “partial network expansion” into a dynamically equivalent form that can be simulated using a population-adapted network-free simulator. The transformation method has been implemented within the open-source rule-based modeling platform BioNetGen, and resulting hybrid models can be simulated using the particle-based simulator NFsim. Performance tests show that significant memory
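
    Both the network-based and network-free routes mentioned above rest on exact stochastic simulation. As a point of reference, here is Gillespie's direct method for the simplest possible network, a single unimolecular reaction A -> B; the rate constant, initial count, and seed are arbitrary choices for illustration:

    ```python
    import random

    def gillespie(k, a0, t_end, rng):
        """Gillespie's direct method for A -> B with rate constant k.
        With one reaction channel the propensity is k * (count of A); the
        waiting time to the next firing is exponentially distributed.
        Network-free simulators such as NFsim apply the same kinetic Monte
        Carlo logic to structured particles instead of enumerated species."""
        t, a = 0.0, a0
        while a > 0:
            propensity = k * a
            t += rng.expovariate(propensity)   # time to next reaction firing
            if t > t_end:
                break
            a -= 1                             # one A molecule converted to B
        return a

    rng = random.Random(42)
    remaining = gillespie(k=1.0, a0=1000, t_end=1.0, rng=rng)
    print(remaining)   # fluctuates around 1000 * e^(-1), i.e. roughly 368
    ```
    
    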

  1. Semantic home video categorization

    NASA Astrophysics Data System (ADS)

    Min, Hyun-Seok; Lee, Young Bok; De Neve, Wesley; Ro, Yong Man

    2009-02-01

    Nowadays, a strong need exists for the efficient organization of an increasing amount of home video content. To create an efficient system for the management of home video content, it is necessary to categorize home video content in a semantic way. So far, a significant amount of research has been dedicated to semantic video categorization. However, conventional categorization approaches often rely on unnecessary concepts and complicated algorithms that are not suited to the context of home video categorization. To overcome this problem, this paper proposes a novel home video categorization method that adopts semantic home photo categorization. To use home photo categorization in the context of home video, we segment video content into shots and extract key frames that represent each shot. To extract the semantics from key frames, we divide each key frame into ten local regions and extract low-level features. Based on the low-level features extracted for each local region, we can predict the semantics of a particular key frame. To verify the usefulness of the proposed home video categorization method, experiments were performed with 70 home video sequences, labeled with concepts from the MPEG-7 VCE2 dataset. For the home video sequences used, the proposed system produced a recall of 77% and an accuracy of 78%.

  2. Auto-Generated Semantic Processing Services

    NASA Technical Reports Server (NTRS)

    Davis, Rodney; Hupf, Greg

    2009-01-01

    Auto-Generated Semantic Processing (AGSP) Services is a suite of software tools for automated generation of other computer programs, denoted cross-platform semantic adapters, that support interoperability of computer-based communication systems that utilize a variety of both new and legacy communication software running in a variety of operating-system/computer-hardware combinations. AGSP has numerous potential uses in military, space-exploration, and other government applications as well as in commercial telecommunications. The cross-platform semantic adapters take advantage of common features of computer-based communication systems to enforce semantics, messaging protocols, and standards of processing of streams of binary data to ensure integrity of data and consistency of meaning among interoperating systems. The auto-generation aspect of AGSP Services reduces development time and effort by emphasizing specification and minimizing implementation: In effect, the design, building, and debugging of software for effecting conversions among complex communication protocols, custom device mappings, and unique data-manipulation algorithms is replaced with metadata specifications that map to an abstract platform-independent communications model. AGSP Services is modular and has been shown to be easily integrable into new and legacy NASA flight and ground communication systems.

  3. Semantic Parameters of Split Intransitivity.

    ERIC Educational Resources Information Center

    Van Valin, Jr., Robert D.

    1990-01-01

    This paper argues that split-intransitive phenomena are better explained in semantic terms. A semantic analysis is carried out in Role and Reference Grammar, which assumes the theory of verb classification proposed in Dowty 1979. (49 references) (JL)

  4. The Cognitive and Neural Expression of Semantic Memory Impairment in Mild Cognitive Impairment and Early Alzheimer's Disease

    ERIC Educational Resources Information Center

    Joubert, Sven; Brambati, Simona M.; Ansado, Jennyfer; Barbeau, Emmanuel J.; Felician, Olivier; Didic, Mira; Lacombe, Jacinthe; Goldstein, Rachel; Chayer, Celine; Kergoat, Marie-Jeanne

    2010-01-01

    Semantic deficits in Alzheimer's disease have been widely documented, but little is known about the integrity of semantic memory in the prodromal stage of the illness. The aims of the present study were to: (i) investigate naming abilities and semantic memory in amnestic mild cognitive impairment (aMCI), early Alzheimer's disease (AD) compared to…

  5. Semantic web data warehousing for caGrid.

    PubMed

    McCusker, James P; Phillips, Joshua A; González Beltrán, Alejandra; Finkelstein, Anthony; Krauthammer, Michael

    2009-10-01

    The National Cancer Institute (NCI) is developing caGrid as a means for sharing cancer-related data and services. As more data sets become available on caGrid, we need effective ways of accessing and integrating this information. Although the data models exposed on caGrid are semantically well annotated, it is currently up to the caGrid client to infer relationships between the different models and their classes. In this paper, we present a Semantic Web-based data warehouse (Corvus) for creating relationships among caGrid models. This is accomplished through the transformation of semantically-annotated caBIG Unified Modeling Language (UML) information models into Web Ontology Language (OWL) ontologies that preserve those semantics. We demonstrate the validity of the approach by Semantic Extraction, Transformation and Loading (SETL) of data from two caGrid data sources, caTissue and caArray, as well as alignment and query of those sources in Corvus. We argue that semantic integration is necessary for integration of data from distributed web services and that Corvus is a useful way of accomplishing this. Our approach is generalizable and of broad utility to researchers facing similar integration challenges.

  6. Temporal Representation in Semantic Graphs

    SciTech Connect

    Levandoski, J J; Abdulla, G M

    2007-08-07

    A wide range of knowledge discovery and analysis applications, ranging from business to biological, make use of semantic graphs when modeling relationships and concepts. Most of the semantic graphs used in these applications are assumed to be static pieces of information, meaning the temporal evolution of concepts and relationships is not taken into account. Guided by the need for more advanced semantic graph queries involving temporal concepts, this paper surveys the existing work involving temporal representations in semantic graphs.

  7. Causal premise semantics.

    PubMed

    Kaufmann, Stefan

    2013-08-01

    The rise of causality and the attendant graph-theoretic modeling tools in the study of counterfactual reasoning has had resounding effects in many areas of cognitive science, but it has thus far not permeated the mainstream in linguistic theory to a comparable degree. In this study I show that a version of the predominant framework for the formal semantic analysis of conditionals, Kratzer-style premise semantics, allows for a straightforward implementation of the crucial ideas and insights of Pearl-style causal networks. I spell out the details of such an implementation, focusing especially on the notions of intervention on a network and backtracking interpretations of counterfactuals.
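
    The central move in Pearl-style causal networks, intervention as graph surgery, can be illustrated on a toy structural model. The rain/sprinkler network below is a standard textbook example, not drawn from this paper:

    ```python
    def evaluate(equations, interventions):
        """Evaluate a structural causal model given as {var: function_of_values},
        assumed to be listed in topological order.  An intervention do(X = v)
        severs X's structural equation and fixes its value directly, which is
        the graph-surgery reading of counterfactuals."""
        values = {}
        for var, fn in equations.items():
            if var in interventions:
                values[var] = interventions[var]   # severed: equation ignored
            else:
                values[var] = fn(values)
        return values

    # Toy network: rain -> sprinkler (off when raining); both -> wet grass
    model = {
        "rain": lambda v: 1,
        "sprinkler": lambda v: 0 if v["rain"] else 1,
        "wet": lambda v: 1 if (v["rain"] or v["sprinkler"]) else 0,
    }
    print(evaluate(model, {}))            # the observed world
    print(evaluate(model, {"rain": 0}))   # counterfactual: do(rain = 0)
    ```

    Under do(rain = 0) the sprinkler equation re-fires and the grass is still wet, a non-backtracking counterfactual: the intervention changes rain's downstream effects but does not reach back to revise its causes.
    
    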

  8. Semantic Webs and Study Skills.

    ERIC Educational Resources Information Center

    Hoover, John J.; Rabideau, Debra K.

    1995-01-01

    Principles for ensuring effective use of semantic webbing in meeting study skill needs of students with learning problems are noted. Important study skills are listed, along with suggested semantic web topics for which subordinate ideas may be developed. Two semantic webs are presented, illustrating the study skills of multiple choice test-taking…

  9. Semantic Search of Web Services

    ERIC Educational Resources Information Center

    Hao, Ke

    2013-01-01

    This dissertation addresses semantic search of Web services using natural language processing. We first survey various existing approaches, focusing on the fact that the expensive costs of current semantic annotation frameworks result in limited use of semantic search for large scale applications. We then propose a vector space model based service…

  10. THE TWO-LEVEL THEORY OF VERB MEANING: AN APPROACH TO INTEGRATING THE SEMANTICS OF ACTION WITH THE MIRROR NEURON SYSTEM

    PubMed Central

    Kemmerer, David; Castillo, Javier Gonzalez

    2010-01-01

    Verbs have two separate levels of meaning. One level reflects the uniqueness of every verb and is called the “root.” The other level consists of a more austere representation that is shared by all the verbs in a given class and is called the “event structure template.” We explore the following hypotheses about how, with specific reference to the motor features of action verbs, these two distinct levels of semantic representation might correspond to two distinct levels of the mirror neuron system. Hypothesis 1: Root-level motor features of verb meaning are partially subserved by somatotopically mapped mirror neurons in the left primary motor and/or premotor cortices. Hypothesis 2: Template-level motor features of verb meaning are partially subserved by representationally more schematic mirror neurons in Brodmann area 44 of the left inferior frontal gyrus. Evidence has been accumulating in support of the general neuroanatomical claims made by these two hypotheses—namely, that each level of verb meaning is associated with the designated cortical areas. However, as yet no studies have satisfied all the criteria necessary to support the more specific neurobiological claims made by the two hypotheses—namely, that each level of verb meaning is associated with mirror neurons in the pertinent brain regions. This would require demonstrating that within those regions the same neuronal populations are engaged during (a) the linguistic processing of particular motor features of verb meaning, (b) the execution of actions with the corresponding motor features, and (c) the observation of actions with the corresponding motor features. PMID:18996582

  11. Semantator: semantic annotator for converting biomedical text to linked data.

    PubMed

    Tao, Cui; Song, Dezhao; Sharma, Deepak; Chute, Christopher G

    2013-10-01

    More than 80% of biomedical data is embedded in plain text. The unstructured nature of these text-based documents makes it challenging to easily browse and query the data of interest in them. One approach to facilitate browsing and querying biomedical text is to convert the plain text to a linked web of data, i.e., converting data originally in free text to structured formats with defined meta-level semantics. In this paper, we introduce Semantator (Semantic Annotator), a semantic-web-based environment for annotating data of interest in biomedical documents, browsing and querying the annotated data, and interactively refining annotation results if needed. Through Semantator, information of interest can be annotated either manually or semi-automatically using plug-in information extraction tools. The annotated results are stored in RDF and can be queried using the SPARQL query language. In addition, semantic reasoners can be directly applied to the annotated data for consistency checking and knowledge inference. Semantator has been released online and has been used by the biomedical ontology community, which provided positive feedback. Our evaluation results indicated that (1) Semantator can perform the annotation functionalities as designed; (2) Semantator can be adopted in real applications in clinical and translational research; and (3) the annotated results produced by Semantator can be easily used in semantic-web-based reasoning tools for further inference.
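
    The storage idea, annotations as RDF triples that SPARQL can then query, can be illustrated by serializing simple (fragment, property, value) annotations as N-Triples. The URI scheme below is invented for illustration and is not Semantator's actual representation:

    ```python
    def to_ntriples(annotations, base="http://example.org/ann/"):
        """Serialize (fragment_id, property, value) annotations as N-Triples:
        one '<subject> <predicate> "literal" .' statement per line."""
        lines = []
        for frag, prop, value in annotations:
            lines.append('<%s%s> <%s%s> "%s" .' % (base, frag, base, prop, value))
        return "\n".join(lines)

    # Two hypothetical annotations on one document fragment
    annotations = [
        ("frag1", "hasType", "Diagnosis"),
        ("frag1", "hasText", "type 2 diabetes"),
    ]
    print(to_ntriples(annotations))
    ```

    Once in RDF, the same data can be loaded into any triple store and queried with SPARQL, or handed to an OWL reasoner for the consistency checks the abstract describes.
    
    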

  12. Semantator: annotating clinical narratives with semantic web ontologies.

    PubMed

    Song, Dezhao; Chute, Christopher G; Tao, Cui

    2012-01-01

    To facilitate clinical research, clinical data need to be stored in a machine-processable and understandable way. Manually annotating clinical data is time-consuming. Automatic approaches (e.g., Natural Language Processing systems) have been adopted to convert such data into structured formats; however, the quality of such automatically extracted data may not always be satisfactory. In this paper, we propose Semantator, a semi-automatic tool for document annotation with Semantic Web ontologies. With a loaded free-text document and an ontology, Semantator supports the creation/deletion of ontology instances for any document fragment, linking/disconnecting instances with the properties in the ontology, and also enables automatic annotation by connecting to the NCBO annotator and cTAKES. By representing annotations in Semantic Web standards, Semantator supports reasoning based upon the underlying semantics of the owl:disjointWith and owl:equivalentClass predicates. We also discuss user experiences with Semantator.

  13. Semantic web representation of LOINC: an ontological perspective.

    PubMed

    Srinivasan, Arunkumar; Kunapareddy, Narendra; Mirhaji, Parsa; Casscells, S Ward

    2006-01-01

    The Logical Observation Identifiers Names and Codes terminology (LOINC) has been proposed as a nomenclature for clinical laboratory tests. We present a formal representation of LOINC using a Semantic Web-based ontology that defines LOINC concepts in terms of the six main LOINC axes and their relationships with UMLS Semantic Types and the UMLS Metathesaurus. This representation may enable automated information integration and decision support in public health surveillance.

  14. Rule-based modeling: a computational approach for studying biomolecular site dynamics in cell signaling systems

    PubMed Central

    Chylek, Lily A.; Harris, Leonard A.; Tung, Chang-Shung; Faeder, James R.; Lopez, Carlos F.

    2013-01-01

    Rule-based modeling was developed to address the limitations of traditional approaches for modeling chemical kinetics in cell signaling systems. These systems consist of multiple interacting biomolecules (e.g., proteins), which themselves consist of multiple parts (e.g., domains, linear motifs, and sites of phosphorylation). Consequently, biomolecules that mediate information processing generally have the potential to interact in multiple ways, with the number of possible complexes and post-translational modification states tending to grow exponentially with the number of binary interactions considered. As a result, only a large reaction network can capture all possible consequences of the molecular interactions that occur in a cell signaling system, which is problematic because traditional modeling approaches for chemical kinetics (e.g., ordinary differential equations) require explicit network specification. This problem is circumvented through representation of interactions in terms of local rules. With this approach, network specification is implicit and model specification is concise. Concise representation results in a coarse graining of chemical kinetics, introduced because all reactions implied by a rule inherit the rate law associated with that rule. Coarse graining can be appropriate if interactions are modular, and the coarseness of a model can be adjusted as needed. Rules can be specified using specialized model-specification languages, and recently developed tools designed for specification of rule-based models allow one to leverage powerful software engineering capabilities. A rule-based model comprises a set of rules, which can be processed by general-purpose simulation and analysis tools to achieve different objectives (e.g., to perform either a deterministic or stochastic simulation). PMID:24123887
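    The combinatorial point above — that a single local rule implies many concrete reactions, all inheriting one rate law — can be illustrated with a minimal Python sketch (a toy, not any of the specialized rule-based modeling languages the abstract mentions; all names and values are hypothetical):

    ```python
    from itertools import product

    # Toy sketch: the rule "L binds R at its free bond site" is agnostic about
    # R's two phosphorylation sites, so it implies one concrete reaction per
    # phosphorylation state.  Every implied reaction inherits the rule's rate.
    RULE_RATE = 0.1  # single rate law attached to the rule (hypothetical value)

    def expand_binding_rule(n_phospho_sites):
        """Enumerate the concrete reactions implied by the binding rule."""
        reactions = []
        for state in product((0, 1), repeat=n_phospho_sites):
            receptor = f"R(bond=free,p={state})"
            bound = f"L.R(bond=1,p={state})"
            reactions.append((f"L + {receptor} -> {bound}", RULE_RATE))
        return reactions

    reactions = expand_binding_rule(2)
    print(len(reactions))  # prints 4: one rule, four implied reactions
    ```

    With three ignored sites the same single rule would imply eight reactions, which is the exponential growth the abstract describes.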

  15. A fuzzy rule based metamodel for monthly catchment nitrate fate simulations

    NASA Astrophysics Data System (ADS)

    van der Heijden, Sven; Haberlandt, Uwe

    2014-05-01

    The high complexity of nitrate dynamics and of the corresponding deterministic models makes it complicated to find suitable tools for decision support (DS) in large river catchments. Models for DS should ideally be easily applicable, fast, parsimonious in data requirements, easy to understand even for non-experts, able to reproduce sub-annual nitrate dynamics in order to evaluate temporal mitigation measures, and capable of scenario analysis. All these characteristics can be met with fuzzy rule based modelling. As a machine learning technique, fuzzy rules must be trained on data. For nitrate in particular, data of sufficient temporal and spatial resolution are rarely available. To circumvent this problem, a metamodelling approach can be used to train the fuzzy rules: a well-calibrated deterministic catchment model generates "observed" data, which in a second step serve as training data for the fuzzy model. This study presents a fuzzy rule based metamodel consisting of eight fuzzy modules, which is able to simulate nitrate fluxes in large watersheds from their diffuse sources via surface runoff, interflow, and base flow to the catchment outlet. The fuzzy rules are trained on a database established with a calibrated SWAT model for an investigation area of 1000 km². The metamodel performs well on this training area and on two out of three validation areas in different landscapes, with a Nash-Sutcliffe coefficient of around 0.5-0.7 for the monthly calculations. The fuzzy model proves to be fast, requires only a few readily available input data, and the rule based model structure can be interpreted to a certain degree, which makes the presented approach suitable for the development of decision support tools.
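    A minimal sketch of how one such fuzzy rule module could work, assuming triangular membership functions and Sugeno-style weighted-average defuzzification (the rule base and all numbers below are invented for illustration, not taken from the study):

    ```python
    def tri(x, a, b, c):
        """Triangular membership function rising from a, peaking at b, falling to c."""
        if x <= a or x >= c:
            return 0.0
        return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

    # Hypothetical rule base: IF monthly precipitation is low/medium/high
    # THEN nitrate flux is low/medium/high (crisp consequents, arbitrary units).
    RULES = [
        (lambda p: tri(p, -1, 0, 60),    2.0),   # low precipitation  -> low flux
        (lambda p: tri(p, 30, 60, 120),  8.0),   # medium             -> medium flux
        (lambda p: tri(p, 60, 120, 241), 20.0),  # high               -> high flux
    ]

    def nitrate_flux(precip_mm):
        """Sugeno-style weighted average over all firing rules."""
        weights = [mf(precip_mm) for mf, _ in RULES]
        total = sum(weights)
        if total == 0.0:
            return 0.0
        return sum(w * out for w, (_, out) in zip(weights, RULES)) / total

    print(nitrate_flux(90))  # prints 14.0: medium and high rules fire equally
    ```

    Because the rule base is a short list of readable IF-THEN statements, the model stays interpretable in the way the abstract emphasizes.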

  16. Applications and Methods Utilizing the Simple Semantic Web Architecture and Protocol (SSWAP) for Bioinformatics Resource Discovery and Disparate Data and Service Integration

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Scientific data integration and computational service discovery are challenges for the bioinformatic community. This process is made more difficult by the separate and independent construction of biological databases, which makes the exchange of scientific data between information resources difficu...

  17. Auditory semantic networks for words and natural sounds.

    PubMed

    Cummings, A; Ceponiene, R; Koyama, A; Saygin, A P; Townsend, J; Dick, F

    2006-10-18

    Does lexical processing rely on a specialized semantic network in the brain, or does it draw on more general semantic resources? The primary goal of this study was to compare behavioral and electrophysiological responses evoked during the processing of words, environmental sounds, and non-meaningful sounds in semantically matching or mismatching visual contexts. A secondary goal was to characterize the dynamic relationship between the behavioral and neural activities related to semantic integration using a novel analysis technique, ERP imaging. In matching trials, meaningful-sound ERPs were characterized by an extended positivity (200-600 ms) that in mismatching trials partly overlapped with centro-parietal N400 and frontal N600 negativities. The mismatch word-N400 peaked later than the environmental sound-N400 and was only slightly more posterior in scalp distribution. Single-trial ERP imaging revealed that for meaningful stimuli, the match-positivity consisted of a sensory P2 (200 ms), a semantic positivity (PS, 300 ms), and a parietal response-related positivity (PR, 500-800 ms). The magnitudes (but not the timing) of the N400 and PS activities correlated with subjects' reaction times, whereas both the latency and magnitude of the PR were correlated with subjects' reaction times. These results suggest that largely overlapping neural networks process verbal and non-verbal semantic information. In addition, it appears that semantic integration operates across different time scales: earlier processes (indexed by the PS and N400) utilize the established meaningful, but not necessarily lexical, semantic representations, whereas later processes (indexed by the PR and N600) are involved in the explicit interpretation of stimulus semantics and possibly of the required response. PMID:16962567

  18. Distinct pathways for rule-based retrieval and spatial mapping of memory representations in hippocampal neurons.

    PubMed

    Navawongse, Rapeechai; Eichenbaum, Howard

    2013-01-16

    Hippocampal neurons encode events within the context in which they occurred, a fundamental feature of episodic memory. Here we explored the sources of event and context information represented by hippocampal neurons during the retrieval of object associations in rats. Temporary inactivation of the medial prefrontal cortex differentially reduced the selectivity of rule-based object associations represented by hippocampal neuronal firing patterns but did not affect spatial firing patterns. In contrast, inactivation of the medial entorhinal cortex resulted in a pervasive reorganization of hippocampal mappings of spatial context and events. These results suggest distinct and cooperative prefrontal and medial temporal mechanisms in memory representation.

  19. A conceptual model to empower software requirements conflict detection and resolution with rule-based reasoning

    NASA Astrophysics Data System (ADS)

    Ahmad, Sabrina; Jalil, Intan Ermahani A.; Ahmad, Sharifah Sakinah Syed

    2016-08-01

    It is seldom technical issues that impede the process of eliciting software requirements. The involvement of multiple stakeholders usually leads to conflicts, and conflict detection and resolution efforts are therefore crucial. This paper presents an improved conceptual model that assists conflict detection and resolution, extending the abilities of current models and improving overall performance. The significance of the new model lies in automating the detection of conflicts and their severity levels with rule-based reasoning.

  20. Environmental Attitudes Semantic Differential.

    ERIC Educational Resources Information Center

    Mehne, Paul R.; Goulard, Cary J.

    This booklet is an evaluation instrument which utilizes semantic differential data to assess environmental attitudes. Twelve concepts are included: regulated access to beaches, urban planning, dune vegetation, wetlands, future cities, reclaiming wetlands for building development, city parks, commercial development of beaches, existing cities,…

  1. Assertiveness through Semantics.

    ERIC Educational Resources Information Center

    Zuercher, Nancy T.

    1983-01-01

    Suggests that connotations of assertiveness do not convey all of its meanings, particularly the components of positive feelings, communication, and cooperation. The application of semantics can help restore the balance. Presents a model for differentiating assertive behavior and clarifying the definition. (JAC)

  2. Latent Semantic Analysis.

    ERIC Educational Resources Information Center

    Dumais, Susan T.

    2004-01-01

    Presents a literature review that covers the following topics related to Latent Semantic Analysis (LSA): (1) LSA overview; (2) applications of LSA, including information retrieval (IR), information filtering, cross-language retrieval, and other IR-related LSA applications; (3) modeling human memory, including the relationship of LSA to other…

  3. Are Terminologies Semantically Uninteresting?

    ERIC Educational Resources Information Center

    Jacobson, Sven

    Some semanticists have argued that technical vocabulary or terminology is extralinguistic and therefore semantically uninteresting. However, no boundary exists in linguistic reality between terminology and ordinary vocabulary. Rather, terminologies and ordinary language exist on a continuum, and terminology is therefore a legitimate field for…

  4. Semantic Space Analyst

    2004-04-15

    The Semantic Space Analyst (SSA) is software for analyzing a text corpus, discovering relationships among terms, and allowing the user to explore that information in different ways. It includes features for displaying and laying out terms and relationships visually, for generating such maps from manual queries, and for discovering differences between corpora. Data can also be exported to Microsoft Excel.

  5. Semantic physical science

    PubMed Central

    2012-01-01

    The articles in this special issue arise from a workshop and symposium held in January 2012 ('Semantic Physical Science'). We invited people who shared our vision for the potential of the web to support chemical and related subjects. Other than the initial invitations, we have not exercised any control over the content of the contributed articles. PMID:22856527

  6. Universal Semantics in Translation

    ERIC Educational Resources Information Center

    Wang, Zhenying

    2009-01-01

    What and how we translate are questions often argued about. No matter what kind of answers one may give, priority in translation should be granted to meaning, especially those meanings that exist in all concerned languages. In this paper the author defines them as universal sememes, and the study of them as universal semantics, of which…

  7. Hierarchical semantic structures for medical NLP.

    PubMed

    Taira, Ricky K; Arnold, Corey W

    2013-01-01

    We present a framework for building a medical natural language processing (NLP) system capable of deep understanding of clinical text reports. The framework helps developers understand how various NLP-related efforts and knowledge sources can be integrated. The aspects considered include: 1) computational issues dealing with defining layers of intermediate semantic structures to reduce the dimensionality of the NLP problem; 2) algorithmic issues, in which we survey the NLP literature and discuss state-of-the-art procedures used to map between various levels of the hierarchy; and 3) implementation issues, pointing software developers to available resources. The objective of this poster is to educate readers about the various levels of semantic representation (e.g., word-level concepts, ontological concepts, logical relations, logical frames, discourse structures, etc.). The poster presents an architecture in which diverse efforts and resources in medical NLP can be integrated in a principled way.

  8. Predicting the relative vulnerability of near-coastal species to climate change using a rule-based ecoinformatics approach

    EPA Science Inventory

    Background/Questions/Methods Near-coastal species are threatened by multiple climate change drivers, including temperature increases, ocean acidification, and sea level rise. To identify vulnerable habitats, geographic regions, and species, we developed a sequential, rule-based...

  9. A rule-based approach for the correlation of alarms to support Disaster and Emergency Management

    NASA Astrophysics Data System (ADS)

    Gloria, M.; Minei, G.; Lersi, V.; Pasquariello, D.; Monti, C.; Saitto, A.

    2009-04-01

    Key words: Simple Event Correlator, Agent Platform, Ontology, Semantic Web, Distributed Systems, Emergency Management. The importance of recognizing an emergency's typology in order to control critical situations and protect citizens has long been acknowledged, and such recognition is essential for the proper management of a hazardous event. In this work we present a solution for recognizing emergency typologies adopted by an Italian research project called CI6 (Centro Integrato per Servizi di Emergenza Innovativi). In our approach, CI6 receives alarms from citizens or from people involved in response work (for example, police or operators of the 112 service). CI6 represents each alarm by a set of information, including a text that describes it, obtained when the user reports the danger, and a pair of coordinates for its location. The system analyzes the text and automatically infers the type of emergency by applying a set of parsing and inference rules through an independent module: an event correlator operating on logs, called Simple Event Correlator (SEC). SEC, integrated into CI6's platform, is an open-source, platform-independent event correlation tool. SEC accepts input both from files and from standard input, making it flexible because it can be attached to any application that can write its output to a file stream. The SEC configuration is stored in text files as rules, each rule specifying an event matching condition, an action list, and optionally a Boolean expression whose truth value decides whether the rule can be applied at a given moment. SEC can produce output events by executing user-specified shell scripts or programs, by writing messages to files, and by various other means. SEC has been successfully applied in various domains such as network management, system monitoring, data security, intrusion detection, and log file monitoring and analysis; it has been used or integrated with many
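    The rule structure SEC applies — an event-matching condition paired with an action — can be sketched as regex rules over alarm text (an illustrative toy classifier; the patterns and labels below are hypothetical, not CI6's actual rule set):

    ```python
    import re

    # Toy SEC-style rules: a matching condition (regex over the alarm text)
    # paired with an action (tagging the alarm with an inferred emergency type).
    RULES = [
        (re.compile(r"\b(fire|smoke|flames?)\b", re.I), "FIRE"),
        (re.compile(r"\b(flood|water level|overflow)\b", re.I), "FLOOD"),
        (re.compile(r"\b(collision|crash|accident)\b", re.I), "TRAFFIC_ACCIDENT"),
    ]

    def classify_alarm(text):
        """Apply every rule to a free-text alarm; return all inferred types."""
        return [label for pattern, label in RULES if pattern.search(text)]

    print(classify_alarm("Smoke reported near the station, possible fire"))
    # prints ['FIRE']
    ```

    The real SEC rules additionally carry action lists and optional Boolean guards, as the abstract describes; the sketch keeps only the condition-to-label step.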

  10. Neural Substrates of Semantic Prospection – Evidence from the Dementias

    PubMed Central

    Irish, Muireann; Eyre, Nadine; Dermody, Nadene; O’Callaghan, Claire; Hodges, John R.; Hornberger, Michael; Piguet, Olivier

    2016-01-01

    The ability to envisage personally relevant events at a future time point represents an incredibly sophisticated cognitive endeavor and one that appears to be intimately linked to episodic memory integrity. Far less is known regarding the neurocognitive mechanisms underpinning the capacity to envisage non-personal future occurrences, known as semantic future thinking. Moreover the degree of overlap between the neural substrates supporting episodic and semantic forms of prospection remains unclear. To this end, we sought to investigate the capacity for episodic and semantic future thinking in Alzheimer’s disease (n = 15) and disease-matched behavioral-variant frontotemporal dementia (n = 15), neurodegenerative disorders characterized by significant medial temporal lobe (MTL) and frontal pathology. Participants completed an assessment of past and future thinking across personal (episodic) and non-personal (semantic) domains, as part of a larger neuropsychological battery investigating episodic and semantic processing, and their performance was contrasted with 20 age- and education-matched healthy older Controls. Participants underwent whole-brain T1-weighted structural imaging and voxel-based morphometry analysis was conducted to determine the relationship between gray matter integrity and episodic and semantic future thinking. Relative to Controls, both patient groups displayed marked future thinking impairments, extending across episodic and semantic domains. Analyses of covariance revealed that while episodic future thinking deficits could be explained solely in terms of episodic memory proficiency, semantic prospection deficits reflected the interplay between episodic and semantic processing. Distinct neural correlates emerged for each form of future simulation with differential involvement of prefrontal, lateral temporal, and medial temporal regions. Notably, the hippocampus was implicated irrespective of future thinking domain, with the suggestion of

  11. An investigation of care-based vs. rule-based morality in frontotemporal dementia, Alzheimer's disease, and healthy controls.

    PubMed

    Carr, Andrew R; Paholpak, Pongsatorn; Daianu, Madelaine; Fong, Sylvia S; Mather, Michelle; Jimenez, Elvira E; Thompson, Paul; Mendez, Mario F

    2015-11-01

    Behavioral changes in dementia, especially behavioral variant frontotemporal dementia (bvFTD), may result in alterations in moral reasoning. Investigators have not clarified whether these alterations reflect differential impairment of care-based vs. rule-based moral behavior. This study investigated 18 bvFTD patients, 22 early onset Alzheimer's disease (eAD) patients, and 20 healthy age-matched controls on care-based and rule-based items from the Moral Behavioral Inventory and the Social Norms Questionnaire, neuropsychological measures, and magnetic resonance imaging (MRI) regions of interest. There were significant group differences with the bvFTD patients rating care-based morality transgressions less severely than the eAD group and rule-based moral behavioral transgressions more severely than controls. Across groups, higher care-based morality ratings correlated with phonemic fluency on neuropsychological tests, whereas higher rule-based morality ratings correlated with increased difficulty set-shifting and learning new rules to tasks. On neuroimaging, severe care-based reasoning correlated with cortical volume in right anterior temporal lobe, and rule-based reasoning correlated with decreased cortical volume in the right orbitofrontal cortex. Together, these findings suggest that frontotemporal disease decreases care-based morality and facilitates rule-based morality possibly from disturbed contextual abstraction and set-shifting. Future research can examine whether frontal lobe disorders and bvFTD result in a shift from empathic morality to the strong adherence to conventional rules.

  12. Taxonomy, Ontology and Semantics at Johnson Space Center

    NASA Technical Reports Server (NTRS)

    Berndt, Sarah Ann

    2011-01-01

    At NASA Johnson Space Center (JSC), the Chief Knowledge Officer has been developing the JSC Taxonomy to capitalize on the accomplishments of yesterday while maintaining the flexibility needed for the evolving information environment of today. A clear vision and scope for the semantic system is integral to its success. The vision for the JSC Taxonomy is to connect information stovepipes to present a unified view of information and knowledge across the Center, across organizations, and across decades. Semantic search at JSC means seamless integration of disparate information sets into a single interface. Ever-increasing use, interest, and organizational participation mark successful integration and provide the framework for future application.

  13. Towards rule-based metabolic databases: a requirement analysis based on KEGG.

    PubMed

    Richter, Stephan; Fetzer, Ingo; Thullner, Martin; Centler, Florian; Dittrich, Peter

    2015-01-01

    Knowledge of metabolic processes is collected in easily accessible online databases, which are increasing rapidly in content and detail. Using these databases for the automatic construction of metabolic network models requires high accuracy and consistency. In this bipartite study we evaluate current accuracy and consistency problems using the KEGG database as a prominent example and propose design principles for dealing with such problems. In the first half, we present our computational approach for classifying inconsistencies and provide an overview of the classes of inconsistencies we identified. We detected inconsistencies both for database entries referring to substances and for entries referring to reactions. In the second part, we present strategies to deal with the detected problem classes. In particular, we propose a rule-based database approach which allows for the inclusion of parameterised molecular species and parameterised reactions. Detailed case studies and a comparison of explicit networks from KEGG with their anticipated rule-based representation underline the applicability and scalability of this approach. PMID:26547981
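    The idea of a parameterised reaction can be sketched as a template that expands into a family of explicit reactions (a toy illustration; the reaction template and names are hypothetical, not actual KEGG entries):

    ```python
    # Toy sketch of a parameterised (rule-based) database entry: one reaction
    # template with a chain-length parameter n stands for a whole family of
    # explicit reactions, instead of one database record per length.
    def expand_elongation_rule(n_min, n_max):
        """Instantiate 'Polymer(n) + Monomer -> Polymer(n+1)' over a length range."""
        return [f"Polymer(n={n}) + Monomer -> Polymer(n={n + 1})"
                for n in range(n_min, n_max + 1)]

    reactions = expand_elongation_rule(2, 5)
    print(len(reactions))  # prints 4: four explicit reactions from one rule
    ```

    Storing the single template rather than every instantiation is what makes the rule-based representation scale, as the abstract argues.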

  14. A Rule Based Approach to ISS Interior Volume Control and Layout

    NASA Technical Reports Server (NTRS)

    Peacock, Brian; Maida, Jim; Fitts, David; Dory, Jonathan

    2001-01-01

    Traditional human factors design involves the development of human factors requirements based on a desire to accommodate a certain percentage of the intended user population. As the product is developed, human factors evaluation involves comparison between the resulting design and the specifications. Sometimes performance metrics are involved that allow leniency in the design requirements given that the human performance result is satisfactory. Clearly such approaches may work, but they give rise to uncertainty and negotiation. An alternative approach is to adopt human factors design rules that articulate a range of each design continuum over which there are varying outcome expectations and interactions with other variables, including time. These rules are based on a consensus of human factors specialists, designers, managers and customers. The International Space Station faces exactly this challenge in interior volume control, which is based on anthropometric, performance and subjective preference criteria. This paper describes the traditional approach and then proposes a rule-based alternative. The proposed rules involve spatial, temporal and importance dimensions. If successful, this rule-based concept could be applied to many traditional human factors design variables and could lead to a more effective and efficient contribution of human factors input to the design process.

  15. The relevance of a rules-based maize marketing policy: an experimental case study of Zambia.

    PubMed

    Abbink, Klaus; Jayne, Thomas S; Moller, Lars C

    2011-01-01

    Strategic interaction between public and private actors is increasingly recognised as an important determinant of agricultural market performance in Africa and elsewhere. Trust and consultation tends to positively affect private activity while uncertainty of government behaviour impedes it. This paper reports on a laboratory experiment based on a stylised model of the Zambian maize market. The experiment facilitates a comparison between discretionary interventionism and a rules-based policy in which the government pre-commits itself to a future course of action. A simple precommitment rule can, in theory, overcome the prevailing strategic dilemma by encouraging private sector participation. Although this result is also borne out in the economic experiment, the improvement in private sector activity is surprisingly small and not statistically significant due to irrationally cautious choices by experimental governments. Encouragingly, a rules-based policy promotes a much more stable market outcome thereby substantially reducing the risk of severe food shortages. These results underscore the importance of predictable and transparent rules for the state's involvement in agricultural markets.

  16. Automated detection of pain from facial expressions: a rule-based approach using AAM

    NASA Astrophysics Data System (ADS)

    Chen, Zhanli; Ansari, Rashid; Wilkie, Diana J.

    2012-02-01

    In this paper, we examine the problem of using video analysis to assess pain, an important problem especially for critically ill, non-communicative patients, and people with dementia. We propose and evaluate an automated method to detect the presence of pain manifested in patient videos using a unique and large collection of cancer patient videos captured in patient homes. The method is based on detecting pain-related facial action units defined in the Facial Action Coding System (FACS) that is widely used for objective assessment in pain analysis. In our research, a person-specific Active Appearance Model (AAM) based on Project-Out Inverse Compositional Method is trained for each patient individually for the modeling purpose. A flexible representation of the shape model is used in a rule-based method that is better suited than the more commonly used classifier-based methods for application to the cancer patient videos in which pain-related facial actions occur infrequently and more subtly. The rule-based method relies on the feature points that provide facial action cues and is extracted from the shape vertices of AAM, which have a natural correspondence to face muscular movement. In this paper, we investigate the detection of a commonly used set of pain-related action units in both the upper and lower face. Our detection results show good agreement with the results obtained by three trained FACS coders who independently reviewed and scored the action units in the cancer patient videos.

  17. A multilayer perceptron solution to the match phase problem in rule-based artificial intelligence systems

    NASA Technical Reports Server (NTRS)

    Sartori, Michael A.; Passino, Kevin M.; Antsaklis, Panos J.

    1992-01-01

    In rule-based AI planning, expert, and learning systems, it is often the case that the left-hand-sides of the rules must be repeatedly compared to the contents of some 'working memory'. The traditional approach to solve such a 'match phase problem' for production systems is to use the Rete Match Algorithm. Here, a new technique using a multilayer perceptron, a particular artificial neural network model, is presented to solve the match phase problem for rule-based AI systems. A syntax for premise formulas (i.e., the left-hand-sides of the rules) is defined, and working memory is specified. From this, it is shown how to construct a multilayer perceptron that finds all of the rules which can be executed for the current situation in working memory. The complexity of the constructed multilayer perceptron is derived in terms of the maximum number of nodes and the required number of layers. A method for reducing the number of layers to at most three is also presented.
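    The core construction — one threshold unit per rule, whose weights and threshold encode the conjunction of its premises — can be sketched in Python (a simplified propositional version; the facts and rules are hypothetical, and the paper's construction handles a richer premise syntax):

    ```python
    # Toy sketch: working memory holds propositional facts; each rule's
    # left-hand side is a conjunction of facts.  One layer of threshold units
    # finds every fireable rule: each unit gets weight 1 from each premise
    # fact and a threshold equal to the number of premises (a logical AND).
    RULES = {
        "shutdown": {"engine-hot", "oil-low"},
        "warn-rpm": {"rpm-high"},
        "log-all":  {"engine-hot", "oil-low", "rpm-high"},
    }

    def match_phase(working_memory):
        """Return all rules whose premises are satisfied by working memory."""
        fired = []
        for rule, premises in RULES.items():
            activation = sum(1 for f in premises if f in working_memory)  # weights of 1
            if activation >= len(premises):  # threshold = number of premises
                fired.append(rule)
        return fired

    print(match_phase({"engine-hot", "oil-low"}))  # prints ['shutdown']
    ```

    In network terms, each loop iteration is one threshold unit; evaluating all of them in parallel is what replaces the sequential Rete matching step.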

  18. The effects of age on associative and rule-based causal learning and generalization.

    PubMed

    Mutter, Sharon A; Plumlee, Leslie F

    2014-06-01

    We assessed how age influences associative and rule-based processes in causal learning using the Shanks and Darby (1998) concurrent patterning discrimination task. In Experiment 1, participants were divided into groups based on their learning performance after 6 blocks of training trials. High discrimination mastery young adults learned the patterning discrimination more rapidly and accurately than moderate mastery young adults. They were also more likely to induce the patterning rule and use this rule to generate predictions for novel cues, whereas moderate mastery young adults were more likely to use cue similarity as the basis for their predictions. Like moderate mastery young adults, older adults used similarity-based generalization for novel cues, but they did not achieve the same level of patterning discrimination. In Experiment 2, young and older adults were trained to the same learning criterion. Older adults again showed deficits in patterning discrimination and, in contrast to young adults, even when they reported awareness of the patterning rule, they used only similarity-based generalization in their predictions for novel cues. These findings suggest that it is important to consider how the ability to code or use cue representations interacts with the requirements of the causal learning task. In particular, age differences in causal learning seem to be greatest for tasks that require rapid coding of configural representations to control associative interference between similar cues. Configural coding may also be related to the success of rule-based processes in these types of learning tasks.

  19. A fuzzy rule based metamodel for monthly catchment nitrate fate simulations

    NASA Astrophysics Data System (ADS)

    van der Heijden, S.; Haberlandt, U.

    2015-12-01

    The high complexity of nitrate dynamics and corresponding deterministic models makes it very appealing to employ easy, fast, and parsimonious modelling alternatives for decision support. This study presents a fuzzy rule based metamodel consisting of eight fuzzy modules, which is able to simulate nitrate fluxes in large watersheds from their diffuse sources via surface runoff, interflow, and base flow to the catchment outlet. The fuzzy rules are trained on a database established with a calibrated SWAT model for an investigation area of 1000 km². The metamodel performs well on this training area and on two out of three validation areas in different landscapes, with a Nash-Sutcliffe coefficient of around 0.5-0.7 for the monthly nitrate calculations. The fuzzy model proves to be fast, requires only a few readily available input data, and the rule based model structure facilitates a common-sense interpretation of the model, which makes the presented approach suitable for the development of decision support tools.

  20. Semantic similarity measure in biomedical domain leverage web search engine.

    PubMed

    Chen, Chi-Huang; Hsieh, Sheau-Ling; Weng, Yung-Ching; Chang, Wen-Yung; Lai, Feipei

    2010-01-01

Semantic similarity measures play an essential role in Information Retrieval and Natural Language Processing. In this paper we propose a page-count-based semantic similarity measure and apply it in biomedical domains. Previous research on Semantic Web related applications has deployed various semantic similarity measures. Despite the usefulness of these measures in those applications, measuring the semantic similarity between two terms remains a challenging task. The proposed method exploits page counts returned by the Web search engine. We define various similarity scores for two given terms P and Q, using the page counts for the queries P, Q, and P AND Q. Moreover, we propose a novel approach to compute semantic similarity using lexico-syntactic patterns with page counts. These different similarity scores are integrated using support vector machines to leverage the robustness of semantic similarity measures. Experimental results on two datasets achieve correlation coefficients of 0.798 on the dataset provided by A. Hliaoutakis, 0.705 on the dataset provided by T. Pedersen with physician scores, and 0.496 on the dataset provided by T. Pedersen et al. with expert scores.
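The paper's exact score definitions are not reproduced in the abstract; WebJaccard and a PMI-style score are standard examples of page-count similarity measures built from the counts for P, Q, and P AND Q. A hedged sketch with hypothetical counts:

```python
import math

def web_jaccard(n_p, n_q, n_pq):
    # n_p, n_q: page counts for P and Q alone; n_pq: count for "P AND Q"
    return n_pq / (n_p + n_q - n_pq) if n_pq > 0 else 0.0

def web_pmi(n_p, n_q, n_pq, n_total):
    # Pointwise mutual information estimated from page counts, where
    # n_total is an assumed size of the search engine's indexed corpus.
    if n_pq == 0:
        return 0.0
    return math.log2((n_pq / n_total) / ((n_p / n_total) * (n_q / n_total)))

# Hypothetical page counts for two biomedical terms
print(round(web_jaccard(5000, 3000, 1200), 3))  # → 0.176
```

Scores like these would then be features for the SVM integration step described above.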

  1. Semantic Annotation of Existing Geo-Datasets: A Case Study of Disaster Response in the Netherlands

    NASA Astrophysics Data System (ADS)

    Mobasheri, A.; van Oosterom, P.; Zlatanova, S.; Bakillah, M.

    2013-05-01

Use of relevant geo-information is one of the important issues for performing different tasks and processes in the disaster response phase. In order to save time and cost, services could be employed for integrating and extracting relevant up-to-date geo-information. For this purpose, the semantics of geo-information should be explicitly defined. This paper presents our initial results in applying an approach for semantic annotation of existing geo-datasets. In this research, the process of injecting semantic descriptions into geo-datasets (information integration) is called semantic annotation. A web system architecture is presented, and the semantic annotation process is described using the Meta-Annotation approach. The approach is elaborated with an example from disaster response that utilizes geo-datasets in CityGML format together with two Semantic Web languages: RDF and Notation3.

  2. Parameters of Semantic Multisensory Integration Depend on Timing and Modality Order among People on the Autism Spectrum: Evidence from Event-Related Potentials

    ERIC Educational Resources Information Center

    Russo, N.; Mottron, L.; Burack, J. A.; Jemel, B.

    2012-01-01

    Individuals with autism spectrum disorders (ASD) report difficulty integrating simultaneously presented visual and auditory stimuli (Iarocci & McDonald, 2006), albeit showing enhanced perceptual processing of unisensory stimuli, as well as an enhanced role of perception in higher-order cognitive tasks (Enhanced Perceptual Functioning (EPF) model;…

  3. Meaningful Physical Changes Mediate Lexical-Semantic Integration: Top-Down and Form-Based Bottom-Up Information Sources Interact in the N400

    ERIC Educational Resources Information Center

    Lotze, Netaya; Tune, Sarah; Schlesewsky, Matthias; Bornkessel-Schlesewsky, Ina

    2011-01-01

    Models of how the human brain reconstructs an intended meaning from a linguistic input often draw upon the N400 event-related potential (ERP) component as evidence. Current accounts of the N400 emphasise either the role of contextually induced lexical preactivation of a critical word (Lau, Phillips, & Poeppel, 2008) or the ease of integration into…

  4. The value of the Semantic Web in the laboratory.

    PubMed

    Frey, Jeremy G

    2009-06-01

The Semantic Web is beginning to impact on the wider chemical and physical sciences, beyond the earlier adopted bio-informatics. While useful in large-scale data driven science with automated processing, these technologies can also help integrate the work of smaller scale laboratories producing diverse data. The semantics aid the discovery, reliable re-use of data, provide improved provenance and facilitate automated processing by increased resilience to changes in presentation and reduced ambiguity. The Semantic Web, its tools and collections are not yet competitive with well-established solutions to current problems. It is in the reduced cost of instituting solutions to new problems that the versatility of Semantic Web-enabled data and resources will make their mark once the more general-purpose tools are more available. PMID:19508917

  6. Semantic interpretation of nominalizations

    SciTech Connect

    Hull, R.D.; Gomez, F.

    1996-12-31

A computational approach to the semantic interpretation of nominalizations is described. Interpretation of nominalizations involves three tasks: deciding whether the nominalization is being used in a verbal or non-verbal sense; disambiguating the nominalized verb when a verbal sense is used; and determining the fillers of the thematic roles of the verbal concept or predicate of the nominalization. A verbal sense can be recognized by the presence of modifiers that represent the arguments of the verbal concept. It is these same modifiers which provide the semantic clues to disambiguate the nominalized verb. In the absence of explicit modifiers, heuristics are used to discriminate between verbal and non-verbal senses. A correspondence between verbs and their nominalizations is exploited so that only a small amount of additional knowledge is needed to handle the nominal form. These methods are tested in the domain of encyclopedic texts and the results are shown.

  7. Living With Semantic Dementia

    PubMed Central

    Sage, Karen; Wilkinson, Ray; Keady, John

    2014-01-01

Semantic dementia is a variant of frontotemporal dementia and is a recently recognized diagnostic condition. There has been some research quantitatively examining care partner stress and burden in frontotemporal dementia. There are, however, few studies exploring the subjective experiences of family members caring for those with frontotemporal dementia. Increased knowledge of such experiences would allow service providers to tailor intervention, support, and information better. We used a case study design, with thematic narrative analysis applied to interview data, to describe the experiences of a wife and son caring for a husband/father with semantic dementia. Using this approach, we identified four themes: (a) living with routines, (b) policing and protecting, (c) making connections, and (d) being adaptive and flexible. Each of these themes was shared and extended, with the importance of routines in everyday life highlighted. The implications for policy, practice, and research are discussed. PMID:24532121

  8. Practical Semantic Astronomy

    NASA Astrophysics Data System (ADS)

    Graham, Matthew; Gray, N.; Burke, D.

    2010-01-01

    Many activities in the era of data-intensive astronomy are predicated upon some transference of domain knowledge and expertise from human to machine. The semantic infrastructure required to support this is no longer a pipe dream of computer science but a set of practical engineering challenges, more concerned with deployment and performance details than AI abstractions. The application of such ideas promises to help in such areas as contextual data access, exploiting distributed annotation and heterogeneous sources, and intelligent data dissemination and discovery. In this talk, we will review the status and use of semantic technologies in astronomy, particularly to address current problems in astroinformatics, with such projects as SKUA and AstroCollation.

  9. Models of Relevant Cue Integration in Name Retrieval

    ERIC Educational Resources Information Center

    Lombardi, Luigi; Sartori, Giuseppe

    2007-01-01

    Semantic features have different levels of importance in indexing a target concept. The article proposes that semantic relevance, an algorithmically derived measure based on concept descriptions, may efficiently capture the relative importance of different semantic features. Three models of how semantic features are integrated in terms of…

  10. Complex Semantic Networks

    NASA Astrophysics Data System (ADS)

    Teixeira, G. M.; Aguiar, M. S. F.; Carvalho, C. F.; Dantas, D. R.; Cunha, M. V.; Morais, J. H. M.; Pereira, H. B. B.; Miranda, J. G. V.

Verbal language is a dynamic mental process. Ideas emerge by means of the selection of words from subjective and individual characteristics throughout the oral discourse. The goal of this work is to characterize the complex network of word associations that emerges from an oral discourse on a given topic. To this end, the concepts of associative incidence and fidelity were elaborated; they represent the probability of occurrence of pairs of words in the same sentence across the whole oral discourse. Semantic networks of word associations were constructed, in which words are represented as nodes and an edge is created when the incidence-fidelity index between a pair of words exceeds a numerical limit (0.001). Twelve oral discourses were studied. The networks generated from these discourses exhibit the typical behavior of complex networks; their indices were calculated and their topologies characterized. The indices obtained for each incidence-fidelity limit exhibit a critical value at which the semantic network carries maximum conceptual information and minimum residual associations. Semantic networks generated at this incidence-fidelity limit depict a pattern of hierarchical classes that represent the different contexts used in the oral discourse.
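The construction above can be sketched as counting within-sentence word pairs and keeping edges whose co-occurrence probability exceeds the limit. The index below is a simple proxy, not the paper's exact incidence-fidelity definition, and the sentences are invented:

```python
from itertools import combinations
from collections import Counter

def build_semantic_network(sentences, limit=0.001):
    """Edges between word pairs whose within-sentence co-occurrence
    probability (a proxy for the incidence-fidelity index) exceeds `limit`."""
    pair_counts = Counter()
    total_pairs = 0
    for sentence in sentences:
        words = set(sentence.lower().split())
        for pair in combinations(sorted(words), 2):
            pair_counts[pair] += 1
            total_pairs += 1
    return {pair for pair, count in pair_counts.items()
            if count / total_pairs > limit}

edges = build_semantic_network([
    "ideas emerge from words",
    "words form a network",
    "ideas form a network of words",
])
print(("network", "words") in edges)  # → True
```

Raising `limit` prunes residual associations; the paper reports a critical value of the limit at which the remaining network is most informative.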

  11. Semantic Enhancement for Enterprise Data Management

    NASA Astrophysics Data System (ADS)

    Ma, Li; Sun, Xingzhi; Cao, Feng; Wang, Chen; Wang, Xiaoyuan; Kanellos, Nick; Wolfson, Dan; Pan, Yue

Taking customer data as an example, the paper presents an approach to enhance the management of enterprise data by using Semantic Web technologies. Customer data is the most important kind of core business entity a company uses repeatedly across many business processes and systems, and customer data management (CDM) is becoming critical for enterprises because it keeps a single, complete and accurate record of customers across the enterprise. Existing CDM systems focus on integrating customer data from all customer-facing channels and front and back office systems through multiple interfaces, as well as publishing customer data to different applications. To make effective use of the CDM system, this paper investigates semantic query and analysis over the integrated and centralized customer data, enabling automatic classification and relationship discovery. We have implemented these features over IBM Websphere Customer Center and shown the prototype to our clients. We believe that our study and experiences are valuable for both the Semantic Web and data management communities.

  12. Choosing goals, not rules: deciding among rule-based action plans.

    PubMed

    Klaes, Christian; Westendorff, Stephanie; Chakrabarti, Shubhodeep; Gail, Alexander

    2011-05-12

    In natural situations, movements are often directed toward locations different from that of the evoking sensory stimulus. Movement goals must then be inferred from the sensory cue based on rules. When there is uncertainty about the rule that applies for a given cue, planning a movement involves both choosing the relevant rule and computing the movement goal based on that rule. Under these conditions, it is not clear whether primates compute multiple movement goals based on all possible rules before choosing an action, or whether they first choose a rule and then only represent the movement goal associated with that rule. Supporting the former hypothesis, we show that neurons in the frontoparietal reach areas of monkeys simultaneously represent two different rule-based movement goals, which are biased by the monkeys' choice preferences. Apparently, primates choose between multiple behavioral options by weighing against each other the movement goals associated with each option.

  13. Modeling for (physical) biologists: an introduction to the rule-based approach.

    PubMed

    Chylek, Lily A; Harris, Leonard A; Faeder, James R; Hlavacek, William S

    2015-07-16

    Models that capture the chemical kinetics of cellular regulatory networks can be specified in terms of rules for biomolecular interactions. A rule defines a generalized reaction, meaning a reaction that permits multiple reactants, each capable of participating in a characteristic transformation and each possessing certain, specified properties, which may be local, such as the state of a particular site or domain of a protein. In other words, a rule defines a transformation and the properties that reactants must possess to participate in the transformation. A rule also provides a rate law. A rule-based approach to modeling enables consideration of mechanistic details at the level of functional sites of biomolecules and provides a facile and visual means for constructing computational models, which can be analyzed to study how system-level behaviors emerge from component interactions.
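The core idea of the abstract, that a rule constrains only the sites it mentions and supplies a rate law, can be illustrated in a few lines. This is a toy sketch (not the authors' formalism or a rule language like BioNetGen): molecules are dicts of site states, and a hypothetical phosphorylation rule matches any molecule whose site "Y" is unmodified, regardless of its other sites.

```python
def rule_phosphorylate(molecules, site="Y", rate=0.1):
    """Apply one rule: match molecules with `site` unphosphorylated ('u'),
    transform them to phosphorylated ('p'), and return the mass-action
    rate implied by the rule's rate constant. Other sites are unconstrained."""
    matches = [m for m in molecules if m.get(site) == "u"]
    products = [{**m, site: "p"} for m in matches]
    total_rate = rate * len(matches)   # rate law: k * (number of matches)
    return products, total_rate

pool = [
    {"Y": "u", "S": "u"},   # matches: Y is unphosphorylated
    {"Y": "u", "S": "p"},   # matches: the state of S does not matter
    {"Y": "p", "S": "u"},   # no match
]
products, total = rule_phosphorylate(pool)
print(len(products), total)  # → 2 0.2
```

The point of the rule-based style is exactly this: one rule covers every reactant configuration that satisfies its local pattern, instead of one reaction per configuration.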

  14. Auto-control of pumping operations in sewerage systems by rule-based fuzzy neural networks

    NASA Astrophysics Data System (ADS)

    Chiang, Y.-M.; Chang, L.-C.; Tsai, M.-J.; Wang, Y.-F.; Chang, F.-J.

    2011-01-01

Pumping stations play an important role in flood mitigation in metropolitan areas. The existing sewerage systems, however, are facing the great challenge of fast-rising peak flows resulting from urbanization and climate change. It is imperative to construct an efficient and accurate operating prediction model for pumping stations to simulate the drainage mechanism for discharging the rainwater in advance. In this study, we propose two rule-based fuzzy neural networks, the adaptive neuro-fuzzy inference system (ANFIS) and the counterpropagation fuzzy neural network (CFNN), for on-line prediction of the number of open and closed pumps at a pivotal pumping station in Taipei city up to a lead time of 20 min. The performance of ANFIS outperforms that of CFNN in terms of model efficiency, accuracy, and correctness. Furthermore, the results not only show that the predicted water levels contribute to the successful operation of pumping stations but also demonstrate the applicability and reliability of ANFIS in automatically controlling urban sewerage systems.
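The fuzzy-rule layer of a model like ANFIS can be sketched as zeroth-order Sugeno inference: each rule maps a fuzzy water-level set to a pump count, and the output is the firing-strength-weighted average. The membership ranges and consequents below are illustrative assumptions, not the paper's trained parameters:

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def predict_open_pumps(level):
    """Zeroth-order Sugeno inference over three hypothetical rules."""
    rules = [
        (lambda x: tri(x, -1.0, 0.0, 1.0), 0),   # low level    -> 0 pumps
        (lambda x: tri(x, 0.0, 1.0, 2.0), 2),    # medium level -> 2 pumps
        (lambda x: tri(x, 1.0, 2.0, 3.0), 4),    # high level   -> 4 pumps
    ]
    weights = [(mf(level), pumps) for mf, pumps in rules]
    total = sum(w for w, _ in weights)
    return sum(w * p for w, p in weights) / total if total else 0

print(predict_open_pumps(1.5))  # → 3.0
```

ANFIS additionally tunes the membership parameters and consequents from data; this sketch only shows the rule-evaluation step.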

  15. Auto-control of pumping operations in sewerage systems by rule-based fuzzy neural networks

    NASA Astrophysics Data System (ADS)

    Chiang, Y.-M.; Chang, L.-C.; Tsai, M.-J.; Wang, Y.-F.; Chang, F.-J.

    2010-09-01

Pumping stations play an important role in flood mitigation in metropolitan areas. The existing sewerage systems, however, are facing the great challenge of fast-rising peak flows resulting from urbanization and climate change. It is imperative to construct an efficient and accurate operating prediction model for pumping stations to simulate the drainage mechanism for discharging the rainwater in advance. In this study, we propose two rule-based fuzzy neural networks, the adaptive neuro-fuzzy inference system (ANFIS) and the counterpropagation fuzzy neural network (CFNN), for on-line prediction of the number of open and closed pumps at a pivotal pumping station in Taipei city up to a lead time of 20 min. The performance of ANFIS outperforms that of CFNN in terms of model efficiency, accuracy, and correctness. Furthermore, the results not only show that the predicted water levels contribute to the successful operation of pumping stations but also demonstrate the applicability and reliability of ANFIS in automatically controlling urban sewerage systems.

  16. Modeling for (physical) biologists: an introduction to the rule-based approach.

    PubMed

    Chylek, Lily A; Harris, Leonard A; Faeder, James R; Hlavacek, William S

    2015-07-01

    Models that capture the chemical kinetics of cellular regulatory networks can be specified in terms of rules for biomolecular interactions. A rule defines a generalized reaction, meaning a reaction that permits multiple reactants, each capable of participating in a characteristic transformation and each possessing certain, specified properties, which may be local, such as the state of a particular site or domain of a protein. In other words, a rule defines a transformation and the properties that reactants must possess to participate in the transformation. A rule also provides a rate law. A rule-based approach to modeling enables consideration of mechanistic details at the level of functional sites of biomolecules and provides a facile and visual means for constructing computational models, which can be analyzed to study how system-level behaviors emerge from component interactions. PMID:26178138

  17. Rule-based Modeling and Simulation of Biochemical Systems with Molecular Finite Automata

    PubMed Central

    Yang, Jin; Meng, Xin; Hlavacek, William S.

    2011-01-01

    We propose a theoretical formalism, molecular finite automata (MFA), to describe individual proteins as rule-based computing machines. The MFA formalism provides a framework for modeling individual protein behaviors and systems-level dynamics via construction of programmable and executable machines. Models specified within this formalism explicitly represent the context-sensitive dynamics of individual proteins driven by external inputs and represent protein-protein interactions as synchronized machine reconfigurations. Both deterministic and stochastic simulations can be applied to quantitatively compute the dynamics of MFA models. We apply the MFA formalism to model and simulate a simple example of a signal transduction system that involves a MAP kinase cascade and a scaffold protein. PMID:21073243
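The machine-reconfiguration idea can be illustrated with a minimal automaton: states are protein configurations and inputs are external events. The states and transitions below are illustrative, not the paper's MAP kinase model:

```python
class MolecularAutomaton:
    """Toy sketch of a protein as a finite automaton: a transition table
    maps (state, input event) to the next state."""

    def __init__(self, transitions, state):
        self.transitions = transitions   # dict: (state, event) -> next state
        self.state = state

    def feed(self, event):
        # Events with no matching transition leave the state unchanged,
        # mimicking context-sensitive responses to inputs.
        self.state = self.transitions.get((self.state, event), self.state)
        return self.state

kinase = MolecularAutomaton({
    ("inactive", "phosphorylate"): "active",
    ("active", "dephosphorylate"): "inactive",
}, state="inactive")

for event in ["phosphorylate", "dephosphorylate", "phosphorylate"]:
    kinase.feed(event)
print(kinase.state)  # → active
```

In the MFA formalism, protein-protein interactions would synchronize reconfigurations across several such machines; this sketch shows only a single machine driven by external inputs.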

  18. Note on the rationality of rule-based versus exemplar-based processing in human judgment.

    PubMed

    Juslin, Peter; Olsson, Henrik

    2004-02-01

    This paper reports a study of the relationship between rule- versus exemplar-based processing and criteria for rationality of judgment. Participants made probability judgments in a classification task devised by S. W. Allen and L. R. Brooks (1991). In the exemplar condition, the miscalibration was accounted for by stochastic components of the judgment with a format-dependence effect, implying simultaneous over- and underconfidence depending on the response scale. In the rule condition, there was an overconfidence bias not accounted for by the stochastic components of judgment. In both conditions the participants were additive on average and reasonably transitive, but the larger stochastic component in the exemplar condition produced somewhat larger absolute deviations. The results suggest that exemplar processes are unbiased but more perturbed by stochastic components, while rule-based processes may be more prone to bias. PMID:15016277

  19. A Rule-Based Modeling for the Description of Flexible and Self-healing Business Processes

    NASA Astrophysics Data System (ADS)

    Boukhebouze, Mohamed; Amghar, Youssef; Benharkat, Aïcha-Nabila; Maamar, Zakaria

In this paper we discuss the importance of ensuring that business processes are at the same time robust and agile. To this end, we consider reviewing the way business processes are managed. For instance, we consider offering a flexible way to model processes so that changes in regulations are handled through some self-healing mechanisms. These changes may raise exceptions at run-time if not properly reflected in these processes. To this end we propose a new rule-based model that adopts ECA rules and is built upon formal tools. The business logic of a process can be summarized as a set of rules that implement an organization's policies. Each business rule is formalized using our ECAPE formalism (Event-Condition-Action-Post-condition-Post-event). This formalism allows translating a process into a graph of rules that is analyzed in terms of reliability and flexibility.
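The shape of an ECAPE rule can be sketched as five parts: a triggering event, a guard condition, an action, a post-condition check (the hook where self-healing could intervene), and a follow-up event. Field names and the order-validation example are illustrative, not the paper's formalism:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class ECAPERule:
    """Toy Event-Condition-Action-Post-condition-Post-event rule."""
    event: str
    condition: Callable[[dict], bool]
    action: Callable[[dict], None]
    post_condition: Callable[[dict], bool]
    post_event: str

    def fire(self, event, ctx):
        if event != self.event or not self.condition(ctx):
            return None                  # rule not triggered
        self.action(ctx)
        if not self.post_condition(ctx):
            return "exception"           # hook for a self-healing mechanism
        return self.post_event           # emitted to trigger downstream rules

rule = ECAPERule(
    event="order_received",
    condition=lambda ctx: ctx["amount"] > 0,
    action=lambda ctx: ctx.update(status="validated"),
    post_condition=lambda ctx: ctx["status"] == "validated",
    post_event="order_validated",
)
print(rule.fire("order_received", {"amount": 42, "status": "new"}))  # → order_validated
```

Chaining rules through their post-events is what yields the graph of rules mentioned above.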

  20. Modeling for (physical) biologists: an introduction to the rule-based approach

    NASA Astrophysics Data System (ADS)

    Chylek, Lily A.; Harris, Leonard A.; Faeder, James R.; Hlavacek, William S.

    2015-07-01

    Models that capture the chemical kinetics of cellular regulatory networks can be specified in terms of rules for biomolecular interactions. A rule defines a generalized reaction, meaning a reaction that permits multiple reactants, each capable of participating in a characteristic transformation and each possessing certain, specified properties, which may be local, such as the state of a particular site or domain of a protein. In other words, a rule defines a transformation and the properties that reactants must possess to participate in the transformation. A rule also provides a rate law. A rule-based approach to modeling enables consideration of mechanistic details at the level of functional sites of biomolecules and provides a facile and visual means for constructing computational models, which can be analyzed to study how system-level behaviors emerge from component interactions.

  1. Rule-Based vs. Behavior-Based Self-Deployment for Mobile Wireless Sensor Networks.

    PubMed

    Urdiales, Cristina; Aguilera, Francisco; González-Parada, Eva; Cano-García, Jose; Sandoval, Francisco

    2016-01-01

    In mobile wireless sensor networks (MWSN), nodes are allowed to move autonomously for deployment. This process is meant: (i) to achieve good coverage; and (ii) to distribute the communication load as homogeneously as possible. Rather than optimizing deployment, reactive algorithms are based on a set of rules or behaviors, so nodes can determine when to move. This paper presents an experimental evaluation of both reactive deployment approaches: rule-based and behavior-based ones. Specifically, we compare a backbone dispersion algorithm with a social potential fields algorithm. Most tests are done under simulation for a large number of nodes in environments with and without obstacles. Results are validated using a small robot network in the real world. Our results show that behavior-based deployment tends to provide better coverage and communication balance, especially for a large number of nodes in areas with obstacles. PMID:27399709
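A social potential fields controller of the kind compared here typically combines short-range repulsion with long-range attraction, so nodes spread out without disconnecting. The functional form and constants below are a generic sketch, not the parameters of the evaluated algorithm:

```python
def social_force(r, c_rep=1.0, c_att=0.1, sigma_rep=2.0, sigma_att=1.0):
    """Signed pairwise force magnitude at distance r between two nodes:
    negative values repel (dominant at short range), positive values
    attract (dominant at long range)."""
    return -c_rep / r**sigma_rep + c_att / r**sigma_att

# Near a neighbor, repulsion dominates; far away, attraction dominates.
print(social_force(1.0) < 0, social_force(20.0) > 0)  # → True True
```

Each node sums this force over its neighbors and moves along the resultant; the equilibrium spacing is where the two terms cancel (here at r = c_rep / c_att = 10).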

  2. Modeling for (physical) biologists: an introduction to the rule-based approach

    PubMed Central

    Chylek, Lily A; Harris, Leonard A; Faeder, James R; Hlavacek, William S

    2015-01-01

    Models that capture the chemical kinetics of cellular regulatory networks can be specified in terms of rules for biomolecular interactions. A rule defines a generalized reaction, meaning a reaction that permits multiple reactants, each capable of participating in a characteristic transformation and each possessing certain, specified properties, which may be local, such as the state of a particular site or domain of a protein. In other words, a rule defines a transformation and the properties that reactants must possess to participate in the transformation. A rule also provides a rate law. A rule-based approach to modeling enables consideration of mechanistic details at the level of functional sites of biomolecules and provides a facile and visual means for constructing computational models, which can be analyzed to study how system-level behaviors emerge from component interactions. PMID:26178138

  3. Rule-Based vs. Behavior-Based Self-Deployment for Mobile Wireless Sensor Networks.

    PubMed

    Urdiales, Cristina; Aguilera, Francisco; González-Parada, Eva; Cano-García, Jose; Sandoval, Francisco

    2016-07-07

    In mobile wireless sensor networks (MWSN), nodes are allowed to move autonomously for deployment. This process is meant: (i) to achieve good coverage; and (ii) to distribute the communication load as homogeneously as possible. Rather than optimizing deployment, reactive algorithms are based on a set of rules or behaviors, so nodes can determine when to move. This paper presents an experimental evaluation of both reactive deployment approaches: rule-based and behavior-based ones. Specifically, we compare a backbone dispersion algorithm with a social potential fields algorithm. Most tests are done under simulation for a large number of nodes in environments with and without obstacles. Results are validated using a small robot network in the real world. Our results show that behavior-based deployment tends to provide better coverage and communication balance, especially for a large number of nodes in areas with obstacles.

  4. Rule-Based vs. Behavior-Based Self-Deployment for Mobile Wireless Sensor Networks

    PubMed Central

    Urdiales, Cristina; Aguilera, Francisco; González-Parada, Eva; Cano-García, Jose; Sandoval, Francisco

    2016-01-01

    In mobile wireless sensor networks (MWSN), nodes are allowed to move autonomously for deployment. This process is meant: (i) to achieve good coverage; and (ii) to distribute the communication load as homogeneously as possible. Rather than optimizing deployment, reactive algorithms are based on a set of rules or behaviors, so nodes can determine when to move. This paper presents an experimental evaluation of both reactive deployment approaches: rule-based and behavior-based ones. Specifically, we compare a backbone dispersion algorithm with a social potential fields algorithm. Most tests are done under simulation for a large number of nodes in environments with and without obstacles. Results are validated using a small robot network in the real world. Our results show that behavior-based deployment tends to provide better coverage and communication balance, especially for a large number of nodes in areas with obstacles. PMID:27399709

  5. Reliability Assessment and Robustness Study for Key Navigation Components using Belief Rule Based System

    NASA Astrophysics Data System (ADS)

    You, Yuan; Wang, Liuying; Chang, Leilei; Ling, Xiaodong; Sun, Nan

    2016-02-01

The gyro device is the key navigation component for maritime tracking and control, and gyro shift is the key factor that influences the performance of the gyro device, which makes reliability analysis of the gyro device very important. For gyro device reliability analysis, residual life probability prediction plays an essential role, although existing studies adopt a complex process for it. In this study the Belief Rule Base (BRB) system is applied to model the relationship between time as the input and residual life probability as the output. Two scenarios are designed to study the robustness of the proposed BRB prediction model. The comparative results show that the BRB prediction model performs better in Scenario II, where the new referenced values are predictable.

  6. Accurate crop classification using hierarchical genetic fuzzy rule-based systems

    NASA Astrophysics Data System (ADS)

    Topaloglou, Charalampos A.; Mylonas, Stelios K.; Stavrakoudis, Dimitris G.; Mastorocostas, Paris A.; Theocharis, John B.

    2014-10-01

This paper investigates the effectiveness of an advanced classification system for accurate crop classification using very high resolution (VHR) satellite imagery. Specifically, a recently proposed genetic fuzzy rule-based classification system (GFRBCS) is employed, namely, the Hierarchical Rule-based Linguistic Classifier (HiRLiC). HiRLiC's model comprises a small set of simple IF-THEN fuzzy rules, easily interpretable by humans. One of its most important attributes is that its learning algorithm requires minimum user interaction, since the most important learning parameters affecting the classification accuracy are determined automatically. HiRLiC is applied in a challenging crop classification task, using a SPOT5 satellite image over an intensively cultivated area in a lake-wetland ecosystem in northern Greece. A rich set of higher-order spectral and textural features is derived from the initial bands of the (pan-sharpened) image, resulting in an input space comprising 119 features. The experimental analysis proves that HiRLiC compares favorably to other interpretable classifiers in the literature, both in terms of structural complexity and classification accuracy. Its testing accuracy was very close to that obtained by complex state-of-the-art classification systems, such as the support vector machine (SVM) and random forest (RF) classifiers. Nevertheless, visual inspection of the derived classification maps shows that HiRLiC has better generalization properties, providing more homogeneous classifications than its competitors. Moreover, the runtime required to produce the thematic map was orders of magnitude lower than that of the competitors.

  7. Graph Mining Meets the Semantic Web

    SciTech Connect

    Lee, Sangkeun; Sukumar, Sreenivas R; Lim, Seung-Hwan

    2015-01-01

The Resource Description Framework (RDF) and SPARQL Protocol and RDF Query Language (SPARQL) were introduced about a decade ago to enable flexible schema-free data interchange on the Semantic Web. Today, data scientists use the framework as a scalable graph representation for integrating, querying, exploring and analyzing data sets hosted at different sources. With increasing adoption, the need for graph mining capabilities for the Semantic Web has emerged. We address that need through the implementation of three popular iterative graph mining algorithms (triangle count, connected component analysis, and PageRank). We implement these algorithms as SPARQL queries, wrapped within Python scripts. We evaluate the performance of our implementation on six real-world data sets and show that graph mining algorithms that have a linear-algebra formulation can indeed be unleashed on data represented as RDF graphs using the SPARQL query interface.
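Triangle count, one of the three algorithms mentioned, has the classic linear-algebra form the authors allude to: for an undirected graph with adjacency matrix A, the number of triangles is trace(A³)/6. A minimal sketch in plain Python (the SPARQL wrapping is not reproduced here):

```python
def matmul(a, b):
    """Dense square matrix product in plain Python."""
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def triangle_count(adj):
    """Triangles in an undirected graph: trace(A^3) / 6, since each
    triangle contributes 6 closed length-3 walks on the diagonal."""
    a3 = matmul(matmul(adj, adj), adj)
    trace = sum(a3[i][i] for i in range(len(adj)))
    return trace // 6

# 4-node graph: triangle {0, 1, 2} plus a pendant edge 2-3
adj = [
    [0, 1, 1, 0],
    [1, 0, 1, 0],
    [1, 1, 0, 1],
    [0, 0, 1, 0],
]
print(triangle_count(adj))  # → 1
```

In the authors' setting the same count would be obtained by a SPARQL query matching the cyclic triple pattern `?a ?p ?b . ?b ?q ?c . ?c ?r ?a` over the RDF graph.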

  8. Semantic Metadata for Heterogeneous Spatial Planning Documents

    NASA Astrophysics Data System (ADS)

    Iwaniak, A.; Kaczmarek, I.; Łukowicz, J.; Strzelecki, M.; Coetzee, S.; Paluszyński, W.

    2016-09-01

    Spatial planning documents contain information about the principles and rights of land use in different zones of a local authority. They are the basis for administrative decision making in support of sustainable development. In Poland these documents are published on the Web according to a prescribed non-extendable XML schema, designed for optimum presentation to humans in HTML web pages. There is no document standard, and limited functionality exists for adding references to external resources. The text in these documents is discoverable and searchable by general-purpose web search engines, but the semantics of the content cannot be discovered or queried. The spatial information in these documents is geographically referenced but not machine-readable. Major manual efforts are required to integrate such heterogeneous spatial planning documents from various local authorities for analysis, scenario planning and decision support. This article presents results of an implementation using machine-readable semantic metadata to identify relationships among regulations in the text, spatial objects in the drawings and links to external resources. A spatial planning ontology was used to annotate different sections of spatial planning documents with semantic metadata in the Resource Description Framework in Attributes (RDFa). The semantic interpretation of the content, links between document elements and links to external resources were embedded in XHTML pages. An example and use case from the spatial planning domain in Poland is presented to evaluate its efficiency and applicability. The solution enables the automated integration of spatial planning documents from multiple local authorities to assist decision makers with understanding and interpreting spatial planning information. The approach is equally applicable to legal documents from other countries and domains, such as cultural heritage and environmental management.

  9. Using semantic information for processing negation and disjunction in logic programs

    SciTech Connect

    Gaasterland, T.; Lobo, J.

    1993-07-01

    There are many applications in which integrity constraints can play an important role. An example is the semantic query optimization method developed by Chakravarthy, Grant, and Minker for definite deductive databases. They use integrity constraints during query processing to prevent the exploration of search space that is bound to fail. In this paper, the authors generalize the semantic query optimization method to apply to negated atoms. The generalized method is referred to as semantic compilation. They show that semantic compilation provides an alternative search space for negative query literals. They also show how semantic compilation can be used to transform a disjunctive database with or without functions and denial constraints without negation into a new disjunctive database that complies with the integrity constraints.
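
    A toy sketch of the core idea, with invented predicates: a denial integrity constraint plus a positive fact can answer a negated atom directly, providing an alternative to exhaustive negation-as-failure search:

```python
# Toy sketch of using a denial integrity constraint to answer a negated
# atom directly, instead of searching the whole space for a failed proof.
# The facts and the constraint are illustrative, not from the paper.

facts = {("bird", "tweety"), ("fish", "nemo")}

# Denial constraint: <- bird(X), mammal(X)  (no individual is both).
# Semantic compilation turns this into: bird(X) implies not mammal(X).
denials = [("bird", "mammal")]

def entails_negation(pred, const):
    """True if 'not pred(const)' follows from a denial constraint plus a
    positive fact, without exhaustive negation-as-failure search."""
    for p, q in denials:
        if q == pred and (p, const) in facts:
            return True   # p(const) holds, so q(const) is ruled out
        if p == pred and (q, const) in facts:
            return True
    return False

print(entails_negation("mammal", "tweety"))  # True: tweety is a bird
print(entails_negation("mammal", "nemo"))    # False: no constraint applies
```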

  11. A federated semantic metadata registry framework for enabling interoperability across clinical research and care domains.

    PubMed

    Sinaci, A Anil; Laleci Erturkmen, Gokce B

    2013-10-01

    In order to enable secondary use of Electronic Health Records (EHRs) by bridging the interoperability gap between clinical care and research domains, in this paper a unified methodology and the supporting framework are introduced which bring together the power of metadata registries (MDR) and semantic web technologies. We introduce a federated semantic metadata registry framework by extending the ISO/IEC 11179 standard, and enable integration of data element registries through Linked Open Data (LOD) principles, where each Common Data Element (CDE) can be uniquely referenced, queried and processed to enable syntactic and semantic interoperability. Each CDE and its components are maintained as LOD resources enabling semantic links with other CDEs, terminology systems and implementation-dependent content models; hence facilitating semantic search, more effective reuse and semantic interoperability across different application domains. There are several important efforts addressing semantic interoperability in the healthcare domain, such as the IHE DEX profile proposal, CDISC SHARE and CDISC2RDF. Our architecture complements these by providing a framework to interlink existing data element registries and repositories, multiplying their potential for semantic interoperability to a greater extent. The open source implementation of the federated semantic MDR framework presented in this paper is the core of the semantic interoperability layer of the SALUS project, which enables the execution of post-marketing safety analysis studies on top of existing EHR systems. PMID:23751263
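
    A minimal sketch of the Linked-Data-style linking of data elements: each CDE is addressed by a URI and can link to equivalent CDEs in other registries. The registry contents and URIs below are invented for illustration; the real framework follows the ISO/IEC 11179 metamodel:

```python
# Sketch: CDEs as linked resources, with 'sameAs'-style links followed
# across registries. URIs and definitions are invented examples.
registries = {
    "http://ehr.example.org/cde/systolic-bp": {
        "definition": "Systolic blood pressure measurement",
        "sameAs": ["http://research.example.org/cde/sbp"],
    },
    "http://research.example.org/cde/sbp": {
        "definition": "Systolic BP (research case report form)",
        "sameAs": [],
    },
}

def resolve(uri, seen=None):
    """Follow equivalence links across registries, collecting the set of
    interlinked CDE URIs reachable from the starting element."""
    seen = set() if seen is None else seen
    if uri in seen or uri not in registries:
        return seen
    seen.add(uri)
    for linked in registries[uri]["sameAs"]:
        resolve(linked, seen)
    return seen

print(sorted(resolve("http://ehr.example.org/cde/systolic-bp")))
```

    Dereferencing one CDE yields its whole equivalence cluster, which is what lets a query written against a research CDE be answered from clinical-care data.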

  13. SAS- Semantic Annotation Service for Geoscience resources on the web

    NASA Astrophysics Data System (ADS)

    Elag, M.; Kumar, P.; Marini, L.; Li, R.; Jiang, P.

    2015-12-01

    There is a growing need for increased integration across the data and model resources that are disseminated on the web to advance their reuse across different earth science applications. Meaningful reuse of resources requires semantic metadata to realize the semantic web vision of pragmatic linkage and integration among resources. Semantic metadata associates standard metadata with resources to turn them into semantically-enabled resources on the web. However, the lack of a common standardized metadata framework, as well as the uncoordinated use of metadata fields across different geo-information systems, has led to a situation in which standards and related Standard Names abound. To address this need, we have designed SAS to provide a bridge between the core ontologies required to annotate resources and information systems, in order to enable queries and analysis over annotations from a single web environment. SAS is one of the services provided by the Geosemantic framework, a decentralized semantic framework that supports the integration between models and data and allows semantically heterogeneous resources to interact with minimal human intervention. Here we present the design of SAS and demonstrate its application for annotating data and models. First we describe how predicates and their attributes are extracted from standards and ingested into the knowledge base of the Geosemantic framework. Then we illustrate the application of SAS in annotating data managed by SEAD and annotating simulation models that have a web interface. SAS is a step in a broader approach to raise the quality of geoscience data and models published on the web and to allow users to better search, access, and use existing resources based on standard vocabularies that are encoded and published using semantic technologies.

  14. The Semantic Mapping of Archival Metadata to the CIDOC CRM Ontology

    ERIC Educational Resources Information Center

    Bountouri, Lina; Gergatsoulis, Manolis

    2011-01-01

    In this article we analyze the main semantics of archival description, expressed through Encoded Archival Description (EAD). Our main target is to map the semantics of EAD to the CIDOC Conceptual Reference Model (CIDOC CRM) ontology as part of a wider integration architecture of cultural heritage metadata. Through this analysis, it is concluded…

  15. EIIS: An Educational Information Intelligent Search Engine Supported by Semantic Services

    ERIC Educational Resources Information Center

    Huang, Chang-Qin; Duan, Ru-Lin; Tang, Yong; Zhu, Zhi-Ting; Yan, Yong-Jian; Guo, Yu-Qing

    2011-01-01

    The semantic web brings a new opportunity for efficient information organization and search. To meet the special requirements of the educational field, this paper proposes an intelligent search engine enabled by educational semantic support service, where three kinds of searches are integrated into Educational Information Intelligent Search (EIIS)…

  16. eFSM--a novel online neural-fuzzy semantic memory model.

    PubMed

    Tung, Whye Loon; Quek, Chai

    2010-01-01

    Fuzzy rule-based systems (FRBSs) have been successfully applied to many areas. However, traditional fuzzy systems are often manually crafted, and their rule bases that represent the acquired knowledge are static and cannot be trained to improve the modeling performance. This subsequently leads to intensive research on the autonomous construction and tuning of a fuzzy system directly from the observed training data to address the knowledge acquisition bottleneck, resulting in well-established hybrids such as neural-fuzzy systems (NFSs) and genetic fuzzy systems (GFSs). However, the complex and dynamic nature of real-world problems demands that fuzzy rule-based systems and models be able to adapt their parameters and ultimately evolve their rule bases to address the nonstationary (time-varying) characteristics of their operating environments. Recently, considerable research efforts have been directed to the study of evolving Takagi-Sugeno (T-S)-type NFSs based on the concept of incremental learning. In contrast, there are very few incremental learning Mamdani-type NFSs reported in the literature. Hence, this paper presents the evolving neural-fuzzy semantic memory (eFSM) model, a neural-fuzzy Mamdani architecture with a data-driven progressively adaptive structure (i.e., rule base) based on incremental learning. Issues related to the incremental learning of the eFSM rule base are carefully investigated, and a novel parameter learning approach is proposed for the tuning of the fuzzy set parameters in eFSM. The proposed eFSM model elicits highly interpretable semantic knowledge in the form of Mamdani-type if-then fuzzy rules from low-level numeric training data. These Mamdani fuzzy rules define the computing structure of eFSM and are incrementally learned with the arrival of each training data sample. New rules are constructed from the emergence of novel training data, and obsolete fuzzy rules that no longer describe the recently observed data trends are pruned.
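
    A compact sketch of incremental Mamdani-style rule learning in the spirit described above: a new if-then rule is created only when no existing rule covers an incoming sample well. The Gaussian membership width and novelty threshold are illustrative choices, not eFSM's actual parameters:

```python
# Sketch: grow a Mamdani-style rule base incrementally; a sample in a
# novel input region spawns a new rule, otherwise existing rules cover it.
import math

rules = []          # each rule: (input_center, output_center)
NOVELTY = 0.5       # create a rule when the best firing strength is below this
WIDTH = 1.0         # Gaussian membership width (illustrative)

def fire(center, x):
    """Firing strength of a rule's antecedent at input x."""
    return math.exp(-((x - center) ** 2) / (2 * WIDTH ** 2))

def learn(x, y):
    strengths = [fire(cx, x) for cx, _ in rules]
    if not rules or max(strengths) < NOVELTY:
        rules.append((x, y))            # novel region: grow the rule base

def predict(x):
    """Weighted average of rule consequents (simplified defuzzification)."""
    num = sum(fire(cx, x) * cy for cx, cy in rules)
    den = sum(fire(cx, x) for cx, _ in rules)
    return num / den

for x, y in [(0.0, 0.0), (0.2, 0.1), (5.0, 10.0)]:
    learn(x, y)

print(len(rules))                # 2: x=0.2 is covered by the first rule
print(round(predict(5.0), 2))    # 10.0
```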

  17. Neural changes associated with semantic processing in healthy aging despite intact behavioral performance.

    PubMed

    Lacombe, Jacinthe; Jolicoeur, Pierre; Grimault, Stephan; Pineault, Jessica; Joubert, Sven

    2015-10-01

    Semantic memory recruits an extensive neural network including the left inferior prefrontal cortex (IPC) and the left temporoparietal region, which are involved in semantic control processes, as well as the anterior temporal lobe region (ATL) which is considered to be involved in processing semantic information at a central level. However, little is known about the underlying neuronal integrity of the semantic network in normal aging. Young and older healthy adults carried out a semantic judgment task while their cortical activity was recorded using magnetoencephalography (MEG). Despite equivalent behavioral performance, young adults activated the left IPC to a greater extent than older adults, while the latter group recruited the temporoparietal region bilaterally and the left ATL to a greater extent than younger adults. Results indicate that significant neuronal changes occur in normal aging, mainly in regions underlying semantic control processes, despite an apparent stability in performance at the behavioral level. PMID:26282079

  18. From Science to e-Science to Semantic e-Science: A Heliophysics Case Study

    NASA Technical Reports Server (NTRS)

    Narock, Thomas; Fox, Peter

    2011-01-01

    The past few years have witnessed unparalleled efforts to make scientific data web accessible. The Semantic Web has proven invaluable in this effort; however, much of the literature is devoted to system design, ontology creation, and trials and tribulations of current technologies. In order to fully develop the nascent field of Semantic e-Science we must also evaluate systems in real-world settings. We describe a case study within the field of Heliophysics and provide a comparison of the evolutionary stages of data discovery, from manual to semantically enabled. We describe the socio-technical implications of moving toward automated and intelligent data discovery. In doing so, we highlight how this process enhances what is currently being done manually in various scientific disciplines. Our case study illustrates that Semantic e-Science is more than just semantic search. The integration of search with web services, relational databases, and other cyberinfrastructure is a central tenet of our case study and one that we believe has applicability as a generalized research area within Semantic e-Science. This case study illustrates a specific example of the benefits, and limitations, of semantically replicating data discovery. We show examples of significant reductions in time and effort enabled by Semantic e-Science; yet, we argue that a "complete" solution requires integrating semantic search with other research areas such as data provenance and web services.

  19. Semantic Knowledge for Famous Names in Mild Cognitive Impairment

    PubMed Central

    Seidenberg, Michael; Guidotti, Leslie; Nielson, Kristy A.; Woodard, John L.; Durgerian, Sally; Zhang, Qi; Gander, Amelia; Antuono, Piero; Rao, Stephen M.

    2008-01-01

    Person identification represents a unique category of semantic knowledge that is commonly impaired in Alzheimer's Disease (AD), but has received relatively little investigation in patients with Mild Cognitive Impairment (MCI). The current study examined the retrieval of semantic knowledge for famous names from three time epochs (recent, remote, and enduring) in two participant groups: 23 aMCI patients and 23 healthy elderly controls. The aMCI group was less accurate and produced less semantic knowledge than controls for famous names. Names from the enduring period were recognized faster than both recent and remote names in both groups, and remote names were recognized more quickly than recent names. Episodic memory performance was correlated with greater semantic knowledge, particularly for recent names. We suggest that the anterograde memory deficits in the aMCI group interfere with learning of recent famous names and as a result produce difficulties with updating and integrating new semantic information with previously stored information. The implications of these findings for characterizing semantic memory deficits in MCI are discussed. PMID:19128524

  20. Semantic Roles and Grammatical Relations.

    ERIC Educational Resources Information Center

    Van Valin, Robert D., Jr.

    The nature of semantic roles and grammatical relations are explored from the perspective of Role and Reference Grammar (RRG). It is proposed that unraveling the relational aspects of grammar involves the recognition that semantic roles fall into two types, thematic relations and macroroles, and that grammatical relations are not universal and are…

  1. Indexing by Latent Semantic Analysis.

    ERIC Educational Resources Information Center

    Deerwester, Scott; And Others

    1990-01-01

    Describes a new method for automatic indexing and retrieval called latent semantic indexing (LSI). Problems with matching query words with document words in term-based information retrieval systems are discussed, semantic structure is examined, singular value decomposition (SVD) is explained, and the mathematics underlying the SVD model is…

  2. Semantic Tools in Information Retrieval.

    ERIC Educational Resources Information Center

    Rubinoff, Morris; Stone, Don C.

    This report discusses the problem of the meanings of words used in information retrieval systems, and shows how semantic tools can aid in the communication which takes place between indexers and searchers via index terms. After treating the differing use of semantic tools in different types of systems, two tools (classification tables and…

  3. Semantic Focus and Sentence Comprehension.

    ERIC Educational Resources Information Center

    Cutler, Anne; Fodor, Jerry A.

    1979-01-01

    Reaction time to detect a phoneme target in a sentence was faster when the target-containing word formed part of the semantic focus of the sentence. Sentence understanding was facilitated by rapid identification of focused information. Active search for accented words can be interpreted as a search for semantic focus. (Author/RD)

  4. Semantic Feature Distinctiveness and Frequency

    ERIC Educational Resources Information Center

    Lamb, Katherine M.

    2012-01-01

    Lexical access is the process in which basic components of meaning in language, the lexical entries (words) are activated. This activation is based on the organization and representational structure of the lexical entries. Semantic features of words, which are the prominent semantic characteristics of a word concept, provide important information…

  5. The semantic planetary data system

    NASA Technical Reports Server (NTRS)

    Hughes, J. Steven; Crichton, Daniel; Kelly, Sean; Mattmann, Chris

    2005-01-01

    This paper will provide a brief overview of the PDS data model and the PDS catalog. It will then describe the implementation of the Semantic PDS, including the development of the formal ontology, the generation of RDFS/XML and RDF/XML data sets, and the building of the semantic search application.

  6. Semantic Analysis in Machine Translation.

    ERIC Educational Resources Information Center

    Skorokhodko, E. F.

    1970-01-01

    In many cases machine translation does not produce satisfactory results within the framework of purely formal (morphological and syntactic) analysis, particularly in the case of syntactic and lexical homonymy. An algorithm for syntactic-semantic analysis is proposed, and its principles of operation are described. The syntactico-semantic structure is…

  7. Semantic Processing of Mathematical Gestures

    ERIC Educational Resources Information Center

    Lim, Vanessa K.; Wilson, Anna J.; Hamm, Jeff P.; Phillips, Nicola; Iwabuchi, Sarina J.; Corballis, Michael C.; Arzarello, Ferdinando; Thomas, Michael O. J.

    2009-01-01

    Objective: To examine whether or not university mathematics students semantically process gestures depicting mathematical functions (mathematical gestures) similarly to the way they process action gestures and sentences. Semantic processing was indexed by the N400 effect. Results: The N400 effect elicited by words primed with mathematical gestures…

  8. Hierarchical abstract semantic model for image classification

    NASA Astrophysics Data System (ADS)

    Ye, Zhipeng; Liu, Peng; Zhao, Wei; Tang, Xianglong

    2015-09-01

    Semantic gap limits the performance of bag-of-visual-words. To deal with this problem, a hierarchical abstract semantics method that builds abstract semantic layers, generates semantic visual vocabularies, measures semantic gap, and constructs classifiers using the Adaboost strategy is proposed. First, abstract semantic layers are proposed to narrow the semantic gap between visual features and their interpretation. Then semantic visual words are extracted as features to train semantic classifiers. One popular form of measurement is used to quantify the semantic gap. The Adaboost training strategy is used to combine weak classifiers into strong ones to further improve performance. For a testing image, the category is estimated layer-by-layer. Corresponding abstract hierarchical structures for popular datasets, including Caltech-101 and MSRC, are proposed for evaluation. The experimental results show that the proposed method is capable of narrowing semantic gaps effectively and performs better than other categorization methods.
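
    The "combine weak classifiers into strong ones" step can be sketched with a minimal AdaBoost on one-dimensional data. The data, decision stumps and round count below are invented for illustration; the paper applies the same strategy to semantic visual-word features:

```python
# Minimal AdaBoost sketch: weighted selection of decision stumps, sample
# re-weighting, and a weighted-vote strong classifier.
import math

X = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0]
y = [1, 1, 1, -1, -1, -1]

def stump(thr, sign):
    return lambda x: sign if x < thr else -sign

candidates = [stump(t + 0.5, s) for t in range(6) for s in (1, -1)]

w = [1.0 / len(X)] * len(X)     # uniform initial sample weights
ensemble = []                   # (alpha, weak classifier)

for _ in range(3):
    # pick the stump with the lowest weighted error
    errs = [sum(wi for wi, xi, yi in zip(w, X, y) if h(xi) != yi)
            for h in candidates]
    e, h = min(zip(errs, candidates), key=lambda p: p[0])
    e = max(e, 1e-10)
    alpha = 0.5 * math.log((1 - e) / e)
    ensemble.append((alpha, h))
    # re-weight: boost the misclassified samples
    w = [wi * math.exp(-alpha * yi * h(xi)) for wi, xi, yi in zip(w, X, y)]
    s = sum(w)
    w = [wi / s for wi in w]

def strong(x):
    return 1 if sum(a * h(x) for a, h in ensemble) >= 0 else -1

print([strong(x) for x in X])  # matches y
```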

  9. A Semantic Grid Oriented to E-Tourism

    NASA Astrophysics Data System (ADS)

    Zhang, Xiao Ming

    With the increasing complexity of tourism business models and tasks, there is a clear need for a next-generation e-Tourism infrastructure to support flexible automation, integration, computation, storage, and collaboration. Several enabling technologies such as the semantic Web, Web services, agents and grid computing have been applied in different e-Tourism applications; however, there is no unified framework able to integrate all of them. This paper therefore presents a promising e-Tourism framework based on the emerging semantic grid, in which a number of key design issues are discussed, including architecture, ontology structure, semantic reconciliation, service and resource discovery, role-based authorization and intelligent agents. The paper finally describes an implementation of the framework.

  10. Semantic information extracting system for classification of radiological reports in radiology information system (RIS)

    NASA Astrophysics Data System (ADS)

    Shi, Liehang; Ling, Tonghui; Zhang, Jianguo

    2016-03-01

    Radiologists currently use a variety of terminologies and standards in most hospitals in China, and there are even multiple terminologies in use for different sections within one department. In this presentation, we introduce a medical semantic comprehension system (MedSCS) to extract semantic information about clinical findings and conclusions from free-text radiology reports so that the reports can be classified correctly based on medical term indexing standards such as RadLex or SNOMED CT. Our system (MedSCS) is based on both rule-based methods and statistics-based methods, which improves the performance and the scalability of MedSCS. In order to evaluate the overall performance of the system and measure the accuracy of the outcomes, we developed computation methods to calculate precision rate, recall rate, F-score and exact confidence interval.
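
    The evaluation arithmetic mentioned above is standard; a sketch with made-up counts:

```python
# Precision, recall, and F1 from true-positive, false-positive and
# false-negative counts. The counts are invented for illustration.
def prf(tp, fp, fn):
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

p, r, f1 = prf(tp=90, fp=10, fn=30)
print(round(p, 3), round(r, 3), round(f1, 3))  # 0.9 0.75 0.818
```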

  11. Fuzzy-rule based metamodeling of nitrate transport in large catchments

    NASA Astrophysics Data System (ADS)

    van der Heijden, S.; Haberlandt, U.

    2012-04-01

    Especially for nutrient balance simulations, physically based ecohydrological modeling needs an abundance of measured data and model parameters, which for large catchments all too often are not available in sufficient spatial or temporal resolution or are simply unknown. For efficient large-scale studies it is thus beneficial to have methods at one's disposal which are parsimonious concerning the number of model parameters and the necessary input data. One such method is fuzzy-rule based modeling, which compared to other machine-learning techniques has the advantages of producing models (the fuzzy rules) which are physically interpretable to a certain extent, and of allowing the explicit introduction of expert knowledge through pre-defined rules. The study focuses on the application of fuzzy-rule based modeling for nitrate transport simulation in large catchments, in particular concerning decision support. To construct such models it is possible to take a well-calibrated physically based model to produce data; this metamodeling approach replaces missing observed data. Thus, in a first step the ecohydrological model SWAT was calibrated for a 1000 km2 study area in Northern Germany and used to produce the needed data for rule training. Taking into account the different pathways of nitrate emission from soils (surface runoff, interflow, leaching to groundwater), a modular setup was chosen for the fuzzy model. Two modules were created for each pathway, one for the calculation of fertilized soils and one for non-fertilized soils. Adding one module for groundwater and one for river runoff yields a model consisting of eight modules in total. After selection of appropriate input variables (seven to nine variables for each module) the modules were trained using the SWAT data and simulated annealing as a discrete optimization method. Although flow components are of major importance when describing nitrate transport, they also imply a dependence on (deterministic) water
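
    The discrete simulated-annealing step used for rule training can be sketched as follows. The integer-coded parameters, toy error function and cooling schedule are illustrative stand-ins for the actual training objective:

```python
# Sketch: simulated annealing over discrete (integer-coded) parameters.
# A candidate that worsens the error is still accepted with a probability
# that shrinks as the temperature cools, letting the search escape local
# minima early on while converging late.
import math, random

random.seed(0)
TARGET = (3, 7, 2)   # "true" parameters the search should recover (toy)

def error(params):
    return sum((p - t) ** 2 for p, t in zip(params, TARGET))

state = (0, 0, 0)
temp = 10.0
while temp > 0.01:
    i = random.randrange(3)
    step = random.choice((-1, 1))
    cand = tuple(p + (step if j == i else 0) for j, p in enumerate(state))
    delta = error(cand) - error(state)
    if delta < 0 or random.random() < math.exp(-delta / temp):
        state = cand
    temp *= 0.99     # geometric cooling schedule

print(state, error(state))
```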

  12. Using rule-based shot dose assignment in model-based MPC applications

    NASA Astrophysics Data System (ADS)

    Bork, Ingo; Buck, Peter; Wang, Lin; Müller, Uwe

    2014-10-01

    Shrinking feature sizes and the need for tighter CD (Critical Dimension) control require the introduction of new technologies in mask making processes. One of those methods is the dose assignment of individual shots on VSB (Variable Shaped Beam) mask writers to compensate CD non-linearity effects and improve dose edge slope. Using increased dose levels only for most critical features, generally only for the smallest CDs on a mask, the change in mask write time is minimal while the increase in image quality can be significant. This paper describes a method combining rule-based shot dose assignment with model-based shot size correction. This combination proves to be very efficient in correcting mask linearity errors while also improving dose edge slope of small features. Shot dose assignment is based on tables assigning certain dose levels to a range of feature sizes. The dose to feature size assignment is derived from mask measurements in such a way that shape corrections are kept to a minimum. For example, if a 50nm drawn line on mask results in a 45nm chrome line using nominal dose, a dose level is chosen which is closest to getting the line back on target. Since CD non-linearity is different for lines, line-ends and contacts, different tables are generated for the different shape categories. The actual dose assignment is done via DRC rules in a pre-processing step before executing the shape correction in the MPC engine. Dose assignment to line ends can be restricted to critical line/space dimensions since it might not be required for all line ends. In addition, adding dose assignment to a wide range of line ends might increase shot count which is undesirable. The dose assignment algorithm is very flexible and can be adjusted based on the type of layer and the best balance between accuracy and shot count. These methods can be optimized for the number of dose levels available for specific mask writers. The MPC engine now needs to be able to handle different dose
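
    The table-driven dose assignment described above amounts to a range lookup. The size thresholds and relative dose factors below are invented for illustration, not measured mask data:

```python
# Sketch: rule-based shot dose lookup. Smaller features get higher dose
# to compensate CD non-linearity; large features keep nominal dose.
DOSE_TABLE = [          # (max feature size in nm, relative dose)
    (40, 1.15),
    (60, 1.08),
    (100, 1.03),
]
NOMINAL = 1.00

def assign_dose(size_nm):
    for max_size, dose in DOSE_TABLE:
        if size_nm <= max_size:
            return dose
    return NOMINAL      # features above all thresholds: nominal dose

print(assign_dose(35), assign_dose(50), assign_dose(200))  # 1.15 1.08 1.0
```

    In the flow described in the paper this lookup would run as a pre-processing (DRC-rule) step, with separate tables for lines, line-ends and contacts, before the model-based shape correction.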

  13. Automatic construction of rule-based ICD-9-CM coding systems

    PubMed Central

    Farkas, Richárd; Szarvas, György

    2008-01-01

    Background In this paper we focus on the problem of automatically constructing ICD-9-CM coding systems for radiology reports. ICD-9-CM codes are used for billing purposes by health institutes and are assigned to clinical records manually following clinical treatment. Since this labeling task requires expert knowledge in the field of medicine, the process itself is costly and is prone to errors as human annotators have to consider thousands of possible codes when assigning the right ICD-9-CM labels to a document. In this study we use the datasets made available for training and testing automated ICD-9-CM coding systems by the organisers of an International Challenge on Classifying Clinical Free Text Using Natural Language Processing in spring 2007. The challenge itself was dominated by entirely or partly rule-based systems that solve the coding task using a set of hand crafted expert rules. Since the feasibility of the construction of such systems for thousands of ICD codes is indeed questionable, we decided to examine the problem of automatically constructing similar rule sets that turned out to achieve a remarkable accuracy in the shared task challenge. Results Our results are very promising in the sense that we managed to achieve comparable results with purely hand-crafted ICD-9-CM classifiers. Our best model got a 90.26% F measure on the training dataset and an 88.93% F measure on the challenge test dataset, using the micro-averaged Fβ=1 measure, the official evaluation metric of the International Challenge on Classifying Clinical Free Text Using Natural Language Processing. This result would have placed second in the challenge, with a hand-crafted system achieving slightly better results. Conclusions Our results demonstrate that hand-crafted systems – which proved to be successful in ICD-9-CM coding – can be reproduced by replacing several laborious steps in their construction with machine learning models. These hybrid systems preserve the favourable
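
    A toy sketch of deriving keyword-to-code rules from labeled reports, in the spirit of replacing hand-crafted steps with learned ones. The reports, codes and induction criterion are invented examples, far simpler than the paper's models:

```python
# Toy rule induction: a token becomes a rule for a code if it only ever
# co-occurs with that code in the training data. Real systems filter
# stopwords and negations; this sketch does not.
from collections import defaultdict

train = [
    ("chest pain and cough", {"786.2"}),
    ("persistent cough reported", {"786.2"}),
    ("urinary tract infection", {"599.0"}),
]

token_codes = defaultdict(set)
for text, codes in train:
    for tok in text.split():
        token_codes[tok] |= codes

rules = {tok: next(iter(codes))
         for tok, codes in token_codes.items() if len(codes) == 1}

def classify(text):
    return {rules[tok] for tok in text.split() if tok in rules}

print(classify("dry cough for two weeks"))  # {'786.2'}
```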

  14. Semantic Web for Manufacturing Web Services

    SciTech Connect

    Kulvatunyou, Boonserm; Ivezic, Nenad

    2002-06-01

    As markets become unexpectedly turbulent with a shortened product life cycle and a power shift towards buyers, the need for methods to rapidly and cost-effectively develop products, production facilities and supporting software is becoming urgent. The use of a virtual enterprise plays a vital role in surviving turbulent markets. However, its success requires reliable and large-scale interoperation among trading partners via a semantic web of trading partners' services whose properties, capabilities, and interfaces are encoded in an unambiguous as well as computer-understandable form. This paper demonstrates a promising approach to integration and interoperation between a design house and a manufacturer by developing semantic web services for business and engineering transactions. To this end, detailed activity and information flow diagrams are developed, in which the two trading partners exchange messages and documents. The properties and capabilities of the manufacturer sites are defined using DARPA Agent Markup Language (DAML) ontology definition language. The prototype development of semantic webs shows that enterprises can widely interoperate in an unambiguous and autonomous manner; hence, virtual enterprise is realizable at a low cost.

  15. Latent semantic analysis.

    PubMed

    Evangelopoulos, Nicholas E

    2013-11-01

    This article reviews latent semantic analysis (LSA), a theory of meaning as well as a method for extracting that meaning from passages of text, based on statistical computations over a collection of documents. LSA as a theory of meaning defines a latent semantic space where documents and individual words are represented as vectors. LSA as a computational technique uses linear algebra to extract dimensions that represent that space. This representation enables the computation of similarity among terms and documents, categorization of terms and documents, and summarization of large collections of documents using automated procedures that mimic the way humans perform similar cognitive tasks. We present some technical details, various illustrative examples, and discuss a number of applications from linguistics, psychology, cognitive science, education, information science, and analysis of textual data in general. WIREs Cogn Sci 2013, 4:683-692. doi: 10.1002/wcs.1254 CONFLICT OF INTEREST: The author has declared no conflicts of interest for this article. For further resources related to this article, please visit the WIREs website. PMID:26304272
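
    The LSA idea can be sketched on a tiny term-document matrix. A rank-1 projection via power iteration stands in for the full SVD used in practice; the terms and documents are invented:

```python
# Sketch: project documents into a 1-D latent semantic space using the
# top singular direction of a term-document matrix (power iteration on
# A^T A, pure Python in place of a real SVD routine).
import math

# rows = terms (car, auto, flower); columns = documents d1, d2, d3
A = [
    [1, 1, 0],   # "car"
    [1, 0, 0],   # "auto"
    [0, 0, 1],   # "flower"
]

def matvec(M, v):
    return [sum(m * x for m, x in zip(row, v)) for row in M]

def transpose(M):
    return [list(col) for col in zip(*M)]

v = [1.0, 1.0, 1.0]
AT = transpose(A)
for _ in range(50):
    v = matvec(AT, matvec(A, v))     # one step of A^T A power iteration
    norm = math.sqrt(sum(x * x for x in v))
    v = [x / norm for x in v]

coords = v   # each document's coordinate along the top latent dimension
# d1 ("car auto") and d2 ("car") land close together; d3 ("flower") does not,
# even though d2 and d3 share no terms with each other's neighbors directly.
print(abs(coords[0] - coords[1]) < abs(coords[0] - coords[2]))  # True
```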

  17. "Pre-Semantic" Cognition Revisited: Critical Differences between Semantic Aphasia and Semantic Dementia

    ERIC Educational Resources Information Center

    Jefferies, Elizabeth; Rogers, Timothy T.; Hopper, Samantha; Lambon Ralph, Matthew A.

    2010-01-01

    Patients with semantic dementia show a specific pattern of impairment on both verbal and non-verbal "pre-semantic" tasks, e.g., reading aloud, past tense generation, spelling to dictation, lexical decision, object decision, colour decision and delayed picture copying. All seven tasks are characterised by poorer performance for items that are…

  18. X-Informatics: Practical Semantic Science

    NASA Astrophysics Data System (ADS)

    Borne, K. D.

    2009-12-01

    The discipline of data science is merging with multiple science disciplines to form new X-informatics research disciplines. They are almost too numerous to name, but they include geoinformatics, bioinformatics, cheminformatics, biodiversity informatics, ecoinformatics, materials informatics, and the emerging discipline of astroinformatics. Within any X-informatics discipline, the information granules are unique to that discipline -- e.g., gene sequences in bio, the sky object in astro, and the spatial object in geo (such as points, lines, and polygons in the vector model, and pixels in the raster model). Nevertheless the goals are similar: transparent data re-use across subdisciplines and within education settings, information and data integration and fusion, personalization of user interactions with the data collection, semantic search and retrieval, and knowledge discovery. The implementation of an X-informatics framework enables these semantic e-science research goals. We describe the concepts, challenges, and new developments associated with the new discipline of astroinformatics, and how geoinformatics provides valuable lessons learned and a model for practical semantic science within a traditional science discipline through the accretion of data science methodologies (such as formal metadata creation, data models, data mining, information retrieval, knowledge engineering, provenance, taxonomies, and ontologies). The emerging concept of data-as-a-service (DaaS) builds upon the concept of smart data (or data DNA) for intelligent data management, automated workflows, and intelligent processing. Smart data, defined through X-informatics, enables several practical semantic science use cases, including self-discovery, data intelligence, automatic recommendations, relevance analysis, dimension reduction, feature selection, constraint-based mining, interdisciplinary data re-use, knowledge-sharing, data use in education, and more. We describe these concepts within the


  19. Robust Unsupervised Arousal Rating: A Rule-Based Framework with Knowledge-Inspired Vocal Features

    PubMed Central

    Bone, Daniel; Lee, Chi-Chun; Narayanan, Shrikanth

    2015-01-01

    Studies in classifying affect from vocal cues have produced exceptional within-corpus results, especially for arousal (activation or stress); yet cross-corpora affect recognition has only recently garnered attention. An essential requirement of many behavioral studies is affect scoring that generalizes across different social contexts and data conditions. We present a robust, unsupervised (rule-based) method for providing a scale-continuous, bounded arousal rating operating on the vocal signal. The method incorporates just three knowledge-inspired features chosen based on empirical and theoretical evidence. It constructs a speaker’s baseline model for each feature separately, and then computes single-feature arousal scores. Lastly, it advantageously fuses the single-feature arousal scores into a final rating without knowledge of the true affect. The baseline data is preferably labeled as neutral, but some initial evidence is provided to suggest that no labeled data is required in certain cases. The proposed method is compared to a state-of-the-art supervised technique which employs a high-dimensional feature set. The proposed framework achieves highly-competitive performance with additional benefits. The measure is interpretable, scale-continuous as opposed to discrete, and can operate without any affective labeling. An accompanying Matlab tool is made available with the paper. PMID:25705327
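    The general scheme described above (per-feature baseline model, bounded single-feature scores, fusion without affect labels) can be sketched as follows. The feature names, the logistic squash, and the unweighted-mean fusion are illustrative choices, not the authors' exact formulation.

```python
import math

def arousal_rating(utterance_feats, baseline_feats):
    """Rate arousal in [0, 1]: z-score each feature against the speaker's
    neutral baseline, squash with a logistic, then average across features."""
    scores = []
    for name, value in utterance_feats.items():
        base = baseline_feats[name]
        mean = sum(base) / len(base)
        std = math.sqrt(sum((x - mean) ** 2 for x in base) / len(base)) or 1.0
        z = (value - mean) / std
        scores.append(1.0 / (1.0 + math.exp(-z)))  # bounded single-feature score
    return sum(scores) / len(scores)  # naive fusion: unweighted mean

# Hypothetical baseline recordings for one speaker (pitch, energy, speech rate).
baseline = {"f0": [100, 105, 110], "intensity": [60, 62, 64], "rate": [4.0, 4.2, 4.4]}
calm = arousal_rating({"f0": 105, "intensity": 62, "rate": 4.2}, baseline)
excited = arousal_rating({"f0": 140, "intensity": 75, "rate": 6.0}, baseline)
print(calm < excited)  # True: raised pitch, energy, and rate rate higher
```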

  20. The study on dynamic cadastral coding rules based on kinship relationship

    NASA Astrophysics Data System (ADS)

    Xu, Huan; Liu, Nan; Liu, Renyi; Lu, Jingfeng

    2007-06-01

    Cadastral coding rules are an important supplement to the existing national and local standard specifications for building cadastral databases. After analyzing the course of cadastral change, especially parcel change, with the method of object-oriented analysis, a set of dynamic cadastral coding rules based on kinship relationship corresponding to the cadastral change is put forward, and a coding format composed of street code, block code, father parcel code, child parcel code and grandchild parcel code is worked out within the county administrative area. The coding rules have been applied in the development of an urban cadastral information system called "ReGIS", which is not only able to generate the cadastral code automatically according to both the type of parcel change and the coding rules, but is also capable of checking whether the code is spatiotemporally unique before the parcel is stored in the database. The system has been used in several cities of Zhejiang Province and has received a favorable response, which verifies the feasibility and effectiveness of the coding rules to some extent.
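    The five-part code described above can be composed mechanically; because a split parcel keeps its father's prefix, ancestry is recoverable from the code itself. The zero-padded field widths here are invented for illustration, not the widths of the actual specification.

```python
def cadastral_code(street, block, father, child=0, grandchild=0):
    """Compose a hierarchical cadastral code from street, block, father
    parcel, child parcel, and grandchild parcel numbers."""
    return f"{street:02d}{block:03d}{father:04d}{child:03d}{grandchild:02d}"

# A parcel split inherits its father's prefix and receives a new child number.
parent = cadastral_code(3, 12, 45)
child = cadastral_code(3, 12, 45, child=1)
print(parent)  # 03012004500000
print(child)   # 03012004500100
```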

  1. Reliability and performance evaluation of systems containing embedded rule-based expert systems

    NASA Technical Reports Server (NTRS)

    Beaton, Robert M.; Adams, Milton B.; Harrison, James V. A.

    1989-01-01

    A method for evaluating the reliability of real-time systems containing embedded rule-based expert systems is proposed and investigated. It is a three stage technique that addresses the impact of knowledge-base uncertainties on the performance of expert systems. In the first stage, a Markov reliability model of the system is developed which identifies the key performance parameters of the expert system. In the second stage, the evaluation method is used to determine the values of the expert system's key performance parameters. The performance parameters can be evaluated directly by using a probabilistic model of uncertainties in the knowledge-base or by using sensitivity analyses. In the third and final stage, the performance parameters of the expert system are combined with performance parameters for other system components and subsystems to evaluate the reliability and performance of the complete system. The evaluation method is demonstrated in the context of a simple expert system used to supervise the performance of an FDI algorithm associated with an aircraft longitudinal flight-control system.
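    A miniature version of the first two stages might look like this: a two-state Markov model (operational to failed) for stage one, and a sensitivity sweep over the failure parameter for stage two. The per-step failure probabilities are hypothetical; real models have many more states.

```python
def reliability(p_fail_per_step, steps):
    """Two-state Markov model: probability the system is still operational
    after `steps` transitions, given a constant per-step failure probability."""
    p_ok = 1.0
    for _ in range(steps):
        p_ok *= (1.0 - p_fail_per_step)
    return p_ok

# Sensitivity analysis in miniature: vary the expert system's error rate
# and observe the effect on overall reliability over a 100-step mission.
for err in (0.001, 0.01):
    print(err, round(reliability(err, 100), 4))
```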

  2. Space communications scheduler: A rule-based approach to adaptive deadline scheduling

    NASA Technical Reports Server (NTRS)

    Straguzzi, Nicholas

    1990-01-01

    Job scheduling is a deceptively complex subfield of computer science. The highly combinatorial nature of the problem, which is NP-complete in nearly all cases, requires a scheduling program to intelligently traverse an immense search tree to create the best possible schedule in a minimal amount of time. In addition, the program must continually make adjustments to the initial schedule when faced with last-minute user requests, cancellations, unexpected device failures, etc. A good scheduler must be quick, flexible, and efficient, even at the expense of generating slightly less-than-optimal schedules. The Space Communication Scheduler (SCS) is an intelligent rule-based scheduling system. SCS is an adaptive deadline scheduler which allocates modular communications resources to meet an ordered set of user-specified job requests on board the NASA Space Station. SCS uses pattern matching techniques to detect potential conflicts through algorithmic and heuristic means. As a result, the system generates and maintains high density schedules without relying heavily on backtracking or blind search techniques. SCS is suitable for many common real-world applications.
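    The underlying deadline-scheduling problem can be illustrated with a greedy earliest-deadline-first pass. SCS itself is rule-based and far more sophisticated; this sketch, with invented job data, only shows the kind of decision it automates.

```python
def edf_schedule(jobs):
    """Greedy earliest-deadline-first: jobs are (name, duration, deadline)
    tuples. Consider jobs in deadline order and drop any job that cannot
    finish before its deadline."""
    schedule, t = [], 0
    for name, duration, deadline in sorted(jobs, key=lambda j: j[2]):
        if t + duration <= deadline:
            schedule.append(name)
            t += duration
    return schedule

jobs = [("downlink", 3, 4), ("telemetry", 2, 9), ("imaging", 5, 6)]
print(edf_schedule(jobs))  # ['downlink', 'telemetry']: 'imaging' cannot meet its deadline
```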

  3. Analysis, Simulation, and Verification of Knowledge-Based, Rule-Based, and Expert Systems

    NASA Technical Reports Server (NTRS)

    Hinchey, Mike; Rash, James; Erickson, John; Gracanin, Denis; Rouff, Chris

    2010-01-01

    Mathematically sound techniques are used to view a knowledge-based system (KBS) as a set of processes executing in parallel and being enabled in response to specific rules being fired. The set of processes can be manipulated, examined, analyzed, and used in a simulation. The tool that embodies this technology may warn developers of errors in their rules, but may also highlight rules (or sets of rules) in the system that are underspecified (or overspecified) and need to be corrected for the KBS to operate as intended. The rules embodied in a KBS specify the allowed situations, events, and/or results of the system they describe. In that sense, they provide a very abstract specification of a system. The system is implemented through the combination of the system specification together with an appropriate inference engine, independent of the algorithm used in that inference engine. Viewing the rule base as a major component of the specification, and choosing an appropriate specification notation to represent it, reveals how additional power can be derived from an approach to the knowledge-base system that involves analysis, simulation, and verification. This innovative approach requires no special knowledge of the rules, and allows a general approach where standardized analysis, verification, simulation, and model checking techniques can be applied to the KBS.

  4. Transfer in Rule-Based Category Learning Depends on the Training Task

    PubMed Central

    Kattner, Florian; Cox, Christopher R.; Green, C. Shawn

    2016-01-01

    While learning is often highly specific to the exact stimuli and tasks used during training, there are cases where training results in learning that generalizes more broadly. It has been previously argued that the degree of specificity can be predicted based upon the learning solution(s) dictated by the particular demands of the training task. Here we applied this logic in the domain of rule-based categorization learning. Participants were presented with stimuli corresponding to four different categories and were asked to perform either a category discrimination task (which permits learning specific rule to discriminate two categories) or a category identification task (which does not permit learning a specific discrimination rule). In a subsequent transfer stage, all participants were asked to discriminate stimuli belonging to two of the categories which they had seen, but had never directly discriminated before (i.e., this particular discrimination was omitted from training). As predicted, learning in the category-discrimination tasks tended to be specific, while the category-identification task produced learning that transferred to the transfer discrimination task. These results suggest that the discrimination and identification tasks fostered the acquisition of different category representations which were more or less generalizable. PMID:27764221

  5. Classification of a set of vectors using self-organizing map- and rule-based technique

    NASA Astrophysics Data System (ADS)

    Ae, Tadashi; Okaniwa, Kaishirou; Nosaka, Kenzaburou

    2005-02-01

    There exist various objects, such as pictures, music, and texts, around our environment. We form a view of these objects by looking, reading, or listening. This view is deeply connected with our behaviors and is very important for understanding them. We form a view of an object and decide the next action (data selection, etc.) based on that view; such a series of actions constitutes a sequence. Therefore, we propose a method that acquires a view as a vector from several words describing the view, and we apply the vector to sequence generation. We focus on sequences of data that a user selects from a multimedia database containing pictures, music, movies, etc. These data cannot be stereotyped, because each user's view of them differs. Therefore, we represent the structure of the multimedia database by the vector representing the user's view and the stereotyped vector, and acquire sequences containing the structure as elements. Such vectors can be classified by a Self-Organizing Map (SOM). The Hidden Markov Model (HMM) is a method for generating sequences; we therefore use an HMM in which each state corresponds to a representative vector of the user's view, and acquire sequences capturing changes in the user's view. We call this the Vector-state Markov Model (VMM). We introduce rough set theory as a rule-based technique, which plays the role of classifying sets of data such as the set "Tour".

  6. A Novel Rule-Based Algorithm for Assigning Myocardial Fiber Orientation to Computational Heart Models

    PubMed Central

    Bayer, J. D.; Blake, R. C.; Plank, G.; Trayanova, N. A.

    2012-01-01

    Electrical waves traveling throughout the myocardium elicit muscle contractions responsible for pumping blood throughout the body. The shape and direction of these waves depend on the spatial arrangement of ventricular myocytes, termed fiber orientation. In computational studies simulating electrical wave propagation or mechanical contraction in the heart, accurately representing fiber orientation is critical so that model predictions corroborate with experimental data. Typically, fiber orientation is assigned to heart models based on Diffusion Tensor Imaging (DTI) data, yet few alternative methodologies exist if DTI data is noisy or absent. Here we present a novel Laplace–Dirichlet Rule-Based (LDRB) algorithm to perform this task with speed, precision, and high usability. We demonstrate the application of the LDRB algorithm in an image-based computational model of the canine ventricles. Simulations of electrical activation in this model are compared to those in the same geometrical model but with DTI-derived fiber orientation. The results demonstrate that activation patterns from simulations with LDRB and DTI-derived fiber orientations are nearly indistinguishable, with relative differences ≤6%, absolute mean differences in activation times ≤3.15 ms, and positive correlations ≥0.99. These results convincingly show that the LDRB algorithm is a robust alternative to DTI for assigning fiber orientation to computational heart models. PMID:22648575
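    The Laplace-Dirichlet idea can be shown in one dimension: solve the Laplace equation across the ventricular wall with Dirichlet boundary values, then map the resulting potential to a fiber angle. The 11-point grid, Jacobi solver, and the +60 to -60 degree rotation (a range often used in the modeling literature) are illustrative simplifications of the full 3D algorithm.

```python
def laplace_1d(n, left, right, iters=2000):
    """Jacobi iterations for the 1D Laplace equation with Dirichlet
    boundary values; the converged solution is the linear interpolant."""
    u = [0.0] * n
    u[0], u[-1] = left, right
    for _ in range(iters):
        u = [u[0]] + [(u[i - 1] + u[i + 1]) / 2 for i in range(1, n - 1)] + [u[-1]]
    return u

# Potential 0 at the endocardium, 1 at the epicardium; the fiber angle
# rotates smoothly with transmural depth.
phi = laplace_1d(11, 0.0, 1.0)
angles = [60.0 - 120.0 * p for p in phi]
print(round(angles[0]), round(angles[5]), round(angles[-1]))  # 60 0 -60
```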

  7. Rule-based learning of regular past tense in children with specific language impairment.

    PubMed

    Smith-Lock, Karen M

    2015-01-01

    The treatment of children with specific language impairment was used as a means to investigate whether a single- or dual-mechanism theory best conceptualizes the acquisition of English past tense. The dual-mechanism theory proposes that regular English past-tense forms are produced via a rule-based process whereas past-tense forms of irregular verbs are stored in the lexicon. Single-mechanism theories propose that both regular and irregular past-tense verbs are stored in the lexicon. Five 5-year-olds with specific language impairment received treatment for regular past tense. The children were tested on regular past-tense production and third-person singular "s" twice before treatment and once after treatment, at eight-week intervals. Treatment consisted of one-hour play-based sessions, once weekly, for eight weeks. Crucially, treatment focused on different lexical items from those in the test. Each child demonstrated significant improvement on the untreated past-tense test items after treatment, but no improvement on the untreated third-person singular "s". Generalization to untreated past-tense verbs could not be attributed to a frequency effect or to phonological similarity of trained and tested items. It is argued that the results are consistent with a dual-mechanism theory of past-tense inflection.

  8. Overcoming rule-based rigidity and connectionist limitations through massively-parallel case-based reasoning

    NASA Technical Reports Server (NTRS)

    Barnden, John; Srinivas, Kankanahalli

    1990-01-01

    Symbol manipulation as used in traditional Artificial Intelligence has been criticized by neural net researchers for being excessively inflexible and sequential. On the other hand, the application of neural net techniques to the types of high-level cognitive processing studied in traditional artificial intelligence presents major problems as well. A promising way out of this impasse is to build neural net models that accomplish massively parallel case-based reasoning. Case-based reasoning, which has received much attention recently, is essentially the same as analogy-based reasoning, and avoids many of the problems leveled at traditional artificial intelligence. Further problems are avoided by doing many strands of case-based reasoning in parallel, and by implementing the whole system as a neural net. In addition, such a system provides an approach to some aspects of the problems of noise, uncertainty and novelty in reasoning systems. The current neural net system (Conposit), which performs standard rule-based reasoning, is being modified into a massively parallel case-based reasoning version.

  9. Rule-based fuzzy vector median filters for 3D phase contrast MRI segmentation

    NASA Astrophysics Data System (ADS)

    Sundareswaran, Kartik S.; Frakes, David H.; Yoganathan, Ajit P.

    2008-02-01

    Recent technological advances have contributed to the advent of phase contrast magnetic resonance imaging (PCMRI) as standard practice in clinical environments. In particular, decreased scan times have made using the modality more feasible. PCMRI is now a common tool for flow quantification, and for more complex vector field analyses that target the early detection of problematic flow conditions. Segmentation is one component of this type of application that can impact the accuracy of the final product dramatically. Vascular segmentation, in general, is a long-standing problem that has received significant attention. Segmentation in the context of PCMRI data, however, has been explored less and can benefit from object-based image processing techniques that incorporate fluid-specific information. Here we present a fuzzy rule-based adaptive vector median filtering (FAVMF) algorithm that, in combination with active contour modeling, facilitates high-quality PCMRI segmentation while mitigating the effects of noise. The FAVMF technique was tested on 111 synthetically generated PCMRI slices and on 15 patients with congenital heart disease. The results were compared to other multi-dimensional filters, namely the adaptive vector median filter, the adaptive vector directional filter, and the scalar low pass filter commonly used in PCMRI applications. FAVMF significantly outperformed the standard filtering methods (p < 0.0001). Two conclusions can be drawn from these results: a) filtering should be performed after vessel segmentation of PCMRI; b) vector-based filtering methods should be used instead of scalar techniques.
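    The core of any vector median filter is easy to state: within a window of velocity vectors, keep the vector whose total distance to all the others is smallest, which rejects isolated noise vectors. This sketch shows the plain (non-fuzzy, non-adaptive) vector median on invented 2D data.

```python
import math

def vector_median(window):
    """Return the vector in the window that minimizes the sum of Euclidean
    distances to all other vectors in the window."""
    def total_dist(v):
        return sum(math.dist(v, w) for w in window)
    return min(window, key=total_dist)

# A noisy outlier vector is rejected because it lies far from the rest.
window = [(1.0, 0.1), (0.9, 0.0), (1.1, 0.1), (8.0, 7.0)]
print(vector_median(window))  # (1.0, 0.1)
```

    The fuzzy adaptive variant in the paper additionally weights the filtering by rule-based membership functions, so that clean regions are left nearly untouched.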

  10. Automatic de-identification of French clinical records: comparison of rule-based and machine-learning approaches.

    PubMed

    Grouin, Cyril; Zweigenbaum, Pierre

    2013-01-01

    In this paper, we present a comparison of two approaches to automatically de-identify medical records written in French: a rule-based system and a machine-learning based system using a conditional random fields (CRF) formalism. Both systems have been designed to process nine identifiers in a corpus of medical records in cardiology. We performed two evaluations: the first on 62 documents in cardiology, and the second on 10 documents in foetopathology - produced by optical character recognition (OCR) - to evaluate the robustness of our systems. We achieved a 0.843 (rule-based) and 0.883 (machine-learning) exact-match overall F-measure in cardiology. While the rule-based system allowed us to achieve good results on nominative (first and last names) and numerical data (dates, phone numbers, and zip codes), the machine-learning approach performed best on more complex categories (postal addresses, hospital names, medical devices, and towns). On the foetopathology corpus, although our systems have not been designed for this corpus and despite OCR character recognition errors, we obtained promising results: a 0.681 (rule-based) and 0.638 (machine-learning) exact-match overall F-measure. This demonstrates that existing tools can be applied to process new documents of lower quality.
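    A rule-based de-identifier is at heart an ordered list of labeled patterns. The sketch below uses illustrative English/US-style patterns for three of the simpler identifier categories, not the French-language rules of the actual system.

```python
import re

# Each rule is (label, pattern); rules are applied in order, so earlier
# rules consume text before later, more general ones see it.
RULES = [
    ("DATE", re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b")),
    ("PHONE", re.compile(r"\b\d{3}-\d{3}-\d{4}\b")),
    ("ZIP", re.compile(r"\b\d{5}\b")),
]

def deidentify(text):
    """Replace every match of every rule with its category placeholder."""
    for label, pattern in RULES:
        text = pattern.sub(f"<{label}>", text)
    return text

note = "Seen 03/14/2019, call 555-123-4567, lives in 90210."
print(deidentify(note))
# Seen <DATE>, call <PHONE>, lives in <ZIP>.
```

    The ordering matters: running the ZIP rule first would mangle dates and phone numbers, which is exactly the kind of interaction that makes complex categories easier for a CRF than for hand-written rules.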

  11. Computerized lung nodule detection on thoracic CT images: combined rule-based and statistical classifier for false-positive reduction

    NASA Astrophysics Data System (ADS)

    Gurcan, Metin N.; Petrick, Nicholas; Sahiner, Berkman; Chan, Heang-Ping; Cascade, Philip N.; Kazerooni, Ella A.; Hadjiiski, Lubomir M.

    2001-07-01

    We are developing a computer-aided diagnosis (CAD) system for lung nodule detection on thoracic helical computed tomography (CT) images. In the first stage of this CAD system, lung regions are identified and suspicious structures are segmented. These structures may include true lung nodules or normal structures that consist mainly of vascular structures. We have designed rule-based classifiers to distinguish nodules and normal structures using 2D and 3D features. After rule-based classification, linear discriminant analysis (LDA) is used to further reduce the number of false positive (FP) objects. We have performed a preliminary study using CT images from 17 patients with 31 lung nodules. When only LDA classification was applied to the segmented objects, the sensitivity was 84% (26/31) with 2.53 (1549/612) FP objects per slice. When the LDA followed the rule-based classifier, the number of FP objects per slice decreased to 1.75 (1072/612) at the same sensitivity. These preliminary results demonstrate the feasibility of our approach for nodule detection and FP reduction on CT images. The inclusion of rule-based classification leads to an improvement in detection accuracy for the CAD system.
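    The two-stage idea (cheap rules first, a linear discriminant on the survivors) can be sketched as a cascade. The feature names, thresholds, and weights below are invented for illustration; they are not the features or trained LDA coefficients of the actual CAD system.

```python
def rule_filter(obj):
    """Stage one: cheap rules reject obviously non-nodule objects
    (too small, or too elongated to be a nodule rather than a vessel)."""
    return obj["volume"] > 5 and obj["elongation"] < 4.0

def linear_score(obj, weights, bias):
    """Stage two: a linear discriminant score on the surviving objects."""
    return sum(weights[k] * obj[k] for k in weights) + bias

def detect(objects, weights, bias, threshold=0.0):
    survivors = [o for o in objects if rule_filter(o)]
    return [o for o in survivors if linear_score(o, weights, bias) > threshold]

objects = [
    {"volume": 20, "elongation": 1.2, "compactness": 0.9},  # nodule-like
    {"volume": 30, "elongation": 8.0, "compactness": 0.2},  # vessel-like
    {"volume": 2, "elongation": 1.0, "compactness": 0.8},   # too small
]
weights = {"compactness": 2.0, "elongation": -0.5}
print(len(detect(objects, weights, bias=0.0)))  # 1
```

    Because the rules remove many false positives before the discriminant is applied, the statistical stage operates on a cleaner candidate set, which is the effect the paper reports.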

  12. HERB: A production system for programming with hierarchical expert rule bases: User's manual, HERB Version 1. 0

    SciTech Connect

    Hummel, K.E.

    1987-12-01

    Expert systems are artificial intelligence programs that solve problems requiring large amounts of heuristic knowledge, based on years of experience and tradition. Production systems are domain-independent tools that support the development of rule-based expert systems. This document describes a general purpose production system known as HERB. This system was developed to support the programming of expert systems using hierarchically structured rule bases. HERB encourages the partitioning of rules into multiple rule bases and supports the use of multiple conflict resolution strategies. Multiple rule bases can also be placed on a system stack and simultaneously searched during each interpreter cycle. Both backward and forward chaining rules are supported by HERB. The condition portion of each rule can contain both patterns, which are matched with facts in a data base, and LISP expressions, which are explicitly evaluated in the LISP environment. Properties of objects can also be stored in the HERB data base and referenced within the scope of each rule. This document serves both as an introduction to the principles of LISP-based production systems and as a user's manual for the HERB system. 6 refs., 17 figs.
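    A forward-chaining production system of the kind HERB supports can be sketched in a few lines: fire every rule whose conditions are all present in the fact base, add its conclusion, and repeat until nothing new appears. This toy uses simple symbols rather than HERB's actual LISP-based pattern syntax, and the rules are invented.

```python
def forward_chain(facts, rules):
    """Naive forward chaining to a fixpoint. Each rule is a pair
    (conditions, conclusion); a rule fires when all its conditions
    are in the fact base."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if set(conditions) <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

rules = [
    (("has_wings", "lays_eggs"), "is_bird"),
    (("is_bird", "flies_at_night"), "is_owl"),
]
result = forward_chain({"has_wings", "lays_eggs", "flies_at_night"}, rules)
print(sorted(result))  # chained inference adds 'is_bird', then 'is_owl'
```

    HERB layers conflict-resolution strategies, hierarchical rule bases, and backward chaining on top of this basic cycle.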

  13. The Semantic Distance Model of Relevance Assessment.

    ERIC Educational Resources Information Center

    Brooks, Terrence A.

    1998-01-01

    Presents the Semantic Distance Model (SDM) of Relevance Assessment, a cognitive model of the relationship between semantic distance and relevance assessment. Discusses premises of the model such as the subjective nature of information and the metaphor of semantic distance. Empirical results illustrate the effects of semantic distance and semantic…

  14. Mapping the Structure of Semantic Memory

    ERIC Educational Resources Information Center

    Morais, Ana Sofia; Olsson, Henrik; Schooler, Lael J.

    2013-01-01

    Aggregating snippets from the semantic memories of many individuals may not yield a good map of an individual's semantic memory. The authors analyze the structure of semantic networks that they sampled from individuals through a new snowball sampling paradigm during approximately 6 weeks of 1-hr daily sessions. The semantic networks of individuals…

  15. Improving protein coreference resolution by simple semantic classification

    PubMed Central

    2012-01-01

    Background Current research has shown that major difficulties in event extraction for the biomedical domain are traceable to coreference. Therefore, coreference resolution is believed to be useful for improving event extraction. To address coreference resolution in molecular biology literature, the Protein Coreference (COREF) task was arranged in the BioNLP Shared Task (BioNLP-ST, hereafter) 2011, as a supporting task. However, the shared task results indicated that transferring coreference resolution methods developed for other domains to the biological domain was not a straightforward task, due to the domain differences in the coreference phenomena. Results We analyzed the contribution of domain-specific information, including the information that indicates the protein type, in a rule-based protein coreference resolution system. In particular, the domain-specific information is encoded into semantic classification modules for which the output is used in different components of the coreference resolution. We compared our system with the top four systems in the BioNLP-ST 2011; surprisingly, we found that the minimal configuration had outperformed the best system in the BioNLP-ST 2011. Analysis of the experimental results revealed that semantic classification, using protein information, has contributed to an increase in performance by 2.3% on the test data, and 4.0% on the development data, in F-score. Conclusions The use of domain-specific information in semantic classification is important for effective coreference resolution. Since it is difficult to transfer domain-specific information across different domains, we need to continue to seek methods for utilizing such information in coreference resolution. PMID:23157272
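    One way semantic classification helps is by filtering antecedent candidates: an anaphor referring to a protein should only resolve to protein-class mentions. This toy resolver, with invented mentions, illustrates the filtering idea rather than the paper's full rule set.

```python
def resolve_anaphor(anaphor_class, mentions):
    """Pick the nearest preceding mention whose semantic class matches the
    anaphor's class. `mentions` is an ordered list of (text, semantic_class)
    pairs for the text preceding the anaphor."""
    for text, sem_class in reversed(mentions):
        if sem_class == anaphor_class:
            return text
    return None

mentions = [("p53", "protein"), ("the cell", "cell"), ("this protein", "protein")]
# An anaphor of class 'protein' skips the non-protein mention "the cell"
# and resolves to the nearest protein-class candidate.
print(resolve_anaphor("protein", mentions[:-1]))  # p53
```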

  16. Feasibility of the Rule-Based Approach to Creating Complex Pictograms.

    PubMed

    Kim, Jaemin; Fnu, Vineet; Bell, Elizabeth; Kim, Hyeoneui

    2016-01-01

    To test the effectiveness of health pictograms created from pictogram composite rules, we created 7 new composite pictograms following the composite rules extracted from the USP pictograms. We then tested their understandability by surveying 42 volunteers recruited at a senior wellness center in San Diego, CA. A lower level of comprehension was observed for all 7 new composite pictograms when compared to the USP pictograms with similar styles. No consistent socio-demographic effect on the comprehension of the pictograms was discerned. The major sources of misinterpretation were (1) misunderstanding the main action depicted in the image, (2) ignoring the conditional information, and (3) making an incorrect semantic association between the main information and the conditional information. Design rules from a validated set of pictograms might serve as the starting point for creating a new health pictogram. However, rigorous validation and revision of the initial design should follow. PMID:27332230

  17. Exploiting Recurring Structure in a Semantic Network

    NASA Technical Reports Server (NTRS)

    Wolfe, Shawn R.; Keller, Richard M.

    2004-01-01

    With the growing popularity of the Semantic Web, an increasing amount of information is becoming available in machine interpretable, semantically structured networks. Within these semantic networks are recurring structures that could be mined by existing or novel knowledge discovery methods. The mining of these semantic structures represents an interesting area that focuses on mining both for and from the Semantic Web, with surprising applicability to problems confronting the developers of Semantic Web applications. In this paper, we present representative examples of recurring structures and show how these structures could be used to increase the utility of a semantic repository deployed at NASA.

  18. Matching Alternative Addresses: a Semantic Web Approach

    NASA Astrophysics Data System (ADS)

    Ariannamazi, S.; Karimipour, F.; Hakimpour, F.

    2015-12-01

    Rapid development of crowd-sourced or volunteered geographic information (VGI) provides opportunities for authorities that deal with geospatial information. Heterogeneity of multiple data sources and inconsistency of data types are key characteristics of VGI datasets. The expansion of cities has resulted in a growing number of POIs in OpenStreetMap, a well-known VGI source, which causes the datasets to become outdated in short periods of time. Changes made to the spatial and aspatial attributes of features, such as names and addresses, can cause confusion or ambiguity in processes that rely on features' literal information, such as addressing and geocoding. VGI sources neither conform to specific vocabularies nor remain in a specific schema for long periods of time. As a result, the integration of VGI sources is crucial and inevitable in order to avoid duplication and the waste of resources. Information integration can be used to match features and qualify different annotation alternatives for disambiguation. This study enhances the search capabilities of geospatial tools with applications able to understand user terminology, pursuing an efficient way of finding desired results. The Semantic Web is a capable tool for developing technologies that deal with lexical and numerical calculations and estimations. A vast amount of literal-spatial data demonstrates the capability of linguistic information in knowledge modeling, but these resources need to be harmonized based on Semantic Web standards. The process of making addresses homogeneous yields a helpful tool based on spatial data integration and on lexical annotation matching and disambiguation.
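    The matching of alternative addresses ultimately reduces to normalizing each string and scoring similarity. This sketch uses token normalization plus Jaccard similarity; the abbreviation table is invented, and the paper's actual approach works over Semantic Web vocabularies rather than a hard-coded dictionary.

```python
def normalize(address):
    """Lowercase, strip punctuation, and expand a few common abbreviations
    (the abbreviation table here is illustrative)."""
    abbrev = {"st": "street", "ave": "avenue", "blvd": "boulevard"}
    tokens = address.lower().replace(",", " ").replace(".", " ").split()
    return {abbrev.get(t, t) for t in tokens}

def jaccard(a, b):
    """Token-set similarity between two normalized addresses."""
    sa, sb = normalize(a), normalize(b)
    return len(sa & sb) / len(sa | sb)

# Two spellings of the same address score 1.0; a different street scores 0.
print(jaccard("12 Main St.", "12 main street"))  # 1.0
print(jaccard("12 Main St.", "40 Elm Ave"))      # 0.0
```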

  19. Receptive vocabulary and semantic knowledge in children with SLI and children with Down syndrome.

    PubMed

    Laws, Glynis; Briscoe, Josie; Ang, Su-Yin; Brown, Heather; Hermena, Ehab; Kapikian, Anna

    2015-01-01

    Receptive vocabulary and associated semantic knowledge were compared within and between groups of children with specific language impairment (SLI), children with Down syndrome (DS), and typically developing children. To overcome the potential confounding effects of speech or language difficulties on verbal tests of semantic knowledge, a novel task was devised based on picture-based semantic association tests used to assess adult patients with semantic dementia. Receptive vocabulary, measured by word-picture matching, of children with SLI was weak relative to chronological age and to nonverbal mental age but their semantic knowledge, probed across the same lexical items, did not differ significantly from that of vocabulary-matched typically developing children. By contrast, although receptive vocabulary of children with DS was a relative strength compared to nonverbal cognitive abilities (p < .0001), DS was associated with a significant deficit in semantic knowledge (p < .0001) indicative of dissociation between word-picture matching vocabulary and depth of semantic knowledge. Overall, these data challenge the integrity of semantic-conceptual development in DS and imply that contemporary theories of semantic cognition should also seek to incorporate evidence from atypical conceptual development.

  20. Semantic perception for ground robotics

    NASA Astrophysics Data System (ADS)

    Hebert, M.; Bagnell, J. A.; Bajracharya, M.; Daniilidis, K.; Matthies, L. H.; Mianzo, L.; Navarro-Serment, L.; Shi, J.; Wellfare, M.

    2012-06-01

    Semantic perception involves naming objects and features in the scene, understanding the relations between them, and understanding the behaviors of agents, e.g., people, and their intent from sensor data. Semantic perception is a central component of future UGVs to provide representations which 1) can be used for higher-level reasoning and tactical behaviors, beyond the immediate needs of autonomous mobility, and 2) provide an intuitive description of the robot's environment in terms of semantic elements that can be shared effectively with a human operator. In this paper, we summarize the main approaches that we are investigating in the RCTA as initial steps toward the development of perception systems for UGVs.

  1. Workspaces in the Semantic Web

    NASA Technical Reports Server (NTRS)

    Wolfe, Shawn R.; Keller, Richard M.

    2005-01-01

    Due to the recency and relatively limited adoption of Semantic Web technologies, practical issues related to technology scaling have received less attention than foundational issues. Nonetheless, these issues must be addressed if the Semantic Web is to realize its full potential. In particular, we concentrate on the lack of scoping methods that reduce the size of semantic information spaces so they are more efficient to work with and more relevant to an agent's needs. We provide some intuition to motivate the need for such reduced information spaces, called workspaces, give a formal definition, and suggest possible methods of deriving them.

  2. High Performance Descriptive Semantic Analysis of Semantic Graph Databases

    SciTech Connect

    Joslyn, Cliff A.; Adolf, Robert D.; al-Saffar, Sinan; Feo, John T.; Haglin, David J.; Mackey, Greg E.; Mizell, David W.

    2011-06-02

    As semantic graph database technology grows to address components ranging from extant large triple stores to SPARQL endpoints over SQL-structured relational databases, it will become increasingly important to be able to understand their inherent semantic structure, whether codified in explicit ontologies or not. Our group is researching novel methods for what we call descriptive semantic analysis of RDF triplestores, to serve purposes of analysis, interpretation, visualization, and optimization. But data size and computational complexity make it increasingly necessary to bring high performance computational resources to bear on this task. Our research group built a novel high performance hybrid system comprising computational capability for semantic graph database processing utilizing the large multi-threaded architecture of the Cray XMT platform, conventional servers, and large data stores. In this paper we describe that architecture and our methods, and present the results of our analyses of basic properties, connected components, namespace interaction, and typed paths for the Billion Triple Challenge 2010 dataset.
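    One of the descriptive statistics named above, connected components of a triple store, can be computed with a simple union-find over subject/object pairs. This is a toy sketch of the general technique, not the Cray XMT implementation; the example triples are invented.

```python
# Hedged sketch: connected components of an RDF-style graph via
# union-find with path halving. Toy data, not the paper's system.
def connected_components(triples):
    parent = {}

    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    for s, _p, o in triples:
        parent[find(s)] = find(o)          # union subject with object

    return len({find(n) for n in parent})

triples = [
    ("ex:alice", "foaf:knows", "ex:bob"),
    ("ex:bob",   "foaf:knows", "ex:carol"),
    ("ex:dave",  "foaf:knows", "ex:erin"),  # a separate component
]
print(connected_components(triples))  # → 2
```

    At billion-triple scale the same logic demands the kind of multi-threaded hardware the abstract describes, but the algorithmic core is this small.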

  3. Reproducibility and discriminability of brain patterns of semantic categories enhanced by congruent audiovisual stimuli.

    PubMed

    Li, Yuanqing; Wang, Guangyi; Long, Jinyi; Yu, Zhuliang; Huang, Biao; Li, Xiaojian; Yu, Tianyou; Liang, Changhong; Li, Zheng; Sun, Pei

    2011-01-01

    One of the central questions in cognitive neuroscience is the precise neural representation, or brain pattern, associated with a semantic category. In this study, we explored the influence of audiovisual stimuli on the brain patterns of concepts or semantic categories through a functional magnetic resonance imaging (fMRI) experiment. We used a pattern search method to extract brain patterns corresponding to two semantic categories: "old people" and "young people." These brain patterns were elicited by semantically congruent audiovisual, semantically incongruent audiovisual, unimodal visual, and unimodal auditory stimuli belonging to the two semantic categories. We calculated the reproducibility index, which measures the similarity of the patterns within the same category. We also decoded the semantic categories from these brain patterns. The decoding accuracy reflects the discriminability of the brain patterns between two categories. The results showed that both the reproducibility index of brain patterns and the decoding accuracy were significantly higher for semantically congruent audiovisual stimuli than for unimodal visual and unimodal auditory stimuli, while the semantically incongruent stimuli did not elicit brain patterns with a significantly higher reproducibility index or decoding accuracy. Thus, the semantically congruent audiovisual stimuli enhanced the within-class reproducibility and the between-class discriminability of brain patterns, thereby facilitating the neural representation of semantic categories or concepts. Furthermore, we analyzed the brain activity in superior temporal sulcus and middle temporal gyrus (STS/MTG). The strength of the fMRI signal and the reproducibility index were enhanced by the semantically congruent audiovisual stimuli. Our results support the use of the reproducibility index as a potential tool to supplement the fMRI signal amplitude for evaluating multimodal integration.
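    A within-class reproducibility index of the kind described can be sketched as the mean pairwise Pearson correlation between activation patterns of the same category. This is our reading of the abstract for illustration, not the authors' exact metric or code.

```python
# Hedged sketch: reproducibility index as mean pairwise Pearson
# correlation across patterns of one semantic category.
from statistics import mean, pstdev
from itertools import combinations

def pearson(x, y):
    """Pearson correlation of two equal-length vectors."""
    mx, my = mean(x), mean(y)
    cov = mean((a - mx) * (b - my) for a, b in zip(x, y))
    return cov / (pstdev(x) * pstdev(y))

def reproducibility_index(patterns):
    """Mean correlation over all pairs of patterns in one category."""
    return mean(pearson(p, q) for p, q in combinations(patterns, 2))

# three perfectly correlated toy "voxel patterns"
patterns = [[1.0, 2.0, 3.0], [2.0, 4.0, 6.0], [1.1, 2.1, 3.1]]
print(reproducibility_index(patterns))
```

    Higher values indicate that a category evokes more similar patterns across presentations, which is the property the study compares across stimulus conditions.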

  4. Imagery as a Facilitator of Semantic Integration.

    ERIC Educational Resources Information Center

    Weed, Keri; Ryan, Ellen Bouchard

    The relationship between processing style (either auditory or visual) and sentence and imagery strategies was investigated with a sample of 80 second-grade children. Assignment to auditory- and visual-processor groups was based on subjects' recall of 16 pictograph sequences, four of which included visual interference and four of which included…

  5. Deriving a probabilistic syntacto-semantic grammar for biomedicine based on domain-specific terminologies.

    PubMed

    Fan, Jung-Wei; Friedman, Carol

    2011-10-01

    Biomedical natural language processing (BioNLP) is a useful technique that unlocks valuable information stored in textual data for practice and/or research. Syntactic parsing is a critical component of BioNLP applications that rely on correctly determining the sentence and phrase structure of free text. In addition to dealing with the vast amount of domain-specific terms, a robust biomedical parser needs to model the semantic grammar to obtain viable syntactic structures. With either a rule-based or corpus-based approach, the grammar engineering process requires substantial time and knowledge from experts, and does not always yield a semantically transferable grammar. To reduce the human effort and to promote semantic transferability, we propose an automated method for deriving a probabilistic grammar based on a training corpus consisting of concept strings and semantic classes from the Unified Medical Language System (UMLS), a comprehensive terminology resource widely used by the community. The grammar is designed to specify noun phrases only due to the nominal nature of the majority of biomedical terminological concepts. Evaluated on manually parsed clinical notes, the derived grammar achieved a recall of 0.644, precision of 0.737, and average cross-bracketing of 0.61, which demonstrated better performance than a control grammar with the semantic information removed. Error analysis revealed shortcomings that could be addressed to improve performance. The results indicated the feasibility of an approach which automatically incorporates terminology semantics in the building of an operational grammar. Although the current performance of the unsupervised solution does not adequately replace manual engineering, we believe once the performance issues are addressed, it could serve as an aid in a semi-supervised solution. PMID:21549857
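    The core of deriving a probabilistic grammar from a corpus is relative-frequency estimation of rule probabilities. The sketch below shows that step in isolation; the rule names are invented toy examples, whereas the paper derives its rules from UMLS concept strings and semantic classes.

```python
# Hedged sketch: relative-frequency estimation for a probabilistic
# grammar: P(lhs -> rhs) = count(lhs -> rhs) / count(lhs).
from collections import Counter

def estimate_rule_probs(observed_rules):
    """Turn a multiset of observed (lhs, rhs) productions into a
    conditional probability table over right-hand sides."""
    rule_counts = Counter(observed_rules)
    lhs_counts = Counter(lhs for lhs, _rhs in observed_rules)
    return {(lhs, rhs): c / lhs_counts[lhs]
            for (lhs, rhs), c in rule_counts.items()}

rules = [("NP", ("Adj", "Noun")),
         ("NP", ("Adj", "Noun")),
         ("NP", ("Noun",))]
probs = estimate_rule_probs(rules)
print(probs[("NP", ("Adj", "Noun"))])  # → 0.6666666666666666
```

    Because probabilities for each left-hand side sum to one, the table plugs directly into a probabilistic chart parser.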

  7. Perceptual Learning Improves Adult Amblyopic Vision Through Rule-Based Cognitive Compensation

    PubMed Central

    Zhang, Jun-Yun; Cong, Lin-Juan; Klein, Stanley A.; Levi, Dennis M.; Yu, Cong

    2014-01-01

    Purpose. We investigated whether perceptual learning in adults with amblyopia could be enabled to transfer completely to an orthogonal orientation, which would suggest that amblyopic perceptual learning results mainly from high-level cognitive compensation, rather than plasticity in the amblyopic early visual brain. Methods. Nineteen adults (mean age = 22.5 years) with anisometropic and/or strabismic amblyopia were trained following a training-plus-exposure (TPE) protocol. The amblyopic eyes practiced contrast, orientation, or Vernier discrimination at one orientation for six to eight sessions. Then the amblyopic or nonamblyopic eyes were exposed to an orthogonal orientation via practicing an irrelevant task. Training was first performed at a lower spatial frequency (SF), then at a higher SF near the cutoff frequency of the amblyopic eye. Results. Perceptual learning was initially orientation specific. However, after exposure to the orthogonal orientation, learning transferred to an orthogonal orientation completely. Reversing the exposure and training order failed to produce transfer. Initial lower SF training led to broad improvement of contrast sensitivity, and later higher SF training led to more specific improvement at high SFs. Training improved visual acuity by 1.5 to 1.6 lines (P < 0.001) in the amblyopic eyes with computerized tests and a clinical E acuity chart. It also improved stereoacuity by 53% (P < 0.001). Conclusions. The complete transfer of learning suggests that perceptual learning in amblyopia may reflect high-level learning of rules for performing a visual discrimination task. These rules are applicable to new orientations to enable learning transfer. Therefore, perceptual learning may improve amblyopic vision mainly through rule-based cognitive compensation. PMID:24550359

  8. Syntactic and semantic processing of Chinese middle sentences: evidence from event-related potentials.

    PubMed

    Zeng, Tao; Mao, Wen; Lu, Qing

    2016-05-25

    Scalp-recorded event-related potentials are known to be sensitive to particular aspects of sentence processing. The N400 component is widely recognized as an effect closely related to lexical-semantic processing. The absence of an N400 effect in participants performing tasks in Indo-European languages has been taken as evidence that failed syntactic category processing blocks lexical-semantic integration and that syntactic structure building is a prerequisite for semantic analysis. An event-related potential experiment was designed to investigate whether such syntactic primacy applies equally to Chinese sentence processing. Besides correct middles, the present research used sentences with either a single semantic or a single syntactic violation, as well as sentences with a combined syntactic and semantic anomaly. Results showed that both the purely semantic and the combined violation induced a broad negativity in the 300-500 ms time window, indicating the independence of lexical-semantic integration. These findings provided solid evidence that lexical-semantic parsing plays a crucial role in Chinese sentence comprehension.

  9. Problem Solving with General Semantics.

    ERIC Educational Resources Information Center

    Hewson, David

    1996-01-01

    Discusses how to use general semantics formulations to improve problem solving at home or at work--methods come from the areas of artificial intelligence/computer science, engineering, operations research, and psychology. (PA)

  10. Semantic priming from crowded words.

    PubMed

    Yeh, Su-Ling; He, Sheng; Cavanagh, Patrick

    2012-06-01

    Vision in a cluttered scene is extremely inefficient. This damaging effect of clutter, known as crowding, affects many aspects of visual processing (e.g., reading speed). We examined observers' processing of crowded targets in a lexical decision task, using single-character Chinese words that are compact but carry semantic meaning. Despite being unrecognizable and indistinguishable from matched nonwords, crowded prime words still generated robust semantic-priming effects on lexical decisions for test words presented in isolation. Indeed, the semantic-priming effect of crowded primes was similar to that of uncrowded primes. These findings show that the meanings of words survive crowding even when the identities of the words do not, suggesting that crowding does not prevent semantic activation, a process that may have evolved in the context of a cluttered visual environment.

  11. Advancing translational research with the Semantic Web

    PubMed Central

    Ruttenberg, Alan; Clark, Tim; Bug, William; Samwald, Matthias; Bodenreider, Olivier; Chen, Helen; Doherty, Donald; Forsberg, Kerstin; Gao, Yong; Kashyap, Vipul; Kinoshita, June; Luciano, Joanne; Marshall, M Scott; Ogbuji, Chimezie; Rees, Jonathan; Stephens, Susie; Wong, Gwendolyn T; Wu, Elizabeth; Zaccagnini, Davide; Hongsermeier, Tonya; Neumann, Eric; Herman, Ivan; Cheung, Kei-Hoi

    2007-01-01

    Background A fundamental goal of the U.S. National Institutes of Health (NIH) "Roadmap" is to strengthen Translational Research, defined as the movement of discoveries in basic research to application at the clinical level. A significant barrier to translational research is the lack of uniformly structured data across related biomedical domains. The Semantic Web is an extension of the current Web that enables navigation and meaningful use of digital resources by automatic processes. It is based on common formats that support aggregation and integration of data drawn from diverse sources. A variety of technologies have been built on this foundation that, together, support identifying, representing, and reasoning across a wide range of biomedical data. The Semantic Web Health Care and Life Sciences Interest Group (HCLSIG), set up within the framework of the World Wide Web Consortium, was launched to explore the application of these technologies in a variety of areas. Subgroups focus on making biomedical data available in RDF, working with biomedical ontologies, prototyping clinical decision support systems, working on drug safety and efficacy communication, and supporting disease researchers navigating and annotating the large amount of potentially relevant literature. Results We present a scenario that shows the value of the information environment the Semantic Web can support for aiding neuroscience researchers. We then report on several projects by members of the HCLSIG, in the process illustrating the range of Semantic Web technologies that have applications in areas of biomedicine. Conclusion Semantic Web technologies present both promise and challenges. Current tools and standards are already adequate to implement components of the bench-to-bedside vision. On the other hand, these technologies are young. Gaps in standards and implementations still exist and adoption is limited by typical problems with early technology, such as the need for a critical mass of

  12. Semantic Support for Complex Ecosystem Research Environments

    NASA Astrophysics Data System (ADS)

    Klawonn, M.; McGuinness, D. L.; Pinheiro, P.; Santos, H. O.; Chastain, K.

    2015-12-01

    As ecosystems come under increasing stresses from diverse sources, there is growing interest in research efforts aimed at monitoring, modeling, and improving understanding of ecosystems and protection options. We aimed to provide a semantic infrastructure capable of representing data initially related to one large aquatic ecosystem research effort - the Jefferson project at Lake George. This effort includes significant historical observational data, extensive sensor-based monitoring data, experimental data, as well as model and simulation data covering topics including lake circulation, watershed runoff, lake biome food webs, etc. The initial measurement representation has been centered on monitoring data and related provenance. We developed a human-aware sensor network ontology (HASNetO) that leverages existing ontologies (PROV-O, OBOE, VSTO*) in support of measurement annotations. We explicitly support the human-aware aspects of human sensor deployment and collection activity to help capture key provenance that often is lacking. Our foundational ontology has since been generalized into a family of ontologies and used to create our human-aware data collection infrastructure that now supports the integration of measurement data along with simulation data. Interestingly, we have also utilized the same infrastructure to work with partners who have more specific needs for specifying the environmental conditions where measurements occur, for example, knowing that an air temperature is not an external air temperature, but rather the air temperature when windows are shut and curtains are open. We have also leveraged the same infrastructure to work with partners more interested in modeling smart cities with data feeds more related to people, mobility, environment, and living.
    We will introduce our human-aware data collection infrastructure, and demonstrate how it uses HASNetO and its supporting SOLR-based search platform to support data integration and semantic browsing.
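    Annotating a single measurement with its provenance, as the ontology-based infrastructure above does, can be sketched as emitting RDF-style triples. The URIs and predicate names below are invented placeholders for illustration, not the actual HASNetO, PROV-O, or OBOE terms.

```python
# Hedged sketch: a sensor measurement annotated with unit, instrument,
# and human-collection provenance as (subject, predicate, object)
# triples. All identifiers are invented placeholders.
def annotate_measurement(value, unit, sensor, agent):
    m = "ex:measurement-1"  # placeholder measurement URI
    return [
        (m, "ex:hasValue", value),
        (m, "ex:hasUnit", unit),
        (m, "ex:measuredBy", sensor),
        (m, "prov:wasAttributedTo", agent),  # who deployed/collected
    ]

triples = annotate_measurement(18.4, "degC",
                               "ex:thermistor-7", "ex:fieldTeamA")
print(len(triples))  # → 4
```

    Capturing the collecting agent alongside the instrument is the "human-aware" provenance the abstract emphasizes; it lets later queries distinguish otherwise identical readings.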

  13. NASA and The Semantic Web

    NASA Technical Reports Server (NTRS)

    Ashish, Naveen

    2005-01-01

    We provide an overview of several ongoing NASA endeavors based on concepts, systems, and technology from the Semantic Web arena. Indeed NASA has been one of the early adopters of Semantic Web Technology and we describe ongoing and completed R&D efforts for several applications ranging from collaborative systems to airspace information management to enterprise search to scientific information gathering and discovery systems at NASA.

  14. Interaction between process and content in semantic memory: An fMRI study of noun feature knowledge

    PubMed Central

    Peelle, Jonathan E.; Troiani, Vanessa; Grossman, Murray

    2009-01-01

    Effective semantic processing requires both stored conceptual knowledge and the ability to relate this information to our environment. In the current study we examined how neural processing of a concept's features was modulated by the semantic context in which they were presented using two types of nouns: complex nouns, in which all features contribute in a variable manner to an object's meaning (apples are usually red, but not always), and nominal kinds, for which a single feature plays a diagnostic role (an uncle must be the brother of a parent). We used fMRI to monitor neural activity while participants viewed a list of features and decided whether the list accurately described a target concept. We focused on the effect of semantic context on processing of features critical to a concept's representation. Task demands were manipulated by giving participants instructions that encouraged rule-based or similarity-based judgments. Activation patterns for feature processing were found to depend on the type of noun being evaluated and whether or not critical features were consistent with surrounding information: When processing critical features that contradicted other information, complex nouns resulted in additional recruitment compared to nominal kinds in frontal and temporal cortex. We observed modest effects of instruction condition, with rule-based instructions resulting in increased frontal processing and similarity-based instructions recruiting more temporal and parietal regions. Together, these results support the hypothesis that various classes of nouns are represented differently in semantic memory, and emphasize the dynamic interaction of process and content in semantic memory. PMID:19041332

  15. Toward Semantic Web Infrastructure for Spatial FEATURES' Information

    NASA Astrophysics Data System (ADS)

    Arabsheibani, R.; Ariannamazi, S.; Hakimpour, F.

    2015-12-01

    The Web and its capabilities can be employed as a tool for data and information integration if comprehensive datasets and appropriate technologies and standards enable the interpretation and easy alignment of data and information. The Semantic Web, together with spatial functionalities, enables the web to deal with huge amounts of data and information. The present study investigates the advantages and limitations of the Spatial Semantic Web and compares its capabilities with relational models in order to build a spatial data infrastructure. An architecture is proposed and a set of criteria is defined for efficiency evaluation. The results demonstrate that when using data with special characteristics, such as schema dynamicity, sparse data, or available relations between features, the Spatial Semantic Web and graph databases with spatial operations are preferable.
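    A "spatial operation over a graph database" of the kind compared above can be sketched as nodes with coordinates plus typed edges, queried with a bounding-box filter and an edge traversal. This is a toy illustration of the data model, not the study's architecture; all feature names and coordinates are invented.

```python
# Hedged sketch: graph-stored spatial features (nodes with coordinates,
# typed relation edges) with a bounding-box query and a relation query.
nodes = {
    "park":   {"x": 1.0, "y": 2.0},
    "school": {"x": 5.0, "y": 5.0},
    "cafe":   {"x": 1.5, "y": 2.5},
}
edges = [("cafe", "near", "park")]  # (subject, predicate, object)

def within_bbox(xmin, ymin, xmax, ymax):
    """Spatial filter: features whose point falls inside the box."""
    return sorted(n for n, p in nodes.items()
                  if xmin <= p["x"] <= xmax and ymin <= p["y"] <= ymax)

def related(obj, pred):
    """Graph traversal: subjects linked to `obj` by edge type `pred`."""
    return [s for s, p, o in edges if p == pred and o == obj]

print(within_bbox(0, 0, 3, 3))  # → ['cafe', 'park']
```

    The appeal noted in the abstract is that the edge list can grow new relation types without a schema migration, which a fixed relational schema handles less gracefully.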

  16. A framework for network-wide semantic event correlation

    NASA Astrophysics Data System (ADS)

    Hall, Robert T.; Taylor, Joshua

    2013-05-01

    An increasing need for situational awareness within network-deployed Systems Under Test has driven demand for frameworks that facilitate system-wide data correlation and analysis. Massive event streams are generated from heterogeneous sensors and require tedious manual analysis. We present a framework for sensor data integration and event correlation based on Linked Data principles, Semantic Web reasoning technology, complex event processing, and blackboard architectures. Sensor data are encoded as RDF models, then processed by complex event processing agents (which incorporate domain specific reasoners, as well as general purpose Semantic Web reasoning techniques). Agents can publish inferences on shared blackboards and generate new semantic events that are fed back into the system. We present AIS, Inc.'s Cyber Battlefield Training and Effectiveness Environment to demonstrate use of the framework.
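    The blackboard loop described above, where agents read shared facts, post inferences, and the new events feed back in, can be sketched minimally. The agent, event names, and correlation rule below are invented for illustration; the actual framework operates on RDF models with full reasoners.

```python
# Hedged sketch: a blackboard loop where correlation agents scan shared
# facts and post inferred semantic events until a fixed point.
def run_blackboard(facts, agents):
    blackboard = set(facts)
    changed = True
    while changed:                      # iterate to a fixed point
        changed = False
        for agent in agents:
            for inference in agent(blackboard):
                if inference not in blackboard:
                    blackboard.add(inference)  # feed event back in
                    changed = True
    return blackboard

def port_scan_agent(board):
    # toy rule: correlate two low-level events into one semantic event
    if ("host1", "many-connections") in board and \
       ("host1", "failed-logins") in board:
        yield ("host1", "suspected-scan")

result = run_blackboard(
    [("host1", "many-connections"), ("host1", "failed-logins")],
    [port_scan_agent],
)
print(("host1", "suspected-scan") in result)  # → True
```

    Running agents to a fixed point is what lets one agent's inference trigger another's rule, mirroring the feedback loop in the abstract.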

  17. [An effect of semantic satiation in conceptual processing].

    PubMed

    Takashi, Shimokido

    2007-12-01

    This study examined whether semantic satiation effects for a picture exemplar differ from a word exemplar. If massive repetition of the category name leads to an inhibition of conceptual processing, then semantic satiation effects would be found in both the word and picture exemplar conditions. However, if the repetition leads to an inhibition of lexical processing, then effects would be found for the word exemplar but not the picture exemplar. To examine these hypotheses, 48 college students were asked to judge whether a target pair of exemplars belonged to the same named category. The results showed that semantic satiation effects were found equally in both exemplar conditions. Moreover, the picture-superiority effect was intact regardless of the prime repetitions. The possibility was discussed that word and picture exemplars are integrated into an abstract and amodal conceptual unit; hence category judgment was affected by the satiation effect. PMID:18186281

  18. Semantic preview benefit during reading.

    PubMed

    Hohenstein, Sven; Kliegl, Reinhold

    2014-01-01

    Word features in parafoveal vision influence eye movements during reading. The question of whether readers extract semantic information from parafoveal words was studied in 3 experiments by using a gaze-contingent display change technique. Subjects read German sentences containing 1 of several preview words that were replaced by a target word during the saccade to the preview (boundary paradigm). In the 1st experiment the preview word was semantically related or unrelated to the target. Fixation durations on the target were shorter for semantically related than unrelated previews, consistent with a semantic preview benefit. In the 2nd experiment, half the sentences were presented following the rules of German spelling (i.e., previews and targets were printed with an initial capital letter), and the other half were presented completely in lowercase. A semantic preview benefit was obtained under both conditions. In the 3rd experiment, we introduced 2 further preview conditions, an identical word and a pronounceable nonword, while also manipulating the text contrast. Whereas the contrast had negligible effects, fixation durations on the target were reliably different for all 4 types of preview. Semantic preview benefits were greater for pretarget fixations closer to the boundary (large preview space) and, although not as consistently, for long pretarget fixation durations (long preview time). The results constrain theoretical proposals about eye movement control in reading. (PsycINFO Database Record (c) 2013 APA, all rights reserved).

  19. Lexical retrieval and semantic knowledge in patients with left inferior temporal lobe lesions

    PubMed Central

    Antonucci, Sharon M.; Beeson, Pélagie M.; Labiner, David M.; Rapcsak, Steven Z.

    2009-01-01

    Background It has been proposed that anomia following left inferior temporal lobe lesions may have two different underlying mechanisms with distinct neural substrates. Specifically, naming impairment following damage to more posterior regions (BA 37) has been considered to result from a disconnection between preserved semantic knowledge and phonological word forms (pure anomia), whereas anomia following damage to anterior temporal regions (BAs 38, 20/21) has been attributed to the degradation of semantic representations (semantic anomia). However, the integrity of semantic knowledge in patients with pure anomia has not been demonstrated convincingly, nor were lesions in these cases necessarily confined to BA 37. Furthermore, evidence of semantic anomia often comes from individuals with bilateral temporal lobe damage, so it is unclear whether unilateral temporal lobe lesions are sufficient to produce significant semantic impairment. Aims The main goals of this study were to determine whether anomia following unilateral left inferior temporal lobe damage reflected a loss of semantic knowledge or a post-semantic deficit in lexical retrieval and to identify the neuroanatomical correlates of the naming impairment. Methods & Procedures Eight individuals who underwent left anterior temporal lobectomy (L ATL) and eight individuals who sustained left posterior cerebral artery strokes (L PCA) completed a battery of language measures that assessed lexical retrieval and semantic processing, and 16 age- and education-matched controls also completed this battery. High-resolution structural brain scans were collected to conduct lesion analyses. Outcomes & Results Performance of L ATL and L PCA patients was strikingly similar, with both groups demonstrating naming performance ranging from moderately impaired to unimpaired. 
Anomia in both groups occurred in the context of mild deficits to semantic knowledge, which manifested primarily as greater difficulty in naming living things

  20. Lexical retrieval and semantic knowledge in patients with left inferior temporal lobe lesions.

    PubMed

    Antonucci, Sharon M; Beeson, Pélagie M; Labiner, David M; Rapcsak, Steven Z

    2008-03-01

    BACKGROUND: It has been proposed that anomia following left inferior temporal lobe lesions may have two different underlying mechanisms with distinct neural substrates. Specifically, naming impairment following damage to more posterior regions (BA 37) has been considered to result from a disconnection between preserved semantic knowledge and phonological word forms (pure anomia), whereas anomia following damage to anterior temporal regions (BAs 38, 20/21) has been attributed to the degradation of semantic representations (semantic anomia). However, the integrity of semantic knowledge in patients with pure anomia has not been demonstrated convincingly, nor were lesions in these cases necessarily confined to BA 37. Furthermore, evidence of semantic anomia often comes from individuals with bilateral temporal lobe damage, so it is unclear whether unilateral temporal lobe lesions are sufficient to produce significant semantic impairment. AIMS: The main goals of this study were to determine whether anomia following unilateral left inferior temporal lobe damage reflected a loss of semantic knowledge or a post-semantic deficit in lexical retrieval and to identify the neuroanatomical correlates of the naming impairment. METHODS & PROCEDURES: Eight individuals who underwent left anterior temporal lobectomy (L ATL) and eight individuals who sustained left posterior cerebral artery strokes (L PCA) completed a battery of language measures that assessed lexical retrieval and semantic processing, and 16 age- and education-matched controls also completed this battery. High-resolution structural brain scans were collected to conduct lesion analyses. OUTCOMES & RESULTS: Performance of L ATL and L PCA patients was strikingly similar, with both groups demonstrating naming performance ranging from moderately impaired to unimpaired. Anomia in both groups occurred in the context of mild deficits to semantic knowledge, which manifested primarily as