Sample records for semantically-enabled dataflow execution

  1. SciFlo: Semantically-Enabled Grid Workflow for Collaborative Science

    NASA Astrophysics Data System (ADS)

    Yunck, T.; Wilson, B. D.; Raskin, R.; Manipon, G.

    2005-12-01

    SciFlo is a system for Scientific Knowledge Creation on the Grid using a Semantically-Enabled Dataflow Execution Environment. SciFlo leverages Simple Object Access Protocol (SOAP) Web Services and the Grid Computing standards (WS-* standards and the Globus Alliance toolkits), and enables scientists to do multi-instrument Earth Science by assembling reusable SOAP Services, native executables, local command-line scripts, and python codes into a distributed computing flow (a graph of operators). SciFlo's XML dataflow documents can be a mixture of concrete operators (fully bound operations) and abstract template operators (late binding via semantic lookup). All data objects and operators can be both simply typed (simple and complex types in XML schema) and semantically typed using controlled vocabularies (linked to OWL ontologies such as SWEET). By exploiting ontology-enhanced search and inference, one can discover (and automatically invoke) Web Services and operators that have been semantically labeled as performing the desired transformation, and adapt a particular invocation to the proper interface (number, types, and meaning of inputs and outputs). The SciFlo client & server engines optimize the execution of such distributed data flows and allow the user to transparently find and use datasets and operators without worrying about the actual location of the Grid resources. The scientist injects a distributed computation into the Grid by simply filling out an HTML form or directly authoring the underlying XML dataflow document, and results are returned directly to the scientist's desktop. A Visual Programming tool is also being developed, but it is not required. Once an analysis has been specified for a granule or day of data, it can be easily repeated with different control parameters and over months or years of data. SciFlo uses and preserves semantics, and also generates and infers new semantic annotations. Specifically, the SciFlo engine uses semantic metadata to understand (infer) what it is doing and potentially improve the data flow; preserves semantics by saving links to the semantics of (metadata describing) the input datasets, related datasets, and the data transformations (algorithms) used to generate downstream products; generates new metadata by allowing the user to add semantic annotations to the generated data products (or simply accept automatically generated provenance annotations); and infers new semantic metadata by understanding and applying logic to the semantics of the data and the transformations performed. Much ontology development still needs to be done but, nevertheless, SciFlo documents provide a substrate for using and preserving more semantics as ontologies develop. We will give a live demonstration of the growing SciFlo network using an example dataflow in which atmospheric temperature and water vapor profiles from three Earth Observing System (EOS) instruments are retrieved using SOAP (geo-location query & data access) services, co-registered, and visually & statistically compared on demand (see http://sciflo.jpl.nasa.gov for more information).
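
    The abstract above describes dataflow documents that mix fully bound (concrete) operators with abstract operators that are only semantically typed. As a rough illustration of that idea (and nothing more), the sketch below builds such a document in Python with the standard xml.etree library; the element names, attributes, endpoint, and ontology URI are hypothetical stand-ins, not the actual SciFlo schema.

      # Illustrative sketch only: hypothetical element and attribute names, not
      # the real SciFlo XML schema. The point is the mix of a fully bound
      # (concrete) operator and a semantically typed (abstract) operator.
      import xml.etree.ElementTree as ET

      flow = ET.Element("sciflo", name="CompareTemperatureProfiles")

      # Concrete operator: fully bound to a specific SOAP endpoint.
      concrete = ET.SubElement(flow, "operator", kind="concrete", id="fetchAIRS")
      ET.SubElement(concrete, "binding", protocol="SOAP",
                    endpoint="http://example.org/airs?wsdl")   # placeholder URL
      ET.SubElement(concrete, "output", name="airsProfiles", type="xs:anyURI")

      # Abstract template operator: only a semantic type is given; the engine
      # would resolve it to a matching service at run time via ontology lookup.
      abstract = ET.SubElement(flow, "operator", kind="abstract", id="coregister")
      ET.SubElement(abstract, "semanticType",
                    uri="http://sweet.jpl.nasa.gov/ontology#CoRegistration")  # placeholder concept URI
      ET.SubElement(abstract, "input", source="fetchAIRS.airsProfiles")

      print(ET.tostring(flow, encoding="unicode"))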

  2. Decaf: Decoupled Dataflows for In Situ High-Performance Workflows

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dreher, M.; Peterka, T.

    Decaf is a dataflow system for the parallel communication of coupled tasks in an HPC workflow. The dataflow can perform arbitrary data transformations ranging from simply forwarding data to complex data redistribution. Decaf does this by allowing the user to allocate resources and execute custom code in the dataflow. All communication through the dataflow is efficient parallel message passing over MPI. The runtime for calling tasks is entirely message-driven; Decaf executes a task when all messages for the task have been received. Such a message-driven runtime allows cyclic task dependencies in the workflow graph, for example, to enact computational steering based on the result of downstream tasks. Decaf includes a simple Python API for describing the workflow graph. This allows Decaf to stand alone as a complete workflow system, but Decaf can also be used as the dataflow layer by one or more other workflow systems to form a heterogeneous task-based computing environment. In one experiment, we couple a molecular dynamics code with a visualization tool using the FlowVR and Damaris workflow systems and Decaf for the dataflow. In another experiment, we test the coupling of a cosmology code with Voronoi tessellation and density estimation codes using MPI for the simulation, the DIY programming model for the two analysis codes, and Decaf for the dataflow. Such workflows consisting of heterogeneous software infrastructures exist because components are developed separately with different programming models and runtimes, and this is the first time that such heterogeneous coupling of diverse components was demonstrated in situ on HPC systems.
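
    To make the message-driven firing rule described above concrete, here is a minimal pure-Python sketch in which a task fires once a message has arrived on each of its input ports, and a downstream result feeds back to steer the next step. It is not the Decaf API (Decaf couples MPI tasks, with a Python description of the graph); the task names and payloads are invented.

      # Minimal message-driven dataflow sketch: a task fires as soon as all of
      # its required input messages have arrived, and a cycle in the graph is
      # used for computational steering. Not the real Decaf API.
      from collections import defaultdict

      class MessageDrivenRuntime:
          def __init__(self):
              self.inputs = {}                    # task -> required input ports
              self.handlers = {}                  # task -> callable(runtime, messages)
              self.mailbox = defaultdict(dict)    # task -> {port: payload}

          def add_task(self, name, input_ports, handler):
              self.inputs[name] = set(input_ports)
              self.handlers[name] = handler

          def send(self, task, port, payload):
              self.mailbox[task][port] = payload
              # Message-driven rule: fire when every required message is present.
              if set(self.mailbox[task]) >= self.inputs[task]:
                  messages = self.mailbox.pop(task)
                  self.handlers[task](self, messages)

      def simulate(rt, msgs):
          step, state = msgs["control"]
          if step < 3:                            # stop after a few steering cycles
              rt.send("analysis", "frame", (step, state + 1.0))

      def analyze(rt, msgs):
          step, value = msgs["frame"]
          print(f"step {step}: analyzed value {value:.1f}")
          # Cyclic dependency: the downstream result steers the next simulation step.
          rt.send("simulation", "control", (step + 1, value))

      rt = MessageDrivenRuntime()
      rt.add_task("simulation", ["control"], simulate)
      rt.add_task("analysis", ["frame"], analyze)
      rt.send("simulation", "control", (0, 0.0))  # inject the first message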

  3. A software tool for dataflow graph scheduling

    NASA Technical Reports Server (NTRS)

    Jones, Robert L., III

    1994-01-01

    A graph-theoretic design process and software tool is presented for selecting a multiprocessing scheduling solution for a class of computational problems. The problems of interest are those that can be described using a dataflow graph and are intended to be executed repetitively on multiple processors. The dataflow paradigm is very useful in exposing the parallelism inherent in algorithms. It provides a graphical and mathematical model which describes a partial ordering of algorithm tasks based on data precedence.

  4. Dataflow Design Tool: User's Manual

    NASA Technical Reports Server (NTRS)

    Jones, Robert L., III

    1996-01-01

    The Dataflow Design Tool is a software tool for selecting a multiprocessor scheduling solution for a class of computational problems. The problems of interest are those that can be described with a dataflow graph and are intended to be executed repetitively on a set of identical processors. Typical applications include signal processing and control law problems. The software tool implements graph-search algorithms and analysis techniques based on the dataflow paradigm. Dataflow analyses provided by the software are introduced and shown to effectively determine performance bounds, scheduling constraints, and resource requirements. The software tool provides performance optimization through the inclusion of artificial precedence constraints among the schedulable tasks. The user interface and tool capabilities are described. Examples are provided to demonstrate the analysis, scheduling, and optimization functions facilitated by the tool.
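
    The performance bounds mentioned above can be illustrated with two classic dataflow calculations: the schedule length of a repetitively executed graph is bounded below by its longest data-precedence (critical) path and by the total work divided by the processor count. The sketch below computes both for an invented task graph; it does not reproduce the Dataflow Design Tool's own algorithms.

      # Two classic lower bounds for scheduling a dataflow graph on identical
      # processors: the critical path and total work / processor count.
      # The task graph and execution times below are invented for illustration.
      graph = {
          "read":   (2.0, ["filter"]),
          "filter": (4.0, ["fft", "stats"]),
          "fft":    (6.0, ["write"]),
          "stats":  (3.0, ["write"]),
          "write":  (1.0, []),
      }

      def critical_path(graph):
          memo = {}
          def longest_from(node):
              if node not in memo:
                  time, succs = graph[node]
                  memo[node] = time + max((longest_from(s) for s in succs), default=0.0)
              return memo[node]
          return max(longest_from(n) for n in graph)

      total_work = sum(t for t, _ in graph.values())
      processors = 2
      lower_bound = max(critical_path(graph), total_work / processors)
      print(f"critical path: {critical_path(graph):.1f}")
      print(f"schedule length lower bound on {processors} processors: {lower_bound:.1f}")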

  5. GENESIS SciFlo: Choreographing Interoperable Web Services on the Grid using a Semantically-Enabled Dataflow Execution Environment

    NASA Astrophysics Data System (ADS)

    Wilson, B. D.; Manipon, G.; Xing, Z.

    2007-12-01

    The General Earth Science Investigation Suite (GENESIS) project is a NASA-sponsored partnership between the Jet Propulsion Laboratory, academia, and NASA data centers to develop a new suite of Web Services tools to facilitate multi-sensor investigations in Earth System Science. The goal of GENESIS is to enable large-scale, multi-instrument atmospheric science using combined datasets from the AIRS, MODIS, MISR, and GPS sensors. Investigations include cross-comparison of spaceborne climate sensors, cloud spectral analysis, study of upper troposphere-stratosphere water transport, study of the aerosol indirect cloud effect, and global climate model validation. The challenges are to bring together very large datasets, reformat and understand the individual instrument retrievals, co-register or re-grid the retrieved physical parameters, perform computationally-intensive data fusion and data mining operations, and accumulate complex statistics over months to years of data. To meet these challenges, we have developed a Grid computing and dataflow framework, named SciFlo, in which we are deploying a set of versatile and reusable operators for data access, subsetting, registration, mining, fusion, compression, and advanced statistical analysis. SciFlo leverages remote Web Services, called via Simple Object Access Protocol (SOAP) or REST (one-line) URLs, and the Grid Computing standards (WS-* & Globus Alliance toolkits), and enables scientists to do multi-instrument Earth Science by assembling reusable Web Services and native executables into a distributed computing flow (tree of operators). The SciFlo client & server engines optimize the execution of such distributed data flows and allow the user to transparently find and use datasets and operators without worrying about the actual location of the Grid resources. In particular, SciFlo exploits the wealth of datasets accessible by OpenGIS Consortium (OGC) Web Mapping Servers & Web Coverage Servers (WMS/WCS), and by Open Data Access Protocol (OpenDAP) servers. SciFlo also publishes its own SOAP services for space/time query and subsetting of Earth Science datasets, and automated access to large datasets via lists of (FTP, HTTP, or DAP) URLs which point to on-line HDF or netCDF files. Typical distributed workflows obtain datasets by calling standard WMS/WCS servers or discovering and fetching data granules from ftp sites; invoke remote analysis operators available as SOAP services (interface described by a WSDL document); and merge results into binary containers (netCDF or HDF files) for further analysis using local executable operators. Naming conventions (HDFEOS and CF-1.0 for netCDF) are exploited to automatically understand and read on-line datasets. More interoperable conventions, and broader adoption of existing conventions, are vital if we are to "scale up" automated choreography of Web Services beyond toy applications. Recently, the ESIP Federation sponsored a collaborative activity in which several ESIP members developed some collaborative science scenarios for atmospheric and aerosol science, and then choreographed services from multiple groups into demonstration workflows using the SciFlo engine and a Business Process Execution Language (BPEL) workflow engine.
We will discuss the lessons learned from this activity, the need for standardized interfaces (like WMS/WCS), the difficulty in agreeing on even simple XML formats and interfaces, the benefits of doing collaborative science analysis at the "touch of a button" once services are connected, and further collaborations that are being pursued.
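
    As a small, hedged illustration of the standards-based data access step described above, the sketch below assembles an OGC WCS GetCoverage request for a space/time subset. The server address and coverage name are placeholders and the request is only printed, not sent; a real SciFlo workflow would fetch the result and merge it into a netCDF or HDF container.

      # Builds (but does not send) a WCS GetCoverage request for a space/time
      # subset. Endpoint and coverage name are hypothetical placeholders.
      from urllib.parse import urlencode

      def wcs_getcoverage_url(server, coverage, bbox, time, fmt="NetCDF"):
          params = {
              "SERVICE": "WCS",
              "VERSION": "1.0.0",
              "REQUEST": "GetCoverage",
              "COVERAGE": coverage,
              "CRS": "EPSG:4326",
              "BBOX": ",".join(str(v) for v in bbox),   # minx, miny, maxx, maxy
              "TIME": time,
              "RESX": "0.5",
              "RESY": "0.5",
              "FORMAT": fmt,
          }
          return f"{server}?{urlencode(params)}"

      url = wcs_getcoverage_url(
          server="http://example.org/wcs",              # hypothetical endpoint
          coverage="AIRS_TemperatureProfile",           # hypothetical coverage name
          bbox=(-125.0, 30.0, -110.0, 45.0),
          time="2007-07-01/2007-07-02",
      )
      print(url)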

  6. Simulator for heterogeneous dataflow architectures

    NASA Technical Reports Server (NTRS)

    Malekpour, Mahyar R.

    1993-01-01

    A new simulator is developed to simulate the execution of an algorithm graph in accordance with the Algorithm to Architecture Mapping Model (ATAMM) rules. ATAMM is a Petri Net model which describes the periodic execution of large-grained, data-independent dataflow graphs and which provides predictable steady state time-optimized performance. This simulator extends the ATAMM simulation capability from a heterogeneous set of resources, or functional units, to a more general heterogeneous architecture. Simulation test cases show that the simulator accurately executes the ATAMM rules for both a heterogeneous architecture and a homogeneous architecture, which is the special case for only one processor type. The simulator forms one tool in an ATAMM Integrated Environment which contains other tools for graph entry, graph modification for performance optimization, and playback of simulations for analysis.

  7. MAX - An advanced parallel computer for space applications

    NASA Technical Reports Server (NTRS)

    Lewis, Blair F.; Bunker, Robert L.

    1991-01-01

    MAX is a fault-tolerant multicomputer hardware and software architecture designed to meet the needs of NASA spacecraft systems. It consists of conventional computing modules (computers) connected via a dual network topology. One network is used to transfer data among the computers and between computers and I/O devices. This network's topology is arbitrary. The second network operates as a broadcast medium for operating system synchronization messages and supports the operating system's Byzantine resilience. A fully distributed operating system supports multitasking in an asynchronous event and data driven environment. A large grain dataflow paradigm is used to coordinate the multitasking and provide easy control of concurrency. It is the basis of the system's fault tolerance and allows both static and dynamical location of tasks. Redundant execution of tasks with software voting of results may be specified for critical tasks. The dataflow paradigm also supports simplified software design, test and maintenance. A unique feature is a method for reliably patching code in an executing dataflow application.

  8. Task scheduling in dataflow computer architectures

    NASA Technical Reports Server (NTRS)

    Katsinis, Constantine

    1994-01-01

    Dataflow computers provide a platform for the solution of a large class of computational problems, which includes digital signal processing and image processing. Many typical applications are represented by a set of tasks which can be repetitively executed in parallel as specified by an associated dataflow graph. Research in this area aims to model these architectures, develop scheduling procedures, and predict the transient and steady state performance. Researchers at NASA have created a model and developed associated software tools which are capable of analyzing a dataflow graph and predicting its runtime performance under various resource and timing constraints. These models and tools were extended and used in this work. Experiments using these tools revealed certain properties of such graphs that require further study. Specifically, the transient behavior at the beginning of the execution of a graph can have a significant effect on the steady state performance. Transformation and retiming of the application algorithm and its initial conditions can produce a different transient behavior and consequently different steady state performance. The effect of such transformations on the resource requirements or under resource constraints requires extensive study. Task scheduling to obtain maximum performance (based on user-defined criteria), or to satisfy a set of resource constraints, can also be significantly affected by a transformation of the application algorithm. Since task scheduling is performed by heuristic algorithms, further research is needed to determine if new scheduling heuristics can be developed that can exploit such transformations. This work has provided the initial development for further long-term research efforts. A simulation tool was completed to provide insight into the transient and steady state execution of a dataflow graph. A set of scheduling algorithms was completed which can operate in conjunction with the modeling and performance tools previously developed. Initial studies on the performance of these algorithms were done to examine the effects of application algorithm transformations as measured by such quantities as number of processors, time between outputs, time between input and output, communication time, and memory size.
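
    A simple list-scheduling heuristic of the general kind referred to above can be sketched in a few lines: tasks become ready when all of their predecessors have finished and are greedily assigned to the earliest-free processor. The graph, durations, and priority rule below are invented for illustration and do not reproduce the NASA modeling tools.

      # Greedy list scheduling on identical processors: ready tasks (all
      # predecessors finished) are placed on the earliest-free processor,
      # longest task first. Task graph is invented for illustration.
      import heapq

      tasks = {
          "A": (3, []), "B": (2, ["A"]), "C": (4, ["A"]),
          "D": (2, ["B", "C"]), "E": (3, ["C"]),
      }

      def list_schedule(tasks, num_procs):
          finish = {}
          proc_free = [(0, p) for p in range(num_procs)]   # (time free, processor id)
          heapq.heapify(proc_free)
          schedule = []
          remaining = dict(tasks)
          while remaining:
              # Ready tasks: every predecessor already has a finish time.
              ready = [t for t, (_, preds) in remaining.items()
                       if all(p in finish for p in preds)]
              ready.sort(key=lambda t: -remaining[t][0])   # longest task first
              for t in ready:
                  dur, preds = remaining.pop(t)
                  free_at, proc = heapq.heappop(proc_free)
                  start = max([free_at] + [finish[p] for p in preds])
                  finish[t] = start + dur
                  schedule.append((t, proc, start, finish[t]))
                  heapq.heappush(proc_free, (finish[t], proc))
          return schedule

      for task, proc, start, end in list_schedule(tasks, num_procs=2):
          print(f"{task}: processor {proc}, {start}-{end}")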

  9. Collaborative Science Using Web Services and the SciFlo Grid Dataflow Engine

    NASA Astrophysics Data System (ADS)

    Wilson, B. D.; Manipon, G.; Xing, Z.; Yunck, T.

    2006-12-01

    The General Earth Science Investigation Suite (GENESIS) project is a NASA-sponsored partnership between the Jet Propulsion Laboratory, academia, and NASA data centers to develop a new suite of Web Services tools to facilitate multi-sensor investigations in Earth System Science. The goal of GENESIS is to enable large-scale, multi-instrument atmospheric science using combined datasets from the AIRS, MODIS, MISR, and GPS sensors. Investigations include cross-comparison of spaceborne climate sensors, cloud spectral analysis, study of upper troposphere-stratosphere water transport, study of the aerosol indirect cloud effect, and global climate model validation. The challenges are to bring together very large datasets, reformat and understand the individual instrument retrievals, co-register or re-grid the retrieved physical parameters, perform computationally-intensive data fusion and data mining operations, and accumulate complex statistics over months to years of data. To meet these challenges, we have developed a Grid computing and dataflow framework, named SciFlo, in which we are deploying a set of versatile and reusable operators for data access, subsetting, registration, mining, fusion, compression, and advanced statistical analysis. SciFlo leverages remote Web Services, called via Simple Object Access Protocol (SOAP) or REST (one-line) URLs, and the Grid Computing standards (WS-* & Globus Alliance toolkits), and enables scientists to do multi-instrument Earth Science by assembling reusable Web Services and native executables into a distributed computing flow (tree of operators). The SciFlo client & server engines optimize the execution of such distributed data flows and allow the user to transparently find and use datasets and operators without worrying about the actual location of the Grid resources. In particular, SciFlo exploits the wealth of datasets accessible by OpenGIS Consortium (OGC) Web Mapping Servers & Web Coverage Servers (WMS/WCS), and by Open Data Access Protocol (OpenDAP) servers. The scientist injects a distributed computation into the Grid by simply filling out an HTML form or directly authoring the underlying XML dataflow document, and results are returned directly to the scientist's desktop. Once an analysis has been specified for a chunk or day of data, it can be easily repeated with different control parameters or over months of data. Recently, the Earth Science Information Partners (ESIP) Federation sponsored a collaborative activity in which several ESIP members advertised their respective WMS/WCS and SOAP services, developed some collaborative science scenarios for atmospheric and aerosol science, and then choreographed services from multiple groups into demonstration workflows using the SciFlo engine and a Business Process Execution Language (BPEL) workflow engine. For several scenarios, the same collaborative workflow was executed in three ways: using hand-coded scripts, by executing a SciFlo document, and by executing a BPEL workflow document. We will discuss the lessons learned from this activity, the need for standardized interfaces (like WMS/WCS), the difficulty in agreeing on even simple XML formats and interfaces, and further collaborations that are being pursued.

  10. Master-slave mixed arrays for data-flow computations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chang, T.L.; Fisher, P.D.

    1983-01-01

    Control cells (masters) and computation cells (slaves) are mixed in regular geometric patterns to form reconfigurable arrays known as master-slave mixed arrays (MSMAs). Interconnections of the corners and edges of the hexagonal control cells and the edges of the hexagonal computation cells are used to construct synchronous and asynchronous communication networks, which support local computation and local communication. Data-driven computations result in self-directed ring pipelines within the MSMA, and composite data-flow computations are executed in a pipelined fashion. By viewing an MSMA as a computing network of tightly-linked ring pipelines, data-flow programs can be uniformly distributed over these pipelines for efficient resource utilisation. 9 references.

  11. Software Tool Integrating Data Flow Diagrams and Petri Nets

    NASA Technical Reports Server (NTRS)

    Thronesbery, Carroll; Tavana, Madjid

    2010-01-01

    Data Flow Diagram - Petri Net (DFPN) is a software tool for analyzing other software to be developed. The full name of this program reflects its design, which combines the benefit of data-flow diagrams (which are typically favored by software analysts) with the power and precision of Petri-net models, without requiring specialized Petri-net training. (A Petri net is a particular type of directed graph, a description of which would exceed the scope of this article.) DFPN assists a software analyst in drawing and specifying a data-flow diagram, then translates the diagram into a Petri net, then enables graphical tracing of execution paths through the Petri net for verification, by the end user, of the properties of the software to be developed. In comparison with prior means of verifying the properties of software to be developed, DFPN makes verification by the end user more nearly certain, thereby making it easier to identify and correct misconceptions earlier in the development process, when correction is less expensive. After the verification by the end user, DFPN generates a printable system specification in the form of descriptions of processes and data.
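
    To illustrate the Petri-net side of the tool described above, the sketch below encodes a toy net (a two-step order process, standing in for a translated data-flow diagram) and traces which transitions fire from an initial marking. The net, place names, and transition names are invented; the DFPN tool itself is not reproduced.

      # Minimal Petri-net firing and execution trace over an invented net.
      def enabled(marking, transition):
          return all(marking.get(p, 0) >= n for p, n in transition["in"].items())

      def fire(marking, transition):
          m = dict(marking)
          for p, n in transition["in"].items():
              m[p] -= n
          for p, n in transition["out"].items():
              m[p] = m.get(p, 0) + n
          return m

      transitions = {
          "validate": {"in": {"order_received": 1}, "out": {"order_valid": 1}},
          "fulfill":  {"in": {"order_valid": 1, "stock": 1}, "out": {"shipped": 1}},
      }

      marking = {"order_received": 1, "stock": 1}
      trace = []
      changed = True
      while changed:
          changed = False
          for name, t in transitions.items():
              if enabled(marking, t):
                  marking = fire(marking, t)
                  trace.append(name)
                  changed = True

      print("fired:", " -> ".join(trace))
      print("final marking:", marking)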

  12. Sequencing and fan-out mechanism for causing a set of at least two sequential instructions to be performed in a dataflow processing computer

    DOEpatents

    Grafe, Victor G.; Hoch, James E.

    1993-01-01

    A sequencing and data fanout mechanism is provided for a dataflow processor. The mechanism is activated by an input token, which causes a sequence of operations to occur by initiating a first instruction to act on data contained within the token and then executing a sequential thread of instructions identified either by a repeat count and an offset within the token, or by an offset within each preceding instruction.
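
    The mechanism can be paraphrased in a few lines of Python: an arriving token initiates a first instruction and then a sequential thread of further instructions selected by a repeat count and an offset carried in the token. The instruction store and token fields below are illustrative assumptions, not the patented hardware design.

      # Token-driven sequencing sketch: a token carries a first address, a
      # repeat count, and an offset that together identify a thread of
      # instructions to execute on the token's data.
      instruction_store = [
          lambda x: x + 1,      # address 0
          lambda x: x * 2,      # address 1
          lambda x: x - 3,      # address 2
          lambda x: x ** 2,     # address 3
      ]

      def process_token(token):
          """token: (first_address, repeat_count, offset, data)"""
          first, repeat, offset, data = token
          address = first
          for _ in range(repeat + 1):           # first instruction plus `repeat` more
              data = instruction_store[address](data)
              address += offset                 # next instruction in the thread
          return data

      # A token that runs instructions 0, 1, 2 in sequence on the value 5.
      print(process_token((0, 2, 1, 5)))        # ((5 + 1) * 2) - 3 = 9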

  13. A federated semantic metadata registry framework for enabling interoperability across clinical research and care domains.

    PubMed

    Sinaci, A Anil; Laleci Erturkmen, Gokce B

    2013-10-01

    In order to enable secondary use of Electronic Health Records (EHRs) by bridging the interoperability gap between clinical care and research domains, in this paper, a unified methodology and the supporting framework are introduced which bring together the power of metadata registries (MDR) and semantic web technologies. We introduce a federated semantic metadata registry framework by extending the ISO/IEC 11179 standard, and enable integration of data element registries through Linked Open Data (LOD) principles where each Common Data Element (CDE) can be uniquely referenced, queried and processed to enable syntactic and semantic interoperability. Each CDE and its components are maintained as LOD resources enabling semantic links with other CDEs, terminology systems and with implementation dependent content models; hence facilitating semantic search, more effective reuse and semantic interoperability across different application domains. There are several important efforts addressing semantic interoperability in the healthcare domain, such as the IHE DEX profile proposal, CDISC SHARE and CDISC2RDF. Our architecture complements these by providing a framework to interlink existing data element registries and repositories for multiplying their potential for semantic interoperability to a greater extent. The open source implementation of the federated semantic MDR framework presented in this paper is the core of the semantic interoperability layer of the SALUS project, which enables the execution of post marketing safety analysis studies on top of existing EHR systems. Copyright © 2013 Elsevier Inc. All rights reserved.

  14. Multiverse data-flow control.

    PubMed

    Schindler, Benjamin; Waser, Jürgen; Ribičić, Hrvoje; Fuchs, Raphael; Peikert, Ronald

    2013-06-01

    In this paper, we present a data-flow system which supports comparative analysis of time-dependent data and interactive simulation steering. The system creates data on-the-fly to allow for the exploration of different parameters and the investigation of multiple scenarios. Existing data-flow architectures provide no generic approach to handle modules that perform complex temporal processing such as particle tracing or statistical analysis over time. Moreover, there is no solution to create and manage module data, which is associated with alternative scenarios. Our solution is based on generic data-flow algorithms to automate this process, enabling elaborate data-flow procedures, such as simulation, temporal integration or data aggregation over many time steps in many worlds. To hide the complexity from the user, we extend the World Lines interaction techniques to control the novel data-flow architecture. The concept of multiple, special-purpose cursors is introduced to let users intuitively navigate through time and alternative scenarios. Users specify only what they want to see, the decision which data are required is handled automatically. The concepts are explained by taking the example of the simulation and analysis of material transport in levee-breach scenarios. To strengthen the general applicability, we demonstrate the investigation of vortices in an offline-simulated dam-break data set.

  15. Data-Flow Based Model Analysis

    NASA Technical Reports Server (NTRS)

    Saad, Christian; Bauer, Bernhard

    2010-01-01

    The concept of (meta) modeling combines an intuitive way of formalizing the structure of an application domain with a high expressiveness that makes it suitable for a wide variety of use cases and has therefore become an integral part of many areas in computer science. While the definition of modeling languages through the use of meta models, e.g. in the Unified Modeling Language (UML), is a well-understood process, their validation and the extraction of behavioral information is still a challenge. In this paper we present a novel approach for dynamic model analysis along with several fields of application. Examining the propagation of information along the edges and nodes of the model graph makes it possible to extend and simplify the definition of semantic constraints in comparison to the capabilities offered by, e.g., the Object Constraint Language. Performing a flow-based analysis also enables the simulation of dynamic behavior, thus providing an "abstract interpretation"-like analysis method for the modeling domain.
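
    The edge-wise propagation described above is essentially a worklist fixed-point computation. The sketch below propagates invented "facts" along the edges of a small graph until nothing changes; a real model analysis would plug modelling-specific transfer and join functions into the same loop.

      # Worklist fixed-point propagation over a small invented model graph.
      from collections import deque

      edges = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}   # node -> successors
      generated = {"A": {"a"}, "B": {"b"}, "C": {"c"}, "D": set()} # local facts per node

      def propagate(edges, generated):
          facts = {n: set(generated[n]) for n in edges}
          worklist = deque(edges)
          while worklist:
              node = worklist.popleft()
              for succ in edges[node]:
                  # Join: union of what the successor had and what flows in.
                  updated = facts[succ] | facts[node]
                  if updated != facts[succ]:
                      facts[succ] = updated
                      worklist.append(succ)      # re-process until nothing changes
          return facts

      for node, f in propagate(edges, generated).items():
          print(node, sorted(f))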

  16. A lightweight, flow-based toolkit for parallel and distributed bioinformatics pipelines

    PubMed Central

    2011-01-01

    Background Bioinformatic analyses typically proceed as chains of data-processing tasks. A pipeline, or 'workflow', is a well-defined protocol, with a specific structure defined by the topology of data-flow interdependencies, and a particular functionality arising from the data transformations applied at each step. In computer science, the dataflow programming (DFP) paradigm defines software systems constructed in this manner, as networks of message-passing components. Thus, bioinformatic workflows can be naturally mapped onto DFP concepts. Results To enable the flexible creation and execution of bioinformatics dataflows, we have written a modular framework for parallel pipelines in Python ('PaPy'). A PaPy workflow is created from re-usable components connected by data-pipes into a directed acyclic graph, which together define nested higher-order map functions. The successive functional transformations of input data are evaluated on flexibly pooled compute resources, either local or remote. Input items are processed in batches of adjustable size, allowing one to tune the trade-off between parallelism and lazy-evaluation (memory consumption). An add-on module ('NuBio') facilitates the creation of bioinformatics workflows by providing domain specific data-containers (e.g., for biomolecular sequences, alignments, structures) and functionality (e.g., to parse/write standard file formats). Conclusions PaPy offers a modular framework for the creation and deployment of parallel and distributed data-processing workflows. Pipelines derive their functionality from user-written, data-coupled components, so PaPy also can be viewed as a lightweight toolkit for extensible, flow-based bioinformatics data-processing. The simplicity and flexibility of distributed PaPy pipelines may help users bridge the gap between traditional desktop/workstation and grid computing. PaPy is freely distributed as open-source Python code at http://muralab.org/PaPy, and includes extensive documentation and annotated usage examples. PMID:21352538
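
    As a rough, hypothetical illustration of the flow-based idea (and not the actual PaPy API, which is available at the URL above), the sketch below chains two user-written, data-coupled components into a pipeline and maps it over a worker pool in batches.

      # Flow-based pipeline sketch: data-coupled components composed and mapped
      # over pooled workers in batches. Component names and data are invented.
      from multiprocessing import Pool

      def parse_record(line):
          name, seq = line.split(",")
          return {"name": name, "seq": seq}

      def gc_content(record):
          seq = record["seq"].upper()
          record["gc"] = (seq.count("G") + seq.count("C")) / len(seq)
          return record

      def pipeline(line):
          # Composing components corresponds to a data-pipe in the workflow graph.
          return gc_content(parse_record(line))

      if __name__ == "__main__":
          lines = ["seq1,ACGTGC", "seq2,GGGCCC", "seq3,ATATAT"]
          with Pool(processes=2) as pool:
              # chunksize plays the role of the adjustable batch size mentioned above.
              for result in pool.imap(pipeline, lines, chunksize=2):
                  print(result["name"], round(result["gc"], 2))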

  17. A lightweight, flow-based toolkit for parallel and distributed bioinformatics pipelines.

    PubMed

    Cieślik, Marcin; Mura, Cameron

    2011-02-25

    Bioinformatic analyses typically proceed as chains of data-processing tasks. A pipeline, or 'workflow', is a well-defined protocol, with a specific structure defined by the topology of data-flow interdependencies, and a particular functionality arising from the data transformations applied at each step. In computer science, the dataflow programming (DFP) paradigm defines software systems constructed in this manner, as networks of message-passing components. Thus, bioinformatic workflows can be naturally mapped onto DFP concepts. To enable the flexible creation and execution of bioinformatics dataflows, we have written a modular framework for parallel pipelines in Python ('PaPy'). A PaPy workflow is created from re-usable components connected by data-pipes into a directed acyclic graph, which together define nested higher-order map functions. The successive functional transformations of input data are evaluated on flexibly pooled compute resources, either local or remote. Input items are processed in batches of adjustable size, allowing one to tune the trade-off between parallelism and lazy-evaluation (memory consumption). An add-on module ('NuBio') facilitates the creation of bioinformatics workflows by providing domain specific data-containers (e.g., for biomolecular sequences, alignments, structures) and functionality (e.g., to parse/write standard file formats). PaPy offers a modular framework for the creation and deployment of parallel and distributed data-processing workflows. Pipelines derive their functionality from user-written, data-coupled components, so PaPy also can be viewed as a lightweight toolkit for extensible, flow-based bioinformatics data-processing. The simplicity and flexibility of distributed PaPy pipelines may help users bridge the gap between traditional desktop/workstation and grid computing. PaPy is freely distributed as open-source Python code at http://muralab.org/PaPy, and includes extensive documentation and annotated usage examples.

  18. Design tool for multiprocessor scheduling and evaluation of iterative dataflow algorithms

    NASA Technical Reports Server (NTRS)

    Jones, Robert L., III

    1995-01-01

    A graph-theoretic design process and software tool is defined for selecting a multiprocessing scheduling solution for a class of computational problems. The problems of interest are those that can be described with a dataflow graph and are intended to be executed repetitively on a set of identical processors. Typical applications include signal processing and control law problems. Graph-search algorithms and analysis techniques are introduced and shown to effectively determine performance bounds, scheduling constraints, and resource requirements. The software tool applies the design process to a given problem and includes performance optimization through the inclusion of additional precedence constraints among the schedulable tasks.

  19. Common spaceborne multicomputer operating system and development environment

    NASA Technical Reports Server (NTRS)

    Craymer, L. G.; Lewis, B. F.; Hayes, P. J.; Jones, R. L.

    1994-01-01

    A preliminary technical specification for a multicomputer operating system is developed. The operating system is targeted for spaceborne flight missions and provides a broad range of real-time functionality, dynamic remote code-patching capability, and system fault tolerance and long-term survivability features. Dataflow concepts are used for representing application algorithms. Functional features are included to ensure real-time predictability for a class of algorithms which require data-driven execution on an iterative steady state basis. The development environment supports the development of algorithm code, design of control parameters, performance analysis, simulation of real-time dataflow applications, and compiling and downloading of the resulting application.

  20. Rapid Prototyping of High Performance Signal Processing Applications

    NASA Astrophysics Data System (ADS)

    Sane, Nimish

    Advances in embedded systems for digital signal processing (DSP) are enabling many scientific projects and commercial applications. At the same time, these applications are key to driving advances in many important kinds of computing platforms. In this region of high performance DSP, rapid prototyping is critical for faster time-to-market (e.g., in the wireless communications industry) or time-to-science (e.g., in radio astronomy). DSP system architectures have evolved from being based on application specific integrated circuits (ASICs) to incorporate reconfigurable off-the-shelf field programmable gate arrays (FPGAs), the latest multiprocessors such as graphics processing units (GPUs), or heterogeneous combinations of such devices. We, thus, have a vast design space to explore based on performance trade-offs, and expanded by the multitude of possibilities for target platforms. In order to allow systematic design space exploration, and develop scalable and portable prototypes, model based design tools are increasingly used in design and implementation of embedded systems. These tools allow scalable high-level representations, model based semantics for analysis and optimization, and portable implementations that can be verified at higher levels of abstractions and targeted toward multiple platforms for implementation. The designer can experiment using such tools at an early stage in the design cycle, and employ the latest hardware at later stages. In this thesis, we have focused on dataflow-based approaches for rapid DSP system prototyping. This thesis contributes to various aspects of dataflow-based design flows and tools as follows: 1. We have introduced the concept of topological patterns, which exploits commonly found repetitive patterns in DSP algorithms to allow scalable, concise, and parameterizable representations of large scale dataflow graphs in high-level languages. We have shown how an underlying design tool can systematically exploit a high-level application specification consisting of topological patterns in various aspects of the design flow. 2. We have formulated the core functional dataflow (CFDF) model of computation, which can be used to model a wide variety of deterministic dynamic dataflow behaviors. We have also presented key features of the CFDF model and tools based on these features. These tools provide support for heterogeneous dataflow behaviors, an intuitive and common framework for functional specification, support for functional simulation, portability from several existing dataflow models to CFDF, integrated emphasis on minimally-restricted specification of actor functionality, and support for efficient static, quasi-static, and dynamic scheduling techniques. 3. We have developed a generalized scheduling technique for CFDF graphs based on decomposition of a CFDF graph into static graphs that interact at run-time. Furthermore, we have refined this generalized scheduling technique using a new notion of "mode grouping," which better exposes the underlying static behavior. We have also developed a scheduling technique for a class of dynamic applications that generates parameterized looped schedules (PLSs), which can handle dynamic dataflow behavior without major limitations on compile-time predictability. 4. We have demonstrated the use of dataflow-based approaches for design and implementation of radio astronomy DSP systems using an application example of a tunable digital downconverter (TDD) for spectrometers. 
Design and implementation of this module has been an integral part of this thesis work. This thesis demonstrates a design flow that consists of a high-level software prototype, analysis, and simulation using the dataflow interchange format (DIF) tool, and integration of this design with the existing tool flow for the target implementation on an FPGA platform, called interconnect break-out board (IBOB). We have also explored the trade-off between low hardware cost for fixed configurations of digital downconverters and flexibility offered by TDD designs. 5. This thesis has contributed significantly to the development and release of the latest version of a graph package oriented toward models of computation (MoCGraph). Our enhancements to this package include support for tree data structures, and generalized schedule trees (GSTs), which provide a useful data structure for a wide variety of schedule representations. Our extensions to the MoCGraph package provided key support for the CFDF model, and functional simulation capabilities in the DIF package.
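
    The enable/invoke, mode-based semantics summarized above can be illustrated with a toy actor in plain Python: each mode declares how many tokens it consumes, an enable check tests token availability, and invoke consumes tokens and selects the next mode. This is only a sketch of the general CFDF idea, not the DIF tool, and it simplifies the model by handling token production inside invoke.

      # Toy CFDF-style switch actor: routes each data token to out0 or out1
      # according to a control token, cycling through modes.
      class SwitchActor:
          # mode -> tokens consumed per input port.
          rates = {
              "read_control": {"control": 1, "data": 0},
              "route0":       {"control": 0, "data": 1},
              "route1":       {"control": 0, "data": 1},
          }

          def __init__(self):
              self.mode = "read_control"

          def enable(self, inputs):
              return all(len(inputs[p]) >= n for p, n in self.rates[self.mode].items())

          def invoke(self, inputs, outputs):
              if self.mode == "read_control":
                  self.mode = "route0" if inputs["control"].pop(0) == 0 else "route1"
              elif self.mode == "route0":
                  outputs["out0"].append(inputs["data"].pop(0))
                  self.mode = "read_control"
              else:
                  outputs["out1"].append(inputs["data"].pop(0))
                  self.mode = "read_control"

      inputs = {"control": [0, 1], "data": ["x", "y"]}
      outputs = {"out0": [], "out1": []}
      actor = SwitchActor()
      while actor.enable(inputs):
          actor.invoke(inputs, outputs)
      print(outputs)   # {'out0': ['x'], 'out1': ['y']}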

  1. Relations between Short-term Memory Deficits, Semantic Processing, and Executive Function

    PubMed Central

    Allen, Corinne M.; Martin, Randi C.; Martin, Nadine

    2012-01-01

    Background Previous research has suggested separable short-term memory (STM) buffers for the maintenance of phonological and lexical-semantic information, as some patients with aphasia show better ability to retain semantic than phonological information and others show the reverse. Recently, researchers have proposed that deficits to the maintenance of semantic information in STM are related to executive control abilities. Aims The present study investigated the relationship of executive function abilities with semantic and phonological short-term memory (STM) and semantic processing in such patients, as some previous research has suggested that semantic STM deficits and semantic processing abilities are critically related to specific or general executive function deficits. Method and Procedures 20 patients with aphasia and STM deficits were tested on measures of short-term retention, semantic processing, and both complex and simple executive function tasks. Outcome and Results In correlational analyses, we found no relation between semantic STM and performance on simple or complex executive function tasks. In contrast, phonological STM was related to executive function performance in tasks that had a verbal component, suggesting that performance in some executive function tasks depends on maintaining or rehearsing phonological codes. Although semantic STM was not related to executive function ability, performance on semantic processing tasks was related to executive function, perhaps due to similar executive task requirements in both semantic processing and executive function tasks. Conclusions Implications for treatment and interpretations of executive deficits are discussed. PMID:22736889

  2. Molecular simulation workflows as parallel algorithms: the execution engine of Copernicus, a distributed high-performance computing platform.

    PubMed

    Pronk, Sander; Pouya, Iman; Lundborg, Magnus; Rotskoff, Grant; Wesén, Björn; Kasson, Peter M; Lindahl, Erik

    2015-06-09

    Computational chemistry and other simulation fields are critically dependent on computing resources, but few problems scale efficiently to the hundreds of thousands of processors available in current supercomputers, particularly for molecular dynamics. This has turned into a bottleneck as new hardware generations primarily provide more processing units rather than making individual units much faster, which simulation applications are addressing by increasingly focusing on sampling with algorithms such as free-energy perturbation, Markov state modeling, metadynamics, or milestoning. All these rely on combining results from multiple simulations into a single observation. They are potentially powerful approaches that aim to predict experimental observables directly, but this comes at the expense of added complexity in selecting sampling strategies and keeping track of dozens to thousands of simulations and their dependencies. Here, we describe how the distributed execution framework Copernicus allows the expression of such algorithms in generic workflows: dataflow programs. Because dataflow algorithms explicitly state dependencies of each constituent part, algorithms only need to be described on a conceptual level, after which the execution is maximally parallel. The fully automated execution facilitates the optimization of these algorithms with adaptive sampling, where undersampled regions are automatically detected and targeted without user intervention. We show how several such algorithms can be formulated for computational chemistry problems, and how they are executed efficiently with many loosely coupled simulations using either distributed or parallel resources with Copernicus.
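
    The adaptive-sampling pattern described above can be sketched as a dataflow in which an ensemble of independent runs feeds an analysis step that decides where to sample next. The "simulation" below is a stand-in random walk and the undersampling test is invented; the real Copernicus system distributes such tasks over remote workers rather than a local process pool.

      # Adaptive sampling as a dataflow: ensemble -> analysis -> next ensemble.
      import random
      from concurrent.futures import ProcessPoolExecutor

      def short_simulation(start):
          """Stand-in for an MD run: a short 1-D random walk from `start`."""
          x = start
          for _ in range(100):
              x += random.uniform(-1, 1)
          return x

      def undersampled(endpoints, threshold=5.0):
          """Pick endpoints far from the origin as 'regions needing more sampling'."""
          return [x for x in endpoints if abs(x) > threshold]

      if __name__ == "__main__":
          starts = [0.0] * 8
          with ProcessPoolExecutor() as pool:
              for generation in range(3):
                  endpoints = list(pool.map(short_simulation, starts))
                  targets = undersampled(endpoints)
                  print(f"generation {generation}: {len(targets)} regions re-sampled")
                  # Dataflow dependency: the next ensemble is defined by the analysis.
                  starts = targets or [0.0] * 8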

  3. Building Software Agents for Planning, Monitoring, and Optimizing Travel

    DTIC Science & Technology

    2004-01-01

    defined as plans in the Theseus Agent Execution language (Barish et al. 2002). In the Web environment, sources can be quite slow and the latencies of...executor is based on a dataflow paradigm, actions are executed as soon as the data becomes available. Second, Theseus performs the actions in a...while Theseus provides an expressive language for defining information gathering and monitoring plans. The Theseus language supports capabilities

  4. Performance analysis of a large-grain dataflow scheduling paradigm

    NASA Technical Reports Server (NTRS)

    Young, Steven D.; Wills, Robert W.

    1993-01-01

    A paradigm for scheduling computations on a network of multiprocessors using large-grain data flow scheduling at run time is described and analyzed. The computations to be scheduled must follow a static flow graph, while the schedule itself will be dynamic (i.e., determined at run time). Many applications characterized by static flow exist, and they include real-time control and digital signal processing. With the advent of computer-aided software engineering (CASE) tools for capturing software designs in dataflow-like structures, macro-dataflow scheduling becomes increasingly attractive, if not necessary. For parallel implementations, using the macro-dataflow method allows the scheduling to be insulated from the application designer and enables the maximum utilization of available resources. Further, by allowing multitasking, processor utilizations can approach 100 percent while they maintain maximum speedup. Extensive simulation studies are performed on 4-, 8-, and 16-processor architectures that reflect the effects of communication delays, scheduling delays, algorithm class, and multitasking on performance and speedup gains.

  5. Acceleration of Image Segmentation Algorithm for (Breast) Mammogram Images Using High-Performance Reconfigurable Dataflow Computers

    PubMed Central

    Filipovic, Nenad D.

    2017-01-01

    Image segmentation is one of the most common procedures in medical imaging applications. It is also a very important task in breast cancer detection. Breast cancer detection procedure based on mammography can be divided into several stages. The first stage is the extraction of the region of interest from a breast image, followed by the identification of suspicious mass regions, their classification, and comparison with the existing image database. It is often the case that already existing image databases have large sets of data whose processing requires a lot of time, and thus the acceleration of each of the processing stages in breast cancer detection is a very important issue. In this paper, the implementation of the already existing algorithm for region-of-interest based image segmentation for mammogram images on High-Performance Reconfigurable Dataflow Computers (HPRDCs) is proposed. As a dataflow engine (DFE) of such HPRDC, Maxeler's acceleration card is used. The experiments for examining the acceleration of that algorithm on the Reconfigurable Dataflow Computers (RDCs) are performed with two types of mammogram images with different resolutions. There were, also, several DFE configurations and each of them gave a different acceleration value of algorithm execution. Those acceleration values are presented and experimental results showed good acceleration. PMID:28611851

  6. Acceleration of Image Segmentation Algorithm for (Breast) Mammogram Images Using High-Performance Reconfigurable Dataflow Computers.

    PubMed

    Milankovic, Ivan L; Mijailovic, Nikola V; Filipovic, Nenad D; Peulic, Aleksandar S

    2017-01-01

    Image segmentation is one of the most common procedures in medical imaging applications. It is also a very important task in breast cancer detection. Breast cancer detection procedure based on mammography can be divided into several stages. The first stage is the extraction of the region of interest from a breast image, followed by the identification of suspicious mass regions, their classification, and comparison with the existing image database. It is often the case that already existing image databases have large sets of data whose processing requires a lot of time, and thus the acceleration of each of the processing stages in breast cancer detection is a very important issue. In this paper, the implementation of the already existing algorithm for region-of-interest based image segmentation for mammogram images on High-Performance Reconfigurable Dataflow Computers (HPRDCs) is proposed. As a dataflow engine (DFE) of such HPRDC, Maxeler's acceleration card is used. The experiments for examining the acceleration of that algorithm on the Reconfigurable Dataflow Computers (RDCs) are performed with two types of mammogram images with different resolutions. There were, also, several DFE configurations and each of them gave a different acceleration value of algorithm execution. Those acceleration values are presented and experimental results showed good acceleration.

  7. Scalable and Accurate SMT-based Model Checking of Data Flow Systems

    DTIC Science & Technology

    2013-10-30

    guided by the semantics of the description language. In this project we developed instead a complementary and novel approach based on a somewhat brute...believe that our approach could help considerably in expanding the reach of abstract interpretation techniques to a variety of target languages, as...project. We worked on developing a framework for compositional verification that capitalizes on the fact that data-flow languages, such as Lustre, have

  8. Addressing Modeling Challenges in Cyber-Physical Systems

    DTIC Science & Technology

    2011-03-04

    A. Lee and Eleftherios Matsikoudis. The semantics of dataflow with firing. In Gérard Huet, Gordon Plotkin, Jean-Jacques Lévy, and Yves Bertot...Computer-Aided Design of Integrated Circuits and Systems, 20(3), 2001. [12] Luca P. Carloni, Roberto Passerone, Alessandro Pinto, and Alberto Sangiovanni...gst/fullpage.html?res=9504EFDA1738F933A2575AC0A9679C8B63. 20 [15] Abhijit Davare, Douglas Densmore, Trevor Meyerowitz, Alessandro Pinto, Alberto

  9. Ubiquitous Computing Services Discovery and Execution Using a Novel Intelligent Web Services Algorithm

    PubMed Central

    Choi, Okkyung; Han, SangYong

    2007-01-01

    Ubiquitous Computing makes it possible to determine in real time the location and situations of service requesters in a web service environment as it enables access to computers at any time and in any place. Though research on various aspects of ubiquitous commerce is progressing at enterprises and research centers, both domestically and overseas, analysis of a customer's personal preferences based on semantic web and rule-based services using semantics is not currently being conducted. This paper proposes a Ubiquitous Computing Services System that enables rule-based search as well as semantics-based search, so that the electronic space and the physical space can be combined into one, making possible real-time search for web services and the construction of efficient web services.

  10. The contribution of executive control to semantic cognition: Convergent evidence from semantic aphasia and executive dysfunction.

    PubMed

    Thompson, Hannah E; Almaghyuli, Azizah; Noonan, Krist A; Barak, Ohr; Lambon Ralph, Matthew A; Jefferies, Elizabeth

    2018-01-03

    Semantic cognition, as described by the controlled semantic cognition (CSC) framework (Rogers et al., Neuropsychologia, 76, 220), involves two key components: activation of coherent, generalizable concepts within a heteromodal 'hub' in combination with modality-specific features (spokes), and a constraining mechanism that manipulates and gates this knowledge to generate time- and task-appropriate behaviour. Executive-semantic goal representations, largely supported by executive regions such as frontal and parietal cortex, are thought to allow the generation of non-dominant aspects of knowledge when these are appropriate for the task or context. Semantic aphasia (SA) patients have executive-semantic deficits, and these are correlated with general executive impairment. If the CSC proposal is correct, patients with executive impairment should not only exhibit impaired semantic cognition, but should also show characteristics that align with those observed in SA. This possibility remains largely untested, as patients selected on the basis that they show executive impairment (i.e., with 'dysexecutive syndrome') have not been extensively tested on tasks tapping semantic control and have not been previously compared with SA cases. We explored conceptual processing in 12 patients showing symptoms consistent with dysexecutive syndrome (DYS) and 24 SA patients, using a range of multimodal semantic assessments which manipulated control demands. Patients with executive impairments, despite not being selected to show semantic impairments, nevertheless showed parallel patterns to SA cases. They showed strong effects of distractor strength, cues and miscues, and probe-target distance, plus minimal effects of word frequency on comprehension (unlike semantic dementia patients with degradation of conceptual knowledge). This supports a component process account of semantic cognition in which retrieval is shaped by control processes, and confirms that deficits in SA patients reflect difficulty controlling semantic retrieval. © 2018 The Authors. Journal of Neuropsychology published by John Wiley & Sons Ltd on behalf of British Psychological Society.

  11. Cross border semantic interoperability for clinical research: the EHR4CR semantic resources and services

    PubMed Central

    Daniel, Christel; Ouagne, David; Sadou, Eric; Forsberg, Kerstin; Gilchrist, Mark Mc; Zapletal, Eric; Paris, Nicolas; Hussain, Sajjad; Jaulent, Marie-Christine; MD, Dipka Kalra

    2016-01-01

    With the development of platforms enabling the use of routinely collected clinical data in the context of international clinical research, scalable solutions for cross border semantic interoperability need to be developed. Within the context of the IMI EHR4CR project, we first defined the requirements and evaluation criteria of the EHR4CR semantic interoperability platform and then developed the semantic resources and supportive services and tooling to assist hospital sites in standardizing their data for allowing the execution of the project use cases. The experience gained from the evaluation of the EHR4CR platform accessing to semantically equivalent data elements across 11 European participating EHR systems from 5 countries demonstrated how far the mediation model and mapping efforts met the expected requirements of the project. Developers of semantic interoperability platforms are beginning to address a core set of requirements in order to reach the goal of developing cross border semantic integration of data. PMID:27570649

  12. Software Epistemology

    DTIC Science & Technology

    2016-03-01

    in-vitro decision to incubate a startup, Lexumo [7], which is developing a commercial Software as a Service (SaaS) vulnerability assessment...LTS Label Transition System MUSE Mining and Understanding Software Enclaves RTEMS Real-Time Executive for Multi-processor Systems SaaS Software as a Service SSA Static Single Assignment SWE Software Epistemology UD/DU Def-Use/Use-Def Chains (Dataflow Graph)

  13. Lazy evaluation of FP programs: A data-flow approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wei, Y.H.; Gaudiot, J.L.

    1988-12-31

    This paper presents a lazy evaluation system for the list-based functional language, Backus' FP, in a data-driven environment. A superset language of FP, called DFP (Demand-driven FP), is introduced. FP eager programs are transformed into DFP lazy programs which contain the notion of demands. The data-driven execution of DFP programs has the same effect as lazy evaluation. DFP lazy programs have the property of always evaluating a sufficient and necessary result. The infinite sequence generator is used to demonstrate the eager-lazy program transformation and the execution of the lazy programs.
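
    The FP-to-DFP transformation itself is not reproduced here, but the effect it aims for can be shown by analogy with Python generators: an infinite sequence generator whose elements are produced only on demand, so the consumer evaluates exactly the sufficient and necessary portion.

      # Demand-driven analogy: nothing in the infinite stream is computed until
      # a downstream consumer demands it.
      from itertools import count, islice

      def naturals():
          """Infinite sequence generator: 1, 2, 3, ..."""
          return count(1)

      def squares(stream):
          for n in stream:
              yield n * n          # produced only when a demand arrives

      # Demand exactly five results from the conceptually infinite dataflow.
      print(list(islice(squares(naturals()), 5)))   # [1, 4, 9, 16, 25]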

  14. Highlights of X-Stack ExM Deliverable Swift/T

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wozniak, Justin M.

    Swift/T is a key success from the ExM (System support for extreme-scale, many-task applications) X-Stack project, which proposed to use concurrent dataflow as an innovative programming model to exploit extreme parallelism in exascale computers. The Swift/T component of the project reimplemented the Swift language from scratch to allow applications that compose scientific modules together to be built and run on available petascale computers (Blue Gene, Cray). Swift/T does this via a new compiler and runtime that generates and executes the application as an MPI program. We assume that mission-critical emerging exascale applications will be composed as scalable applications using existing software components, connected by data dependencies. Developers wrap native code fragments using a higher-level language, then build composite applications to form a computational experiment. This exemplifies hierarchical concurrency: lower-level messaging libraries are used for fine-grained parallelism; high-level control is used for inter-task coordination. These patterns are best expressed with dataflow, but static DAGs (i.e., other workflow languages) limit the applications that can be built; they do not provide the expressiveness of Swift, such as conditional execution, iteration, and recursive functions.

  15. Visualizing Dataflow Graphs of Deep Learning Models in TensorFlow.

    PubMed

    Wongsuphasawat, Kanit; Smilkov, Daniel; Wexler, James; Wilson, Jimbo; Mane, Dandelion; Fritz, Doug; Krishnan, Dilip; Viegas, Fernanda B; Wattenberg, Martin

    2018-01-01

    We present a design study of the TensorFlow Graph Visualizer, part of the TensorFlow machine intelligence platform. This tool helps users understand complex machine learning architectures by visualizing their underlying dataflow graphs. The tool works by applying a series of graph transformations that enable standard layout techniques to produce a legible interactive diagram. To declutter the graph, we decouple non-critical nodes from the layout. To provide an overview, we build a clustered graph using the hierarchical structure annotated in the source code. To support exploration of nested structure on demand, we perform edge bundling to enable stable and responsive cluster expansion. Finally, we detect and highlight repeated structures to emphasize a model's modular composition. To demonstrate the utility of the visualizer, we describe example usage scenarios and report user feedback. Overall, users find the visualizer useful for understanding, debugging, and sharing the structures of their models.
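
    A toy version of the overview step described above (building a clustered graph from hierarchical names) can be written directly: group operation names by their name-scope prefix and count the edges between the resulting clusters. The node and edge names below are invented; the real visualizer operates on full TensorFlow graph definitions.

      # Cluster a flat operation graph by hierarchical name scopes and count
      # the edges that cross cluster boundaries.
      from collections import Counter

      edges = [
          ("layer1/conv/weights", "layer1/conv/matmul"),
          ("layer1/conv/matmul", "layer1/relu"),
          ("layer1/relu", "layer2/conv/matmul"),
          ("layer2/conv/weights", "layer2/conv/matmul"),
      ]

      def scope(name, depth=1):
          return "/".join(name.split("/")[:depth])

      cluster_edges = Counter(
          (scope(src), scope(dst))
          for src, dst in edges
          if scope(src) != scope(dst)            # keep only edges between clusters
      )
      print(cluster_edges)   # Counter({('layer1', 'layer2'): 1})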

  16. SemanticSCo: A platform to support the semantic composition of services for gene expression analysis.

    PubMed

    Guardia, Gabriela D A; Ferreira Pires, Luís; da Silva, Eduardo G; de Farias, Cléver R G

    2017-02-01

    Gene expression studies often require the combined use of a number of analysis tools. However, manual integration of analysis tools can be cumbersome and error prone. To support a higher level of automation in the integration process, efforts have been made in the biomedical domain towards the development of semantic web services and supporting composition environments. Yet, most environments consider only the execution of simple service behaviours and require users to focus on technical details of the composition process. We propose a novel approach to the semantic composition of gene expression analysis services that addresses the shortcomings of the existing solutions. Our approach includes an architecture designed to support the service composition process for gene expression analysis, and a flexible strategy for the (semi) automatic composition of semantic web services. Finally, we implement a supporting platform called SemanticSCo to realize the proposed composition approach and demonstrate its functionality by successfully reproducing a microarray study documented in the literature. The SemanticSCo platform provides support for the composition of RESTful web services semantically annotated using SAWSDL. Our platform also supports the definition of constraints/conditions regarding the order in which service operations should be invoked, thus enabling the definition of complex service behaviours. Our proposed solution for semantic web service composition takes into account the requirements of different stakeholders and addresses all phases of the service composition process. It also provides support for the definition of analysis workflows at a high level of abstraction, thus enabling users to focus on biological research issues rather than on the technical details of the composition process. The SemanticSCo source code is available at https://github.com/usplssb/SemanticSCo. Copyright © 2017 Elsevier Inc. All rights reserved.

  17. ATAMM enhancement and multiprocessing performance evaluation

    NASA Technical Reports Server (NTRS)

    Stoughton, John W.

    1994-01-01

    The algorithm to architecture mapping model (ATAMM) is a Petri net based model which provides a strategy for periodic execution of a class of real-time algorithms on a multicomputer dataflow architecture. The execution of large-grained, decision-free algorithms on homogeneous processing elements is studied. The ATAMM provides an analytical basis for calculating performance bounds on throughput characteristics. Extension of the ATAMM as a strategy for cyclo-static scheduling provides for a truly distributed ATAMM multicomputer operating system. An ATAMM testbed consisting of a centralized graph manager and three processors is described using embedded firmware on 68HC11 microcontrollers.

  18. Using Semantic Web Technologies for Cohort Identification from Electronic Health Records for Clinical Research

    PubMed Central

    Pathak, Jyotishman; Kiefer, Richard C.; Chute, Christopher G.

    2012-01-01

    The ability to conduct genome-wide association studies (GWAS) has enabled new exploration of how genetic variations contribute to health and disease etiology. One of the key requirements to perform GWAS is the identification of subject cohorts with accurate classification of disease phenotypes. In this work, we study how emerging Semantic Web technologies can be applied in conjunction with clinical data stored in electronic health records (EHRs) to accurately identify subjects with specific diseases for inclusion in cohort studies. In particular, we demonstrate the role of using Resource Description Framework (RDF) for representing EHR data and enabling federated querying and inferencing via standardized Web protocols for identifying subjects with Diabetes Mellitus. Our study highlights the potential of using Web-scale data federation approaches to execute complex queries. PMID:22779040
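
    A toy sketch of the RDF-plus-SPARQL idea (not the authors' pipeline) is shown below using the rdflib package; the vocabulary URIs are hypothetical placeholders, and a real deployment would query data exposed through federated endpoints rather than an in-memory graph.

      # Toy sketch: represent EHR diagnoses as RDF triples and select diabetic
      # subjects with SPARQL. Requires `pip install rdflib`; the vocabulary
      # URIs are hypothetical placeholders.
      from rdflib import Graph, Namespace

      EX = Namespace("http://example.org/ehr#")
      g = Graph()
      g.add((EX.patient1, EX.hasDiagnosis, EX.DiabetesMellitus))
      g.add((EX.patient2, EX.hasDiagnosis, EX.Hypertension))

      query = """
      PREFIX ex: <http://example.org/ehr#>
      SELECT ?subject WHERE { ?subject ex:hasDiagnosis ex:DiabetesMellitus }
      """
      for row in g.query(query):
          print("cohort member:", row.subject)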

  19. Region Templates: Data Representation and Management for High-Throughput Image Analysis

    PubMed Central

    Pan, Tony; Kurc, Tahsin; Kong, Jun; Cooper, Lee; Klasky, Scott; Saltz, Joel

    2015-01-01

    We introduce a region template abstraction and framework for the efficient storage, management and processing of common data types in analysis of large datasets of high resolution images on clusters of hybrid computing nodes. The region template abstraction provides a generic container template for common data structures, such as points, arrays, regions, and object sets, within a spatial and temporal bounding box. It allows for different data management strategies and I/O implementations, while providing a homogeneous, unified interface to applications for data storage and retrieval. A region template application is represented as a hierarchical dataflow in which each computing stage may be represented as another dataflow of finer-grain tasks. The execution of the application is coordinated by a runtime system that implements optimizations for hybrid machines, including performance-aware scheduling for maximizing the utilization of computing devices and techniques to reduce the impact of data transfers between CPUs and GPUs. An experimental evaluation on a state-of-the-art hybrid cluster using a microscopy imaging application shows that the abstraction adds negligible overhead (about 3%) and achieves good scalability and high data transfer rates. Optimizations in a high speed disk based storage implementation of the abstraction to support asynchronous data transfers and computation result in an application performance gain of about 1.13×. Finally, a processing rate of 11,730 4K×4K tiles per minute was achieved for the microscopy imaging application on a cluster with 100 nodes (300 GPUs and 1,200 CPU cores). This computation rate enables studies with very large datasets. PMID:26139953
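
    The container idea can be sketched as a small Python class (illustrative only; the field and method names are not the framework's API): a region template binds named data objects to a spatial and temporal bounding box.

      # Minimal sketch of the region-template idea: a generic container keyed
      # by a spatio-temporal bounding box that holds named data objects
      # (points, arrays, masks, ...). Field names are illustrative.
      from dataclasses import dataclass, field
      from typing import Any, Dict, Tuple

      @dataclass
      class RegionTemplate:
          bbox: Tuple[int, int, int, int]      # (x_min, y_min, x_max, y_max)
          time_range: Tuple[int, int]          # (t_start, t_end)
          data: Dict[str, Any] = field(default_factory=dict)

          def put(self, name: str, obj: Any) -> None:
              self.data[name] = obj

          def get(self, name: str) -> Any:
              return self.data[name]

      tile = RegionTemplate(bbox=(0, 0, 4096, 4096), time_range=(0, 0))
      tile.put("nuclei_mask", [[0, 1], [1, 0]])  # stand-in for a segmentation mask
      print(tile.get("nuclei_mask"))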

  20. Rewriting Logic Semantics of a Plan Execution Language

    NASA Technical Reports Server (NTRS)

    Dowek, Gilles; Munoz, Cesar A.; Rocha, Camilo

    2009-01-01

    The Plan Execution Interchange Language (PLEXIL) is a synchronous language developed by NASA to support autonomous spacecraft operations. In this paper, we propose a rewriting logic semantics of PLEXIL in Maude, a high-performance logical engine. The rewriting logic semantics is by itself a formal interpreter of the language and can be used as a semantic benchmark for the implementation of PLEXIL executives. The implementation in Maude has the additional benefit of making available to PLEXIL designers and developers all the formal analysis and verification tools provided by Maude. The formalization of the PLEXIL semantics in rewriting logic poses an interesting challenge due to the synchronous nature of the language and the prioritized rules defining its semantics. To overcome this difficulty, we propose a general procedure for simulating synchronous set relations in rewriting logic that is sound and, for deterministic relations, complete. We also report on the finding of two issues at the design level of the original PLEXIL semantics that were identified with the help of the executable specification in Maude.

  1. A User-Centric Knowledge Creation Model in a Web of Object-Enabled Internet of Things Environment

    PubMed Central

    Kibria, Muhammad Golam; Fattah, Sheik Mohammad Mostakim; Jeong, Kwanghyeon; Chong, Ilyoung; Jeong, Youn-Kwae

    2015-01-01

    User-centric service features in a Web of Object-enabled Internet of Things environment can be provided by using a semantic ontology that classifies and integrates objects on the World Wide Web as well as shares and merges context-aware information and accumulated knowledge. The semantic ontology is applied on a Web of Object platform to virtualize the real world physical devices and information to form virtual objects that represent the features and capabilities of devices in the virtual world. Detailed information and functionalities of multiple virtual objects are combined with service rules to form composite virtual objects that offer context-aware knowledge-based services, where context awareness plays an important role in enabling automatic modification of the system to reconfigure the services based on the context. Converting the raw data into meaningful information and connecting the information to form the knowledge and storing and reusing the objects in the knowledge base can both be expressed by semantic ontology. In this paper, a knowledge creation model that synchronizes a service logistic model and a virtual world knowledge model on a Web of Object platform has been proposed. To realize the context-aware knowledge-based service creation and execution, a conceptual semantic ontology model has been developed and a prototype has been implemented for a use case scenario of emergency service. PMID:26393609

  2. A User-Centric Knowledge Creation Model in a Web of Object-Enabled Internet of Things Environment.

    PubMed

    Kibria, Muhammad Golam; Fattah, Sheik Mohammad Mostakim; Jeong, Kwanghyeon; Chong, Ilyoung; Jeong, Youn-Kwae

    2015-09-18

    User-centric service features in a Web of Object-enabled Internet of Things environment can be provided by using a semantic ontology that classifies and integrates objects on the World Wide Web as well as shares and merges context-aware information and accumulated knowledge. The semantic ontology is applied on a Web of Object platform to virtualize the real world physical devices and information to form virtual objects that represent the features and capabilities of devices in the virtual world. Detailed information and functionalities of multiple virtual objects are combined with service rules to form composite virtual objects that offer context-aware knowledge-based services, where context awareness plays an important role in enabling automatic modification of the system to reconfigure the services based on the context. Converting the raw data into meaningful information and connecting the information to form the knowledge and storing and reusing the objects in the knowledge base can both be expressed by semantic ontology. In this paper, a knowledge creation model that synchronizes a service logistic model and a virtual world knowledge model on a Web of Object platform has been proposed. To realize the context-aware knowledge-based service creation and execution, a conceptual semantic ontology model has been developed and a prototype has been implemented for a use case scenario of emergency service.

  3. Exploiting Semantic Web Technologies to Develop OWL-Based Clinical Practice Guideline Execution Engines.

    PubMed

    Jafarpour, Borna; Abidi, Samina Raza; Abidi, Syed Sibte Raza

    2016-01-01

    Computerizing paper-based clinical practice guidelines (CPGs) and then executing them can provide evidence-informed decision support to physicians at the point of care. Semantic web technologies, especially web ontology language (OWL) ontologies, have been used extensively to represent computerized CPGs. Using semantic web reasoning capabilities to execute OWL-based computerized CPGs unties them from a specific custom-built CPG execution engine and increases their shareability, as any OWL reasoner and triple store can be utilized for CPG execution. However, existing semantic web reasoning-based CPG execution engines suffer from a limited ability to execute CPGs with high levels of expressivity, and from the high cognitive load of computerizing paper-based CPGs and updating their computerized versions. To address these limitations, we have developed three CPG execution engines based on OWL 1 DL, OWL 2 DL and OWL 2 DL + semantic web rule language (SWRL). OWL 1 DL serves as the base execution engine capable of executing a wide range of CPG constructs; for executing highly complex CPGs, the OWL 2 DL and OWL 2 DL + SWRL engines offer additional executional capabilities. We evaluated the technical performance and medical correctness of our execution engines using a range of CPGs. Technical evaluations show the efficiency of our CPG execution engines in terms of CPU time and the validity of the generated recommendations in comparison to existing CPG execution engines. Medical evaluations by domain experts show the validity of the CPG-mediated therapy plans in terms of relevance, safety, and ordering for a wide range of patient scenarios.

  4. Vulnerability detection using data-flow graphs and SMT solvers

    DTIC Science & Technology

    2016-10-31

    The framework is modular and pipelined to allow scalable analysis on distributed systems. Our vulnerability detection framework employs machine... We designed the framework to be modular to enable flexible reuse and extendibility. In its current form, our framework performs the following...

  5. Longitudinal Study of a Novel Performance-based Measure of Daily Function

    DTIC Science & Technology

    2015-04-01

    ...UPSA, as well as measures of cognition (e.g., episodic memory, semantic memory, executive function, speed). We found that patients with MCI had compromises in everyday functional competence and that the...

  6. Language Deficits as a Preclinical Window into Parkinson's Disease: Evidence from Asymptomatic Parkin and Dardarin Mutation Carriers.

    PubMed

    García, Adolfo M; Sedeño, Lucas; Trujillo, Natalia; Bocanegra, Yamile; Gomez, Diana; Pineda, David; Villegas, Andrés; Muñoz, Edinson; Arias, William; Ibáñez, Agustín

    2017-02-01

    The worldwide spread of Parkinson's disease (PD) calls for sensitive and specific measures enabling its early (or, ideally, preclinical) detection. Here, we use language measures revealing deficits in PD to explore whether similar disturbances are present in asymptomatic individuals at risk for the disease. We administered executive, semantic, verb-production, and syntactic tasks to sporadic PD patients, genetic PD patients with PARK2 (parkin) or LRRK2 (dardarin) mutation, asymptomatic first-degree relatives of the latter with similar mutations, and socio-demographically matched controls. Moreover, to detect sui generis language disturbances, we ran analysis of covariance tests using executive functions as covariate. The two clinical groups showed impairments in all measures, most of which survived covariation with executive functions. However, the key finding concerned asymptomatic mutation carriers. While these subjects showed intact executive, semantic, and action-verb production skills, they evinced deficits in a syntactic test with minimal working memory load. We propose that this sui generis disturbance may constitute a prodromal sign anticipating eventual development of PD. Moreover, our results suggest that mutations on specific genes (PARK2 and LRRK2) compromising basal ganglia functioning may be subtly related to language-processing mechanisms. (JINS, 2017, 23, 150-158).

  7. Applying Dataflow Architecture and Visualization Tools to In Vitro Pharmacology Data Automation.

    PubMed

    Pechter, David; Xu, Serena; Kurtz, Marc; Williams, Steven; Sonatore, Lisa; Villafania, Artjohn; Agrawal, Sony

    2016-12-01

    The pace and complexity of modern drug discovery place ever-increasing demands on scientists for data analysis and interpretation. Data flow programming and modern visualization tools address these demands directly. Three different requirements (one for allosteric modulator analysis, one for a specialized clotting analysis, and one for enzyme global progress curve analysis) are reviewed, and their execution in a combined data flow/visualization environment is outlined. © 2016 Society for Laboratory Automation and Screening.

  8. Simulation and Verification of Synchronous Set Relations in Rewriting Logic

    NASA Technical Reports Server (NTRS)

    Rocha, Camilo; Munoz, Cesar A.

    2011-01-01

    This paper presents a mathematical foundation and a rewriting logic infrastructure for the execution and property verification of synchronous set relations. The mathematical foundation is given in the language of abstract set relations. The infrastructure consists of an order-sorted rewrite theory in Maude, a rewriting logic system, that enables the synchronous execution of a set relation provided by the user. By using the infrastructure, existing algorithm verification techniques already available in Maude for traditional asynchronous rewriting, such as reachability analysis and model checking, are automatically available to synchronous set rewriting. The use of the infrastructure is illustrated with an executable operational semantics of a simple synchronous language and the verification of temporal properties of a synchronous system.
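
    The notion of a synchronous step can be illustrated with a short Python sketch (an analogy only, not Maude or rewriting logic): every element to which the local rule applies is rewritten in the same step, and execution stops at a fixed point. The decrement rule used here is a made-up example.

      # Illustrative sketch of a synchronous step over a set: every element
      # that an enabled rule applies to is rewritten in the same step
      # (contrast with asynchronous rewriting, which picks one at a time).
      def synchronous_step(state):
          # Apply the local rule (decrement positive counters) to all
          # enabled elements simultaneously.
          return frozenset((name, value - 1) if value > 0 else (name, value)
                           for name, value in state)

      state = frozenset({("a", 2), ("b", 1), ("c", 0)})
      while True:
          next_state = synchronous_step(state)
          if next_state == state:       # fixed point: no rule is enabled
              break
          state = next_state
      print(sorted(state))              # [('a', 0), ('b', 0), ('c', 0)]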

  9. Semantic control and modality: an input processing deficit in aphasia leading to deregulated semantic cognition in a single modality.

    PubMed

    Thompson, Hannah E; Jefferies, Elizabeth

    2013-08-01

    Research suggests that semantic memory deficits can occur in at least three ways. Patients can (1) show amodal degradation of concepts within the semantic store itself, such as in semantic dementia (SD), (2) have difficulty in controlling activation within the semantic system and accessing appropriate knowledge in line with current goals or context, as in semantic aphasia (SA) and (3) experience a semantic deficit in only one modality following degraded input from sensory cortex. Patients with SA show deficits of semantic control and access across word and picture tasks, consistent with the view that their problems arise from impaired modality-general control processes. However, there are a few reports in the literature of patients with semantic access problems restricted to auditory-verbal materials, who show decreasing ability to retrieve concepts from words when they are presented repeatedly with closely related distractors. These patients challenge the notion that semantic control processes are modality-general and suggest instead a separation of 'access' to auditory-verbal and non-verbal semantic systems. We had the rare opportunity to study such a case in detail. Our aims were to examine the effect of manipulations of control demands in auditory-verbal semantic, non-verbal semantic and non-semantic tasks, allowing us to assess whether such cases always show semantic control/access impairments that follow a modality-specific pattern, or whether there are alternative explanations. Our findings revealed: (1) deficits on executive tasks, unrelated to semantic demands, which were more evident in the auditory modality than the visual modality; (2) deficits in executively-demanding semantic tasks which were accentuated in the auditory-verbal domain compared with the visual modality, but still present on non-verbal tasks, and (3) a coupling between comprehension and executive control requirements, in that mild impairment on single word comprehension was greatly increased on more demanding, associative judgements across modalities. This pattern of results suggests that mild executive-semantic impairment, paired with disrupted connectivity from auditory input, may give rise to semantic 'access' deficits affecting only the auditory modality. Copyright © 2013 Elsevier Ltd. All rights reserved.

  10. Leveraging Large-Scale Semantic Networks for Adaptive Robot Task Learning and Execution.

    PubMed

    Boteanu, Adrian; St Clair, Aaron; Mohseni-Kabir, Anahita; Saldanha, Carl; Chernova, Sonia

    2016-12-01

    This work seeks to leverage semantic networks containing millions of entries encoding assertions of commonsense knowledge to enable improvements in robot task execution and learning. The specific application we explore in this project is object substitution in the context of task adaptation. Humans easily adapt their plans to compensate for missing items in day-to-day tasks, substituting a wrap for bread when making a sandwich, or stirring pasta with a fork when out of spoons. Robot plan execution, however, is far less robust, with missing objects typically leading to failure if the robot is not aware of alternatives. In this article, we contribute a context-aware algorithm that leverages the linguistic information embedded in the task description to identify candidate substitution objects without reliance on explicit object affordance information. Specifically, we show that the task context provided by the task labels within the action structure of a task plan can be leveraged to disambiguate information within a noisy large-scale semantic network containing hundreds of potential object candidates to identify successful object substitutions with high accuracy. We present two extensive evaluations of our work on both abstract and real-world robot tasks, showing that the substitutions made by our system are valid, accepted by users, and lead to a statistically significant reduction in robot learning time. In addition, we report the outcomes of testing our approach with a large number of crowd workers interacting with a robot in real time.
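
    A hedged sketch of context-aware substitution scoring is shown below; the relatedness table is a toy stand-in for a large semantic network such as ConceptNet, and the weighting scheme is an assumption rather than the authors' algorithm.

      # Sketch of context-aware object substitution: score each candidate by
      # its relatedness both to the missing object and to the task-label
      # words, using a toy relatedness table.
      relatedness = {
          ("bread", "wrap"): 0.8, ("bread", "plate"): 0.3,
          ("sandwich", "wrap"): 0.7, ("sandwich", "plate"): 0.4,
      }

      def rel(a, b):
          return relatedness.get((a, b), relatedness.get((b, a), 0.0))

      def best_substitute(missing, task_context, candidates):
          def score(c):
              context_score = sum(rel(word, c) for word in task_context) / len(task_context)
              return 0.5 * rel(missing, c) + 0.5 * context_score
          return max(candidates, key=score)

      print(best_substitute("bread", ["sandwich"], ["wrap", "plate"]))  # wrap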

  11. Macro-actor execution on multilevel data-driven architectures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gaudiot, J.L.; Najjar, W.

    1988-12-31

    The data-flow model of computation brings high programmability to multiprocessors at the expense of increased overhead. Applying the model at a higher level leads to better performance but also introduces a loss of parallelism. We demonstrate here syntax-directed program decomposition methods for the creation of large macro-actors in numerical algorithms. In order to alleviate some of the problems introduced by the lower resolution interpretation, we describe a multi-level resolution approach and analyze the requirements for its actual hardware and software integration.

  12. A Formalisation of Adaptable Pervasive Flows

    NASA Astrophysics Data System (ADS)

    Bucchiarone, Antonio; Lafuente, Alberto Lluch; Marconi, Annapaola; Pistore, Marco

    Adaptable Pervasive Flows is a novel workflow-based paradigm for the design and execution of pervasive applications, where dynamic workflows situated in the real world are able to modify their execution in order to adapt to changes in their environment. In this paper, we study a formalisation of such flows by means of a formal flow language. More precisely, we define APFoL (Adaptable Pervasive Flow Language) and formalise its textual notation by encoding it in Blite, a formalisation of WS-BPEL. The encoding in Blite equips the language with a formal semantics and enables the use of automated verification techniques. We illustrate the approach with an example of a Warehouse Case Study.

  13. When "Happy" Means "Sad": Neuropsychological Evidence for the Right Prefrontal Cortex Contribution to Executive Semantic Processing

    ERIC Educational Resources Information Center

    Samson, Dana; Connolly, Catherine; Humphreys, Glyn W.

    2007-01-01

    The contribution of the left inferior prefrontal cortex in semantic processing has been widely investigated in the last decade. Converging evidence from functional imaging studies shows that this region is involved in the "executive" or "controlled" aspects of semantic processing. In this study, we report a single case study of a patient, PW, with…

  14. GOOSE: semantic search on internet connected sensors

    NASA Astrophysics Data System (ADS)

    Schutte, Klamer; Bomhof, Freek; Burghouts, Gertjan; van Diggelen, Jurriaan; Hiemstra, Peter; van't Hof, Jaap; Kraaij, Wessel; Pasman, Huib; Smith, Arthur; Versloot, Corne; de Wit, Joost

    2013-05-01

    More and more sensors are becoming Internet connected. Examples are cameras on cell phones, CCTV cameras for traffic control, as well as dedicated security and defense sensor systems. Due to the steadily increasing data volume, human exploitation of all this sensor data is impossible for effective mission execution. Smart access to all sensor data acts as an enabler for queries such as "Is there a person behind this building" or "Alert me when a vehicle approaches". The GOOSE concept has the ambition to provide the capability to search semantically for any relevant information within "all" (including imaging) sensor streams in the entire Internet of sensors. This is similar to the capability provided by presently available Internet search engines, which enable the retrieval of information on "all" web pages on the Internet. In line with current Internet search engines, any indexing services shall be utilized cross-domain. The two main challenges for GOOSE are the Semantic Gap and Scalability. The GOOSE architecture consists of five elements: (1) an online extraction of primitives on each sensor stream; (2) an indexing and search mechanism for these primitives; (3) an ontology-based semantic matching module; (4) a top-down hypothesis verification mechanism; and (5) a controlling man-machine interface. This paper reports on the initial GOOSE demonstrator, which consists of the MES multimedia analysis platform and the CORTEX action recognition module. It also provides an outlook into future GOOSE development.

  15. A Socio-technical Approach for Transient SME Alliances

    NASA Astrophysics Data System (ADS)

    Rezgui, Yacine

    The paper discusses technical requirements to promote the adoption of alliance modes of operation by SMEs in the construction sector. These requirements have provided a basis for specifying a set of functionalities to support the collaboration and cooperation needs of SMEs. While service-oriented architectures and semantic web services provide the middleware technology to implement the identified functionalities, a number of key technical limitations have been identified, including lack of support for the dynamic and non-functional characteristics of SME alliances' distributed business processes, lack of execution monitoring functionality to manage running business processes, and lack of support for semantic reasoning to enable SME business process service composition. The paper examines these issues and provides key directions for supporting SME alliances effectively.

  16. Weaknesses in Lexical-Semantic Knowledge Among College Students With Specific Learning Disabilities: Evidence From a Semantic Fluency Task

    PubMed Central

    McGregor, Karla K.; Oleson, Jacob

    2017-01-01

    Purpose The purpose of this study is to determine whether deficits in executive function and lexical-semantic memory compromise the linguistic performance of young adults with specific learning disabilities (LD) enrolled in postsecondary studies. Method One hundred eighty-five students with LD (n = 53) or normal language development (ND, n = 132) named items in the categories animals and food for 1 minute for each category and completed tests of lexical-semantic knowledge and executive control of memory. Groups were compared on total names, mean cluster size, frequency of embedded clusters, frequency of cluster switches, and change in fluency over time. Secondary analyses of variability within the LD group were also conducted. Results The LD group was less fluent than the ND group. Within the LD group, lexical-semantic knowledge predicted semantic fluency and cluster size; executive control of memory predicted semantic fluency and cluster switches. The LD group produced smaller clusters and fewer embedded clusters than the ND group. Groups did not differ in switching or change over time. Conclusions Deficits in the lexical-semantic system associated with LD may persist into young adulthood, even among those who have managed their disability well enough to attend college. Lexical-semantic deficits are associated with compromised semantic fluency, and the two problems are more likely among students with more severe disabilities. PMID:28267833

  17. Weaknesses in Lexical-Semantic Knowledge Among College Students With Specific Learning Disabilities: Evidence From a Semantic Fluency Task.

    PubMed

    Hall, Jessica; McGregor, Karla K; Oleson, Jacob

    2017-03-01

    The purpose of this study is to determine whether deficits in executive function and lexical-semantic memory compromise the linguistic performance of young adults with specific learning disabilities (LD) enrolled in postsecondary studies. One hundred eighty-five students with LD (n = 53) or normal language development (ND, n = 132) named items in the categories animals and food for 1 minute for each category and completed tests of lexical-semantic knowledge and executive control of memory. Groups were compared on total names, mean cluster size, frequency of embedded clusters, frequency of cluster switches, and change in fluency over time. Secondary analyses of variability within the LD group were also conducted. The LD group was less fluent than the ND group. Within the LD group, lexical-semantic knowledge predicted semantic fluency and cluster size; executive control of memory predicted semantic fluency and cluster switches. The LD group produced smaller clusters and fewer embedded clusters than the ND group. Groups did not differ in switching or change over time. Deficits in the lexical-semantic system associated with LD may persist into young adulthood, even among those who have managed their disability well enough to attend college. Lexical-semantic deficits are associated with compromised semantic fluency, and the two problems are more likely among students with more severe disabilities.

  18. A journey to Semantic Web query federation in the life sciences.

    PubMed

    Cheung, Kei-Hoi; Frost, H Robert; Marshall, M Scott; Prud'hommeaux, Eric; Samwald, Matthias; Zhao, Jun; Paschke, Adrian

    2009-10-01

    As interest in adopting the Semantic Web in the biomedical domain continues to grow, Semantic Web technology has been evolving and maturing. A variety of technological approaches including triplestore technologies, SPARQL endpoints, Linked Data, and Vocabulary of Interlinked Datasets have emerged in recent years. In addition to the data warehouse construction, these technological approaches can be used to support dynamic query federation. As a community effort, the BioRDF task force, within the Semantic Web for Health Care and Life Sciences Interest Group, is exploring how these emerging approaches can be utilized to execute distributed queries across different neuroscience data sources. We have created two health care and life science knowledge bases. We have explored a variety of Semantic Web approaches to describe, map, and dynamically query multiple datasets. We have demonstrated several federation approaches that integrate diverse types of information about neurons and receptors that play an important role in basic, clinical, and translational neuroscience research. Particularly, we have created a prototype receptor explorer which uses OWL mappings to provide an integrated list of receptors and executes individual queries against different SPARQL endpoints. We have also employed the AIDA Toolkit, which is directed at groups of knowledge workers who cooperatively search, annotate, interpret, and enrich large collections of heterogeneous documents from diverse locations. We have explored a tool called "FeDeRate", which enables a global SPARQL query to be decomposed into subqueries against the remote databases offering either SPARQL or SQL query interfaces. Finally, we have explored how to use the vocabulary of interlinked Datasets (voiD) to create metadata for describing datasets exposed as Linked Data URIs or SPARQL endpoints. We have demonstrated the use of a set of novel and state-of-the-art Semantic Web technologies in support of a neuroscience query federation scenario. We have identified both the strengths and weaknesses of these technologies. While Semantic Web offers a global data model including the use of Uniform Resource Identifiers (URI's), the proliferation of semantically-equivalent URI's hinders large scale data integration. Our work helps direct research and tool development, which will be of benefit to this community.
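
    The federation idea can be sketched with a SPARQL 1.1 SERVICE clause issued through the SPARQLWrapper package (a sketch only, not the FeDeRate tool); the endpoint URLs and vocabulary below are hypothetical, and a live endpoint is required for the query to run.

      # Sketch of decomposing a global query with SPARQL 1.1 federation: the
      # SERVICE clause sends a subquery to a remote endpoint.
      from SPARQLWrapper import SPARQLWrapper, JSON   # pip install SPARQLWrapper

      sparql = SPARQLWrapper("http://example.org/local/sparql")  # hypothetical endpoint
      sparql.setQuery("""
      PREFIX ex: <http://example.org/neuro#>
      SELECT ?receptor ?gene WHERE {
        ?receptor a ex:Receptor .
        SERVICE <http://example.org/remote/sparql> {
          ?receptor ex:encodedBy ?gene .
        }
      }
      """)
      sparql.setReturnFormat(JSON)
      results = sparql.query().convert()
      for binding in results["results"]["bindings"]:
          print(binding["receptor"]["value"], binding["gene"]["value"])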

  19. A journey to Semantic Web query federation in the life sciences

    PubMed Central

    Cheung, Kei-Hoi; Frost, H Robert; Marshall, M Scott; Prud'hommeaux, Eric; Samwald, Matthias; Zhao, Jun; Paschke, Adrian

    2009-01-01

    Background As interest in adopting the Semantic Web in the biomedical domain continues to grow, Semantic Web technology has been evolving and maturing. A variety of technological approaches including triplestore technologies, SPARQL endpoints, Linked Data, and Vocabulary of Interlinked Datasets have emerged in recent years. In addition to the data warehouse construction, these technological approaches can be used to support dynamic query federation. As a community effort, the BioRDF task force, within the Semantic Web for Health Care and Life Sciences Interest Group, is exploring how these emerging approaches can be utilized to execute distributed queries across different neuroscience data sources. Methods and results We have created two health care and life science knowledge bases. We have explored a variety of Semantic Web approaches to describe, map, and dynamically query multiple datasets. We have demonstrated several federation approaches that integrate diverse types of information about neurons and receptors that play an important role in basic, clinical, and translational neuroscience research. Particularly, we have created a prototype receptor explorer which uses OWL mappings to provide an integrated list of receptors and executes individual queries against different SPARQL endpoints. We have also employed the AIDA Toolkit, which is directed at groups of knowledge workers who cooperatively search, annotate, interpret, and enrich large collections of heterogeneous documents from diverse locations. We have explored a tool called "FeDeRate", which enables a global SPARQL query to be decomposed into subqueries against the remote databases offering either SPARQL or SQL query interfaces. Finally, we have explored how to use the vocabulary of interlinked Datasets (voiD) to create metadata for describing datasets exposed as Linked Data URIs or SPARQL endpoints. Conclusion We have demonstrated the use of a set of novel and state-of-the-art Semantic Web technologies in support of a neuroscience query federation scenario. We have identified both the strengths and weaknesses of these technologies. While Semantic Web offers a global data model including the use of Uniform Resource Identifiers (URI's), the proliferation of semantically-equivalent URI's hinders large scale data integration. Our work helps direct research and tool development, which will be of benefit to this community. PMID:19796394

  20. Scalable and High-Throughput Execution of Clinical Quality Measures from Electronic Health Records using MapReduce and the JBoss® Drools Engine

    PubMed Central

    Peterson, Kevin J.; Pathak, Jyotishman

    2014-01-01

    Automated execution of electronic Clinical Quality Measures (eCQMs) from electronic health records (EHRs) on large patient populations remains a significant challenge, and the testability, interoperability, and scalability of measure execution are critical. The High Throughput Phenotyping (HTP; http://phenotypeportal.org) project aligns with these goals by using the standards-based HL7 Health Quality Measures Format (HQMF) and Quality Data Model (QDM) for measure specification, as well as Common Terminology Services 2 (CTS2) for semantic interpretation. The HQMF/QDM representation is automatically transformed into a JBoss® Drools workflow, enabling horizontal scalability via clustering and MapReduce algorithms. Using Project Cypress, automated verification metrics can then be produced. Our results show linear scalability for nine executed 2014 Center for Medicare and Medicaid Services (CMS) eCQMs for eligible professionals and hospitals for >1,000,000 patients, and verified execution correctness of 96.4% based on Project Cypress test data of 58 eCQMs. PMID:25954459
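
    The map/reduce pattern behind the horizontal scaling can be sketched in a few lines of Python (illustrative only; the measure logic is a placeholder and not an HQMF/QDM or Drools artifact): each patient record is mapped to (measure, population) pairs, which are then reduced to counts.

      # Toy map/reduce sketch of measure execution across patients: map each
      # record to (measure, population) pairs, then reduce to counts.
      from collections import Counter
      from itertools import chain

      patients = [
          {"id": 1, "age": 70, "has_diabetes": True,  "hba1c_tested": True},
          {"id": 2, "age": 55, "has_diabetes": True,  "hba1c_tested": False},
          {"id": 3, "age": 40, "has_diabetes": False, "hba1c_tested": False},
      ]

      def map_patient(p):
          # Placeholder measure logic for a hypothetical HbA1c testing measure.
          if p["has_diabetes"]:
              yield ("diabetes_hba1c_measure", "denominator")
              if p["hba1c_tested"]:
                  yield ("diabetes_hba1c_measure", "numerator")

      counts = Counter(chain.from_iterable(map_patient(p) for p in patients))
      print(dict(counts))   # denominator: 2, numerator: 1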

  1. An AES chip with DPA resistance using hardware-based random order execution

    NASA Astrophysics Data System (ADS)

    Bo, Yu; Xiangyu, Li; Cong, Chen; Yihe, Sun; Liji, Wu; Xiangmin, Zhang

    2012-06-01

    This paper presents an AES (advanced encryption standard) chip that combats differential power analysis (DPA) side-channel attacks through hardware-based random order execution. Both decryption and encryption procedures of AES are implemented on the chip. A fine-grained dataflow architecture is proposed, which dynamically exploits intrinsic byte-level independence in the algorithm. A novel circuit called an HMF (Hold-Match-Fetch) unit is proposed for random control, which randomly sets execution orders for concurrent operations. The AES chip was manufactured in SMIC 0.18 μm technology. The average energy for encrypting one group of plain texts (with 128-bit secret keys) is 19 nJ. The core area is 0.43 mm². A sophisticated experimental setup was built to test the DPA resistance. Measurement-based experimental results show that one byte of a secret key cannot be disclosed from our chip in random mode after 64000 power traces were used in the DPA attack. Compared with the corresponding fixed-order execution, the hardware-based random order execution improves DPA resistance by at least 21 times.
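
    The countermeasure can be illustrated conceptually in software (a sketch of the idea only, not a model of the chip or its HMF unit; the substitution table is a placeholder, not the real AES S-box): because the 16 byte-level substitutions in a round are mutually independent, their order can be re-randomized on every execution to decorrelate power traces from the processed data.

      # Conceptual sketch of random-order execution of independent byte
      # operations. The substitution table is a placeholder, not the AES S-box.
      import random

      PLACEHOLDER_SBOX = [(i * 7 + 3) % 256 for i in range(256)]   # stand-in table

      def sub_bytes_random_order(state):
          order = list(range(len(state)))
          random.shuffle(order)                 # new order on every execution
          out = list(state)
          for i in order:
              out[i] = PLACEHOLDER_SBOX[state[i]]
          return out

      state = list(range(16))                   # one 128-bit AES state (16 bytes)
      print(sub_bytes_random_order(state))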

  2. An ontology-based semantic configuration approach to constructing Data as a Service for enterprises

    NASA Astrophysics Data System (ADS)

    Cai, Hongming; Xie, Cheng; Jiang, Lihong; Fang, Lu; Huang, Chenxi

    2016-03-01

    To align business strategies with IT systems, enterprises should rapidly implement new applications based on existing information with complex associations to adapt to the continually changing external business environment. Thus, Data as a Service (DaaS) has become an enabling technology for enterprises through information integration and the configuration of existing distributed enterprise systems and heterogeneous data sources. However, business modelling, system configuration and model alignment face challenges at the design and execution stages. To provide a comprehensive solution to facilitate data-centric application design in a highly complex and large-scale situation, a configurable ontology-based service integrated platform (COSIP) is proposed to support business modelling, system configuration and execution management. First, a meta-resource model is constructed and used to describe and encapsulate information resources by way of multi-view business modelling. Then, based on ontologies, three semantic configuration patterns, namely composite resource configuration, business scene configuration and runtime environment configuration, are designed to systematically connect business goals with executable applications. Finally, a software architecture based on model-view-controller (MVC) is provided and used to assemble components for software implementation. The result of the case study demonstrates that the proposed approach provides a flexible method of implementing data-centric applications.

  3. Semantic memory and frontal executive function during transient global amnesia.

    PubMed

    Hodges, J R

    1994-05-01

    To assess semantic memory and frontal executive function, two patients underwent neuropsychological testing during transient global amnesia (TGA) and after an interval of 6-8 weeks. In spite of a profound deficit in anterograde verbal and non-verbal memory, semantic memory was normal, as judged by category fluency measures, picture naming, and picture-word and picture-picture matching, and reading ability was normal. Similarly, there were no deficits on a number of tests known to be sensitive to frontal executive dysfunction. A hexamethylpropyleneamine-oxime (HMPAO) single photon emission CT (SPECT) scan, obtained on one patient 24 hours post-TGA, showed focal left temporal lobe hypoperfusion which had resolved three months later. The observed dissociation between episodic and semantic memory is discussed in the light of contemporary cognitive theories of memory organisation.

  4. The roles of associative and executive processes in creative cognition.

    PubMed

    Beaty, Roger E; Silvia, Paul J; Nusbaum, Emily C; Jauk, Emanuel; Benedek, Mathias

    2014-10-01

    How does the mind produce creative ideas? Past research has pointed to important roles of both executive and associative processes in creative cognition. But such work has largely focused on the influence of one ability or the other (executive or associative), so the extent to which both abilities may jointly affect creative thought remains unclear. Using multivariate structural equation modeling, we conducted two studies to determine the relative influences of executive and associative processes in domain-general creative cognition (i.e., divergent thinking). Participants completed a series of verbal fluency tasks, and their responses were analyzed by means of latent semantic analysis (LSA) and scored for semantic distance as a measure of associative ability. Participants also completed several measures of executive function, including broad retrieval ability (Gr) and fluid intelligence (Gf). Across both studies, we found substantial effects of both associative and executive abilities: As the average semantic distance between verbal fluency responses and cues increased, so did the creative quality of divergent-thinking responses (Study 1 and Study 2). Moreover, the creative quality of divergent-thinking responses was predicted by the executive variables, Gr (Study 1) and Gf (Study 2). Importantly, the effects of semantic distance and the executive function variables remained robust in the same structural equation model predicting divergent thinking, suggesting unique contributions of both constructs. The present research extends recent applications of LSA in creativity research and provides support for the notion that both associative and executive processes underlie the production of novel ideas.
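
    The semantic-distance scoring can be sketched as one minus the cosine similarity of word vectors (a minimal sketch; the three-dimensional toy vectors stand in for LSA vectors derived from a large corpus).

      # Minimal sketch: semantic distance between a cue and a response as
      # 1 minus the cosine similarity of their vectors (toy vectors).
      import numpy as np

      vectors = {
          "animal": np.array([0.9, 0.1, 0.0]),
          "dog":    np.array([0.8, 0.2, 0.1]),
          "quasar": np.array([0.0, 0.1, 0.9]),
      }

      def semantic_distance(a, b):
          va, vb = vectors[a], vectors[b]
          cosine = float(va @ vb / (np.linalg.norm(va) * np.linalg.norm(vb)))
          return 1.0 - cosine

      print(round(semantic_distance("animal", "dog"), 3))     # small distance
      print(round(semantic_distance("animal", "quasar"), 3))  # large distance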

  5. Isomorphisms between Petri nets and dataflow graphs

    NASA Technical Reports Server (NTRS)

    Kavi, Krishna M.; Buckles, Billy P.; Bhat, U. Narayan

    1987-01-01

    Dataflow graphs are a generalized model of computation. Uninterpreted dataflow graphs with nondeterminism resolved via probabilities are shown to be isomorphic to a class of Petri nets known as free choice nets. Petri net analysis methods are readily available in the literature and this result makes those methods accessible to dataflow research. Nevertheless, combinatorial explosion can render Petri net analysis inoperative. Using a previously known technique for decomposing free choice nets into smaller components, it is demonstrated that, in principle, it is possible to determine aspects of the overall behavior from the particular behavior of components.
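
    The token-game semantics underlying such nets can be sketched in a few lines of Python (a made-up two-transition net, not one of the free choice nets analysed in the paper): a transition is enabled when every input place holds a token, and firing moves tokens from inputs to outputs.

      # Small token-game sketch: a Petri net as places with a marking and
      # transitions with input/output places.
      marking = {"p_in": 1, "p_mid": 0, "p_out": 0}
      transitions = {
          "t1": {"inputs": ["p_in"],  "outputs": ["p_mid"]},
          "t2": {"inputs": ["p_mid"], "outputs": ["p_out"]},
      }

      def enabled(t):
          return all(marking[p] > 0 for p in transitions[t]["inputs"])

      def fire(t):
          for p in transitions[t]["inputs"]:
              marking[p] -= 1
          for p in transitions[t]["outputs"]:
              marking[p] += 1

      for t in ["t1", "t2"]:
          if enabled(t):
              fire(t)
      print(marking)   # {'p_in': 0, 'p_mid': 0, 'p_out': 1}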

  6. Automating the Processing of Earth Observation Data

    NASA Technical Reports Server (NTRS)

    Golden, Keith; Pang, Wan-Lin; Nemani, Ramakrishna; Votava, Petr

    2003-01-01

    NASA's vision for Earth science is to build a "sensor web": an adaptive array of heterogeneous satellites and other sensors that will track important events, such as storms, and provide real-time information about the state of the Earth to a wide variety of customers. Achieving this vision will require automation not only in the scheduling of the observations but also in the processing of the resulting data. To address this need, we are developing a planner-based agent to automatically generate and execute data-flow programs to produce the requested data products.

  7. Modeling heterogeneous processor scheduling for real time systems

    NASA Technical Reports Server (NTRS)

    Leathrum, J. F.; Mielke, R. R.; Stoughton, J. W.

    1994-01-01

    A new model is presented to describe dataflow algorithms implemented in a multiprocessing system. Called the resource/data flow graph (RDFG), the model explicitly represents cyclo-static processor schedules as circuits of processor arcs which reflect the order that processors execute graph nodes. The model also allows the guarantee of meeting hard real-time deadlines. When unfolded, the model identifies statically the processor schedule. The model therefore is useful for determining the throughput and latency of systems with heterogeneous processors. The applicability of the model is demonstrated using a space surveillance algorithm.

  8. Parallel Processing with Digital Signal Processing Hardware and Software

    NASA Technical Reports Server (NTRS)

    Swenson, Cory V.

    1995-01-01

    The assembling and testing of a parallel processing system is described which will allow a user to move a Digital Signal Processing (DSP) application from the design stage to the execution/analysis stage through the use of several software tools and hardware devices. The system will be used to demonstrate the feasibility of the Algorithm To Architecture Mapping Model (ATAMM) dataflow paradigm for static multiprocessor solutions of DSP applications. The individual components comprising the system are described followed by the installation procedure, research topics, and initial program development.

  9. Executive Semantic Processing Is Underpinned by a Large-scale Neural Network: Revealing the Contribution of Left Prefrontal, Posterior Temporal, and Parietal Cortex to Controlled Retrieval and Selection Using TMS

    ERIC Educational Resources Information Center

    Whitney, Carin; Kirk, Marie; O'Sullivan, Jamie; Ralph, Matthew A. Lambon; Jefferies, Elizabeth

    2012-01-01

    To understand the meanings of words and objects, we need to have knowledge about these items themselves plus executive mechanisms that compute and manipulate semantic information in a task-appropriate way. The neural basis for semantic control remains controversial. Neuroimaging studies have focused on the role of the left inferior frontal gyrus…

  10. Emotional verbal fluency: a new task on emotion and executive function interaction.

    PubMed

    Sass, Katharina; Fetz, Karolina; Oetken, Sarah; Habel, Ute; Heim, Stefan

    2013-09-01

    The present study introduces "Emotional Verbal Fluency" as a novel (partially computerized) task, which is aimed to investigate the interaction between emotionally loaded words and executive functions. Verbal fluency tasks are thought to measure executive functions but the interaction with emotional aspects is hardly investigated. In the current study, a group of healthy subjects (n = 21, mean age 25 years, 76% females) were asked to generate items that are either part of a semantic category (e.g., plants, toys, vehicles; standard semantic verbal fluency) or can trigger the emotions joy, anger, sadness, fear and disgust. The results of the task revealed no differences between performance on semantic and emotional categories, suggesting a comparable task difficulty for healthy subjects. Hence, these first results on the comparison between semantic and emotional verbal fluency seem to highlight that both might be suitable for examining executive functioning. However, an interaction was found between the category type and repetition (first vs. second sequence of the same category) with larger performance decrease for semantic in comparison to emotional categories. Best performance overall was found for the emotional category "joy" suggesting a positivity bias in healthy subjects. To conclude, emotional verbal fluency is a promising approach to investigate emotional components in an executive task, which may stimulate further research, especially in psychiatric patients who suffer from emotional as well as cognitive deficits.

  11. Emotional Verbal Fluency: A New Task on Emotion and Executive Function Interaction

    PubMed Central

    Sass, Katharina; Fetz, Karolina; Oetken, Sarah; Habel, Ute; Heim, Stefan

    2013-01-01

    The present study introduces “Emotional Verbal Fluency” as a novel (partially computerized) task, which is aimed to investigate the interaction between emotionally loaded words and executive functions. Verbal fluency tasks are thought to measure executive functions but the interaction with emotional aspects is hardly investigated. In the current study, a group of healthy subjects (n = 21, mean age 25 years, 76% females) were asked to generate items that are either part of a semantic category (e.g., plants, toys, vehicles; standard semantic verbal fluency) or can trigger the emotions joy, anger, sadness, fear and disgust. The results of the task revealed no differences between performance on semantic and emotional categories, suggesting a comparable task difficulty for healthy subjects. Hence, these first results on the comparison between semantic and emotional verbal fluency seem to highlight that both might be suitable for examining executive functioning. However, an interaction was found between the category type and repetition (first vs. second sequence of the same category) with larger performance decrease for semantic in comparison to emotional categories. Best performance overall was found for the emotional category “joy” suggesting a positivity bias in healthy subjects. To conclude, emotional verbal fluency is a promising approach to investigate emotional components in an executive task, which may stimulate further research, especially in psychiatric patients who suffer from emotional as well as cognitive deficits. PMID:25379243

  12. The career success of an adult with a learning disability: a psychosocial study of amnesic-semantic aphasia.

    PubMed

    Kershner, J; Kirkpatrick, T; McLaren, D

    1995-02-01

    B.I. is a 39-year-old, intellectually gifted (IQ = 130) man with learning disabilities who, without known cause, demonstrated symptoms of amnesic-semantic aphasia at age 13. This led to placement in a public school class for students with mild mental retardation and to his dropping out of school after repeating Grade 9. His aphasia is associated with a severe deficit in speech comprehension, poor reading and writing, spatial confusion, and episodic memory loss. We studied the remarkable behavioral and cognitive adjustments that have enabled him to lead a fulfilling life and become a highly successful business executive. Implications are discussed in the context of patterns of successful functioning and current views of the neuropsychological and neurological bases of such disorders.

  13. Integrated data management for clinical studies: automatic transformation of data models with semantic annotations for principal investigators, data managers and statisticians.

    PubMed

    Dugas, Martin; Dugas-Breit, Susanne

    2014-01-01

    Design, execution and analysis of clinical studies involve several stakeholders with different professional backgrounds. Typically, principal investigators are familiar with standard office tools, data managers apply electronic data capture (EDC) systems and statisticians work with statistics software. Case report forms (CRFs) specify the data model of study subjects, evolve over time and consist of hundreds to thousands of data items per study. To avoid erroneous manual transformation work, a conversion tool for different representations of study data models was designed. It can convert between office, EDC and statistics formats. In addition, it supports semantic annotations, which enable precise definitions for data items. A reference implementation is available as the open source package ODMconverter at http://cran.r-project.org.

  14. Compile-Time Schedulability Analysis of Communicating Concurrent Programs

    DTIC Science & Technology

    2006-06-28

    ...synchronize via the read and write operations on the FIFO channels. These operations have been implemented with the help of semaphores, which... [Table-of-contents fragments: Synchronous Dataflow; Boolean Dataflow; a synchronous dataflow model, its topology matrix, and repetition vector; Select and...]

  15. Children and adolescents' performance on a medium-length/nonsemantic word-list test.

    PubMed

    Flores-Lázaro, Julio César; Salgado Soruco, María Alejandra; Stepanov, Igor I

    2017-01-01

    Word-list learning tasks are among the most important and frequently used tests for declarative memory evaluation. For example, the California Verbal Learning Test-Children's Version (CVLT-C) and Rey Auditory Verbal Learning Test provide important information about different cognitive-neuropsychological processes. However, the impact of test length (i.e., number of words) and semantic organization (i.e., type of words) on children's and adolescents' memory performance remains to be clarified, especially during this developmental stage. To explore whether a medium-length non-semantically organized test can produce the typical curvilinear performance that semantically organized tests produce, reflecting executive control, we studied and compared the cognitive performance of normal children and adolescents by utilizing mathematical modeling. The model is based on the first-order system transfer function and has been successfully applied to learning curves for the CVLT-C (15 words, semantically organized paradigm). Results indicate that learning nine semantically unrelated words produces typical curvilinear (executive function) performance in children and younger adolescents and that performance could be effectively analyzed with the mathematical model. This indicates that the exponential increase (curvilinear performance) of correctly learned words does not solely depend on semantic and/or length features. This type of test controls semantic and length effects and may represent complementary tools for executive function evaluation in clinical populations in which semantic and/or length processing are affected.
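
    The modelling idea can be sketched as fitting an exponential-saturation learning curve of the kind implied by a first-order transfer function (a hedged sketch: the exact parameterization used in the study may differ, and the recall data below are invented).

      # Fit an exponential-saturation learning curve to per-trial recall counts.
      import numpy as np
      from scipy.optimize import curve_fit

      def learning_curve(trial, asymptote, gain, rate):
          # recall(n) = asymptote - gain * exp(-rate * (n - 1))
          return asymptote - gain * np.exp(-rate * (trial - 1))

      trials = np.arange(1, 6)                        # five learning trials
      recalled = np.array([3.0, 5.0, 6.5, 7.5, 8.0])  # words recalled (invented)

      params, _ = curve_fit(learning_curve, trials, recalled, p0=[9.0, 6.0, 0.5])
      asymptote, gain, rate = params
      print(f"asymptote={asymptote:.2f}, gain={gain:.2f}, rate={rate:.2f}")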

  16. Speech-in-speech perception and executive function involvement

    PubMed Central

    Perrone-Bertolotti, Marcela; Tassin, Maxime

    2017-01-01

    The present study investigated the link between speech-in-speech perception capacities and four executive function components: response suppression, inhibitory control, switching and working memory. We constructed a cross-modal semantic priming paradigm using a written target word and a spoken prime word, implemented in one of two concurrent auditory sentences (cocktail party situation). The prime and target were semantically related or unrelated. Participants had to perform a lexical decision task on visual target words and simultaneously listen to only one of two pronounced sentences. The attention of the participant was manipulated: The prime was in the pronounced sentence listened to by the participant or in the ignored one. In addition, we evaluated the executive function abilities of participants (switching cost, inhibitory-control cost and response-suppression cost) and their working memory span. Correlation analyses were performed between the executive and priming measurements. Our results showed a significant interaction effect between attention and semantic priming. We observed a significant priming effect in the attended but not in the ignored condition. Only priming effects obtained in the ignored condition were significantly correlated with some of the executive measurements. However, no correlation between priming effects and working memory capacity was found. Overall, these results confirm, first, the role of attention in the semantic priming effect and, second, the involvement of executive functions in speech-in-noise understanding capacities. PMID:28708830

  17. An individual differences approach to semantic cognition: Divergent effects of age on representation, retrieval and selection.

    PubMed

    Hoffman, Paul

    2018-05-25

    Semantic cognition refers to the appropriate use of acquired knowledge about the world. This requires representation of knowledge as well as control processes which ensure that currently-relevant aspects of knowledge are retrieved and selected. Although these abilities can be impaired selectively following brain damage, the relationship between them in healthy individuals is unclear. It is also commonly assumed that semantic cognition is preserved in later life, because older people have greater reserves of knowledge. However, this claim overlooks the possibility of decline in semantic control processes. Here, semantic cognition was assessed in 100 young and older adults. Despite having a broader knowledge base, older people showed specific impairments in semantic control, performing more poorly than young people when selecting among competing semantic representations. Conversely, they showed preserved controlled retrieval of less salient information from the semantic store. Breadth of semantic knowledge was positively correlated with controlled retrieval but was unrelated to semantic selection ability, which was instead correlated with non-semantic executive function. These findings indicate that three distinct elements contribute to semantic cognition: semantic representations that accumulate throughout the lifespan, processes for controlled retrieval of less salient semantic information, which appear age-invariant, and mechanisms for selecting task-relevant aspects of semantic knowledge, which decline with age and may relate more closely to domain-general executive control.

  18. Mining the Human Phenome using Semantic Web Technologies: A Case Study for Type 2 Diabetes

    PubMed Central

    Pathak, Jyotishman; Kiefer, Richard C.; Bielinski, Suzette J.; Chute, Christopher G.

    2012-01-01

    The ability to conduct genome-wide association studies (GWAS) has enabled new exploration of how genetic variations contribute to health and disease etiology. However, historically GWAS have been limited by inadequate sample size due to associated costs for genotyping and phenotyping of study subjects. This has prompted several academic medical centers to form “biobanks” where biospecimens linked to personal health information, typically in electronic health records (EHRs), are collected and stored on large number of subjects. This provides tremendous opportunities to discover novel genotype-phenotype associations and foster hypothesis generation. In this work, we study how emerging Semantic Web technologies can be applied in conjunction with clinical and genotype data stored at the Mayo Clinic Biobank to mine the phenotype data for genetic associations. In particular, we demonstrate the role of using Resource Description Framework (RDF) for representing EHR diagnoses and procedure data, and enable federated querying via standardized Web protocols to identify subjects genotyped with Type 2 Diabetes for discovering gene-disease associations. Our study highlights the potential of Web-scale data federation techniques to execute complex queries. PMID:23304343

  19. Mining the human phenome using semantic web technologies: a case study for Type 2 Diabetes.

    PubMed

    Pathak, Jyotishman; Kiefer, Richard C; Bielinski, Suzette J; Chute, Christopher G

    2012-01-01

    The ability to conduct genome-wide association studies (GWAS) has enabled new exploration of how genetic variations contribute to health and disease etiology. However, historically GWAS have been limited by inadequate sample size due to associated costs for genotyping and phenotyping of study subjects. This has prompted several academic medical centers to form "biobanks" where biospecimens linked to personal health information, typically in electronic health records (EHRs), are collected and stored on a large number of subjects. This provides tremendous opportunities to discover novel genotype-phenotype associations and foster hypothesis generation. In this work, we study how emerging Semantic Web technologies can be applied in conjunction with clinical and genotype data stored at the Mayo Clinic Biobank to mine the phenotype data for genetic associations. In particular, we demonstrate the role of using Resource Description Framework (RDF) for representing EHR diagnoses and procedure data, and enable federated querying via standardized Web protocols to identify subjects genotyped with Type 2 Diabetes for discovering gene-disease associations. Our study highlights the potential of Web-scale data federation techniques to execute complex queries.
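
    A minimal sketch of the approach described above, assuming the rdflib Python library: EHR diagnosis records are expressed as RDF triples and a SPARQL query selects the subjects carrying a given diagnosis code. The namespace, predicate names and the ICD-9 code are illustrative assumptions rather than the Mayo Clinic Biobank schema, and a real deployment would federate such queries across remote SPARQL endpoints instead of querying a local graph.

    ```python
    # Hedged sketch, not the paper's implementation: a tiny RDF graph of EHR
    # diagnoses queried with SPARQL. The vocabulary and codes are assumptions.
    from rdflib import Graph, Literal, Namespace
    from rdflib.namespace import RDF

    EX = Namespace("http://example.org/ehr#")  # hypothetical vocabulary

    g = Graph()
    g.bind("ex", EX)

    # Two subjects; only the first carries a Type 2 Diabetes code (ICD-9 250.00).
    g.add((EX.subject1, RDF.type, EX.Patient))
    g.add((EX.subject1, EX.hasDiagnosis, EX.dx1))
    g.add((EX.dx1, EX.icd9Code, Literal("250.00")))

    g.add((EX.subject2, RDF.type, EX.Patient))
    g.add((EX.subject2, EX.hasDiagnosis, EX.dx2))
    g.add((EX.dx2, EX.icd9Code, Literal("401.9")))

    # Select patients whose diagnosis code matches the Type 2 Diabetes code.
    query = """
    PREFIX ex: <http://example.org/ehr#>
    SELECT ?patient WHERE {
        ?patient a ex:Patient ;
                 ex:hasDiagnosis ?dx .
        ?dx ex:icd9Code "250.00" .
    }
    """
    for row in g.query(query):
        print(row.patient)  # -> http://example.org/ehr#subject1
    ```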

  20. Solving Partial Differential Equations in a data-driven multiprocessor environment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gaudiot, J.L.; Lin, C.M.; Hosseiniyar, M.

    1988-12-31

    Partial differential equations can be found in a host of engineering and scientific problems. The emergence of new parallel architectures has spurred research in the definition of parallel PDE solvers. Concurrently, highly programmable systems such as data-flow architectures have been proposed for the exploitation of large scale parallelism. The implementation of some Partial Differential Equation solvers (such as the Jacobi method) on a tagged token data-flow graph is demonstrated here. Asynchronous methods (chaotic relaxation) are studied and new scheduling approaches (the Token No-Labeling scheme) are introduced in order to support the implementation of the asynchronous methods in a data-driven environment. New high-level data-flow language program constructs are introduced in order to handle chaotic operations. Finally, the performance of the program graphs is demonstrated by a deterministic simulation of a message passing data-flow multiprocessor. An analysis of the overhead in the data-flow graphs is undertaken to demonstrate the limits of parallel operations in dataflow PDE program graphs.

  1. Business Process Modelling is an Essential Part of a Requirements Analysis. Contribution of EFMI Primary Care Working Group.

    PubMed

    de Lusignan, S; Krause, P; Michalakidis, G; Vicente, M Tristan; Thompson, S; McGilchrist, M; Sullivan, F; van Royen, P; Agreus, L; Desombre, T; Taweel, A; Delaney, B

    2012-01-01

    To perform a requirements analysis of the barriers to conducting research linking primary care, genetic and cancer data. We extended our initial data-centric approach to include socio-cultural and business requirements. We created reference models of core data requirements common to most studies using unified modelling language (UML), dataflow diagrams (DFD) and business process modelling notation (BPMN). We conducted a stakeholder analysis and constructed DFD and UML diagrams for use cases based on simulated research studies. We used research output as a sensitivity analysis. Differences between the reference model and use cases identified study-specific data requirements. The stakeholder analysis identified: tensions, changes in specification, some indifference from data providers and enthusiastic informaticians urging inclusion of socio-cultural context. We identified requirements to collect information at three levels: micro (data items, which need to be semantically interoperable), meso (the medical record and data extraction) and macro (the health system and socio-cultural issues). BPMN clarified complex business requirements among data providers and vendors, and additional geographical requirements for patients to be represented in both linked datasets. High quality research output was the norm for most repositories. Reference models provide high-level schemata of the core data requirements. However, business requirements modelling identifies stakeholder issues and what needs to be addressed to enable participation.

  2. Applying semantic web technologies for phenome-wide scan using an electronic health record linked Biobank

    PubMed Central

    2012-01-01

    Background The ability to conduct genome-wide association studies (GWAS) has enabled new exploration of how genetic variations contribute to health and disease etiology. However, historically GWAS have been limited by inadequate sample size due to associated costs for genotyping and phenotyping of study subjects. This has prompted several academic medical centers to form “biobanks” where biospecimens linked to personal health information, typically in electronic health records (EHRs), are collected and stored on a large number of subjects. This provides tremendous opportunities to discover novel genotype-phenotype associations and foster hypotheses generation. Results In this work, we study how emerging Semantic Web technologies can be applied in conjunction with clinical and genotype data stored at the Mayo Clinic Biobank to mine the phenotype data for genetic associations. In particular, we demonstrate the role of using Resource Description Framework (RDF) for representing EHR diagnoses and procedure data, and enable federated querying via standardized Web protocols to identify subjects genotyped for Type 2 Diabetes and Hypothyroidism to discover gene-disease associations. Our study highlights the potential of Web-scale data federation techniques to execute complex queries. Conclusions This study demonstrates how Semantic Web technologies can be applied in conjunction with clinical data stored in EHRs to accurately identify subjects with specific diseases and phenotypes, and identify genotype-phenotype associations. PMID:23244446

  3. The semantic and episodic subcomponents of famous person knowledge: dissociation in healthy subjects.

    PubMed

    Piolino, Pascale; Lamidey, Virginie; Desgranges, Béatrice; Eustache, Francis

    2007-01-01

    Fifty-two subjects between ages 40 and 79 years were administered a questionnaire assessing their ability to recall semantic information about famous people from 4 different decades and to recollect its episodic source of acquisition together with autonoetic consciousness via the remember-know paradigm. In addition, they underwent a battery of standardized neuropsychological tests to assess episodic and semantic memory and executive functions. The analyses of age reveal differences for the episodic source score but no differences between age groups for the semantic scores within each decade. Regardless of the age of people, the analyses also show that semantic memory subcomponents of the famous person test are highly associated with each other as well as with the source component. The recall of semantic information on the famous person test relies on participants' semantic abilities, whereas the recall of its episodic source depends on their executive functions. The present findings confirm the existence of an episodic-semantic distinction in knowledge about famous people. They provide further evidence that personal source and semantic information are at once distinct and highly interactive within the framework of remote memory. (c) 2007 APA, all rights reserved.

  4. Semantic Information Processing of Physical Simulation Based on Scientific Concept Vocabulary Model

    NASA Astrophysics Data System (ADS)

    Kino, Chiaki; Suzuki, Yoshio; Takemiya, Hiroshi

    Scientific Concept Vocabulary (SCV) has been developed to actualize the Cognitive methodology based Data Analysis System (CDAS), which supports researchers in analyzing large-scale data efficiently and comprehensively. SCV is an information model for processing semantic information for physics and engineering. In the SCV model, all semantic information is related to substantial data and algorithms. Consequently, SCV enables a data analysis system to recognize the meaning of execution results output from a numerical simulation. This method has allowed a data analysis system to extract important information from a scientific viewpoint. Previous research has shown that SCV is able to describe simple scientific indices and scientific perceptions. However, it is difficult to describe complex scientific perceptions with the currently proposed SCV. In this paper, a new data structure for SCV is proposed in order to describe scientific perceptions in more detail. Additionally, a prototype of the new model has been constructed and applied to actual numerical simulation data. The results indicate that the new SCV is able to describe more complex scientific perceptions.
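
    As a rough illustration of the idea of tying a vocabulary term to "substantial data and algorithms", the sketch below models a concept as a small Python structure that names the data fields it needs and carries the algorithm used to evaluate it. The class and field names are assumptions made for illustration; the actual SCV data structure is not specified here.

    ```python
    # Illustrative sketch only: linking a scientific concept to concrete data
    # fields and an evaluation algorithm, in the spirit of the SCV idea. The
    # class, fields, and example concept are assumptions, not the real SCV model.
    from dataclasses import dataclass, field
    from typing import Callable, Dict, List

    @dataclass
    class Concept:
        name: str                        # e.g. "peak temperature"
        unit: str                        # physical unit of the derived quantity
        inputs: List[str]                # names of simulation variables required
        evaluate: Callable[[Dict[str, list]], float]   # algorithm bound to the term
        narrower: List["Concept"] = field(default_factory=list)  # sub-concepts

    # A simple concept: the maximum of a temperature field.
    peak_temperature = Concept(
        name="peak temperature",
        unit="K",
        inputs=["temperature"],
        evaluate=lambda data: max(data["temperature"]),
    )

    # The analysis system can "recognize the meaning" of an output field by
    # resolving the concept and running its bound algorithm on simulation data.
    simulation_output = {"temperature": [290.1, 305.7, 299.4]}
    print(peak_temperature.evaluate(simulation_output))  # -> 305.7
    ```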

  5. Reduced short-term memory capacity in Alzheimer's disease: the role of phonological, lexical, and semantic processing.

    PubMed

    Caza, Nicole; Belleville, Sylvie

    2008-05-01

    Individuals with Alzheimer's disease (AD) are often reported to have reduced verbal short-term memory capacity, typically attributed to their attention/executive deficits. However, these individuals also tend to show progressive impairment of semantic, lexical, and phonological processing which may underlie their low short-term memory capacity. The goals of this study were to assess the contribution of each level of representation (phonological, lexical, and semantic) to immediate serial recall performance in 18 individuals with AD, and to examine how these linguistic effects on short-term memory were modulated by their reduced capacity to manipulate information in short-term memory associated with executive dysfunction. Results showed that individuals with AD had difficulty recalling items that relied on phonological representations, which led to increased lexicality effects relative to the control group. This finding suggests that patients have a greater reliance on lexical/semantic information than controls, possibly to make up for deficits in retention and processing of phonological material. This lexical/semantic effect was not found to be significantly correlated with patients' capacity to manipulate verbal material in short-term memory, indicating that language processing and executive deficits may independently contribute to reducing verbal short-term memory capacity in AD.

  6. Verbal and Non-verbal Fluency in Adults with Developmental Dyslexia: Phonological Processing or Executive Control Problems?

    PubMed

    Smith-Spark, James H; Henry, Lucy A; Messer, David J; Zięcik, Adam P

    2017-08-01

    The executive function of fluency describes the ability to generate items according to specific rules. Production of words beginning with a certain letter (phonemic fluency) is impaired in dyslexia, while generation of words belonging to a certain semantic category (semantic fluency) is typically unimpaired. However, in dyslexia, verbal fluency has generally been studied only in terms of overall words produced. Furthermore, performance of adults with dyslexia on non-verbal design fluency tasks has not been explored but would indicate whether deficits could be explained by executive control, rather than phonological processing, difficulties. Phonemic, semantic and design fluency tasks were presented to adults with dyslexia and without dyslexia, using fine-grained performance measures and controlling for IQ. Hierarchical regressions indicated that dyslexia predicted lower phonemic fluency, but not semantic or design fluency. At the fine-grained level, dyslexia predicted a smaller number of switches between subcategories on phonemic fluency, while dyslexia did not predict the size of phonemically related clusters of items. Overall, the results suggested that phonological processing problems were at the root of dyslexia-related fluency deficits; however, executive control difficulties could not be completely ruled out as an alternative explanation. Developments in research methodology, equating executive demands across fluency tasks, may resolve this issue. Copyright © 2017 John Wiley & Sons, Ltd.

  7. What dementia reveals about proverb interpretation and its neuroanatomical correlates.

    PubMed

    Kaiser, Natalie C; Lee, Grace J; Lu, Po H; Mather, Michelle J; Shapira, Jill; Jimenez, Elvira; Thompson, Paul M; Mendez, Mario F

    2013-08-01

    Neuropsychologists frequently include proverb interpretation as a measure of executive abilities. A concrete interpretation of proverbs, however, may reflect semantic impairments from anterior temporal lobes, rather than executive dysfunction from frontal lobes. The investigation of proverb interpretation among patients with different dementias with varying degrees of temporal and frontal dysfunction may clarify the underlying brain-behavior mechanisms for abstraction from proverbs. We propose that patients with behavioral variant frontotemporal dementia (bvFTD), who are characteristically more impaired on proverb interpretation than those with Alzheimer's disease (AD), are disproportionately impaired because of anterior temporal-mediated semantic deficits. Eleven patients with bvFTD and 10 with AD completed the Delis-Kaplan Executive Function System (D-KEFS) Proverbs Test and a series of neuropsychological measures of executive and semantic functions. The analysis included both raw and age-adjusted normed data for multiple choice responses on the D-KEFS Proverbs Test using independent samples t-tests. Tensor-based morphometry (TBM) applied to 3D T1-weighted MRI scans mapped the association between regional brain volume and proverb performance. Computations of mean Jacobian values within select regions of interest provided a numeric summary of regional volume, and voxel-wise regression yielded 3D statistical maps of the association between tissue volume and proverb scores. The patients with bvFTD were significantly worse than those with AD in proverb interpretation. The worse performance of the bvFTD patients involved a greater number of concrete responses to common, familiar proverbs, but not to uncommon, unfamiliar ones. These concrete responses to common proverbs correlated with semantic measures, whereas concrete responses to uncommon proverbs correlated with executive functions. After controlling for dementia diagnosis, TBM analyses indicated significant correlations between impaired proverb interpretation and the anterior temporal lobe region (left>right). Among the two dementia groups, those with bvFTD demonstrated a greater number of concrete responses to common proverbs compared to those with AD, and this performance correlated with semantic deficits and the volume of the left anterior temporal lobe, the hub of semantic knowledge. The findings of this study suggest that common proverb interpretation is greatly influenced by semantic dysfunction and that the use of proverbs for testing executive functions needs to include the interpretation of unfamiliar proverbs. Published by Elsevier Ltd.

  8. The Semantic Reactivity of Red, Blue, and Purple: A Linguistic Analysis of Post-Election Statements Made by Executive Leadership of Three Public Flagship Universities

    ERIC Educational Resources Information Center

    Taylor, Zachary Wayne

    2017-01-01

    Examining post-election statements made by UC System, UT-Austin, and UW-Madison executive leadership, this study employs word frequency, collocation, and a three-pronged latent semantic analysis to explicate the associative diction, major concepts, and institutional priorities expressed by said leadership to answer the research question,…

  9. Using AI and Semantic Web Technologies to attack Process Complexity in Open Systems

    NASA Astrophysics Data System (ADS)

    Thompson, Simon; Giles, Nick; Li, Yang; Gharib, Hamid; Nguyen, Thuc Duong

    Recently, many vendors and groups have advocated using BPEL and WS-BPEL as a workflow language to encapsulate business logic. While encapsulating workflow and process logic in one place is a sensible architectural decision, the implementation of complex workflows suffers from the same problems that made managing and maintaining hierarchical procedural programs difficult. BPEL lacks constructs for logical modularity such as the requirements construct from the STL [12] or the ability to adapt constructs like pure abstract classes for the same purpose. We describe a system that uses semantic web and agent concepts to implement an abstraction layer for BPEL based on the notion of Goals and service typing. AI planning was used to enable process engineers to create and validate systems that used services and goals as first-class concepts and compiled processes at run time for execution.
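
    The sketch below illustrates, under assumed names, the general idea of treating goals and typed services as first-class concepts and letting a planner assemble an executable chain. It is a toy forward-search planner, not the authors' BPEL abstraction layer or their AI planning component; the service registry and goal names are invented for illustration.

    ```python
    # Hedged sketch: matching an abstract goal to typed services and chaining
    # them into an executable sequence. Services are typed by the data they
    # consume and produce; all names here are illustrative assumptions.
    from typing import Dict, List, Tuple

    SERVICES: Dict[str, Tuple[set, set]] = {
        "lookup_customer": ({"customer_id"}, {"customer_record"}),
        "check_credit":    ({"customer_record"}, {"credit_score"}),
        "approve_order":   ({"credit_score", "order"}, {"approval"}),
    }

    def plan(available: set, goal: str) -> List[str]:
        """Greedy forward search: add services whose inputs are satisfied until
        the goal datum is produced. Raises if no service chain exists."""
        steps, facts = [], set(available)
        while goal not in facts:
            progress = False
            for name, (needs, gives) in SERVICES.items():
                if name not in steps and needs <= facts:
                    steps.append(name)
                    facts |= gives
                    progress = True
            if not progress:
                raise RuntimeError(f"no service chain reaches goal {goal!r}")
        return steps

    print(plan({"customer_id", "order"}, "approval"))
    # -> ['lookup_customer', 'check_credit', 'approve_order']
    ```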

  10. Towards semantic interoperability for electronic health records.

    PubMed

    Garde, Sebastian; Knaup, Petra; Hovenga, Evelyn; Heard, Sam

    2007-01-01

    In the field of open electronic health records (EHRs), openEHR as an archetype-based approach is being increasingly recognised. It is the objective of this paper to shortly describe this approach, and to analyse how openEHR archetypes impact on health professionals and semantic interoperability. Analysis of current approaches to EHR systems, terminology and standards developments. In addition to literature reviews, we organised face-to-face and additional telephone interviews and tele-conferences with members of relevant organisations and committees. The openEHR archetypes approach enables syntactic interoperability and semantic interpretability -- both important prerequisites for semantic interoperability. Archetypes enable the formal definition of clinical content by clinicians. To enable comprehensive semantic interoperability, the development and maintenance of archetypes needs to be coordinated internationally and across health professions. Domain knowledge governance comprises a set of processes that enable the creation, development, organisation, sharing, dissemination, use and continuous maintenance of archetypes. It needs to be supported by information technology. To enable EHRs, semantic interoperability is essential. The openEHR archetypes approach enables syntactic interoperability and semantic interpretability. However, without coordinated archetype development and maintenance, 'rank growth' of archetypes would jeopardize semantic interoperability. We therefore believe that openEHR archetypes and domain knowledge governance together create the knowledge environment required to adopt EHRs.

  11. UBioLab: a web-LABoratory for Ubiquitous in-silico experiments.

    PubMed

    Bartocci, E; Di Berardini, M R; Merelli, E; Vito, L

    2012-03-01

    The huge and dynamic amount of bioinformatic resources (e.g., data and tools) available nowadays on the Internet represents a big challenge for biologists - for what concerns their management and visualization - and for bioinformaticians - for what concerns the possibility of rapidly creating and executing in-silico experiments involving resources and activities spread over the WWW hyperspace. Any framework aiming at integrating such resources as in a physical laboratory has imperatively to tackle - and possibly to handle in a transparent and uniform way - aspects concerning physical distribution, semantic heterogeneity, co-existence of different computational paradigms and, as a consequence, of different invocation interfaces (i.e., OGSA for Grid nodes, SOAP for Web Services, Java RMI for Java objects, etc.). The UBioLab framework has been designed and developed as a prototype following the above objective. Several architectural features - such as being fully Web-based and combining domain ontologies, Semantic Web and workflow techniques - give evidence of an effort in this direction. The integration of a semantic knowledge management system for distributed (bioinformatic) resources, a semantic-driven graphic environment for defining and monitoring ubiquitous workflows and an intelligent agent-based technology for their distributed execution allows UBioLab to serve as a semantic guide for bioinformaticians and biologists, providing (i) a flexible environment for visualizing, organizing and inferring any (semantic and computational) "type" of domain knowledge (e.g., resources and activities, expressed in a declarative form), (ii) a powerful engine for defining and storing semantic-driven ubiquitous in-silico experiments on the domain hyperspace, as well as (iii) a transparent, automatic and distributed environment for correct experiment executions.

  12. The role of the right hemisphere in semantic control: A case-series comparison of right and left hemisphere stroke

    PubMed Central

    Thompson, Hannah E.; Henshall, Lauren; Jefferies, Elizabeth

    2016-01-01

    Semantic control processes guide conceptual retrieval so that we are able to focus on non-dominant associations and features when these are required for the task or context, yet the neural basis of semantic control is not fully understood. Neuroimaging studies have emphasised the role of left inferior frontal gyrus (IFG) in controlled retrieval, while neuropsychological investigations of semantic control deficits have almost exclusively focussed on patients with left-sided damage (e.g., patients with semantic aphasia, SA). Nevertheless, activation in fMRI during demanding semantic tasks typically extends to right IFG. To investigate the role of the right hemisphere (RH) in semantic control, we compared nine RH stroke patients with 21 left-hemisphere SA patients, 11 mild SA cases and 12 healthy, aged-matched controls on semantic and executive tasks, plus experimental tasks that manipulated semantic control in paradigms particularly sensitive to RH damage. RH patients had executive deficits to parallel SA patients but they performed well on standard semantic tests. Nevertheless, multimodal semantic control deficits were found in experimental tasks involving facial emotions and the ‘summation’ of meaning across multiple items. On these tasks, RH patients showed effects similar to those in SA cases – multimodal deficits that were sensitive to distractor strength and cues and miscues, plus increasingly poor performance in cyclical matching tasks which repeatedly probed the same set of concepts. Thus, despite striking differences in single-item comprehension, evidence presented here suggests semantic control is bilateral, and disruption of this component of semantic cognition can be seen following damage to either hemisphere. PMID:26945505

  13. Assembling Large, Multi-Sensor Climate Datasets Using the SciFlo Grid Workflow System

    NASA Astrophysics Data System (ADS)

    Wilson, B. D.; Manipon, G.; Xing, Z.; Fetzer, E.

    2008-12-01

    NASA's Earth Observing System (EOS) is the world's most ambitious facility for studying global climate change. The mandate now is to combine measurements from the instruments on the A-Train platforms (AIRS, AMSR-E, MODIS, MISR, MLS, and CloudSat) and other Earth probes to enable large-scale studies of climate change over periods of years to decades. However, moving from predominantly single-instrument studies to a multi-sensor, measurement-based model for long-duration analysis of important climate variables presents serious challenges for large-scale data mining and data fusion. For example, one might want to compare temperature and water vapor retrievals from one instrument (AIRS) to another instrument (MODIS), and to a model (ECMWF), stratify the comparisons using a classification of the cloud scenes from CloudSat, and repeat the entire analysis over years of AIRS data. To perform such an analysis, one must discover & access multiple datasets from remote sites, find the space/time matchups between instruments swaths and model grids, understand the quality flags and uncertainties for retrieved physical variables, and assemble merged datasets for further scientific and statistical analysis. To meet these large-scale challenges, we are utilizing a Grid computing and dataflow framework, named SciFlo, in which we are deploying a set of versatile and reusable operators for data query, access, subsetting, co-registration, mining, fusion, and advanced statistical analysis. SciFlo is a semantically-enabled ("smart") Grid Workflow system that ties together a peer-to-peer network of computers into an efficient engine for distributed computation. The SciFlo workflow engine enables scientists to do multi-instrument Earth Science by assembling remotely-invokable Web Services (SOAP or http GET URLs), native executables, command-line scripts, and Python codes into a distributed computing flow. A scientist visually authors the graph of operation in the VizFlow GUI, or uses a text editor to modify the simple XML workflow documents. The SciFlo client & server engines optimize the execution of such distributed workflows and allow the user to transparently find and use datasets and operators without worrying about the actual location of the Grid resources. The engine transparently moves data to the operators, and moves operators to the data (on the dozen trusted SciFlo nodes). SciFlo also deploys a variety of Data Grid services to: query datasets in space and time, locate & retrieve on-line data granules, provide on-the-fly variable and spatial subsetting, and perform pairwise instrument matchups for A-Train datasets. These services are combined into efficient workflows to assemble the desired large-scale, merged climate datasets. SciFlo is currently being applied in several large climate studies: comparisons of aerosol optical depth between MODIS, MISR, AERONET ground network, and U. Michigan's IMPACT aerosol transport model; characterization of long-term biases in microwave and infrared instruments (AIRS, MLS) by comparisons to GPS temperature retrievals accurate to 0.1 degrees Kelvin; and construction of a decade-long, multi-sensor water vapor climatology stratified by classified cloud scene by bringing together datasets from AIRS/AMSU, AMSR-E, MLS, MODIS, and CloudSat (NASA MEASUREs grant, Fetzer PI). The presentation will discuss the SciFlo technologies, their application in these distributed workflows, and the many challenges encountered in assembling and analyzing these massive datasets.

  14. Assembling Large, Multi-Sensor Climate Datasets Using the SciFlo Grid Workflow System

    NASA Astrophysics Data System (ADS)

    Wilson, B.; Manipon, G.; Xing, Z.; Fetzer, E.

    2009-04-01

    NASA's Earth Observing System (EOS) is an ambitious facility for studying global climate change. The mandate now is to combine measurements from the instruments on the "A-Train" platforms (AIRS, AMSR-E, MODIS, MISR, MLS, and CloudSat) and other Earth probes to enable large-scale studies of climate change over periods of years to decades. However, moving from predominantly single-instrument studies to a multi-sensor, measurement-based model for long-duration analysis of important climate variables presents serious challenges for large-scale data mining and data fusion. For example, one might want to compare temperature and water vapor retrievals from one instrument (AIRS) to another instrument (MODIS), and to a model (ECMWF), stratify the comparisons using a classification of the "cloud scenes" from CloudSat, and repeat the entire analysis over years of AIRS data. To perform such an analysis, one must discover & access multiple datasets from remote sites, find the space/time "matchups" between instruments swaths and model grids, understand the quality flags and uncertainties for retrieved physical variables, assemble merged datasets, and compute fused products for further scientific and statistical analysis. To meet these large-scale challenges, we are utilizing a Grid computing and dataflow framework, named SciFlo, in which we are deploying a set of versatile and reusable operators for data query, access, subsetting, co-registration, mining, fusion, and advanced statistical analysis. SciFlo is a semantically-enabled ("smart") Grid Workflow system that ties together a peer-to-peer network of computers into an efficient engine for distributed computation. The SciFlo workflow engine enables scientists to do multi-instrument Earth Science by assembling remotely-invokable Web Services (SOAP or http GET URLs), native executables, command-line scripts, and Python codes into a distributed computing flow. A scientist visually authors the graph of operation in the VizFlow GUI, or uses a text editor to modify the simple XML workflow documents. The SciFlo client & server engines optimize the execution of such distributed workflows and allow the user to transparently find and use datasets and operators without worrying about the actual location of the Grid resources. The engine transparently moves data to the operators, and moves operators to the data (on the dozen trusted SciFlo nodes). SciFlo also deploys a variety of Data Grid services to: query datasets in space and time, locate & retrieve on-line data granules, provide on-the-fly variable and spatial subsetting, perform pairwise instrument matchups for A-Train datasets, and compute fused products. These services are combined into efficient workflows to assemble the desired large-scale, merged climate datasets. SciFlo is currently being applied in several large climate studies: comparisons of aerosol optical depth between MODIS, MISR, AERONET ground network, and U. Michigan's IMPACT aerosol transport model; characterization of long-term biases in microwave and infrared instruments (AIRS, MLS) by comparisons to GPS temperature retrievals accurate to 0.1 degrees Kelvin; and construction of a decade-long, multi-sensor water vapor climatology stratified by classified cloud scene by bringing together datasets from AIRS/AMSU, AMSR-E, MLS, MODIS, and CloudSat (NASA MEASUREs grant, Fetzer PI). 
The presentation will discuss the SciFlo technologies, their application in these distributed workflows, and the many challenges encountered in assembling and analyzing these massive datasets.
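
    As a rough illustration of the space/time "matchup" step described in these SciFlo workflows, the sketch below pairs observations from two instruments when they fall within a time window and a great-circle distance threshold. The tolerances and the brute-force nearest-neighbor search are illustrative assumptions; SciFlo's actual matchup services and data formats are not shown here.

    ```python
    # Simplified matchup sketch, not SciFlo's operator: for each observation
    # from instrument A, find the closest B observation within a time window
    # and a distance threshold.
    import math
    from datetime import datetime, timedelta

    def great_circle_km(lat1, lon1, lat2, lon2):
        """Haversine distance in kilometres."""
        r = 6371.0
        p1, p2 = math.radians(lat1), math.radians(lat2)
        dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
        a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
        return 2 * r * math.asin(math.sqrt(a))

    def matchups(obs_a, obs_b, max_minutes=30, max_km=50.0):
        """obs_* are lists of (time, lat, lon) tuples; returns matched index pairs."""
        pairs = []
        for i, (ta, lata, lona) in enumerate(obs_a):
            best = None
            for j, (tb, latb, lonb) in enumerate(obs_b):
                if abs(ta - tb) > timedelta(minutes=max_minutes):
                    continue
                d = great_circle_km(lata, lona, latb, lonb)
                if d <= max_km and (best is None or d < best[1]):
                    best = (j, d)
            if best is not None:
                pairs.append((i, best[0]))
        return pairs

    a = [(datetime(2008, 7, 1, 12, 0), 34.2, -118.2)]
    b = [(datetime(2008, 7, 1, 12, 10), 34.4, -118.1),
         (datetime(2008, 7, 1, 15, 0), 34.2, -118.2)]
    print(matchups(a, b))  # -> [(0, 0)]  (second B point fails the time window)
    ```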

  15. Modeling and prototyping of biometric systems using dataflow programming

    NASA Astrophysics Data System (ADS)

    Minakova, N.; Petrov, I.

    2018-01-01

    The development of biometric systems is a labor-intensive process, so the creation and analysis of supporting approaches and techniques is an urgent task. This article presents a technique for modeling and prototyping biometric systems based on dataflow programming. The technique includes three main stages: the development of functional blocks, the creation of a dataflow graph and the generation of a prototype. A specially developed software modeling environment that implements this technique is described. As an example of the use of this technique, the implementation of an iris localization subsystem is demonstrated. A modification of dataflow programming is suggested to solve the problem of the undefined order of block activation. The main advantages of the presented technique are the ability to visually display and design the model of the biometric system, the rapid creation of a working prototype and the reuse of previously developed functional blocks.
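
    A minimal sketch of the three stages described above (functional blocks, a dataflow graph, and a generated prototype), assuming Python 3.9+ for the graphlib module; the blocks and pipeline are placeholders, not the authors' modeling environment. Executing blocks in topological order is one simple way to resolve the "undefined order of block activation" issue the article mentions.

    ```python
    # Minimal dataflow sketch: pure-function blocks wired into a graph and
    # executed in dependency order. Block names are illustrative placeholders.
    from graphlib import TopologicalSorter  # Python 3.9+

    # Functional blocks (stand-ins for real image-processing steps).
    def load_image(_):        return "raw-eye-image"
    def to_grayscale(img):    return f"gray({img})"
    def localize_iris(gray):  return f"iris-region({gray})"

    BLOCKS = {"load": load_image, "gray": to_grayscale, "iris": localize_iris}

    # Dataflow graph: each block lists the blocks whose outputs it consumes.
    GRAPH = {"load": [], "gray": ["load"], "iris": ["gray"]}

    def run(graph, blocks):
        results = {}
        for name in TopologicalSorter(graph).static_order():
            upstream = [results[d] for d in graph[name]]
            results[name] = blocks[name](upstream[0] if upstream else None)
        return results

    print(run(GRAPH, BLOCKS)["iris"])  # -> iris-region(gray(raw-eye-image))
    ```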

  16. Neural differences in the processing of semantic relationships across cultures.

    PubMed

    Gutchess, Angela H; Hedden, Trey; Ketay, Sarah; Aron, Arthur; Gabrieli, John D E

    2010-06-01

    The current study employed functional MRI to investigate the contribution of domain-general (e.g. executive functions) and domain-specific (e.g. semantic knowledge) processes to differences in semantic judgments across cultures. Previous behavioral experiments have identified cross-cultural differences in categorization, with East Asians preferring strategies involving thematic or functional relationships (e.g. cow-grass) and Americans preferring categorical relationships (e.g. cow-chicken). East Asians and American participants underwent functional imaging while alternating between categorical or thematic strategies to sort triads of words, as well as matching words on control trials. Many similarities were observed. However, across both category and relationship trials compared to match (control) trials, East Asians activated a frontal-parietal network implicated in controlled executive processes, whereas Americans engaged regions of the temporal lobes and the cingulate, possibly in response to conflict in the semantic content of information. The results suggest that cultures differ in the strategies employed to resolve conflict between competing semantic judgments.

  17. Data Type Registry - Cross Road Between Catalogs, Data And Semantics

    NASA Astrophysics Data System (ADS)

    Richard, S. M.; Zaslavsky, I.; Bristol, S.

    2017-12-01

    As more data become accessible online, the opportunity is increasing to improve search for information within datasets and for automating some levels of data integration. A prerequisite for these advances is indexing the kinds of information that are present in datasets and providing machine actionable descriptions of data structures. We are exploring approaches to enabling these capabilities in the EarthCube DigitalCrust and Data Discovery Hub Building Block projects, building on the Data type registry (DTR) workgroup activity in the Research Data Alliance. We are prototyping a registry implementation using the CNRI Cordra platform and API to enable 'deep registration' of datasets for building hydrogeologic models of the Earth's Crust, and executing complex science scenarios for river chemistry and coral bleaching data. These use cases require the ability to respond to queries such as: What are properties of Entity X; What entities include property Y (or L, M, N…), and What DataTypes are about Entity X and include property Y. Development of the registry to enable these capabilities requires more in-depth metadata than is commonly available, so we are also exploring approaches to analyzing simple tabular data to automate recognition of entities and properties, and assist users with establishing semantic mappings to data integration vocabularies. This poster will review the current capabilities and implementation of a data type registry.
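
    The three query patterns listed above can be illustrated with a toy in-memory registry. The record layout below is an assumption made only for illustration; a real Data Type Registry (for example, one built on the Cordra platform mentioned above) would hold far richer, machine-actionable type descriptions.

    ```python
    # Toy sketch of the registry queries: "properties of entity X", "types with
    # property Y", and "types about X that include Y". Records are made up.
    DATA_TYPES = {
        "WellLog":        {"entity": "Borehole",   "properties": {"depth", "temperature"}},
        "RiverChemistry": {"entity": "RiverReach", "properties": {"nitrate", "discharge"}},
        "CrustalModel":   {"entity": "Borehole",   "properties": {"depth", "density"}},
    }

    def properties_of(entity):
        return set().union(*(t["properties"] for t in DATA_TYPES.values()
                             if t["entity"] == entity))

    def types_with_property(prop):
        return [name for name, t in DATA_TYPES.items() if prop in t["properties"]]

    def types_about_with(entity, prop):
        return [name for name, t in DATA_TYPES.items()
                if t["entity"] == entity and prop in t["properties"]]

    print(properties_of("Borehole"))              # e.g. {'depth', 'temperature', 'density'}
    print(types_with_property("depth"))           # ['WellLog', 'CrustalModel']
    print(types_about_with("Borehole", "depth"))  # ['WellLog', 'CrustalModel']
    ```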

  18. Detecting and Characterizing Semantic Inconsistencies in Ported Code

    NASA Technical Reports Server (NTRS)

    Ray, Baishakhi; Kim, Miryung; Person, Suzette J.; Rungta, Neha

    2013-01-01

    Adding similar features and bug fixes often requires porting program patches from reference implementations and adapting them to target implementations. Porting errors may result from faulty adaptations or inconsistent updates. This paper investigates (1) the types of porting errors found in practice, and (2) how to detect and characterize potential porting errors. Analyzing version histories, we define five categories of porting errors, including incorrect control- and data-flow, code redundancy, inconsistent identifier renamings, etc. Leveraging this categorization, we design a static control- and data-dependence analysis technique, SPA, to detect and characterize porting inconsistencies. Our evaluation on code from four open-source projects shows that SPA can detect porting inconsistencies with 65% to 73% precision and 90% recall, and identify inconsistency types with 58% to 63% precision and 92% to 100% recall. In a comparison with two existing error detection tools, SPA improves precision by 14 to 17 percentage points.

  19. Detecting and Characterizing Semantic Inconsistencies in Ported Code

    NASA Technical Reports Server (NTRS)

    Ray, Baishakhi; Kim, Miryung; Person, Suzette; Rungta, Neha

    2013-01-01

    Adding similar features and bug fixes often requires porting program patches from reference implementations and adapting them to target implementations. Porting errors may result from faulty adaptations or inconsistent updates. This paper investigates (1) the types of porting errors found in practice, and (2) how to detect and characterize potential porting errors. Analyzing version histories, we define five categories of porting errors, including incorrect control- and data-flow, code redundancy, inconsistent identifier renamings, etc. Leveraging this categorization, we design a static control- and data-dependence analysis technique, SPA, to detect and characterize porting inconsistencies. Our evaluation on code from four open-source projects shows that SPA can detect porting inconsistencies with 65% to 73% precision and 90% recall, and identify inconsistency types with 58% to 63% precision and 92% to 100% recall. In a comparison with two existing error detection tools, SPA improves precision by 14 to 17 percentage points.
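
    As a much-simplified illustration of one error category above (inconsistent identifier renamings), the sketch below infers old-to-new name mappings from a reference patch and a ported patch and flags conflicts. It is not SPA, which relies on static control- and data-dependence analysis; all code lines and identifiers are made up.

    ```python
    # Illustrative sketch only: flag identifiers renamed differently in a
    # reference patch and a ported patch (one category of porting error).
    def renamings(old_line: str, new_line: str) -> dict:
        """Pair up tokens positionally to infer old->new identifier renamings."""
        mapping = {}
        for old_tok, new_tok in zip(old_line.split(), new_line.split()):
            if old_tok != new_tok:
                mapping[old_tok] = new_tok
        return mapping

    def inconsistent(reference: dict, ported: dict) -> list:
        """Identifiers renamed differently in the two patches."""
        return [(name, reference[name], ported[name])
                for name in reference.keys() & ported.keys()
                if reference[name] != ported[name]]

    ref = renamings("buf_len = buf_size ;", "pkt_len = buf_size ;")
    port = renamings("buf_len = buf_size ;", "frame_len = buf_size ;")
    print(inconsistent(ref, port))  # -> [('buf_len', 'pkt_len', 'frame_len')]
    ```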

  20. Semantic and Phonemic Verbal Fluency in Blinds

    ERIC Educational Resources Information Center

    Nejati, Vahid; Asadi, Anoosh

    2010-01-01

    A person who has suffered the total loss of a sensory system has, indirectly, suffered a brain lesion. Semantic and phonologic verbal fluency are used for evaluation of executive function and language. The aim of this study is evaluation and comparison of phonemic and semantic verbal fluency in acquired blinds. We compare 137 blinds and 124…

  1. The role of the right hemisphere in semantic control: A case-series comparison of right and left hemisphere stroke.

    PubMed

    Thompson, Hannah E; Henshall, Lauren; Jefferies, Elizabeth

    2016-05-01

    Semantic control processes guide conceptual retrieval so that we are able to focus on non-dominant associations and features when these are required for the task or context, yet the neural basis of semantic control is not fully understood. Neuroimaging studies have emphasised the role of left inferior frontal gyrus (IFG) in controlled retrieval, while neuropsychological investigations of semantic control deficits have almost exclusively focussed on patients with left-sided damage (e.g., patients with semantic aphasia, SA). Nevertheless, activation in fMRI during demanding semantic tasks typically extends to right IFG. To investigate the role of the right hemisphere (RH) in semantic control, we compared nine RH stroke patients with 21 left-hemisphere SA patients, 11 mild SA cases and 12 healthy, aged-matched controls on semantic and executive tasks, plus experimental tasks that manipulated semantic control in paradigms particularly sensitive to RH damage. RH patients had executive deficits to parallel SA patients but they performed well on standard semantic tests. Nevertheless, multimodal semantic control deficits were found in experimental tasks involving facial emotions and the 'summation' of meaning across multiple items. On these tasks, RH patients showed effects similar to those in SA cases - multimodal deficits that were sensitive to distractor strength and cues and miscues, plus increasingly poor performance in cyclical matching tasks which repeatedly probed the same set of concepts. Thus, despite striking differences in single-item comprehension, evidence presented here suggests semantic control is bilateral, and disruption of this component of semantic cognition can be seen following damage to either hemisphere. Copyright © 2016 The Authors. Published by Elsevier Ltd.. All rights reserved.

  2. UBioLab: a web-laboratory for ubiquitous in-silico experiments.

    PubMed

    Bartocci, Ezio; Cacciagrano, Diletta; Di Berardini, Maria Rita; Merelli, Emanuela; Vito, Leonardo

    2012-07-09

    The huge and dynamic amount of bioinformatic resources (e.g., data and tools) available nowadays on the Internet represents a big challenge for biologists – for what concerns their management and visualization – and for bioinformaticians – for what concerns the possibility of rapidly creating and executing in-silico experiments involving resources and activities spread over the WWW hyperspace. Any framework aiming at integrating such resources as in a physical laboratory has imperatively to tackle – and possibly to handle in a transparent and uniform way – aspects concerning physical distribution, semantic heterogeneity, co-existence of different computational paradigms and, as a consequence, of different invocation interfaces (i.e., OGSA for Grid nodes, SOAP for Web Services, Java RMI for Java objects, etc.). The UBioLab framework has been designed and developed as a prototype following the above objective. Several architectural features – such as being fully Web-based and combining domain ontologies, Semantic Web and workflow techniques – give evidence of an effort in this direction. The integration of a semantic knowledge management system for distributed (bioinformatic) resources, a semantic-driven graphic environment for defining and monitoring ubiquitous workflows and an intelligent agent-based technology for their distributed execution allows UBioLab to serve as a semantic guide for bioinformaticians and biologists, providing (i) a flexible environment for visualizing, organizing and inferring any (semantic and computational) "type" of domain knowledge (e.g., resources and activities, expressed in a declarative form), (ii) a powerful engine for defining and storing semantic-driven ubiquitous in-silico experiments on the domain hyperspace, as well as (iii) a transparent, automatic and distributed environment for correct experiment executions.

  3. AthenaMT: upgrading the ATLAS software framework for the many-core world with multi-threading

    NASA Astrophysics Data System (ADS)

    Leggett, Charles; Baines, John; Bold, Tomasz; Calafiura, Paolo; Farrell, Steven; van Gemmeren, Peter; Malon, David; Ritsch, Elmar; Stewart, Graeme; Snyder, Scott; Tsulaia, Vakhtang; Wynne, Benjamin; ATLAS Collaboration

    2017-10-01

    ATLAS’s current software framework, Gaudi/Athena, has been very successful for the experiment in LHC Runs 1 and 2. However, its single threaded design has been recognized for some time to be increasingly problematic as CPUs have increased core counts and decreased available memory per core. Even the multi-process version of Athena, AthenaMP, will not scale to the range of architectures we expect to use beyond Run2. After concluding a rigorous requirements phase, where many design components were examined in detail, ATLAS has begun the migration to a new data-flow driven, multi-threaded framework, which enables the simultaneous processing of singleton, thread unsafe legacy Algorithms, cloned Algorithms that execute concurrently in their own threads with different Event contexts, and fully re-entrant, thread safe Algorithms. In this paper we report on the process of modifying the framework to safely process multiple concurrent events in different threads, which entails significant changes in the underlying handling of features such as event and time dependent data, asynchronous callbacks, metadata, integration with the online High Level Trigger for partial processing in certain regions of interest, concurrent I/O, as well as ensuring thread safety of core services. We also report on upgrading the framework to handle Algorithms that are fully re-entrant.
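
    The distinction drawn above between cloned, thread-unsafe legacy algorithms and fully re-entrant ones can be sketched outside the Gaudi/Athena framework. The toy below, transposed to Python purely for illustration and making no assumptions about the real C++ framework, gives each worker thread its own clone of a stateful algorithm while sharing a single re-entrant function across all threads.

    ```python
    # Toy illustration only (AthenaMT itself is C++/Gaudi): a stateful algorithm
    # is cloned once per worker thread, while a re-entrant algorithm is a pure
    # function shared safely by all threads.
    from concurrent.futures import ThreadPoolExecutor
    import threading

    class StatefulAlgo:
        """Not thread safe: keeps per-event state, so each worker needs its own clone."""
        def __init__(self):
            self.last_event = None
        def execute(self, event):
            self.last_event = event          # mutable state -> cannot be shared
            return ("stateful", event, threading.get_ident())

    def reentrant_algo(event):
        """Re-entrant: depends only on its argument, safe to call from any thread."""
        return ("reentrant", event, threading.get_ident())

    worker_state = threading.local()         # holds each worker thread's clone

    def process(event):
        if not hasattr(worker_state, "algo"):
            worker_state.algo = StatefulAlgo()   # lazily create one clone per thread
        return worker_state.algo.execute(event), reentrant_algo(event)

    with ThreadPoolExecutor(max_workers=4) as pool:
        for stateful_result, reentrant_result in pool.map(process, range(8)):
            print(stateful_result, reentrant_result)
    ```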

  4. Parceling the Power.

    ERIC Educational Resources Information Center

    Hiatt, Blanchard; Gwynne, Peter

    1984-01-01

    To make computing power broadly available and truly friendly, both soft and hard meshing and synchronization problems will have to be solved. Possible solutions and research related to these problems are discussed. Topics considered include compilers, parallelism, networks, distributed sensors, dataflow, CEDAR system (using dataflow principles),…

  5. Typing pictures: Linguistic processing cascades into finger movements.

    PubMed

    Scaltritti, Michele; Arfé, Barbara; Torrance, Mark; Peressotti, Francesca

    2016-11-01

    The present study investigated the effect of psycholinguistic variables on measures of response latency and mean interkeystroke interval in a typewritten picture naming task, with the aim to outline the functional organization of the stages of cognitive processing and response execution associated with typewritten word production. Onset latencies were modulated by lexical and semantic variables traditionally linked to lexical retrieval, such as word frequency, age of acquisition, and naming agreement. Orthographic variables, both at the lexical and sublexical level, appear to influence just within-word interkeystroke intervals, suggesting that orthographic information may play a relevant role in controlling actual response execution. Lexical-semantic variables also influenced speed of execution. This points towards cascaded flow of activation between stages of lexical access and response execution. Copyright © 2016 Elsevier B.V. All rights reserved.

  6. A three-level atomicity model for decentralized workflow management systems

    NASA Astrophysics Data System (ADS)

    Ben-Shaul, Israel Z.; Heineman, George T.

    1996-12-01

    A workflow management system (WFMS) employs a workflow manager (WM) to execute and automate the various activities within a workflow. To protect the consistency of data, the WM encapsulates each activity with a transaction; a transaction manager (TM) then guarantees the atomicity of activities. Since workflows often group several activities together, the TM is responsible for guaranteeing the atomicity of these units. There are scalability issues, however, with centralized WFMSs. Decentralized WFMSs provide an architecture for multiple autonomous WFMSs to interoperate, thus accommodating multiple workflows and geographically-dispersed teams. When atomic units are composed of activities spread across multiple WFMSs, however, there is a conflict between global atomicity and local autonomy of each WFMS. This paper describes a decentralized atomicity model that enables workflow administrators to specify the scope of multi-site atomicity based upon the desired semantics of multi-site tasks in the decentralized WFMS. We describe an architecture that realizes our model and execution paradigm.

  7. Is semantic verbal fluency impairment explained by executive function deficits in schizophrenia?

    PubMed

    Berberian, Arthur A; Moraes, Giovanna V; Gadelha, Ary; Brietzke, Elisa; Fonseca, Ana O; Scarpato, Bruno S; Vicente, Marcella O; Seabra, Alessandra G; Bressan, Rodrigo A; Lacerda, Acioly L

    2016-04-19

    To investigate if verbal fluency impairment in schizophrenia reflects executive function deficits or results from degraded semantic store or inefficient search and retrieval strategies. Two groups were compared: 141 individuals with schizophrenia and 119 healthy age and education-matched controls. Both groups performed semantic and phonetic verbal fluency tasks. Performance was evaluated using three scores, based on 1) number of words generated; 2) number of clustered/related words; and 3) switching score. A fourth performance score based on the number of clusters was also measured. SZ individuals produced fewer words than controls. After controlling for the total number of words produced, a difference was observed between the groups in the number of cluster-related words generated in the semantic task. In both groups, the number of words generated in the semantic task was higher than that generated in the phonemic task, although a significant group vs. fluency type interaction showed that subjects with schizophrenia had disproportionate semantic fluency impairment. Working memory was positively associated with increased production of words within clusters and inversely correlated with switching. Semantic fluency impairment may be attributed to an inability (resulting from reduced cognitive control) to distinguish target signal from competing noise and to maintain cues for production of memory probes.

  8. FX-87 performance measurements: data-flow implementation. Technical report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hammel, R.T.; Gifford, D.K.

    1988-11-01

    This report documents a series of experiments performed to explore the thesis that the FX-87 effect system permits a compiler to schedule imperative programs (i.e., programs that may contain side-effects) for execution on a parallel computer. The authors analyze how much the FX-87 static effect system can improve the execution times of five benchmark programs on a parallel graph interpreter. Three of their benchmark programs do not use side-effects (factorial, fibonacci, and polynomial division) and thus did not have any effect-induced constraints. Their FX-87 performance was comparable to their performance in a purely functional language. Two of the benchmark programs use side effects (DNA sequence matching and Scheme interpretation) and the compiler was able to use effect information to reduce their execution times by factors of 1.7 to 5.4 when compared with sequential execution times. These results support the thesis that a static effect system is a powerful tool for compilation to multiprocessor computers. However, the graph interpreter we used was based on unrealistic assumptions, and thus our results may not accurately reflect the performance of a practical FX-87 implementation. The results also suggest that conventional loop analysis would complement the FX-87 effect system.

  9. A framework for graph-based synthesis, analysis, and visualization of HPC cluster job data.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mayo, Jackson R.; Kegelmeyer, W. Philip, Jr.; Wong, Matthew H.

    The monitoring and system analysis of high performance computing (HPC) clusters is of increasing importance to the HPC community. Analysis of HPC job data can be used to characterize system usage and diagnose and examine failure modes and their effects. This analysis is not straightforward, however, due to the complex relationships that exist between jobs. These relationships are based on a number of factors, including shared compute nodes between jobs, proximity of jobs in time, etc. Graph-based techniques represent an approach that is particularly well suited to this problem, and provide an effective technique for discovering important relationships in job queuing and execution data. The efficacy of these techniques is rooted in the use of a semantic graph as a knowledge representation tool. In a semantic graph, job data, represented in a combination of numerical and textual forms, can be flexibly processed into edges, with corresponding weights, expressing relationships between jobs, nodes, users, and other relevant entities. This graph-based representation permits formal manipulation by a number of analysis algorithms. This report presents a methodology and software implementation that leverages semantic graph-based techniques for the system-level monitoring and analysis of HPC clusters based on job queuing and execution data. Ontology development and graph synthesis is discussed with respect to the domain of HPC job data. The framework developed automates the synthesis of graphs from a database of job information. It also provides a front end, enabling visualization of the synthesized graphs. Additionally, an analysis engine is incorporated that provides performance analysis, graph-based clustering, and failure prediction capabilities for HPC systems.
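
    A small sketch of the graph-synthesis step, assuming the networkx library: job records become attributed nodes and shared compute nodes become weighted edges, which can then be queried for failure analysis. The job data and the single relationship type are illustrative; the report's framework derives its edge types from an ontology of the HPC job domain.

    ```python
    # Illustrative sketch: build a weighted graph whose edges express
    # "these two jobs shared compute nodes", then query it. Data is made up.
    import networkx as nx

    jobs = {
        "job1": {"user": "alice", "nodes": {"n01", "n02"}, "state": "FAILED"},
        "job2": {"user": "bob",   "nodes": {"n02", "n03"}, "state": "COMPLETED"},
        "job3": {"user": "alice", "nodes": {"n07"},        "state": "COMPLETED"},
    }

    g = nx.Graph()
    for name, record in jobs.items():
        g.add_node(name, **record)

    # Edge weight = number of compute nodes two jobs have in common.
    names = list(jobs)
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            shared = jobs[a]["nodes"] & jobs[b]["nodes"]
            if shared:
                g.add_edge(a, b, weight=len(shared), shared=sorted(shared))

    # Analysis pass: which jobs touched hardware also used by a failed job?
    for failed in (n for n, d in g.nodes(data=True) if d["state"] == "FAILED"):
        print(failed, "->", list(g.neighbors(failed)))   # job1 -> ['job2']
    ```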

  10. Weaknesses in Lexical-Semantic Knowledge among College Students with Specific Learning Disabilities: Evidence from a Semantic Fluency Task

    ERIC Educational Resources Information Center

    Hall, Jessica; McGregor, Karla K.; Oleson, Jacob

    2017-01-01

    Purpose: The purpose of this study is to determine whether deficits in executive function and lexical-semantic memory compromise the linguistic performance of young adults with specific learning disabilities (LD) enrolled in postsecondary studies. Method: One hundred eighty-five students with LD (n = 53) or normal language development (ND, n =…

  11. Assessment of cognition in early dementia

    PubMed Central

    Silverberg, Nina B.; Ryan, Laurie M.; Carrillo, Maria C.; Sperling, Reisa; Petersen, Ronald C.; Posner, Holly B.; Snyder, Peter J.; Hilsabeck, Robin; Gallagher, Michela; Raber, Jacob; Rizzo, Albert; Possin, Katherine; King, Jonathan; Kaye, Jeffrey; Ott, Brian R.; Albert, Marilyn S.; Wagster, Molly V.; Schinka, John A.; Cullum, C. Munro; Farias, Sarah T.; Balota, David; Rao, Stephen; Loewenstein, David; Budson, Andrew E.; Brandt, Jason; Manly, Jennifer J.; Barnes, Lisa; Strutt, Adriana; Gollan, Tamar H.; Ganguli, Mary; Babcock, Debra; Litvan, Irene; Kramer, Joel H.; Ferman, Tanis J.

    2012-01-01

    Better tools for assessing cognitive impairment in the early stages of Alzheimer’s disease (AD) are required to enable diagnosis of the disease before substantial neurodegeneration has taken place and to allow detection of subtle changes in the early stages of progression of the disease. The National Institute on Aging and the Alzheimer’s Association convened a meeting to discuss state of the art methods for cognitive assessment, including computerized batteries, as well as new approaches in the pipeline. Speakers described research using novel tests of object recognition, spatial navigation, attentional control, semantic memory, semantic interference, prospective memory, false memory and executive function as among the tools that could provide earlier identification of individuals with AD. In addition to early detection, there is a need for assessments that reflect real-world situations in order to better assess functional disability. It is especially important to develop assessment tools that are useful in ethnically, culturally and linguistically diverse populations as well as in individuals with neurodegenerative disease other than AD. PMID:23559893

  12. Dataflow Architectures.

    DTIC Science & Technology

    1986-02-12

    of Electrical Engineering and Computer Science, MIT, Cambridge, MA, June 1983. 33. Hiraki, K., K. Nishida and T. Shimada. "Evaluation of Associative...J. R. Gurd. "A Practical Dataflow Computer". Computer 15, 2 (February 1982), 51-57. 50. Yuba, T., T. Shimada, K. Hiraki, and H. Kashiwagi. Sigma-1: A

  13. Observing health professionals' workflow patterns for diabetes care - First steps towards an ontology for EHR services.

    PubMed

    Schweitzer, M; Lasierra, N; Hoerbst, A

    2015-01-01

    Increasing the flexibility from a user perspective and enabling workflow-based interaction facilitates easy, user-friendly utilization of EHRs in healthcare professionals' daily work. To offer such versatile EHR functionality, our approach is based on the execution of clinical workflows by means of a composition of semantic web services. The backbone of such an architecture is an ontology which enables the representation of clinical workflows and facilitates the selection of suitable services. In this paper we present the methods and results of observations of routine diabetes consultations, which were conducted in order to identify those workflows and the relations among the included tasks. The identified workflows were first modeled in BPMN and then generalized. As a next step in our study, interviews will be conducted with clinical personnel to validate the modeled workflows.

  14. A Nonlinear Model for Interactive Data Analysis and Visualization and an Implementation Using Progressive Computation for Massive Remote Climate Data Ensembles

    NASA Astrophysics Data System (ADS)

    Christensen, C.; Liu, S.; Scorzelli, G.; Lee, J. W.; Bremer, P. T.; Summa, B.; Pascucci, V.

    2017-12-01

    The creation, distribution, analysis, and visualization of large spatiotemporal datasets is a growing challenge for the study of climate and weather phenomena in which increasingly massive domains are utilized to resolve finer features, resulting in datasets that are simply too large to be effectively shared. Existing workflows typically consist of pipelines of independent processes that preclude many possible optimizations. As data sizes increase, these pipelines are difficult or impossible to execute interactively and instead simply run as large offline batch processes. Rather than limiting our conceptualization of such systems to pipelines (or dataflows), we propose a new model for interactive data analysis and visualization systems in which we comprehensively consider the processes involved from data inception through analysis and visualization in order to describe systems composed of these processes in a manner that facilitates interactive implementations of the entire system rather than of only a particular component. We demonstrate the application of this new model with the implementation of an interactive system that supports progressive execution of arbitrary user scripts for the analysis and visualization of massive, disparately located climate data ensembles. It is currently in operation as part of the Earth System Grid Federation server running at Lawrence Livermore National Lab, and accessible through both web-based and desktop clients. Our system facilitates interactive analysis and visualization of massive remote datasets up to petabytes in size, such as the 3.5 PB 7km NASA GEOS-5 Nature Run simulation, previously only possible offline or at reduced resolution. To support the community, we have enabled general distribution of our application using public frameworks including Docker and Anaconda.
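
    The progressive-computation idea can be sketched in a few lines, assuming NumPy: a statistic is first estimated from a coarse stride through the data and then refined, so the user sees an approximate answer quickly and the exact one eventually. This only illustrates the principle; the system described above streams multiresolution chunks of remote climate data rather than striding a local array.

    ```python
    # Toy sketch of progressive computation: coarse-to-fine refinement of a
    # statistic over a large array standing in for a massive remote dataset.
    import numpy as np

    def progressive_mean(data, strides=(64, 16, 4, 1)):
        """Yield successively refined mean estimates of `data`."""
        for stride in strides:
            yield stride, float(data[::stride].mean())

    rng = np.random.default_rng(0)
    field = rng.normal(loc=288.0, scale=5.0, size=1_000_000)  # stand-in data

    for stride, estimate in progressive_mean(field):
        print(f"stride {stride:>3}: mean ~= {estimate:.4f}")
    # The last pass (stride 1) is the exact mean over all samples.
    ```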

  15. Managing Parallelism and Resources in Scientific Dataflow Programs

    DTIC Science & Technology

    1990-03-01

    1983. [52] K. Hiraki, K. Nishida, S. Sekiguchi, and T. Shimada. Maintenance architecture and its LSI implementation of a dataflow computer with a... Hiraki, and K. Nishida. An architecture of a data flow machine and its evaluation. In Proceedings of CompCon 84, pages 486-490. IEEE, 1984. [84] N

  16. Semantic web for integrated network analysis in biomedicine.

    PubMed

    Chen, Huajun; Ding, Li; Wu, Zhaohui; Yu, Tong; Dhanapalan, Lavanya; Chen, Jake Y

    2009-03-01

    The Semantic Web technology enables integration of heterogeneous data on the World Wide Web by making the semantics of data explicit through formal ontologies. In this article, we survey the feasibility and state of the art of utilizing the Semantic Web technology to represent, integrate and analyze the knowledge in various biomedical networks. We introduce a new conceptual framework, semantic graph mining, to enable researchers to integrate graph mining with ontology reasoning in network data analysis. Through four case studies, we demonstrate how semantic graph mining can be applied to the analysis of disease-causal genes, Gene Ontology category cross-talks, drug efficacy analysis and herb-drug interactions analysis.

  17. The modulatory influence of the functional COMT Val158Met polymorphism on lexical decisions and semantic priming.

    PubMed

    Reuter, Martin; Montag, Christian; Peters, Kristina; Kocher, Anne; Kiefer, Markus

    2009-01-01

    The role of the prefrontal cortex (PFC) in higher cognitive functions - including working memory, conflict resolution, set shifting and semantic processing - has been demonstrated unequivocally. Despite the great heterogeneity among tasks measuring these phenotypes, due in part to the different cognitive sub-processes implied and the specificity of the stimulus material used, there is agreement that all of these tasks recruit an executive control system located in the PFC. On a biochemical level it is known that the dopaminergic system plays an important role in executive control functions. Evidence comes from molecular genetics relating the functional COMT Val158Met polymorphism to working memory and set shifting. In order to determine whether this pattern of findings generalises to linguistic and semantic processing, we investigated the effects of the COMT Val158Met polymorphism on lexical decision making using masked and unmasked versions of the semantic priming paradigm in N = 104 healthy subjects. Although we observed strong priming effects in all conditions (masked priming, unmasked priming with short/long stimulus onset asynchronies (SOAs), direct and indirect priming), COMT was not significantly related to priming, suggesting no reliable influence on semantic processing. However, COMT Val158Met was strongly associated with lexical decision latencies in all priming conditions when considered separately, explaining between 9 and 14.5% of the variance. Therefore, the findings indicate that COMT mainly influences more general executive control functions in the PFC supporting the speed of lexical decisions.

  18. The influence of the parents' educational level on the development of executive functions.

    PubMed

    Ardila, Alfredo; Rosselli, Monica; Matute, Esmeralda; Guajardo, Soledad

    2005-01-01

    Information about the influence of educational variables on the development of executive functions is limited. The aim of this study was to analyze the relation of the parents' educational level and the type of school the child attended (private or public school) to children's executive functioning test performance. Six hundred twenty-two participants, ages 5 to 14 years (276 boys, 346 girls), were selected from Colombia and Mexico and grouped according to three variables: age (5-6, 7-8, 9-10, 11-12, and 13-14 years), gender (boys and girls), and school type (private and public). Eight executive functioning tests taken from the Evaluacion Neuropsicologica Infantil (Matute, Rosselli, Ardila, & Ostrosky, in press) were individually administered: Semantic Verbal Fluency, Phonemic Verbal Fluency, Semantic Graphic Fluency, Nonsemantic Graphic Fluency, Matrices, Similarities, Card Sorting, and the Mexican Pyramid. There was a significant effect of age on all the test scores and a significant effect of type of school attended on all but the Semantic Verbal Fluency and Nonsemantic Graphic Fluency tests. Most children's test scores, particularly verbal test scores, significantly correlated with parents' educational level. Our results suggest that the differences in test scores between the public and private school children depended on some conditions existing outside the school, such as the parents' level of education. Implications of these findings for the understanding of the influence of environmental factors on the development of executive functions are presented.

  19. Modelling Metamorphism by Abstract Interpretation

    NASA Astrophysics Data System (ADS)

    Dalla Preda, Mila; Giacobazzi, Roberto; Debray, Saumya; Coogan, Kevin; Townsend, Gregg M.

    Metamorphic malware apply semantics-preserving transformations to their own code in order to foil detection systems based on signature matching. In this paper we consider the problem of automatically extracting metamorphic signatures from such malware. We introduce a semantics for self-modifying code, called phase semantics, and prove its correctness by showing that it is an abstract interpretation of the standard trace semantics. Phase semantics precisely models the behavior of metamorphic code by providing a set of traces of programs which correspond to the possible evolutions of the metamorphic code during execution. We show that metamorphic signatures can be automatically extracted by abstract interpretation of the phase semantics, and that regular metamorphism can be modelled as a finite-state-automata abstraction of the phase semantics.

  20. Creating personalised clinical pathways by semantic interoperability with electronic health records.

    PubMed

    Wang, Hua-Qiong; Li, Jing-Song; Zhang, Yi-Fan; Suzuki, Muneou; Araki, Kenji

    2013-06-01

    There is a growing realisation that clinical pathways (CPs) are vital for improving the treatment quality of healthcare organisations. However, treatment personalisation is one of the main challenges when implementing CPs, and their inadequate dynamic adaptability restricts the practicality of CPs. The purpose of this study is to improve the practicality of CPs using semantic interoperability between knowledge-based CPs and semantic electronic health records (EHRs). The Simple Protocol and Resource Description Framework Query Language (SPARQL) is used to gather patient information from semantic EHRs. The gathered patient information is entered into the CP ontology, which is represented in the Web Ontology Language (OWL). Then, after reasoning over rules described in the Semantic Web Rule Language (SWRL) within the Jena semantic framework, we adjust the standardised CPs to meet different patients' practical needs. A CP for acute appendicitis is used as an example to illustrate how to achieve CP customisation based on the semantic interoperability between knowledge-based CPs and semantic EHRs. A personalised care plan is generated by comprehensively analysing the patient's personal allergy history and past medical history, which are stored in semantic EHRs. Additionally, by monitoring the patient's clinical information, an exception is recorded and handled during CP execution. According to the execution results of the actual example, the solutions we present are shown to be technically feasible. This study contributes towards improving the clinical personalised practicality of standardised CPs. In addition, this study establishes the foundation for future work on the research and development of an independent CP system. Copyright © 2013 Elsevier B.V. All rights reserved.
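
    As a minimal sketch of the data-gathering and customisation steps described above (the ontology terms and the pathway are hypothetical, and the study itself uses Jena, OWL and SWRL rather than the Python tooling shown here), an allergy history could be pulled from a semantic EHR graph with a SPARQL query and then used to adjust one pathway step:

    ```python
    from rdflib import Graph, Namespace
    from rdflib.namespace import RDF

    EHR = Namespace("http://example.org/ehr#")   # hypothetical EHR vocabulary
    g = Graph()
    g.add((EHR.patient42, RDF.type, EHR.Patient))
    g.add((EHR.patient42, EHR.hasAllergy, EHR.Penicillin))

    # Gather patient information from the semantic EHR (analogous to the
    # SPARQL step in the paper).
    allergies = {
        row.allergy
        for row in g.query(
            "SELECT ?allergy WHERE { ?p a ehr:Patient ; ehr:hasAllergy ?allergy }",
            initNs={"ehr": EHR},
        )
    }

    # Apply a simple customisation rule: replace the standard prophylactic
    # antibiotic order if the patient is allergic to penicillin.
    plan = ["admission", "prophylactic_penicillin", "appendectomy", "discharge"]
    if EHR.Penicillin in allergies:
        plan = [step for step in plan if step != "prophylactic_penicillin"]
        plan.insert(1, "alternative_antibiotic")

    print(plan)
    ```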

  1. Semantic Service Design for Collaborative Business Processes in Internetworked Enterprises

    NASA Astrophysics Data System (ADS)

    Bianchini, Devis; Cappiello, Cinzia; de Antonellis, Valeria; Pernici, Barbara

    Modern collaborating enterprises can be seen as borderless organizations whose processes are dynamically transformed and integrated with those of their partners (Internetworked Enterprises, IE), thus enabling the design of collaborative business processes. The adoption of Semantic Web and service-oriented technologies for implementing collaboration in such distributed and heterogeneous environments promises significant benefits. IE can model their own processes independently by using the Software as a Service (SaaS) paradigm. Each enterprise maintains a catalog of available services, and these can be shared across IE and reused to build up complex collaborative processes. Moreover, each enterprise can adopt its own terminology and concepts to describe business processes and component services. This creates a requirement to manage semantic heterogeneity in process descriptions which are distributed across different enterprise systems. To enable effective service-based collaboration, IEs have to standardize their process descriptions and model them through component services using the same approach and principles. For enabling collaborative business processes across IE, services should be designed following a homogeneous approach, possibly maintaining a uniform level of granularity. In the paper we propose an ontology-based semantic modeling approach designed to enrich and reconcile the semantics of process descriptions, to facilitate process knowledge management and to enable semantic service design (by discovery, reuse and integration of process elements/constructs). The approach brings together Semantic Web technologies, techniques in process modeling, ontology building and semantic matching in order to provide a comprehensive semantic modeling framework.

  2. Semantics of data and service registration to advance interdisciplinary information and data access.

    NASA Astrophysics Data System (ADS)

    Fox, P. P.; McGuinness, D. L.; Raskin, R.; Sinha, A. K.

    2008-12-01

    In developing an application of semantic web methods and technologies to address the integration of heterogeneous and interdisciplinary earth-science datasets, we have developed methodologies for creating rich semantic descriptions (ontologies) of the application domains. We have leveraged and, where possible, extended existing ontology frameworks such as SWEET. As a result of this semantic approach, we have also utilized ontologic descriptions of key enabling elements of the application, such as the registration of datasets with ontologies at several levels of granularity. This has enabled the location and usage of the data across disciplines. We are also realizing the need to develop similar semantic registration of web service data holdings as well as those provided with community and/or standard markup languages (e.g. GeoSciML). This level of semantic enablement, extending beyond domain terms and relations, significantly enhances our ability to provide a coherent semantic data framework for data and information systems. Much of this work is on the frontier of technology development, and we will present the current and near-future capabilities we are developing. This work arises from the Semantically-Enabled Science Data Integration (SESDI) project, a NASA/ESTO/ACCESS-funded project involving the High Altitude Observatory at the National Center for Atmospheric Research (NCAR), McGuinness Associates Consulting, NASA/JPL and Virginia Polytechnic University.

  3. Enabling complex queries to drug information sources through functional composition.

    PubMed

    Peters, Lee; Mortensen, Jonathan; Nguyen, Thang; Bodenreider, Olivier

    2013-01-01

    Our objective was to enable an end-user to create complex queries to drug information sources through functional composition, by creating sequences of functions from application program interfaces (APIs) to drug terminologies. The development of a functional composition model seeks to link functions from two distinct APIs. An ontology was developed using Protégé to model the functions of the RxNorm and NDF-RT APIs by describing the semantics of their input and output. A set of rules was developed to define the interoperable conditions for functional composition. The operational definition of interoperability between function pairs is established by executing the rules on the ontology. We illustrate that the functional composition model supports common use cases, including checking interactions for RxNorm drugs and deploying allergy lists defined in reference to drug properties in NDF-RT. This model supports the RxMix application (http://mor.nlm.nih.gov/RxMix/), an application we developed for enabling complex queries to the RxNorm and NDF-RT APIs.
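
    A rough sketch of the interoperability idea (the function descriptions and return values below are hypothetical stand-ins, not the actual RxNorm/NDF-RT API signatures or the RxMix implementation): each API function is annotated with the semantic type of its input and output, and two functions are composable when the output type of the first matches the input type of the second.

    ```python
    from dataclasses import dataclass
    from typing import Callable

    @dataclass
    class ApiFunction:
        name: str
        input_type: str        # semantic type consumed, e.g. "DrugName"
        output_type: str       # semantic type produced, e.g. "RxCUI"
        call: Callable

    # Hypothetical stand-ins for terminology API calls.
    find_rxcui = ApiFunction("findRxcuiByName", "DrugName", "RxCUI",
                             lambda name: {"aspirin": "1191"}.get(name.lower()))
    get_properties = ApiFunction("getDrugProperties", "RxCUI", "DrugProperties",
                                 lambda rxcui: {"rxcui": rxcui, "tty": "IN"})

    def composable(f: ApiFunction, g: ApiFunction) -> bool:
        """Interoperability condition: f's output feeds g's input."""
        return f.output_type == g.input_type

    def compose(f: ApiFunction, g: ApiFunction):
        if not composable(f, g):
            raise TypeError(f"{f.name} -> {g.name} is not a valid composition")
        return lambda x: g.call(f.call(x))

    query = compose(find_rxcui, get_properties)
    print(query("Aspirin"))   # {'rxcui': '1191', 'tty': 'IN'}
    ```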

  4. The impact of lexical-semantic impairment and of executive dysfunction on the word reading performance of patients with probable Alzheimer dementia.

    PubMed

    Colombo, Lucia; Fonti, Cristina; Cappa, Stefano

    2004-01-01

    The influence of lexical-semantic impairment and of executive dysfunction on word naming performance was investigated in a group of patients with probable Alzheimer dementia (AD). The patients, who varied in the severity of the illness, were tested in a word naming task where they had to read aloud Italian three-syllable words with a dominant or subordinate stress pattern. These types of words have been shown to interact with frequency in normal adults [J. Exp. Psychol.: Hum. Percept. Perform. 18 (4) (1992) 987], so that the effect of the subordinate stress pattern (slower reading times) is only apparent for low frequency words. The frequency and stress effects on accuracy increased across dementia severity levels. Regression analyses showed that the impairment in reading low frequency words with subordinate stress depended largely on the level of lexical-semantic impairment, measured by a test of semantic memory and comprehension. Implications for the current reading models are discussed.

  5. A Disorder of Executive Function and Its Role in Language Processing

    PubMed Central

    Martin, Randi C.; Allen, Corinne M.

    2014-01-01

    R. Martin and colleagues have proposed separate stores for the maintenance of phonological and semantic information in short-term memory. Evidence from patients with aphasia has shown that damage to these separable buffers has specific consequences for language comprehension and production, suggesting an interdependence between language and memory systems. This article discusses recent research on aphasic patients with limited-capacity short-term memories (STMs) and reviews evidence suggesting that deficits in retaining semantic information in STM may be caused by a disorder in the executive control process of inhibition, specific to verbal representations. In contrast, a phonological STM deficit may be due to overly rapid decay. In semantic STM deficits, it is hypothesized that the inhibitory deficit produces difficulty inhibiting irrelevant verbal representations, which may lead to excessive interference. In turn, the excessive interference associated with semantic STM deficits has implications for single-word and sentence processing, and it may be the source of the reduced STM capacity shown by these patients. PMID:18720317

  6. The effect of working memory load on semantic illusions: what the phonological loop and central executive have to contribute.

    PubMed

    Büttner, Anke Caroline

    2012-01-01

    When asked how many animals of each kind Moses took on the Ark, most people respond with "two" despite the substituted name (Moses for Noah) in the question. Possible explanations for semantic illusions appear to be related to processing limitations such as those of working memory. Indeed, individual working memory capacity has an impact upon how sentences containing substitutions are processed. This experiment examined further the role of working memory in the occurrence of semantic illusions using a dual-task working memory load approach. Participants verified statements while engaging in either articulatory suppression or random number generation. Secondary task type had a significant effect on semantic illusion rate, but only when comparing the control condition to the two dual-task conditions. Furthermore, secondary task performance in the random number generation condition declined, suggesting a tradeoff between tasks. Response time analyses also showed a different pattern of processing across the conditions. The findings suggest that the phonological loop plays a role in representing semantic illusion sentences coherently and in monitoring for details, while the role of the central executive is to assist gist-processing of sentences. This usually efficient strategy leads to error in the case of semantic illusions.

  7. ATLAS DataFlow Infrastructure: Recent results from ATLAS cosmic and first-beam data-taking

    NASA Astrophysics Data System (ADS)

    Vandelli, Wainer; ATLAS TDAQ Collaboration

    2010-04-01

    The ATLAS DataFlow infrastructure is responsible for the collection and conveyance of event data from the detector front-end electronics to the mass storage. Several optimized and multi-threaded applications fulfill this purpose, operating over a multi-stage Gigabit Ethernet network which is the backbone of the ATLAS Trigger and Data Acquisition System. The system must be able to transport event data efficiently and with high reliability, while providing aggregated bandwidths larger than 5 GByte/s and coping with many thousands of network connections. Routing and streaming capabilities as well as monitoring and data-accounting functionalities are also fundamental requirements. During 2008, a few months of ATLAS cosmic data-taking and the first experience with the LHC beams provided an unprecedented test-bed for the evaluation of the performance of the ATLAS DataFlow in terms of functionality, robustness and stability. In addition, operating the system far from its design specifications helped in exercising its flexibility and contributed to understanding its limitations. Moreover, the integration with the detector and the interfacing with the off-line data processing and management were also able to take advantage of this extended data-taking period. In this paper we report on the usage of the DataFlow infrastructure during the ATLAS data-taking. These results, backed up by complementary performance tests, validate the architecture of the ATLAS DataFlow and prove that the system is robust, flexible and scalable enough to cope with the final requirements of the ATLAS experiment.

  8. The role of executive functions and theory of mind in children's prosocial lie-telling.

    PubMed

    Williams, Shanna; Moore, Kelsey; Crossman, Angela M; Talwar, Victoria

    2016-01-01

    Children's prosocial lying was examined in relation to executive functioning skills and theory of mind development. Prosocial lying was observed using a disappointing gift paradigm. Of the 79 children (ages 6-12 years) who completed the disappointing gift paradigm, 47 (59.5%) told a prosocial lie to a research assistant about liking their prize. In addition, of those children who told prosocial lies, 25 (53.2%) maintained semantic leakage control during follow-up questioning, thereby demonstrating advanced lie-telling skills. When executive functioning was examined, children who told prosocial lies were found to have significantly higher performance on measures of working memory and inhibitory control. In addition, children who lied and maintained semantic leakage control also displayed more advanced theory of mind understanding. Although children's age was not a predictor of lie-telling behavior (i.e., truthful vs. lie-teller), age was a significant predictor of semantic leakage control, with older children being more likely to maintain their lies during follow-up questioning. Copyright © 2015 Elsevier Inc. All rights reserved.

  9. Don’t Like RDF Reification? Making Statements about Statements Using Singleton Property

    PubMed Central

    Nguyen, Vinh; Bodenreider, Olivier; Sheth, Amit

    2015-01-01

    Statements about RDF statements, or meta triples, provide additional information about individual triples, such as the source, the occurring time or place, or the certainty. Integrating such meta triples into semantic knowledge bases would enable the querying and reasoning mechanisms to be aware of provenance, time, location, or certainty of triples. However, an efficient RDF representation for such meta knowledge of triples remains challenging. The existing standard reification approach allows such meta knowledge of RDF triples to be expressed using RDF in two steps. The first step is representing the triple by a Statement instance which has its subject, predicate, and object indicated separately in three different triples. The second step is creating assertions about that instance as if it were a statement. While reification is simple and intuitive, this approach does not have formal semantics and is not commonly used in practice, as described in the RDF Primer. In this paper, we propose a novel approach called Singleton Property for representing statements about statements and provide a formal semantics for it. We explain how this singleton property approach fits well with the existing syntax and formal semantics of RDF, and with the syntax of the SPARQL query language. We also demonstrate the use of singleton properties in the representation and querying of meta knowledge in two examples of Semantic Web knowledge bases: YAGO2 and BKR. Our experiments on the BKR show that the singleton property approach gives a decent performance in terms of number of triples, query length and query execution time compared to existing approaches. This approach, which is also simple and intuitive, can be easily adopted for representing and querying statements about statements in other knowledge bases. PMID:25750938
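
    A small sketch contrasting the two representations discussed above, using rdflib (the example resources and the ex:source predicate are illustrative; the rdf:singletonPropertyOf predicate follows the paper's proposal):

    ```python
    from rdflib import Graph, Namespace, URIRef
    from rdflib.namespace import RDF

    EX = Namespace("http://example.org/")
    RDF_NS = "http://www.w3.org/1999/02/22-rdf-syntax-ns#"

    # --- Standard reification: the statement becomes a resource of its own ---
    reified = Graph()
    stmt = EX.stmt1
    reified.add((stmt, RDF.type, RDF.Statement))
    reified.add((stmt, RDF.subject, EX.BobDylan))
    reified.add((stmt, RDF.predicate, EX.livesIn))
    reified.add((stmt, RDF.object, EX.NewYork))
    reified.add((stmt, EX.source, EX.Wikipedia))      # meta triple about the statement

    # --- Singleton property: a one-off property instance carries the meta data ---
    singleton = Graph()
    sp = URIRef("http://example.org/livesIn#1")        # unique property for this assertion
    singleton.add((EX.BobDylan, sp, EX.NewYork))       # the asserted triple itself
    singleton.add((sp, URIRef(RDF_NS + "singletonPropertyOf"), EX.livesIn))
    singleton.add((sp, EX.source, EX.Wikipedia))       # meta triple attached to the property

    print(len(reified), "reification triples vs", len(singleton), "singleton-property triples")
    ```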

  10. Combining Archetypes, Ontologies and Formalization Enables Automated Computation of Quality Indicators.

    PubMed

    Legaz-García, María Del Carmen; Dentler, Kathrin; Fernández-Breis, Jesualdo Tomás; Cornet, Ronald

    2017-01-01

    ArchMS is a framework that represents clinical information and knowledge using ontologies in OWL, which facilitates semantic interoperability and thereby the exploitation and secondary use of clinical data. However, it does not yet support the automated assessment of quality of care. CLIF is a stepwise method to formalize quality indicators. The method has been implemented in the CLIF tool which supports its users in generating computable queries based on a patient data model which can be based on archetypes. To enable the automated computation of quality indicators using ontologies and archetypes, we tested whether ArchMS and the CLIF tool can be integrated. We successfully automated the process of generating SPARQL queries from quality indicators that have been formalized with CLIF and integrated them into ArchMS. Hence, ontologies and archetypes can be combined for the execution of formalized quality indicators.
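
    As an illustration of the kind of computable query such an integration produces (the vocabulary and indicator below are hypothetical, not actual CLIF output or the ArchMS model), a simple indicator such as "proportion of diabetes patients with a recorded HbA1c observation" can be expressed as a SPARQL aggregate over a patient graph:

    ```python
    from rdflib import Graph, Namespace
    from rdflib.namespace import RDF

    EX = Namespace("http://example.org/qi#")   # hypothetical patient-data vocabulary

    g = Graph()
    g.add((EX.p1, RDF.type, EX.DiabetesPatient))
    g.add((EX.p1, EX.hasObservation, EX.hba1c_p1))
    g.add((EX.p2, RDF.type, EX.DiabetesPatient))   # no HbA1c recorded for p2

    query = """
    PREFIX ex: <http://example.org/qi#>
    SELECT (COUNT(DISTINCT ?withObs) AS ?numerator)
           (COUNT(DISTINCT ?p) AS ?denominator)
    WHERE {
      ?p a ex:DiabetesPatient .
      OPTIONAL { ?p ex:hasObservation ?obs .
                 BIND(?p AS ?withObs) }
    }
    """
    for row in g.query(query):
        print(f"indicator = {int(row.numerator)}/{int(row.denominator)}")   # -> 1/2
    ```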

  11. SPARQL-enabled identifier conversion with Identifiers.org

    PubMed Central

    Wimalaratne, Sarala M.; Bolleman, Jerven; Juty, Nick; Katayama, Toshiaki; Dumontier, Michel; Redaschi, Nicole; Le Novère, Nicolas; Hermjakob, Henning; Laibe, Camille

    2015-01-01

    Motivation: On the semantic web, in life sciences in particular, data is often distributed via multiple resources. Each of these sources is likely to use their own International Resource Identifier for conceptually the same resource or database record. The lack of correspondence between identifiers introduces a barrier when executing federated SPARQL queries across life science data. Results: We introduce a novel SPARQL-based service to enable on-the-fly integration of life science data. This service uses the identifier patterns defined in the Identifiers.org Registry to generate a plurality of identifier variants, which can then be used to match source identifiers with target identifiers. We demonstrate the utility of this identifier integration approach by answering queries across major producers of life science Linked Data. Availability and implementation: The SPARQL-based identifier conversion service is available without restriction at http://identifiers.org/services/sparql. Contact: sarala@ebi.ac.uk PMID:25638809

  12. SPARQL-enabled identifier conversion with Identifiers.org.

    PubMed

    Wimalaratne, Sarala M; Bolleman, Jerven; Juty, Nick; Katayama, Toshiaki; Dumontier, Michel; Redaschi, Nicole; Le Novère, Nicolas; Hermjakob, Henning; Laibe, Camille

    2015-06-01

    On the semantic web, in life sciences in particular, data is often distributed via multiple resources. Each of these sources is likely to use their own International Resource Identifier for conceptually the same resource or database record. The lack of correspondence between identifiers introduces a barrier when executing federated SPARQL queries across life science data. We introduce a novel SPARQL-based service to enable on-the-fly integration of life science data. This service uses the identifier patterns defined in the Identifiers.org Registry to generate a plurality of identifier variants, which can then be used to match source identifiers with target identifiers. We demonstrate the utility of this identifier integration approach by answering queries across major producers of life science Linked Data. The SPARQL-based identifier conversion service is available without restriction at http://identifiers.org/services/sparql. © The Author 2015. Published by Oxford University Press.
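
    A hedged sketch of how such a conversion service might be called (the endpoint URL is the one given above; the query shape, and in particular the use of owl:sameAs to expose identifier variants, is an assumption made for illustration only):

    ```python
    from SPARQLWrapper import SPARQLWrapper, JSON   # pip install sparqlwrapper

    # Endpoint named in the abstract; the query below is illustrative only.
    endpoint = SPARQLWrapper("http://identifiers.org/services/sparql")
    endpoint.setQuery("""
    PREFIX owl: <http://www.w3.org/2002/07/owl#>
    SELECT ?alternative WHERE {
      # Ask the service for identifier variants of one UniProt record
      # (assumed here to be published as owl:sameAs links).
      <http://identifiers.org/uniprot/P38398> owl:sameAs ?alternative .
    }
    """)
    endpoint.setReturnFormat(JSON)

    results = endpoint.query().convert()
    for binding in results["results"]["bindings"]:
        print(binding["alternative"]["value"])
    ```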

  13. Longitudinal Study of a Novel, Performance-based Measure of Daily Function

    DTIC Science & Technology

    2016-06-01

    have functional impairments, and healthy age-matched controls on the UPSA, as well as measures of cognition (e.g., episodic memory, semantic memory, executive function, speed). We found that patients with...diagnosis have functional impairments, and healthy age-matched controls on the UPSA, as well as measures of cognition (e.g., episodic memory, semantic

  14. Learning for Semantic Parsing with Kernels under Various Forms of Supervision

    DTIC Science & Technology

    2007-08-01

    natural language sentences to their formal executable meaning representations. This is a challenging problem and is critical for developing computing...sentences are semantically tractable. This indicates that Geoquery is a more challenging domain for semantic parsing than ATIS. In the past, there have been a...Combining parsers. In Proceedings of the Conference on Empirical Methods in Natural Language Processing and Very Large Corpora (EMNLP/VLC-99), pp. 187–194

  15. Dataflow models for fault-tolerant control systems

    NASA Technical Reports Server (NTRS)

    Papadopoulos, G. M.

    1984-01-01

    Dataflow concepts are used to generate a unified hardware/software model of redundant physical systems which are prone to faults. Basic results in input congruence and synchronization are shown to reduce to a simple model of data exchanges between processing sites. Procedures are given for the construction of congruence schemata, the distinguishing features of any correctly designed redundant system.

  16. Algorithm Optimally Orders Forward-Chaining Inference Rules

    NASA Technical Reports Server (NTRS)

    James, Mark

    2008-01-01

    People typically develop knowledge bases in a somewhat ad hoc manner by incrementally adding rules with no specific organization. This often results in very inefficient execution of those rules, since they are often order sensitive. This is relevant to tasks like those of the Deep Space Network in that it allows the knowledge base to be developed incrementally and then ordered automatically for efficiency. Although data-flow analysis was first developed for use in compilers for producing optimal code sequences, its usefulness is now recognized in many software systems, including knowledge-based systems. However, this approach for exhaustively computing data-flow information cannot be applied directly to inference systems because of the ubiquitous execution of the rules. An algorithm is presented that efficiently performs a complete producer/consumer analysis for each antecedent and consequent clause in a knowledge base to optimally order the rules so as to minimize inference cycles. The algorithm orders a knowledge base composed of forward-chaining inference rules such that independent inference cycle executions are minimized, resulting in significantly faster execution. This algorithm was integrated into the JPL tool Spacecraft Health Inference Engine (SHINE) for verification, and it resulted in a significant reduction in inference cycles for what was previously considered an ordered knowledge base. For a knowledge base that is completely unordered, the improvement is much greater.
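
    A compact sketch of the producer/consumer idea (toy rules, not the SHINE implementation): each rule lists the facts its antecedents consume and its consequents produce, and a topological sort of the resulting dependency graph yields an ordering in which producers fire before their consumers, reducing wasted inference cycles.

    ```python
    from graphlib import TopologicalSorter   # Python 3.9+

    # Hypothetical forward-chaining rules: name -> (consumed facts, produced facts)
    rules = {
        "r_alarm": ({"temp_high", "pressure_high"}, {"alarm"}),
        "r_temp":  ({"sensor_temp"},                {"temp_high"}),
        "r_press": ({"sensor_press"},               {"pressure_high"}),
        "r_page":  ({"alarm"},                      {"page_operator"}),
    }

    # Producer/consumer analysis: rule B depends on rule A if A produces
    # a fact that B consumes.
    deps = {
        b: {a for a, (_, prod_a) in rules.items()
            if a != b and prod_a & rules[b][0]}
        for b in rules
    }

    ordering = list(TopologicalSorter(deps).static_order())
    print(ordering)   # producers precede consumers, e.g. r_temp, r_press, r_alarm, r_page
    ```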

  17. Executive Function and Remission of Geriatric Depression: The Role of Semantic Strategy

    PubMed Central

    Morimoto, Sarah Shizuko; Gunning, Faith M.; Murphy, Christopher F.; Kanellopoulos, Dora; Kelly, Robert E.; Alexopoulos, George S.

    2013-01-01

    BACKGROUND This study tested the hypothesis that use of semantic organizational strategy in approaching the Mattis Dementia Rating Scale (MDRS) Complex Verbal Initiation Perseveration (I/P) task, a test of semantic fluency, is the function specifically associated with remission of late-life depression. METHOD 70 elders with major depression participated in a 12-week escitalopram treatment trial. Neuropsychological performance was assessed at baseline after a 2-week drug washout period. Patients with a Hamilton Depression Rating Scale Score less than or equal to 7 for two consecutive weeks and who no longer met DSM-IV criteria were considered to be remitted. Cox proportional hazards survival analysis was used to examine the relationship between subtests of the I/P, other neuropsychological domains and remission rate. Participants’ performance on the CV I/P was coded for perseverations, and use of semantic strategy. RESULTS The relationship of performance on the Complex Verbal I/P and remission rate was significant. No other subtest of the MDRS I/P evidenced this association. There was no significant relationship of speed, confrontation naming, verbal memory or perseveration with remission rate. Remitters’ use of verbal strategy was significantly greater than non-remitters. CONCLUSIONS Geriatric depressed patients who showed decrements in performance on a semantic fluency task showed poorer remission rates than those who showed adequate performance on this measure. Executive impairment in verbal strategy explained performance. This finding supports the concept that executive functioning exerts a “top down” effect on other basic cognitive processes, perhaps as a result of frontostriatal network dysfunction implicated in geriatric depression. PMID:20808124

  18. Parametric Effects of Syntactic-Semantic Conflict in Broca's Area during Sentence Processing

    ERIC Educational Resources Information Center

    Thothathiri, Malathi; Kim, Albert; Trueswell, John C.; Thompson-Schill, Sharon L.

    2012-01-01

    The hypothesized role of Broca's area in sentence processing ranges from domain-general executive function to domain-specific computation that is specific to certain syntactic structures. We examined this issue by manipulating syntactic structure and conflict between syntactic and semantic cues in a sentence processing task. Functional…

  19. Model Checking Abstract PLEXIL Programs with SMART

    NASA Technical Reports Server (NTRS)

    Siminiceanu, Radu I.

    2007-01-01

    We describe a method to automatically generate discrete-state models of abstract Plan Execution Interchange Language (PLEXIL) programs that can be analyzed using model checking tools. Starting from a high-level description of a PLEXIL program or a family of programs with common characteristics, the generator lays the framework that models the principles of program execution. The concrete parts of the program are not automatically generated, but require the modeler to introduce them by hand. As a case study, we generate models to verify properties of the PLEXIL macro constructs that are introduced as shorthand notation. After an exhaustive analysis, we conclude that the macro definitions obey the intended semantics and behave as expected, contingent on a few specific requirements on the timing semantics of micro-steps in the concrete executive implementation.

  20. The relationships of 'ecstasy' (MDMA) and cannabis use to impaired executive inhibition and access to semantic long-term memory.

    PubMed

    Murphy, Philip N; Erwin, Philip G; Maciver, Linda; Fisk, John E; Larkin, Derek; Wareing, Michelle; Montgomery, Catharine; Hilton, Joanne; Tames, Frank J; Bradley, Belinda; Yanulevitch, Kate; Ralley, Richard

    2011-10-01

    This study aimed to examine the relationship between the consumption of ecstasy (3,4-methylenedioxymethamphetamine (MDMA)) and cannabis, and performance on the random letter generation task which generates dependent variables drawing upon executive inhibition and access to semantic long-term memory (LTM). The participant group was a between-participant independent variable with users of both ecstasy and cannabis (E/C group, n = 15), users of cannabis but not ecstasy (CA group, n = 13) and controls with no exposure to these drugs (CO group, n = 12). Dependent variables measured violations of randomness: number of repeat sequences, number of alphabetical sequences (both drawing upon inhibition) and redundancy (drawing upon access to semantic LTM). E/C participants showed significantly higher redundancy than CO participants but did not differ from CA participants. There were no significant effects for the other dependent variables. A regression model comprising intelligence measures and estimates of ecstasy and cannabis consumption predicted redundancy scores, but only cannabis consumption contributed significantly to this prediction. Impaired access to semantic LTM may be related to cannabis consumption, although the involvement of ecstasy and other stimulant drugs cannot be excluded here. Executive inhibitory functioning, as measured by the random letter generation task, is unrelated to ecstasy and cannabis consumption. Copyright © 2011 John Wiley & Sons, Ltd.

  1. Incorporating Brokers within Collaboration Environments

    NASA Astrophysics Data System (ADS)

    Rajasekar, A.; Moore, R.; de Torcy, A.

    2013-12-01

    A collaboration environment, such as the integrated Rule Oriented Data System (iRODS - http://irods.diceresearch.org), provides interoperability mechanisms for accessing storage systems, authentication systems, messaging systems, information catalogs, networks, and policy engines from a wide variety of clients. The interoperability mechanisms function as brokers, translating actions requested by clients to the protocol required by a specific technology. The iRODS data grid is used to enable collaborative research within the hydrology, seismology, earth science, climate, oceanography, plant biology, astronomy, physics, and genomics disciplines. Although each domain has unique resources, data formats, semantics, and protocols, the iRODS system provides a generic framework that is capable of managing collaborative research initiatives that span multiple disciplines. Each interoperability mechanism (broker) is linked to a name space that enables unified access across the heterogeneous systems. The collaboration environment provides not only support for brokers, but also support for virtualization of name spaces for users, files, collections, storage systems, metadata, and policies. The broker enables access to data or information in a remote system using the appropriate protocol, while the collaboration environment provides a uniform naming convention for accessing and manipulating each object. Within the NSF DataNet Federation Consortium project (http://www.datafed.org), three basic types of interoperability mechanisms have been identified and applied: 1) drivers for managing manipulation at the remote resource (such as data subsetting), 2) micro-services that execute the protocol required by the remote resource, and 3) policies for controlling the execution. For example, drivers have been written for manipulating NetCDF and HDF formatted files within THREDDS servers. Micro-services have been written that manage interactions with the CUAHSI data repository, the DataONE information catalog, and the GeoBrain broker. Policies have been written that manage the transfer of messages between an iRODS message queue and the Advanced Message Queuing Protocol. Examples of these brokering mechanisms will be presented. The DFC collaboration environment serves as the intermediary between community resources and compute grids, enabling reproducible data-driven research. It is possible to create an analysis workflow that retrieves data subsets from a remote server, assembles the required input files, automates the execution of the workflow, automatically tracks the provenance of the workflow, and shares the input files, workflow, and output files. A collaborator can re-execute a shared workflow, compare results, change input files, and re-execute an analysis.
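
    A simplified sketch of the broker pattern described above (the protocol names and classes are hypothetical, not the iRODS driver or micro-service APIs): a client issues a generic request against a logical name, and a per-protocol broker translates it into the remote system's own calls.

    ```python
    from abc import ABC, abstractmethod

    class Broker(ABC):
        """Translates a generic request into a specific remote protocol."""
        @abstractmethod
        def get(self, logical_name: str) -> bytes: ...

    class ThreddsBroker(Broker):
        def get(self, logical_name: str) -> bytes:
            # A real broker would issue an OPeNDAP/HTTP subset request here.
            return f"<netcdf subset for {logical_name}>".encode()

    class LocalPosixBroker(Broker):
        def get(self, logical_name: str) -> bytes:
            # A real broker would read from a local or grid-mounted file system here.
            return f"<file bytes for {logical_name}>".encode()

    # Unified namespace: the client never sees which protocol is used.
    registry = {"thredds": ThreddsBroker(), "posix": LocalPosixBroker()}

    def fetch(uri: str) -> bytes:
        scheme, _, name = uri.partition("://")
        return registry[scheme].get(name)

    print(fetch("thredds://climate/ensemble42/tas.nc")[:40])
    ```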

  2. Semantic Annotation of Computational Components

    NASA Technical Reports Server (NTRS)

    Vanderbilt, Peter; Mehrotra, Piyush

    2004-01-01

    This paper describes a methodology to specify machine-processable semantic descriptions of computational components to enable them to be shared and reused. A particular focus of this scheme is to enable automatic composition of such components into simple workflows.

  3. Distributed semantic networks and CLIPS

    NASA Technical Reports Server (NTRS)

    Snyder, James; Rodriguez, Tony

    1991-01-01

    Semantic networks of frames are commonly used as a method of reasoning in many problems. In most of these applications the semantic network exists as a single entity in a single-process environment. Advances in workstation hardware provide support for more sophisticated applications involving multiple processes interacting in a distributed environment. In these applications the semantic network may well be distributed over several concurrently executing tasks. This paper describes the design and implementation of a frame-based, distributed semantic network in which frames are accessed both through C Language Integrated Production System (CLIPS) expert systems and through procedural C++ language programs. The application area is a knowledge-based, cooperative decision-making model utilizing both rule-based and procedural experts.

  4. PolyCheck: Dynamic Verification of Iteration Space Transformations on Affine Programs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bao, Wenlei; Krishnamoorthy, Sriram; Pouchet, Louis-noel

    2016-01-11

    High-level compiler transformations, especially loop transformations, are widely recognized as critical optimizations to restructure programs to improve data locality and expose parallelism. Guaranteeing the correctness of program transformations is essential, and to date three main approaches have been developed: proof of equivalence of affine programs, matching the execution traces of programs, and checking bit-by-bit equivalence of the outputs of the programs. Each technique suffers from limitations in either the kind of transformations supported, space complexity, or the sensitivity to the testing dataset. In this paper, we take a novel approach addressing all three limitations to provide an automatic bug checker to verify any iteration reordering transformations on affine programs, including non-affine transformations, with space consumption proportional to the original program data, and robust to arbitrary datasets of a given size. We achieve this by exploiting the structure of affine program control- and data-flow to generate at compile-time lightweight checker code to be executed within the transformed program. Experimental results assess the correctness and effectiveness of our method, and its increased coverage over previous approaches.
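
    To make the verification problem concrete, the toy check below compares a loop nest before and after an interchange by recording, for every output element, the set of input elements it was computed from. This illustrates the simpler trace/provenance-matching style of checking mentioned above as prior work, not the PolyCheck algorithm itself.

    ```python
    # Toy provenance check for a loop interchange on C[i] += A[i][j].
    N, M = 4, 3
    A = [[i * 10 + j for j in range(M)] for i in range(N)]

    def original():
        C, reads = [0] * N, [set() for _ in range(N)]
        for i in range(N):
            for j in range(M):
                C[i] += A[i][j]
                reads[i].add((i, j))          # record which inputs fed C[i]
        return C, reads

    def interchanged():
        C, reads = [0] * N, [set() for _ in range(N)]
        for j in range(M):                    # loops interchanged
            for i in range(N):
                C[i] += A[i][j]
                reads[i].add((i, j))
        return C, reads

    (c0, r0), (c1, r1) = original(), interchanged()
    assert c0 == c1 and r0 == r1, "transformation changed the computation"
    print("interchange verified on this dataset:", c0)
    ```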

  5. A novel adaptive Cuckoo search for optimal query plan generation.

    PubMed

    Gomathi, Ramalingam; Sharmila, Dhandapani

    2014-01-01

    The rapid, day-by-day growth in the number of web pages has driven the development of semantic web technology. A World Wide Web Consortium (W3C) standard for storing semantic web data is the Resource Description Framework (RDF). To improve the execution time of queries over large RDF graphs, evolving metaheuristic algorithms have become an alternative to traditional query optimization methods. This paper focuses on the problem of query optimization for semantic web data. An efficient algorithm called adaptive Cuckoo search (ACS) for querying and generating optimal query plans for large RDF graphs is designed in this research. Experiments were conducted on different datasets with varying numbers of predicates. The experimental results show that the proposed approach provides significant improvements in query execution time. The extent to which the algorithm is efficient is tested and the results are documented.
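
    A minimal sketch of cuckoo-style search over join orders (the cost model, patterns and parameters are illustrative only, not the ACS algorithm or the cost function used in the paper): each nest holds a permutation of triple patterns, new candidate plans are produced by random perturbations, and poor nests are periodically abandoned.

    ```python
    import random

    PATTERNS = ["?s :p1 ?a", "?a :p2 ?b", "?b :p3 ?c", "?c :p4 ?d", "?s :p5 ?d"]
    SELECTIVITY = {p: random.uniform(0.01, 0.5) for p in PATTERNS}   # toy statistics

    def cost(plan):
        """Toy cost: running product of selectivities, summed over the pipeline."""
        total, inter = 0.0, 1.0
        for p in plan:
            inter *= SELECTIVITY[p] * 1000
            total += inter
        return total

    def perturb(plan, strength=1):
        new = plan[:]
        for _ in range(strength):               # Levy-flight-like jump, simplified to swaps
            i, j = random.sample(range(len(new)), 2)
            new[i], new[j] = new[j], new[i]
        return new

    def cuckoo_search(n_nests=8, iters=200, abandon=0.25):
        nests = [random.sample(PATTERNS, len(PATTERNS)) for _ in range(n_nests)]
        for _ in range(iters):
            cand = perturb(random.choice(nests), strength=random.randint(1, 2))
            worst = max(range(n_nests), key=lambda k: cost(nests[k]))
            if cost(cand) < cost(nests[worst]):
                nests[worst] = cand             # replace a worse nest
            for k in range(n_nests):            # abandon a fraction of poor nests
                if random.random() < abandon / n_nests:
                    nests[k] = random.sample(PATTERNS, len(PATTERNS))
        return min(nests, key=cost)

    print(cuckoo_search())   # best join order found for the toy statistics
    ```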

  6. Another expert system rule inference based on DNA molecule logic gates

    NASA Astrophysics Data System (ADS)

    Wąsiewicz, Piotr

    2013-10-01

    With the help of the silicon industry, microfluidic processors were invented, utilizing nano-membrane valves, pumps and microreactors. These so-called labs-on-a-chip, combined with molecular computing, create molecular systems-on-a-chip. This work presents a new approach to the implementation of molecular inference systems. It requires a unique representation of signals by DNA molecules. The main part of this work includes the concept of logic gates based on typical genetic engineering reactions. The presented method allows for constructing logic gates with many inputs and for executing them with the same number of elementary operations regardless of the number of input signals. Every microreactor of the lab-on-a-chip performs one unique operation on input molecules and can be connected by dataflow output-input connections to other ones.

  7. Characteristics of gray matter morphological change in Parkinson's disease patients with semantic abstract reasoning deficits.

    PubMed

    Wang, Li; Nie, Kun; Zhao, Xin; Feng, Shujun; Xie, Sifen; He, Xuetao; Ma, Guixian; Wang, Limin; Huang, Zhiheng; Huang, Biao; Zhang, Yuhu; Wang, Lijuan

    2018-04-23

    Semantic abstract reasoning (SAR) is an important executive domain that is involved in semantic information processing and enables one to make sense of the attributes of objects, facts and concepts in the world. We sought to investigate whether Parkinson's disease subjects (PDs) have difficulty in SAR and to examine the associated pattern of gray matter morphological changes. Eighty-six PDs and 30 healthy controls were enrolled. PDs were grouped into PD subjects with Similarities preservation (PDSP, n = 62) and PD subjects with Similarities impairment (PDSI, n = 24) according to their performance on the Similarities subtest of the Wechsler Adult Intelligence Scale. Brain structural images were captured with a 3T MRI scanner. Surface-based investigation of cortical thickness and automated segmentation of deep gray matter were conducted using FreeSurfer software. PDs performed notably worse on the Similarities test than controls (F = 13.56, P < 0.001). In the PDSI group, cortical thinning associated with Similarities scores was found in the left superior frontal, left superior parietal and left rostral middle frontal regions. Notable atrophy of the bilateral hippocampi was observed, but only the right hippocampus volume was positively correlated with the Similarities scores of the PDSI group. PDs have difficulty in SAR, and this limitation may be associated with impaired conceptual abstraction and generalization along with semantic memory deficits. Cortical thinning in the left frontal and parietal regions and atrophy in the right hippocampus may explain these impairments among Chinese PDs. Copyright © 2017 Elsevier B.V. All rights reserved.

  8. Software for the EVLA

    NASA Astrophysics Data System (ADS)

    Butler, Bryan J.; van Moorsel, Gustaaf; Tody, Doug

    2004-09-01

    The Expanded Very Large Array (EVLA) project is the next generation instrument for high resolution long-millimeter to short-meter wavelength radio astronomy. It is currently funded by NSF, with completion scheduled for 2012. The EVLA will upgrade the VLA with new feeds, receivers, data transmission hardware, correlator, and a new software system to enable the instrument to achieve its full potential. This software includes both that required for controlling and monitoring the instrument and that involved with the scientific dataflow. We concentrate here on a portion of the dataflow software, including: proposal preparation, submission, and handling; observation preparation, scheduling, and remote monitoring; data archiving; and data post-processing, including both automated (pipeline) and manual processing. The primary goals of the software are: to maximize the scientific return of the EVLA; provide ease of use, for both novices and experts; exploit commonality amongst all NRAO telescopes where possible. This last point is both a bane and a blessing: we are not at liberty to do whatever we want in the software, but on the other hand we may borrow from other projects (notably ALMA and GBT) where appropriate. The software design methodology includes detailed initial use-cases and requirements from the scientists, intimate interaction between the scientists and the programmers during design and implementation, and a thorough testing and acceptance plan.

  9. Mechanisms underlying the production of false memories for famous people's names in aging and Alzheimer's disease.

    PubMed

    Plancher, Gaën; Guyard, Anne; Nicolas, Serge; Piolino, Pascale

    2009-10-01

    It is well known that the occurrence of false memories increases with aging, but the results remain inconsistent concerning Alzheimer's disease (AD). Moreover, the mechanisms underlying the production of false memories are still unclear. Using an experimental episodic memory test with material based on the names of famous people in a procedure derived from the DRM paradigm [Roediger, H. L., III, & McDermott, K. B. (1995). Creating false memories: Remembering words not presented in lists. Journal of Experimental Psychology: Learning, Memory & Cognition, 21, 803-814], we examined correct and false recall and recognition in 30 young adults, 40 healthy older adults, and 30 patients with AD. Moreover, we evaluated the relationships between false memory performance, correct episodic memory performance, and a set of neuropsychological assessments evaluating the semantic memory and executive functions. The results clearly indicated that correct recall and recognition performance decreased with the subjects' age, but it decreased even more with AD. In addition, semantically related false recalls and false recognitions increased with age but not with dementia. On the contrary, non-semantically related false recalls and false recognitions increased with AD. Finally, the regression analyses showed that executive functions mediated related false memories and episodic memory mediated related and unrelated false memories in aging. Moreover, executive functions predicted related and unrelated false memories in AD, and episodic and semantic memory predicted semantically related and unrelated false memories in AD. In conclusion, the results obtained are consistent with the current constructive models of memory suggesting that false memory creation depends on different cognitive functions and, consequently, that the impairments of these functions influence the production of false memories.

  10. Accelerating Cancer Systems Biology Research through Semantic Web Technology

    PubMed Central

    Wang, Zhihui; Sagotsky, Jonathan; Taylor, Thomas; Shironoshita, Patrick; Deisboeck, Thomas S.

    2012-01-01

    Cancer systems biology is an interdisciplinary, rapidly expanding research field in which collaborations are a critical means to advance the field. Yet the prevalent database technologies often isolate data rather than making it easily accessible. The Semantic Web has the potential to help facilitate web-based collaborative cancer research by presenting data in a manner that is self-descriptive, human and machine readable, and easily sharable. We have created a semantically linked online Digital Model Repository (DMR) for storing, managing, executing, annotating, and sharing computational cancer models. Within the DMR, distributed, multidisciplinary, and inter-organizational teams can collaborate on projects, without forfeiting intellectual property. This is achieved by the introduction of a new stakeholder to the collaboration workflow, the institutional licensing officer, part of the Technology Transfer Office. Furthermore, the DMR has achieved silver level compatibility with the National Cancer Institute’s caBIG®, so users can not only interact with the DMR through a web browser but also through a semantically annotated and secure web service. We also discuss the technology behind the DMR leveraging the Semantic Web, ontologies, and grid computing to provide secure inter-institutional collaboration on cancer modeling projects, online grid-based execution of shared models, and the collaboration workflow protecting researchers’ intellectual property. PMID:23188758

  11. Accelerating cancer systems biology research through Semantic Web technology.

    PubMed

    Wang, Zhihui; Sagotsky, Jonathan; Taylor, Thomas; Shironoshita, Patrick; Deisboeck, Thomas S

    2013-01-01

    Cancer systems biology is an interdisciplinary, rapidly expanding research field in which collaborations are a critical means to advance the field. Yet the prevalent database technologies often isolate data rather than making it easily accessible. The Semantic Web has the potential to help facilitate web-based collaborative cancer research by presenting data in a manner that is self-descriptive, human and machine readable, and easily sharable. We have created a semantically linked online Digital Model Repository (DMR) for storing, managing, executing, annotating, and sharing computational cancer models. Within the DMR, distributed, multidisciplinary, and inter-organizational teams can collaborate on projects, without forfeiting intellectual property. This is achieved by the introduction of a new stakeholder to the collaboration workflow, the institutional licensing officer, part of the Technology Transfer Office. Furthermore, the DMR has achieved silver level compatibility with the National Cancer Institute's caBIG, so users can interact with the DMR not only through a web browser but also through a semantically annotated and secure web service. We also discuss the technology behind the DMR leveraging the Semantic Web, ontologies, and grid computing to provide secure inter-institutional collaboration on cancer modeling projects, online grid-based execution of shared models, and the collaboration workflow protecting researchers' intellectual property. Copyright © 2012 Wiley Periodicals, Inc.

  12. Semantic memory in object use.

    PubMed

    Silveri, Maria Caterina; Ciccarelli, Nicoletta

    2009-10-01

    We studied five patients with semantic memory disorders, four with semantic dementia and one with herpes simplex virus encephalitis, to investigate the involvement of semantic conceptual knowledge in object use. Comparisons between patients who had semantic deficits of different severity, as well as the follow-up, showed that the ability to use objects was largely preserved when the deficit was mild but progressively decayed as the deficit became more severe. Naming was generally more impaired than object use. Production tasks (pantomime execution and actual object use) and comprehension tasks (pantomime recognition and action recognition) as well as functional knowledge about objects were impaired when the semantic deficit was severe. Semantic and unrelated errors were produced during object use, but actions were always fluent and patients performed normally on a novel tools task in which the semantic demand was minimal. Patients with severe semantic deficits scored borderline on ideational apraxia tasks. Our data indicate that functional semantic knowledge is crucial for using objects in a conventional way and suggest that non-semantic factors, mainly non-declarative components of memory, might compensate to some extent for semantic disorders and guarantee some residual ability to use very common objects independently of semantic knowledge.

  13. The Digital electronic Guideline Library (DeGeL): a hybrid framework for representation and use of clinical guidelines.

    PubMed

    Shahar, Yuval; Young, Ohad; Shalom, Erez; Mayaffit, Alon; Moskovitch, Robert; Hessing, Alon; Galperin, Maya

    2004-01-01

    We propose to present a poster (and potentially also a demonstration of the implemented system) summarizing the current state of our work on a hybrid, multiple-format representation of clinical guidelines that facilitates conversion of guidelines from free text to a formal representation. We describe a distributed Web-based architecture (DeGeL) and a set of tools using the hybrid representation. The tools enable performing tasks such as guideline specification, semantic markup, search, retrieval, visualization, eligibility determination, runtime application and retrospective quality assessment. The representation includes four parallel formats: Free text (one or more original sources); semistructured text (labeled by the target guideline-ontology semantic labels); semiformal text (which includes some control specification); and a formal, machine-executable representation. The specification, indexing, search, retrieval, and browsing tools are essentially independent of the ontology chosen for guideline representation, but editing the semi-formal and formal formats requires ontology-specific tools, which we have developed in the case of the Asbru guideline-specification language. The four formats support increasingly sophisticated computational tasks. The hybrid guidelines are stored in a Web-based library. All tools, such as for runtime guideline application or retrospective quality assessment, are designed to operate on all representations. We demonstrate the hybrid framework by providing examples from the semantic markup and search tools.
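
    A minimal data-structure sketch of the hybrid idea (field names are illustrative, not the DeGeL schema): a guideline carries all four parallel formats, and each format can be filled in incrementally as markup and formalization progress.

    ```python
    from dataclasses import dataclass, field
    from typing import Optional

    @dataclass
    class HybridGuideline:
        """One guideline held simultaneously in four increasingly formal formats."""
        free_text: str                                                  # original narrative source(s)
        semistructured: dict[str, str] = field(default_factory=dict)    # ontology label -> text snippet
        semiformal: Optional[str] = None                                # text plus partial control structure
        formal: Optional[str] = None                                    # machine-executable (e.g. Asbru) spec

        def most_formal_available(self) -> str:
            return self.formal or self.semiformal or "(not yet formalized)"

    gl = HybridGuideline(free_text="If HbA1c > 7%, intensify therapy ...")
    gl.semistructured["eligibility"] = "adult patients with type 2 diabetes"
    print(gl.most_formal_available())
    ```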

  14. Sympathetic arousal, but not disturbed executive functioning, mediates the impairment of cognitive flexibility under stress.

    PubMed

    Marko, Martin; Riečanský, Igor

    2018-05-01

    Cognitive flexibility emerges from an interplay of multiple cognitive systems, of which the lexical-semantic and executive systems are thought to be the most important. Yet this has not been addressed by previous studies demonstrating that such forms of flexible thought deteriorate under stress. Motivated by these shortcomings, the present study evaluated several candidate mechanisms implied to mediate the impairing effects of stress on flexible thinking. Fifty-seven healthy adults were randomly assigned to a psychosocial stress or control condition and assessed for performance on cognitive flexibility, working memory capacity, semantic fluency, and self-reported cognitive interference. Stress response was indicated by changes in skin conductance, heart rate, and state anxiety. Our analyses showed that acute stress impaired cognitive flexibility via a concomitant increase in sympathetic arousal, while this mediator was positively associated with semantic fluency. Stress also decreased working memory capacity, which was partially mediated by elevated cognitive interference, but neither of these two measures was associated with cognitive flexibility or sympathetic arousal. Following these findings, we conclude that acute stress impairs cognitive flexibility via sympathetic arousal that modulates lexical-semantic and associative processes. In particular, the results indicate that stress-related levels of sympathetic activation may restrict the accessibility and integration of remote associates and bias the response competition towards prepotent and dominant ideas. Importantly, our results indicate that stress-induced impairments of cognitive flexibility and executive functions are mediated by distinct neurocognitive mechanisms. Copyright © 2018 Elsevier B.V. All rights reserved.

  15. Towards a Semantically-Enabled Control Strategy for Building Simulations: Integration of Semantic Technologies and Model Predictive Control

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Delgoshaei, Parastoo; Austin, Mark A.; Pertzborn, Amanda J.

    State-of-the-art building simulation control methods incorporate physical constraints into their mathematical models, but omit implicit constraints associated with policies of operation and dependency relationships among the rules representing those constraints. To overcome these shortcomings, there is a recent trend toward enabling control strategies with inference-based rule-checking capabilities. One solution is to exploit Semantic Web technologies in building simulation control. Such approaches provide tools for semantic modeling of domains and the ability to deduce new information from the models through the use of Description Logic (DL). As a step toward enabling this capability, this paper presents a cross-disciplinary, data-driven control strategy for building energy management simulation that integrates semantic modeling and formal rule-checking mechanisms into a Model Predictive Control (MPC) formulation. The results show that MPC provides superior levels of performance when initial conditions and inputs are derived from inference-based rules.
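
    A toy sketch of the coupling idea: an inference-style rule derives a setpoint from semantic facts, and that setpoint is then fed into a receding-horizon optimization. The facts, rule, thermal model, and cost weights below are invented for illustration; they are not the paper's ontology, building model, or MPC formulation.

```python
import itertools

# Toy semantic facts and a rule that derives an MPC input (all values are illustrative assumptions).
facts = {"zone1 hasOccupancyState Occupied", "Occupied impliesSetpoint 21.0"}

def infer_setpoint(facts):
    # "rule checking": if the zone is occupied and occupancy implies a setpoint, use it
    if "zone1 hasOccupancyState Occupied" in facts:
        for f in facts:
            if f.startswith("Occupied impliesSetpoint"):
                return float(f.split()[-1])
    return 17.0  # unoccupied default

def mpc_step(temp, setpoint, horizon=3, candidates=(0.0, 0.5, 1.0)):
    """Enumerate heating sequences and pick the cheapest one that tracks the setpoint."""
    best_seq, best_cost = None, float("inf")
    for seq in itertools.product(candidates, repeat=horizon):
        t, cost = temp, 0.0
        for u in seq:
            t = t + 2.0 * u - 0.1 * (t - 10.0)      # crude first-order zone model
            cost += (t - setpoint) ** 2 + 0.2 * u   # tracking error plus energy use
        if cost < best_cost:
            best_seq, best_cost = seq, cost
    return best_seq[0]  # apply only the first move (receding horizon)

setpoint = infer_setpoint(facts)
print("setpoint:", setpoint, "first control move:", mpc_step(temp=18.0, setpoint=setpoint))
```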

  16. Collaborative E-Learning Using Semantic Course Blog

    ERIC Educational Resources Information Center

    Lu, Lai-Chen; Yeh, Ching-Long

    2008-01-01

    Collaborative e-learning delivers many enhancements to e-learning technology; it enables students to collaborate with each other and improves their learning efficiency. A semantic blog combines Semantic Web and blog technology so that users can import, export, view, navigate, and query the blog. We developed a semantic course blog for collaborative…

  17. Language Networks Associated with Computerized Semantic Indices

    PubMed Central

    Pakhomov, Serguei V. S.; Jones, David T.; Knopman, David S.

    2014-01-01

    Tests of generative semantic verbal fluency are widely used to study the organization and representation of concepts in the human brain. Previous studies demonstrated that clustering and switching behavior during verbal fluency tasks is supported by multiple brain mechanisms associated with semantic memory and executive control. Previous work relied on manual assessments of semantic relatedness between words and grouping of words into semantic clusters. We investigated a computational linguistic approach to measuring the strength of semantic relatedness between words based on latent semantic analysis of word co-occurrences in a subset of a large online encyclopedia. We computed semantic clustering indices and compared them to brain network connectivity measures obtained with task-free fMRI in a sample consisting of healthy participants and those differentially affected by cognitive impairment. We found that semantic clustering indices were associated with brain network connectivity in distinct areas including fronto-temporal, fronto-parietal and fusiform gyrus regions. This study shows that computerized semantic indices complement traditional assessments of verbal fluency to provide a more complete account of the relationship between brain and verbal behavior involved in the organization and retrieval of lexical information from memory. PMID:25315785
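
    A minimal sketch of the general idea of a computerized clustering index: score the semantic relatedness of adjacent responses by cosine similarity of word vectors and count low-similarity transitions as switches. The toy vectors and threshold below are invented stand-ins, not the study's LSA space or scoring rules.

```python
import math

# Toy word vectors standing in for LSA-derived vectors built from encyclopedia co-occurrences.
vectors = {
    "dog":   [0.90, 0.10, 0.00],
    "cat":   [0.85, 0.15, 0.05],
    "eagle": [0.10, 0.90, 0.00],
    "hawk":  [0.05, 0.92, 0.10],
}

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = lambda v: math.sqrt(sum(x * x for x in v))
    return dot / (norm(a) * norm(b))

def clustering_indices(response, threshold=0.8):
    """Mean relatedness of adjacent responses and the number of cluster switches."""
    sims = [cosine(vectors[w1], vectors[w2]) for w1, w2 in zip(response, response[1:])]
    switches = sum(1 for s in sims if s < threshold)
    return sum(sims) / len(sims), switches

mean_sim, switches = clustering_indices(["dog", "cat", "eagle", "hawk"])
print(f"mean adjacent similarity={mean_sim:.2f}, switches={switches}")
```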

  18. Managing and Securing Critical Infrastructure - A Semantic Policy and Trust Driven Approach

    DTIC Science & Technology

    2011-08-01

    …environmental factors, then it is very likely that the corresponding device has been compromised and controlled by an adversary. In this case, the report… [Figure 7: Policy Execution in Faulty Case; panels: (a) Environmental Factors in Faulty Case, (b) Result of Policy Execution in Faulty Case]

  19. SemantEco: a semantically powered modular architecture for integrating distributed environmental and ecological data

    USGS Publications Warehouse

    Patton, Evan W.; Seyed, Patrice; Wang, Ping; Fu, Linyun; Dein, F. Joshua; Bristol, R. Sky; McGuinness, Deborah L.

    2014-01-01

    We aim to inform the development of decision support tools for resource managers who need to examine large complex ecosystems and make recommendations in the face of many tradeoffs and conflicting drivers. We take a semantic technology approach, leveraging background ontologies and the growing body of linked open data. In previous work, we designed and implemented a semantically enabled environmental monitoring framework called SemantEco and used it to build a water quality portal named SemantAqua. Our previous system included foundational ontologies to support environmental regulation violations and relevant human health effects. In this work, we discuss SemantEco’s new architecture that supports modular extensions and makes it easier to support additional domains. Our enhanced framework includes foundational ontologies to support modeling of wildlife observation and wildlife health impacts, thereby enabling deeper and broader support for more holistically examining the effects of environmental pollution on ecosystems. We conclude with a discussion of how, through the application of semantic technologies, modular designs will make it easier for resource managers to bring in new sources of data to support more complex use cases.

  20. Executing SADI services in Galaxy.

    PubMed

    Aranguren, Mikel Egaña; González, Alejandro Rodríguez; Wilkinson, Mark D

    2014-01-01

    In recent years Galaxy has become a popular workflow management system in bioinformatics, due to its ease of installation, use and extension. The availability of Semantic Web-oriented tools in Galaxy, however, is limited. This is also the case for Semantic Web Services such as those provided by the SADI project, i.e. services that consume and produce RDF. Here we present SADI-Galaxy, a tool generator that deploys selected SADI Services as typical Galaxy tools. SADI-Galaxy is a Galaxy tool generator: through SADI-Galaxy, any SADI-compliant service becomes a Galaxy tool that can participate in other outstanding features of Galaxy such as data storage, history, workflow creation, and publication. Galaxy can also be used to execute and combine SADI services as it does with other Galaxy tools. Finally, we have semi-automated the packing and unpacking of data into RDF so that other Galaxy tools can easily be combined with SADI services, plugging the rich SADI Semantic Web Service environment into the popular Galaxy ecosystem. SADI-Galaxy bridges the gap between Galaxy, an easy-to-use but "static" workflow system with a wide user base, and SADI, a sophisticated, semantic, discovery-based framework for Web Services, thus benefiting both user communities.
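
    A sketch of the "packing/unpacking" step the abstract mentions: wrapping a plain tabular identifier in RDF for a service that consumes RDF, and pulling scalar values back out of an RDF response. This assumes the rdflib library; the namespace and property names are hypothetical, not SADI-Galaxy's actual vocabulary or code.

```python
from rdflib import Graph, Namespace, Literal, URIRef
from rdflib.namespace import RDF, XSD

EX = Namespace("http://example.org/sadi-demo#")   # hypothetical demo vocabulary

def pack(gene_id: str) -> str:
    """Wrap a plain tabular identifier into an RDF input document (Turtle)."""
    g = Graph()
    subject = URIRef(f"http://example.org/gene/{gene_id}")
    g.add((subject, RDF.type, EX.GeneRecord))
    g.add((subject, EX.hasIdentifier, Literal(gene_id, datatype=XSD.string)))
    return g.serialize(format="turtle")

def unpack(rdf_turtle: str):
    """Pull scalar results back out of a service's RDF response for tabular tools."""
    g = Graph()
    g.parse(data=rdf_turtle, format="turtle")
    return [str(o) for _, _, o in g.triples((None, EX.hasIdentifier, None))]

doc = pack("BRCA2")
print(doc)
print(unpack(doc))   # -> ['BRCA2']
```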

  1. Exploiting loop level parallelism in nonprocedural dataflow programs

    NASA Technical Reports Server (NTRS)

    Gokhale, Maya B.

    1987-01-01

    This paper discusses how loop-level parallelism is detected in a nonprocedural dataflow program, and how a procedural program with concurrent loops is scheduled. Also discussed is a program restructuring technique that may be applied to recursive equations so that concurrent loops may be generated for a seemingly iterative computation. A compiler that generates C code for the language has been implemented. The scheduling component of the compiler and the restructuring transformation are described.
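
    The paper's compiler emits C; the sketch below only illustrates, in Python, the end result of detecting a loop whose iterations carry no dependence: the dataflow-style equation can then be scheduled as a concurrent loop. The equation and scheduling choice are illustrative assumptions, not the compiler's output.

```python
from concurrent.futures import ProcessPoolExecutor

def body(i):
    # a dataflow-style equation y[i] = f(x[i]) with no dependence between iterations
    return i * i + 1

if __name__ == "__main__":
    n = 16
    with ProcessPoolExecutor() as pool:
        y = list(pool.map(body, range(n)))   # iterations scheduled as a concurrent loop
    print(y)
```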

  2. Creative constraints: Brain activity and network dynamics underlying semantic interference during idea production.

    PubMed

    Beaty, Roger E; Christensen, Alexander P; Benedek, Mathias; Silvia, Paul J; Schacter, Daniel L

    2017-03-01

    Functional neuroimaging research has recently revealed brain network interactions during performance on creative thinking tasks-particularly among regions of the default and executive control networks-but the cognitive mechanisms related to these interactions remain poorly understood. Here we test the hypothesis that the executive control network can interact with the default network to inhibit salient conceptual knowledge (i.e., pre-potent responses) elicited from memory during creative idea production. Participants studied common noun-verb pairs and were given a cued-recall test with corrective feedback to strengthen the paired association in memory. They then completed a verb generation task that presented either a previously studied noun (high-constraint) or an unstudied noun (low-constraint), and were asked to "think creatively" while searching for a novel verb to relate to the presented noun. Latent Semantic Analysis of verbal responses showed decreased semantic distance values in the high-constraint (i.e., interference) condition, which corresponded to increased neural activity within regions of the default (posterior cingulate cortex and bilateral angular gyri), salience (right anterior insula), and executive control (left dorsolateral prefrontal cortex) networks. Independent component analysis of intrinsic functional connectivity networks extended this finding by revealing differential interactions among these large-scale networks across the task conditions. The results suggest that interactions between the default and executive control networks underlie response inhibition during constrained idea production, providing insight into specific neurocognitive mechanisms supporting creative cognition. Copyright © 2017 Elsevier Inc. All rights reserved.

  3. Verbal Fluency Performance in Amnestic MCI and Older Adults with Cognitive Complaints

    PubMed Central

    Nutter-Upham, Katherine E.; Saykin, Andrew J.; Rabin, Laura A.; Roth, Robert M.; Wishart, Heather A.; Pare, Nadia; Flashman, Laura A.

    2009-01-01

    Verbal fluency tests are employed regularly during neuropsychological assessments of older adults, and deficits are a common finding in patients with Alzheimer’s disease (AD). Little extant research, however, has investigated verbal fluency ability and subtypes in preclinical stages of neurodegenerative disease. We examined verbal fluency performance in 107 older adults with amnestic mild cognitive impairment (MCI, n = 37), cognitive complaints (CC, n = 37) despite intact neuropsychological functioning, and demographically-matched healthy controls (HC, n = 33). Participants completed fluency tasks with letter, semantic category, and semantic switching constraints. Both phonemic and semantic fluency were statistically (but not clinically) reduced in amnestic MCI relative to cognitively intact older adults, indicating subtle changes in both the quality of the semantic store and retrieval slowing. Investigation of the underlying constructs of verbal fluency yielded two factors: Switching (including switching and shifting tasks) and Production (including letter, category, and action naming tasks), and both factors discriminated MCI from HC albeit to different degrees. Correlational findings further suggested that all fluency tasks involved executive control to some degree, while those with an added executive component (i.e., switching and shifting) were less dependent on semantic knowledge. Overall, our findings highlight the importance of including multiple verbal fluency tests in assessment batteries targeting preclinical dementia populations and suggest that individual fluency tasks may tap specific cognitive processes. PMID:18339515

  4. Utility of behavioral versus cognitive measures in differentiating between subtypes of frontotemporal lobar degeneration and Alzheimer's disease.

    PubMed

    Heidler-Gary, Jennifer; Gottesman, Rebecca; Newhart, Melissa; Chang, Shannon; Ken, Lynda; Hillis, Argye E

    2007-01-01

    We hypothesized that a modified version of the Frontal Behavioral Inventory (FBI-mod), along with a few cognitive tests, would be clinically useful in distinguishing between clinically defined Alzheimer's disease (AD) and subtypes of frontotemporal lobar degeneration (FTLD): frontotemporal dementia (dysexecutive type), progressive nonfluent aphasia, and semantic dementia. We studied 80 patients who were diagnosed with AD (n = 30) or FTLD (n = 50), on the basis of a comprehensive neuropsychological battery, imaging, neurological examination, and history. We found significant between-group differences on the FBI-mod, two subtests of the Rey Auditory Verbal Learning Test (verbal learning and delayed recall), and the Trail Making Test Part B (one measure of 'executive functioning'). AD was characterized by relatively severe impairment in verbal learning, delayed recall, and executive functioning, with relatively normal scores on the FBI-mod. Frontotemporal dementia was characterized by relatively severe impairment on the FBI-mod and executive functioning in the absence of severe impairment in verbal learning and recall. Progressive nonfluent aphasia was characterized by severe impairment in executive functioning with relatively normal scores on verbal learning and recall and FBI-mod. Finally, semantic dementia was characterized by relatively severe deficits in delayed recall, but relatively normal performance on new learning, executive functioning, and on FBI-mod. Discriminant function analysis confirmed that the FBI-mod, in conjunction with the Rey Auditory Verbal Learning Test, and the Trail Making Test Part B categorized the majority of patients as subtypes of FTLD or AD in the same way as a full neuropsychological battery, neurological examination, complete history, and imaging. These tests may be useful for efficient clinical diagnosis, although progressive nonfluent aphasia and semantic dementia are likely to be best distinguished by language tests not included in standard neuropsychological test batteries.

  5. Framework for Building Collaborative Research Environment

    DOE PAGES

    Devarakonda, Ranjeet; Palanisamy, Giriprakash; San Gil, Inigo

    2014-10-25

    A wide range of expertise and technologies is key to solving some global problems. Semantic Web technology can revolutionize the nature of how scientific knowledge is produced and shared. The Semantic Web is all about enabling machine-machine readability instead of routine human-human interaction. Carefully structured, machine-readable data is the key to enabling these interactions. Drupal is an example of one such toolset that can render all the functionalities of Semantic Web technology right out of the box. Drupal's content management system automatically stores data in a structured format, enabling it to be machine readable. Within this paper, we discuss how Drupal promotes collaboration in a research setting such as Oak Ridge National Laboratory (ORNL) and the Long Term Ecological Research Center (LTER) and how it effectively uses the Semantic Web in achieving this.

  6. Automated Data Processing as an AI Planning Problem

    NASA Technical Reports Server (NTRS)

    Golden, Keith; Pang, Wanlin; Nemani, Ramakrishna; Votava, Petr

    2003-01-01

    NASA's vision for Earth Science is to build a "sensor web": an adaptive array of heterogeneous satellites and other sensors that will track important events, such as storms, and provide real-time information about the state of the Earth to a wide variety of customers. Achieving this vision will require automation not only in the scheduling of the observations but also in the processing of the resulting data. To address this need, we have developed a planner-based agent to automatically generate and execute data-flow programs to produce the requested data products. Data processing domains are substantially different from other planning domains that have been explored, and this has led us to substantially different choices in terms of representation and algorithms. We discuss some of these differences and describe the approach we have adopted.
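
    A toy illustration of treating data-flow generation as goal-directed planning: operators are declared by the products they need and make, and a planner chains them backward from the requested product to the available inputs. The operator names and products are invented, not the actual Earth-science processing chain or planner described in the paper.

```python
# Toy operator library: each operator needs some data products and makes one.
OPERATORS = {
    "calibrate":    {"needs": ["raw_scene"],         "makes": "calibrated_scene"},
    "compute_ndvi": {"needs": ["calibrated_scene"],  "makes": "ndvi_map"},
    "mosaic":       {"needs": ["ndvi_map"],          "makes": "regional_ndvi_product"},
}

def plan(goal, available, plan_so_far=None):
    """Backward-chain from the requested product to the available inputs."""
    plan_so_far = plan_so_far or []
    if goal in available:
        return plan_so_far
    for name, op in OPERATORS.items():
        if op["makes"] == goal:
            for need in op["needs"]:
                sub = plan(need, available, plan_so_far)
                if sub is None:
                    break
                plan_so_far = sub
            else:
                return plan_so_far + [name]   # all prerequisites satisfied
    return None

print(plan("regional_ndvi_product", available={"raw_scene"}))
# -> ['calibrate', 'compute_ndvi', 'mosaic']  (an executable data-flow program)
```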

  7. Data Grid Management Systems

    NASA Technical Reports Server (NTRS)

    Moore, Reagan W.; Jagatheesan, Arun; Rajasekar, Arcot; Wan, Michael; Schroeder, Wayne

    2004-01-01

    The "Grid" is an emerging infrastructure for coordinating access across autonomous organizations to distributed, heterogeneous computation and data resources. Data grids are being built around the world as the next generation data handling systems for sharing, publishing, and preserving data residing on storage systems located in multiple administrative domains. A data grid provides logical namespaces for users, digital entities and storage resources to create persistent identifiers for controlling access, enabling discovery, and managing wide area latencies. This paper introduces data grids and describes data grid use cases. The relevance of data grids to digital libraries and persistent archives is demonstrated, and research issues in data grids and grid dataflow management systems are discussed.

  8. A Smart Modeling Framework for Integrating BMI-enabled Models as Web Services

    NASA Astrophysics Data System (ADS)

    Jiang, P.; Elag, M.; Kumar, P.; Peckham, S. D.; Liu, R.; Marini, L.; Hsu, L.

    2015-12-01

    Service-oriented computing provides an opportunity to couple web service models using semantic web technology. Through this approach, models that are exposed as web services can be conserved in their own local environment, thus making it easy for modelers to maintain and update the models. In integrated modeling, the service-oriented loose-coupling approach requires (1) a set of models as web services, (2) model metadata describing the external features of a model (e.g., variable name, unit, computational grid, etc.) and (3) a model integration framework. We present the architecture of coupling web service models that are self-describing by utilizing a smart modeling framework. We expose models that are encapsulated with CSDMS (Community Surface Dynamics Modeling System) Basic Model Interfaces (BMI) as web services. The BMI-enabled models are self-describing, exposing their metadata through BMI functions. After a BMI-enabled model is exposed as a service, a client can initialize, execute and retrieve the meta-information of the model by calling its BMI functions over the web. Furthermore, a revised version of EMELI (Peckham, 2015), an Experimental Modeling Environment for Linking and Interoperability, is chosen as the framework for coupling BMI-enabled web service models. EMELI allows users to combine a set of component models into a complex model by standardizing the model interface using BMI, as well as providing a set of utilities that smooth the integration process (e.g., temporal interpolation). We modify the original EMELI so that the revised modeling framework is able to initialize, execute and find the dependencies of the BMI-enabled web service models. Using the revised EMELI, an example will be presented on integrating a set of TopoFlow model components that are BMI-enabled and exposed as web services. Reference: Peckham, S.D. (2014) EMELI 1.0: An experimental smart modeling framework for automatic coupling of self-describing models, Proceedings of HIC 2014, 11th International Conf. on Hydroinformatics, New York, NY.
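
    A sketch of the BMI calling pattern a framework such as EMELI relies on (initialize / update / get_value / finalize, plus metadata queries such as get_output_var_names). The reservoir model below is an invented in-memory stand-in, not a CSDMS component, and a real deployment would route these calls over the web.

```python
class ToyReservoirBMI:
    """A toy component following the BMI-style calling convention (illustrative only)."""

    def initialize(self, config):
        self.storage = config.get("initial_storage", 100.0)
        self.inflow = config.get("inflow", 5.0)
        self.time = 0.0

    def get_output_var_names(self):
        # self-description: the metadata a coupling framework would query
        return ["reservoir__storage_volume"]

    def update(self):
        self.storage += self.inflow - 0.02 * self.storage  # inflow minus a simple outflow term
        self.time += 1.0

    def get_value(self, name):
        assert name == "reservoir__storage_volume"
        return self.storage

    def finalize(self):
        pass

# The framework drives every component through the same interface, regardless of what it models.
model = ToyReservoirBMI()
model.initialize({"initial_storage": 80.0})
for _ in range(3):
    model.update()
print(model.get_output_var_names(), model.get_value("reservoir__storage_volume"))
model.finalize()
```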

  9. Common world model for unmanned systems: Phase 2

    NASA Astrophysics Data System (ADS)

    Dean, Robert M. S.; Oh, Jean; Vinokurov, Jerry

    2014-06-01

    The Robotics Collaborative Technology Alliance (RCTA) seeks to provide adaptive robot capabilities which move beyond traditional metric algorithms to include cognitive capabilities. Key to this effort is the Common World Model, which moves beyond the state of the art by representing the world using semantic and symbolic as well as metric information. It joins these layers of information to define objects in the world. These objects may be reasoned upon jointly using traditional geometric, symbolic cognitive algorithms and new computational nodes formed by the combination of these disciplines to address Symbol Grounding and Uncertainty. The Common World Model must understand how these objects relate to each other. It includes the concept of Self-Information about the robot. By encoding current capability, component status, task execution state, and their histories we track information which enables the robot to reason and adapt its performance using Meta-Cognition and Machine Learning principles. The world model also includes models of how entities in the environment behave which enable prediction of future world states. To manage complexity, we have adopted a phased implementation approach. Phase 1, published in these proceedings in 2013 [1], presented the approach for linking metric with symbolic information and interfaces for traditional planners and cognitive reasoning. Here we discuss the design of "Phase 2" of this world model, which extends the Phase 1 design, API, and data structures, and reviews the use of the Common World Model as part of a semantic navigation use case.

  10. Verbal fluency, clustering, and switching in patients with psychosis following traumatic brain injury (PFTBI).

    PubMed

    Batty, Rachel; Francis, Andrew; Thomas, Neil; Hopwood, Malcolm; Ponsford, Jennie; Johnston, Lisa; Rossell, Susan

    2015-06-30

    Verbal fluency in patients with psychosis following traumatic brain injury (PFTBI) has been reported as comparable to healthy participants. This finding is counterintuitive given the prominent fluency impairments demonstrated post-traumatic brain injury (TBI) and in psychotic disorders, e.g. schizophrenia. We investigated phonemic (executive) fluency (three letters: 'F', 'A' and 'S') and semantic fluency (one category: fruits and/or vegetables) in four matched groups: PFTBI (N=10), TBI (N=10), schizophrenia (N=23), and healthy controls (N=23). Words produced (minus perseverations and errors), and clustering and switching scores, were compared for the two fluency types across the groups. The results confirmed that PFTBI patients do show impaired fluency, in line with existing evidence in TBI and schizophrenia. PFTBI patients produced the fewest words on the phonemic fluency ('A') trial and total score, and demonstrated reduced switching on both phonemic and semantic tasks. No significant differences in clustering performance were found. Importantly, the pattern of results suggested that PFTBI patients share deficits with their brain-injured (primarily executive) and psychotic (executive and semantic) counterparts, and that these are exacerbated by their dual diagnosis. These findings add to a very limited literature by providing novel evidence of the nature of fluency impairments in dually-diagnosed PFTBI. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  11. A Rewriting Logic Approach to Type Inference

    NASA Astrophysics Data System (ADS)

    Ellison, Chucky; Şerbănuţă, Traian Florin; Roşu, Grigore

    Meseguer and Roşu proposed rewriting logic semantics (RLS) as a programming language definitional framework that unifies operational and algebraic denotational semantics. RLS has already been used to define a series of didactic and real languages, but its benefits in connection with defining and reasoning about type systems have not been fully investigated. This paper shows how the same RLS style employed for giving formal definitions of languages can be used to define type systems. The same term-rewriting mechanism used to execute RLS language definitions can now be used to execute type systems, giving type checkers or type inferencers. The proposed approach is exemplified by defining the Hindley-Milner polymorphic type inferencer $\mathcal{W}$ as a rewrite logic theory and using this definition to obtain a type inferencer by executing it in a rewriting logic engine. The inferencer obtained this way compares favorably with other definitions or implementations of $\mathcal{W}$. The performance of the executable definition is within an order of magnitude of that of highly optimized implementations of type inferencers, such as that of OCaml.
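
    A toy of the paper's core idea, that typing rules can be executed directly by a rewriting mechanism. The real definition is a rewriting logic theory (typically written in a system such as Maude) covering full Hindley-Milner inference; the Python sketch below only rewrites typing judgments for a tiny monomorphic expression language and is entirely illustrative.

```python
def rewrite_type(expr, env=None):
    """Apply typing rules as rewrites until the expression reduces to a type."""
    env = env or {}
    tag = expr[0]
    if tag == "int":                      # n  ~>  int
        return "int"
    if tag == "bool":                     # true/false  ~>  bool
        return "bool"
    if tag == "var":                      # x  ~>  env(x)
        return env[expr[1]]
    if tag == "plus":                     # e1 + e2  ~>  int, if both operands rewrite to int
        if rewrite_type(expr[1], env) == rewrite_type(expr[2], env) == "int":
            return "int"
        raise TypeError("plus applied to non-integers")
    if tag == "let":                      # let x = e1 in e2  ~>  type of e2 under env[x := type of e1]
        _, x, e1, e2 = expr
        return rewrite_type(e2, {**env, x: rewrite_type(e1, env)})
    raise ValueError(f"no rewrite rule for {tag}")

term = ("let", "x", ("int", 3), ("plus", ("var", "x"), ("int", 4)))
print(rewrite_type(term))   # -> "int"
```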

  12. Exceptional lexical skills but executive language deficits in school starters and young adults with Turner syndrome: implications for X chromosome effects on brain function.

    PubMed

    Temple, Christine M; Shephard, Elizabeth E

    2012-03-01

    TS school starters had enhanced receptive and expressive language on standardised assessment (CELF-P) and enhanced rhyme judgements, spoonerisms, and lexical decision, indicating enhanced phonological skills and word representations. There was a marginal but consistent advantage across lexico-semantic tasks. On executive tasks, speeded naming of numbers was impaired but not of pictures. Young TS adults had enhanced naming and receptive vocabulary, indicating enhanced semantic skills. There were consistent deficits in executive language: phonemic oral fluency, rhyme fluency, speeded naming of pictures, numbers and colours, and sentence completion requiring suppression of prepotent responses. Haploinsufficiency of the X chromosome drives mechanisms that affect the anatomical and neurochemical development of the brain, resulting in enhanced temporal lobe aspects of language. These strengths co-exist with impaired development of frontal lobe executive language systems. This means not only that these elements of language can decouple in development but that their very independence is driven by mechanisms linked to the X chromosome. Copyright © 2011 Elsevier Inc. All rights reserved.

  13. Linking Disparate Datasets of the Earth Sciences with the SemantEco Annotator

    NASA Astrophysics Data System (ADS)

    Seyed, P.; Chastain, K.; McGuinness, D. L.

    2013-12-01

    Use of Semantic Web technologies for data management in the Earth sciences (and beyond) has great potential but is still in its early stages, since the challenges of translating data into a more explicit or semantic form for immediate use within applications have not been fully addressed. In this abstract we help address this challenge by introducing the SemantEco Annotator, which enables anyone, regardless of expertise, to semantically annotate tabular Earth Science data and translate it into linked data format, while applying the logic inherent in community-standard vocabularies to guide the process. The Annotator was conceived under a desire to unify dataset content from a variety of sources under common vocabularies, for use in semantically-enabled web applications. Our current use case employs linked data generated by the Annotator for use in the SemantEco environment, which utilizes semantics to help users explore, search, and visualize water or air quality measurement and species occurrence data through a map-based interface. The generated data can also be used immediately to facilitate discovery and search capabilities within 'big data' environments. The Annotator provides a method for taking information about a dataset, that may only be known to its maintainers, and making it explicit, in a uniform and machine-readable fashion, such that a person or information system can more easily interpret the underlying structure and meaning. Its primary mechanism is to enable a user to formally describe how columns of a tabular dataset relate and/or describe entities. For example, if a user identifies columns for latitude and longitude coordinates, we can infer the data refers to a point that can be plotted on a map. Further, it can be made explicit that measurements of 'nitrate' and 'NO3-' are of the same entity through vocabulary assignments, thus more easily utilizing data sets that use different nomenclatures. The Annotator provides an extensive and searchable library of vocabularies to assist the user in locating terms to describe observed entities, their properties, and relationships. The Annotator leverages vocabulary definitions of these concepts to guide the user in describing data in a logically consistent manner. The vocabularies made available through the Annotator are open, as is the Annotator itself. We have taken a step towards making semantic annotation/translation of data more accessible. Our vision for the Annotator is as a tool that can be integrated into a semantic data 'workbench' environment, which would allow semantic annotation of a variety of data formats, using standard vocabularies. The vocabularies involved enable searching for similar datasets and integration with semantically-enabled applications for analysis and visualization.
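
    A minimal sketch of column-level annotation as described above: each column of a tabular dataset is mapped to a vocabulary term and datatype, and each row becomes a set of linked-data triples, with 'NO3-' unified to a single nitrate term. This assumes the rdflib library; the vocabulary URIs and mapping table are hypothetical, not the Annotator's actual vocabulary library or output.

```python
import csv, io
from rdflib import Graph, Namespace, Literal, BNode
from rdflib.namespace import RDF, XSD

EX = Namespace("http://example.org/vocab#")   # hypothetical vocabulary
# user-supplied annotation: column name -> (property, datatype); "NO3-" maps to the nitrate term
COLUMN_MAP = {"lat":  (EX.latitude, XSD.decimal),
              "lon":  (EX.longitude, XSD.decimal),
              "NO3-": (EX.nitrateConcentration, XSD.decimal)}

rows = io.StringIO("lat,lon,NO3-\n44.97,-93.23,1.8\n")   # a tiny stand-in for a tabular dataset
g = Graph()
for row in csv.DictReader(rows):
    obs = BNode()                       # one observation per data row
    g.add((obs, RDF.type, EX.Observation))
    for column, value in row.items():
        prop, dtype = COLUMN_MAP[column]
        g.add((obs, prop, Literal(value, datatype=dtype)))
print(g.serialize(format="turtle"))
```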

  14. Age-Related Brain Activation Changes during Rule Repetition in Word-Matching.

    PubMed

    Methqal, Ikram; Pinsard, Basile; Amiri, Mahnoush; Wilson, Maximiliano A; Monchi, Oury; Provost, Jean-Sebastien; Joanette, Yves

    2017-01-01

    Objective: The purpose of this study was to explore the age-related brain activation changes during a word-matching semantic-category-based task, which required either repeating or changing a semantic rule to be applied. In order to do so, a word-semantic rule-based task was adapted from the Wisconsin Sorting Card Test, involving the repeated feedback-driven selection of given pairs of words based on semantic category-based criteria. Method: Forty healthy adults (20 younger and 20 older) performed a word-matching task while undergoing a fMRI scan in which they were required to pair a target word with another word from a group of three words. The required pairing is based on three word-pair semantic rules which correspond to different levels of semantic control demands: functional relatedness, moderately typical-relatedness (which were considered as low control demands), and atypical-relatedness (high control demands). The sorting period consisted of a continuous execution of the same sorting rule and an inferred trial-by-trial feedback was given. Results: Behavioral performance revealed increases in response times and decreases of correct responses according to the level of semantic control demands (functional vs. typical vs. atypical) for both age groups (younger and older) reflecting graded differences in the repetition of the application of a given semantic rule. Neuroimaging findings of significant brain activation showed two main results: (1) Greater task-related activation changes for the repetition of the application of atypical rules relative to typical and functional rules, and (2) Changes (older > younger) in the inferior prefrontal regions for functional rules and more extensive and bilateral activations for typical and atypical rules. Regarding the inter-semantic rules comparison, only task-related activation differences were observed for functional > typical (e.g., inferior parietal and temporal regions bilaterally) and atypical > typical (e.g., prefrontal, inferior parietal, posterior temporal, and subcortical regions). Conclusion: These results suggest that healthy cognitive aging relies on the adaptive changes of inferior prefrontal resources involved in the repetitive execution of semantic rules, thus reflecting graded differences in support of task demands.

  15. Language networks associated with computerized semantic indices.

    PubMed

    Pakhomov, Serguei V S; Jones, David T; Knopman, David S

    2015-01-01

    Tests of generative semantic verbal fluency are widely used to study the organization and representation of concepts in the human brain. Previous studies demonstrated that clustering and switching behavior during verbal fluency tasks is supported by multiple brain mechanisms associated with semantic memory and executive control. Previous work relied on manual assessments of semantic relatedness between words and grouping of words into semantic clusters. We investigated a computational linguistic approach to measuring the strength of semantic relatedness between words based on latent semantic analysis of word co-occurrences in a subset of a large online encyclopedia. We computed semantic clustering indices and compared them to brain network connectivity measures obtained with task-free fMRI in a sample consisting of healthy participants and those differentially affected by cognitive impairment. We found that semantic clustering indices were associated with brain network connectivity in distinct areas including fronto-temporal, fronto-parietal and fusiform gyrus regions. This study shows that computerized semantic indices complement traditional assessments of verbal fluency to provide a more complete account of the relationship between brain and verbal behavior involved in the organization and retrieval of lexical information from memory. Copyright © 2014 Elsevier Inc. All rights reserved.

  16. Concepts, Control, and Context: A Connectionist Account of Normal and Disordered Semantic Cognition

    PubMed Central

    2018-01-01

    Semantic cognition requires conceptual representations shaped by verbal and nonverbal experience and executive control processes that regulate activation of knowledge to meet current situational demands. A complete model must also account for the representation of concrete and abstract words, of taxonomic and associative relationships, and for the role of context in shaping meaning. We present the first major attempt to assimilate all of these elements within a unified, implemented computational framework. Our model combines a hub-and-spoke architecture with a buffer that allows its state to be influenced by prior context. This hybrid structure integrates the view, from cognitive neuroscience, that concepts are grounded in sensory-motor representation with the view, from computational linguistics, that knowledge is shaped by patterns of lexical co-occurrence. The model successfully codes knowledge for abstract and concrete words, associative and taxonomic relationships, and the multiple meanings of homonyms, within a single representational space. Knowledge of abstract words is acquired through (a) their patterns of co-occurrence with other words and (b) acquired embodiment, whereby they become indirectly associated with the perceptual features of co-occurring concrete words. The model accounts for executive influences on semantics by including a controlled retrieval mechanism that provides top-down input to amplify weak semantic relationships. The representational and control elements of the model can be damaged independently, and the consequences of such damage closely replicate effects seen in neuropsychological patients with loss of semantic representation versus control processes. Thus, the model provides a wide-ranging and neurally plausible account of normal and impaired semantic cognition. PMID:29733663

  17. Semantic integration of gene expression analysis tools and data sources using software connectors

    PubMed Central

    2013-01-01

    Background: The study and analysis of gene expression measurements is the primary focus of functional genomics. Once expression data is available, biologists are faced with the task of extracting (new) knowledge associated with the underlying biological phenomenon. Most often, in order to perform this task, biologists execute a number of analysis activities on the available gene expression dataset rather than a single analysis activity. The integration of heterogeneous tools and data sources to create an integrated analysis environment represents a challenging and error-prone task. Semantic integration enables the assignment of unambiguous meanings to data shared among different applications in an integrated environment, allowing the exchange of data in a semantically consistent and meaningful way. This work aims at developing an ontology-based methodology for the semantic integration of gene expression analysis tools and data sources. The proposed methodology relies on software connectors to support not only the access to heterogeneous data sources but also the definition of transformation rules on exchanged data. Results: We have studied the different challenges involved in the integration of computer systems and the role software connectors play in this task. We have also studied a number of gene expression technologies, analysis tools and related ontologies in order to devise basic integration scenarios and propose a reference ontology for the gene expression domain. Then, we have defined a number of activities and associated guidelines to prescribe how the development of connectors should be carried out. Finally, we have applied the proposed methodology in the construction of three different integration scenarios involving the use of different tools for the analysis of different types of gene expression data. Conclusions: The proposed methodology facilitates the development of connectors capable of semantically integrating different gene expression analysis tools and data sources. The methodology can be used in the development of connectors supporting both simple and nontrivial processing requirements, thus assuring accurate data exchange and information interpretation from exchanged data. PMID:24341380

  18. Semantic integration of gene expression analysis tools and data sources using software connectors.

    PubMed

    Miyazaki, Flávia A; Guardia, Gabriela D A; Vêncio, Ricardo Z N; de Farias, Cléver R G

    2013-10-25

    The study and analysis of gene expression measurements is the primary focus of functional genomics. Once expression data is available, biologists are faced with the task of extracting (new) knowledge associated with the underlying biological phenomenon. Most often, in order to perform this task, biologists execute a number of analysis activities on the available gene expression dataset rather than a single analysis activity. The integration of heterogeneous tools and data sources to create an integrated analysis environment represents a challenging and error-prone task. Semantic integration enables the assignment of unambiguous meanings to data shared among different applications in an integrated environment, allowing the exchange of data in a semantically consistent and meaningful way. This work aims at developing an ontology-based methodology for the semantic integration of gene expression analysis tools and data sources. The proposed methodology relies on software connectors to support not only the access to heterogeneous data sources but also the definition of transformation rules on exchanged data. We have studied the different challenges involved in the integration of computer systems and the role software connectors play in this task. We have also studied a number of gene expression technologies, analysis tools and related ontologies in order to devise basic integration scenarios and propose a reference ontology for the gene expression domain. Then, we have defined a number of activities and associated guidelines to prescribe how the development of connectors should be carried out. Finally, we have applied the proposed methodology in the construction of three different integration scenarios involving the use of different tools for the analysis of different types of gene expression data. The proposed methodology facilitates the development of connectors capable of semantically integrating different gene expression analysis tools and data sources. The methodology can be used in the development of connectors supporting both simple and nontrivial processing requirements, thus assuring accurate data exchange and information interpretation from exchanged data.
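
    A toy illustration of the connector idea: a connector mediates between a data source and an analysis tool, applying a transformation rule to each exchanged record so the target tool receives data in the schema it expects. The record formats, field names, and rule are invented for illustration and do not come from the paper's reference ontology.

```python
def transformation_rule(record):
    """Map the source tool's field names onto the target tool's expected schema."""
    return {
        "gene_symbol": record["probe_id"].split("_")[0],    # e.g. "TP53_at" -> "TP53"
        "log2_expression": round(record["expression"], 3),  # already log2 in this toy source
    }

class Connector:
    """Glue component between a heterogeneous data source and an analysis tool."""
    def __init__(self, source_fetch, target_load, rule):
        self.source_fetch, self.target_load, self.rule = source_fetch, target_load, rule

    def run(self):
        for record in self.source_fetch():
            self.target_load(self.rule(record))

# Stand-ins for the data source being accessed and the tool being fed.
source = lambda: [{"probe_id": "TP53_at", "expression": 7.8412}]
sink = lambda rec: print("loaded:", rec)

Connector(source, sink, transformation_rule).run()
```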

  19. The impact of intelligence on memory and executive functions of children with temporal lobe epilepsy: Methodological concerns with clinical relevance.

    PubMed

    Rzezak, Patricia; Guimarães, Catarina A; Guerreiro, Marilisa M; Valente, Kette D

    2017-05-01

    Patients with TLE are prone to have lower IQ scores than healthy controls. Nevertheless, the impact of IQ differences is not usually considered in studies that compare the cognitive functioning of children with and without epilepsy. This study aimed to determine the effect of using IQ as a covariate on memory and attentional/executive functions of children with TLE. Thirty-eight children and adolescents with TLE and 28 healthy controls matched for age, gender, and sociodemographic factors were evaluated with a comprehensive neuropsychological battery for memory and executive functions. The authors conducted three analyses to verify the impact of IQ scores on the other cognitive domains. First, we compared performance on cognitive tests without controlling for IQ differences between groups. Second, we performed the same analyses but included IQ as a confounding factor. Finally, we evaluated the predictive value of IQ on cognitive functioning. Although patients had IQ scores in the normal range, they showed lower IQ scores than controls (p = 0.001). When we did not consider IQ in the analyses, patients had worse performance in verbal and visual memory (short- and long-term), semantic memory, sustained, divided and selective attention, mental flexibility and mental tracking for semantic information. When using IQ as a covariate, patients showed worse performance only in verbal memory (long-term), semantic memory, sustained and divided attention, and mental flexibility. IQ was a predictor of verbal and visual memory (immediate and delayed), working memory, mental flexibility and mental tracking for semantic information. Intelligence level had a significant impact on the memory and executive functioning of children and adolescents with TLE without intellectual disability. This finding opens the discussion of whether IQ scores should be considered when interpreting differences in cognitive performance of patients with epilepsy compared to healthy volunteers. Copyright © 2017 European Paediatric Neurology Society. Published by Elsevier Ltd. All rights reserved.

  20. Category and design fluency in mild cognitive impairment: Performance, strategy use, and neural correlates.

    PubMed

    Peter, Jessica; Kaiser, Jannis; Landerer, Verena; Köstering, Lena; Kaller, Christoph P; Heimbach, Bernhard; Hüll, Michael; Bormann, Tobias; Klöppel, Stefan

    2016-12-01

    The exploration and retrieval of words during category fluency involves different strategies to improve or maintain performance. Deficits in that task, which are common in patients with amnestic mild cognitive impairment (aMCI), mirror either impaired semantic memory or dysfunctional executive control mechanisms. Relating category fluency to tasks that place greater demands on either semantic knowledge or executive functions might help to determine the underlying cognitive process. The aims of this study were to compare performance and strategy use of 20 patients with aMCI to 30 healthy elderly controls (HC) and to identify the dominant component (either executive or semantic) for better task performance in category fluency. Thus, the relationship between category fluency, design fluency and naming was examined. As fluency tasks have been associated with the superior frontal gyrus (SFG), the inferior frontal gyrus (IFG), and the temporal pole, we further explored the relationship between gray matter volume in these areas and both performance and strategy use. Patients with aMCI showed significantly lower performance and significantly less strategy use during fluency tasks compared to HC. However, both groups equally improved their performance when repeatedly confronted with the same task. In aMCI, performance during category fluency was significantly predicted by design fluency performance, while in HC, it was significantly predicted by naming performance. In HC, volume of the SFG significantly predicted both category and design fluency performance, and strategy use during design fluency. In aMCI, the SFG and the IFG predicted performance during both category and design fluency. The IFG significantly predicted strategy use during category fluency in both groups. The reduced category fluency performance in aMCI seems to be primarily due to dysfunctional executive control mechanisms rather than impaired semantic knowledge. This finding is directly relevant to patients in the different stages of Alzheimer's disease as it links the known semantic fluency deficit in this population to executive functions. Although patients with aMCI are impaired in both performance and strategy use compared to HC, they are able to increase performance over time. However, only HC were able to significantly improve the utilization of fluency strategies in both category and design fluency over time. HC seem to rely more heavily on the SFG during fluency tasks, while in patients with aMCI additional frontal brain areas are involved, possibly reflecting compensational processes. Copyright © 2016 Elsevier Ltd. All rights reserved.

  1. Semantic encoding and retrieval in the left inferior prefrontal cortex: a functional MRI study of task difficulty and process specificity.

    PubMed

    Demb, J B; Desmond, J E; Wagner, A D; Vaidya, C J; Glover, G H; Gabrieli, J D

    1995-09-01

    Prefrontal cortical function was examined during semantic encoding and repetition priming using functional magnetic resonance imaging (fMRI), a noninvasive technique for localizing regional changes in blood oxygenation, a correlate of neural activity. Words studied in a semantic (deep) encoding condition were better remembered than words studied in both easier and more difficult nonsemantic (shallow) encoding conditions, with difficulty indexed by response time. The left inferior prefrontal cortex (LIPC) (Brodmann's areas 45, 46, 47) showed increased activation during semantic encoding relative to nonsemantic encoding regardless of the relative difficulty of the nonsemantic encoding task. Therefore, LIPC activation appears to be related to semantic encoding and not task difficulty. Semantic encoding decisions are performed faster the second time words are presented. This represents semantic repetition priming, a facilitation in semantic processing for previously encoded words that is not dependent on intentional recollection. The same LIPC area activated during semantic encoding showed decreased activation during repeated semantic encoding relative to initial semantic encoding of the same words. This decrease in activation during repeated encoding was process specific; it occurred when words were semantically reprocessed but not when words were nonsemantically reprocessed. The results were apparent in both individual and averaged functional maps. These findings suggest that the LIPC is part of a semantic executive system that contributes to the on-line retrieval of semantic information.

  2. iPad: Semantic annotation and markup of radiological images.

    PubMed

    Rubin, Daniel L; Rodriguez, Cesar; Shah, Priyanka; Beaulieu, Chris

    2008-11-06

    Radiological images contain a wealth of information, such as anatomy and pathology, which is often not explicit and computationally accessible. Information schemes are being developed to describe the semantic content of images, but such schemes can be unwieldy to operationalize because there are few tools to enable users to capture structured information easily as part of the routine research workflow. We have created iPad, an open source tool enabling researchers and clinicians to create semantic annotations on radiological images. iPad hides the complexity of the underlying image annotation information model from users, permitting them to describe images and image regions using a graphical interface that maps their descriptions to structured ontologies semi-automatically. Image annotations are saved in a variety of formats, enabling interoperability among medical records systems, image archives in hospitals, and the Semantic Web. Tools such as iPad can help reduce the burden of collecting structured information from images, and it could ultimately enable researchers and physicians to exploit images on a very large scale and glean the biological and physiological significance of image content.

  3. Foundations for Streaming Model Transformations by Complex Event Processing.

    PubMed

    Dávid, István; Ráth, István; Varró, Dániel

    2018-01-01

    Streaming model transformations represent a novel class of transformations to manipulate models whose elements are continuously produced or modified in high volume and with a rapid rate of change. Executing streaming transformations requires efficient techniques to recognize activated transformation rules over a live model and a potentially infinite stream of events. In this paper, we propose foundations of streaming model transformations by innovatively integrating incremental model query, complex event processing (CEP) and reactive (event-driven) transformation techniques. Complex event processing makes it possible to identify relevant patterns and sequences of events over an event stream. Our approach enables event streams to include model change events which are automatically and continuously populated by incremental model queries. Furthermore, a reactive rule engine carries out transformations on identified complex event patterns. We provide an integrated domain-specific language with precise semantics for capturing complex event patterns and streaming transformations, together with an execution engine, all of which is now part of the Viatra reactive transformation framework. We demonstrate the feasibility of our approach with two case studies: one in an advanced model engineering workflow; and one in the context of on-the-fly gesture recognition.
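
    A toy illustration of the pipeline described above: model change events arrive as a stream, a complex event pattern (a Transition creation followed by a set_target for the same element within a small window) is matched, and a reactive transformation fires. Viatra expresses such patterns in its own DSL; the events, pattern, window size, and reaction below are invented for illustration.

```python
from collections import deque

def change_stream():
    # model change events as they would be emitted by incremental model queries
    yield ("create", "State", "s1")
    yield ("create", "Transition", "t1")
    yield ("set_target", "t1", "s1")

def cep_engine(stream, window=3):
    """Fire a reactive transformation when a Transition creation is followed,
    within the window, by a set_target event for the same element."""
    recent = deque(maxlen=window)
    for event in stream:
        recent.append(event)
        if event[0] == "set_target":
            target_elem = event[1]
            if ("create", "Transition", target_elem) in recent:
                yield f"transformation fired: wire transition {target_elem} into the target model"

for action in cep_engine(change_stream()):
    print(action)
```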

  4. Semantic fluency in deaf children who use spoken and signed language in comparison with hearing peers

    PubMed Central

    Jones, A.; Fastelli, A.; Atkinson, J.; Botting, N.; Morgan, G.

    2017-01-01

    Background: Deafness has an adverse impact on children's ability to acquire spoken languages. Signed languages offer a more accessible input for deaf children, but because the vast majority are born to hearing parents who do not sign, their early exposure to sign language is limited. Deaf children as a whole are therefore at high risk of language delays. Aims: We compared deaf and hearing children's performance on a semantic fluency task. Optimal performance on this task requires a systematic search of the mental lexicon, the retrieval of words within a subcategory and, when that subcategory is exhausted, switching to a new subcategory. We compared retrieval patterns between groups, and also compared the responses of deaf children who used British Sign Language (BSL) with those who used spoken English. We investigated how semantic fluency performance related to children's expressive vocabulary and executive function skills, and also retested semantic fluency in the majority of the children nearly 2 years later, in order to investigate how much progress they had made in that time. Methods & Procedures: Participants were deaf children aged 6–11 years (N = 106, comprising 69 users of spoken English, 29 users of BSL and eight users of Sign Supported English (SSE)) compared with hearing children (N = 120) of the same age who used spoken English. Semantic fluency was tested for the category ‘animals’. We coded for errors, clusters (e.g., ‘pets’, ‘farm animals’) and switches. Participants also completed the Expressive One‐Word Picture Vocabulary Test and a battery of six non‐verbal executive function tasks. In addition, we collected follow‐up semantic fluency data for 70 deaf and 74 hearing children, nearly 2 years after they were first tested. Outcomes & Results: Deaf children, whether using spoken or signed language, produced fewer items in the semantic fluency task than hearing children, but they showed similar patterns of responses for items most commonly produced, clustering of items into subcategories and switching between subcategories. Both vocabulary and executive function scores predicted the number of correct items produced. Follow‐up data from deaf participants showed continuing delays relative to hearing children 2 years later. Conclusions & Implications: We conclude that semantic fluency can be used experimentally to investigate lexical organization in deaf children, and that it potentially has clinical utility across the heterogeneous deaf population. We present normative data to aid clinicians who wish to use this task with deaf children. PMID:28691260

  5. Semantic SenseLab: implementing the vision of the Semantic Web in neuroscience

    PubMed Central

    Samwald, Matthias; Chen, Huajun; Ruttenberg, Alan; Lim, Ernest; Marenco, Luis; Miller, Perry; Shepherd, Gordon; Cheung, Kei-Hoi

    2011-01-01

    Objective: Integrative neuroscience research needs a scalable informatics framework that enables semantic integration of diverse types of neuroscience data. This paper describes the use of the Web Ontology Language (OWL) and other Semantic Web technologies for the representation and integration of molecular-level data provided by several of the SenseLab suite of neuroscience databases. Methods: Based on the original database structure, we semi-automatically translated the databases into OWL ontologies with manual addition of semantic enrichment. The SenseLab ontologies are extensively linked to other biomedical Semantic Web resources, including the Subcellular Anatomy Ontology, Brain Architecture Management System, the Gene Ontology, BIRNLex and UniProt. The SenseLab ontologies have also been mapped to the Basic Formal Ontology and Relation Ontology, which helps ease interoperability with many other existing and future biomedical ontologies for the Semantic Web. In addition, approaches to representing contradictory research statements are described. The SenseLab ontologies are designed for use on the Semantic Web, which enables their integration into a growing collection of biomedical information resources. Conclusion: We demonstrate that our approach can yield significant potential benefits and that the Semantic Web is rapidly becoming mature enough to realize its anticipated promises. The ontologies are available online at http://neuroweb.med.yale.edu/senselab/ PMID:20006477

  6. Semantic Document Model to Enhance Data and Knowledge Interoperability

    NASA Astrophysics Data System (ADS)

    Nešić, Saša

    To enable document data and knowledge to be efficiently shared and reused across application, enterprise, and community boundaries, desktop documents should be completely open and queryable resources, whose data and knowledge are represented in a form understandable to both humans and machines. At the same time, these are the requirements that desktop documents need to satisfy in order to contribute to the visions of the Semantic Web. With the aim of achieving this goal, we have developed the Semantic Document Model (SDM), which turns desktop documents into Semantic Documents as uniquely identified and semantically annotated composite resources, that can be instantiated into human-readable (HR) and machine-processable (MP) forms. In this paper, we present the SDM along with an RDF and ontology-based solution for the MP document instance. Moreover, on top of the proposed model, we have built the Semantic Document Management System (SDMS), which provides a set of services that exploit the model. As an application example that takes advantage of SDMS services, we have extended MS Office with a set of tools that enables users to transform MS Office documents (e.g., MS Word and MS PowerPoint) into Semantic Documents, and to search local and distant semantic document repositories for document content units (CUs) over Semantic Web protocols.

  7. Semantic SenseLab: Implementing the vision of the Semantic Web in neuroscience.

    PubMed

    Samwald, Matthias; Chen, Huajun; Ruttenberg, Alan; Lim, Ernest; Marenco, Luis; Miller, Perry; Shepherd, Gordon; Cheung, Kei-Hoi

    2010-01-01

    Integrative neuroscience research needs a scalable informatics framework that enables semantic integration of diverse types of neuroscience data. This paper describes the use of the Web Ontology Language (OWL) and other Semantic Web technologies for the representation and integration of molecular-level data provided by several of the SenseLab suite of neuroscience databases. Based on the original database structure, we semi-automatically translated the databases into OWL ontologies with manual addition of semantic enrichment. The SenseLab ontologies are extensively linked to other biomedical Semantic Web resources, including the Subcellular Anatomy Ontology, Brain Architecture Management System, the Gene Ontology, BIRNLex and UniProt. The SenseLab ontologies have also been mapped to the Basic Formal Ontology and Relation Ontology, which helps ease interoperability with many other existing and future biomedical ontologies for the Semantic Web. In addition, approaches to representing contradictory research statements are described. The SenseLab ontologies are designed for use on the Semantic Web, which enables their integration into a growing collection of biomedical information resources. We demonstrate that our approach can yield significant potential benefits and that the Semantic Web is rapidly becoming mature enough to realize its anticipated promises. The ontologies are available online at http://neuroweb.med.yale.edu/senselab/. Copyright © 2009 Elsevier B.V. All rights reserved.

  8. From Science to e-Science to Semantic e-Science: A Heliophysics Case Study

    NASA Technical Reports Server (NTRS)

    Narock, Thomas; Fox, Peter

    2011-01-01

    The past few years have witnessed unparalleled efforts to make scientific data web-accessible. The Semantic Web has proven invaluable in this effort; however, much of the literature is devoted to system design, ontology creation, and the trials and tribulations of current technologies. In order to fully develop the nascent field of Semantic e-Science, we must also evaluate systems in real-world settings. We describe a case study within the field of Heliophysics and provide a comparison of the evolutionary stages of data discovery, from manual to semantically enabled. We describe the socio-technical implications of moving toward automated and intelligent data discovery. In doing so, we highlight how this process enhances what is currently being done manually in various scientific disciplines. Our case study illustrates that Semantic e-Science is more than just semantic search. The integration of search with web services, relational databases, and other cyberinfrastructure is a central tenet of our case study and one that we believe has applicability as a generalized research area within Semantic e-Science. This case study illustrates a specific example of the benefits and limitations of semantically replicating data discovery. We show examples of significant reductions in time and effort enabled by Semantic e-Science; yet, we argue that a "complete" solution requires integrating semantic search with other research areas such as data provenance and web services.

  9. Executive Functions in Older Adults with Autism Spectrum Disorder: Objective Performance and Subjective Complaints

    ERIC Educational Resources Information Center

    Davids, Roeliena C.; Groen, Yvonne; Berg, Ina J.; Tucha, Oliver M.; van Balkom, Ingrid D.

    2016-01-01

    Although deficits in Executive Functioning (EF) are reported frequently in young individuals with Autism Spectrum Disorders (ASD), they remain relatively unexplored later in life (>50 years). We studied objective performance on EF measures (Tower of London, Zoo map, phonetic/semantic fluency) as well as subjective complaints (self- and proxy…

  10. Semantic processing of EHR data for clinical research.

    PubMed

    Sun, Hong; Depraetere, Kristof; De Roo, Jos; Mels, Giovanni; De Vloed, Boris; Twagirumukiza, Marc; Colaert, Dirk

    2015-12-01

    There is a growing need to semantically process and integrate clinical data from different sources for clinical research. This paper presents an approach to integrate EHRs from heterogeneous resources and generate integrated data in different data formats or semantics to support various clinical research applications. The proposed approach builds semantic data virtualization layers on top of data sources, which generate data in the requested semantics or formats on demand. This approach avoids dumping and synchronizing the data upfront in multiple representations. Data from different EHR systems are first mapped to RDF data with source semantics, and then converted to representations with harmonized domain semantics, where domain ontologies and terminologies are used to improve reusability. It is also possible to further convert data to application semantics and store the converted results in clinical research databases, e.g. i2b2 and OMOP, to support different clinical research settings. Semantic conversions between different representations are explicitly expressed using N3 rules and executed by an N3 reasoner (EYE), which can also generate proofs of the conversion processes. The solution presented in this paper has been applied to real-world applications that process large-scale EHR data. Copyright © 2015 Elsevier Inc. All rights reserved.
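
    The system described above expresses such conversions as N3 rules executed by the EYE reasoner. Purely as an illustration of the idea of mapping source semantics to harmonized domain semantics, here is a minimal Python/rdflib sketch with invented source and domain vocabularies; it is not the project's rule set.

    ```python
    # Minimal sketch of a source-to-domain semantic conversion, assuming
    # hypothetical vocabularies; the actual system expresses such mappings as
    # N3 rules executed by a reasoner rather than as Python code.
    from rdflib import Graph, Namespace, Literal
    from rdflib.namespace import RDF

    SRC = Namespace("http://example.org/ehr-source#")    # source EHR semantics
    DOM = Namespace("http://example.org/domain#")        # harmonized domain semantics

    src = Graph()
    src.add((SRC.patient42, RDF.type, SRC.Patient))
    src.add((SRC.patient42, SRC.sysBP, Literal(142)))    # source-specific property

    dom = Graph()
    # "Rule": every sysBP value becomes a domain-level SystolicBloodPressure observation.
    for patient, _, value in src.triples((None, SRC.sysBP, None)):
        obs = DOM["obs_" + str(patient).rsplit("#", 1)[-1]]
        dom.add((obs, RDF.type, DOM.SystolicBloodPressure))
        dom.add((obs, DOM.subject, patient))
        dom.add((obs, DOM.valueMmHg, value))

    print(dom.serialize(format="turtle"))
    ```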

  11. Semantics-enabled service discovery framework in the SIMDAT pharma grid.

    PubMed

    Qu, Cangtao; Zimmermann, Falk; Kumpf, Kai; Kamuzinzi, Richard; Ledent, Valérie; Herzog, Robert

    2008-03-01

    We present the design and implementation of a semantics-enabled service discovery framework in the Data Grids for Process and Product Development Using Numerical Simulation and Knowledge Discovery (SIMDAT) Pharma Grid, an industry-oriented Grid environment for integrating thousands of Grid-enabled biological data services and analysis services. The framework consists of three major components: the Web Ontology Language (OWL)-description logic (DL)-based biological domain ontology, OWL Web service ontology (OWL-S)-based service annotation, and a semantic matchmaker based on ontology reasoning. Built upon the framework, workflow technologies are extensively exploited in SIMDAT to assist biologists in (semi)automatically performing in silico experiments. We present a typical usage scenario through the case study of a biological workflow: IXodus.
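
    To make the matchmaking idea concrete, the following is a toy Python sketch that ranks advertised services whose output concept is equal to, or subsumed by, a requested concept. The class hierarchy and service names are invented; the SIMDAT matchmaker works over OWL-DL ontologies and OWL-S annotations with a description-logic reasoner rather than a hand-coded hierarchy.

    ```python
    # Toy semantic matchmaker: select services whose advertised output concept
    # equals, or is a subclass of, the requested concept. Hierarchy and service
    # names are illustrative only.
    SUBCLASS_OF = {                      # child -> parent
        "ProteinSequence": "Sequence",
        "NucleotideSequence": "Sequence",
        "Sequence": "BiologicalData",
    }

    def is_subconcept(child, ancestor):
        while child is not None:
            if child == ancestor:
                return True
            child = SUBCLASS_OF.get(child)
        return False

    SERVICES = {
        "BlastP": "ProteinSequence",
        "GenomeFetch": "NucleotideSequence",
        "ImageRender": "RasterImage",
    }

    def match(requested_output):
        return [name for name, out in SERVICES.items()
                if is_subconcept(out, requested_output)]

    print(match("Sequence"))   # -> ['BlastP', 'GenomeFetch']
    ```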

  12. Learning the Language of Healthcare Enabling Semantic Web Technology in CHCS

    DTIC Science & Technology

    2013-09-01

    …"tuples" (subject, predicate, object) to relate data and achieve semantic interoperability. Other similar technologies exist, but their… Semantic Healthcare repository [5]. Ultimately, both of our data approaches were successful. However, our current test system is based on the CPRS demo… to extract system dependencies and workflows; to extract semantically related patient data; and to browse patient-centric views into the system. We…

  13. The Semantic Web in Teacher Education

    ERIC Educational Resources Information Center

    Czerkawski, Betül Özkan

    2014-01-01

    The Semantic Web enables increased collaboration among computers and people by organizing unstructured data on the World Wide Web. Rather than a separate body, the Semantic Web is a functional extension of the current Web made possible by defining relationships among websites and other online content. When explicitly defined, these relationships…

  14. Composable Framework Support for Software-FMEA Through Model Execution

    NASA Astrophysics Data System (ADS)

    Kocsis, Imre; Patricia, Andras; Brancati, Francesco; Rossi, Francesco

    2016-08-01

    Performing Failure Modes and Effect Analysis (FMEA) during software architecture design is becoming a basic requirement in an increasing number of domains; however, due to the lack of standardized early design phase model execution, classic SW-FMEA approaches carry significant risks and are human effort-intensive even in processes that use Model-Driven Engineering. Recently, modelling languages with standardized executable semantics have emerged. Building on earlier results, this paper describes framework support for generating executable error propagation models from such models during software architecture design. The approach carries the promise of increased precision, decreased risk and more automated execution for SW-FMEA during dependability-critical system development.
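
    As a minimal sketch of what an executable error-propagation model does (propagating an injected failure mode along component connections), here is a short Python illustration. The component graph and the single generic failure mode are invented; real SW-FMEA models distinguish multiple failure modes and per-component transformation rules.

    ```python
    # Minimal sketch of error propagation over a component graph: a failure
    # injected at one component is propagated along its outgoing connections.
    # Components and connections are invented for illustration.
    from collections import deque

    CONNECTIONS = {            # component -> downstream components
        "sensor": ["filter"],
        "filter": ["controller"],
        "controller": ["actuator", "logger"],
    }

    def propagate(failed_component):
        affected, queue = {failed_component}, deque([failed_component])
        while queue:
            for succ in CONNECTIONS.get(queue.popleft(), []):
                if succ not in affected:
                    affected.add(succ)
                    queue.append(succ)
        return affected

    print(propagate("sensor"))   # all five components are reached
    ```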

  15. Built-In Data-Flow Integration Testing in Large-Scale Component-Based Systems

    NASA Astrophysics Data System (ADS)

    Piel, Éric; Gonzalez-Sanchez, Alberto; Gross, Hans-Gerhard

    Modern large-scale component-based applications and service ecosystems are built following a number of different component models and architectural styles, such as the data-flow architectural style. In this style, each building block receives data from a previous one in the flow and sends output data to other components. This organisation expresses information flows adequately, and also favours decoupling between the components, leading to easier maintenance and quicker evolution of the system. Integration testing is a major means to ensure the quality of large systems. Their size and complexity, together with the fact that they are developed and maintained by several stakeholders, make Built-In Testing (BIT) an attractive approach to manage their integration testing. However, so far no technique has been proposed that combines BIT and data-flow integration testing. We have introduced the notion of a virtual component in order to realize such a combination. It makes it possible to define, using BIT, the behaviour of several components assembled to process a flow of data. Test cases are defined so that they are simple to write and flexible to adapt. We present two implementations of our proposed virtual component integration testing technique, and we extend our previous proposal to detect and handle errors in the user's definition. The evaluation of the virtual component testing approach suggests that more issues can be detected in systems with data-flows than through other integration testing approaches.
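
    The following is a minimal Python sketch of the virtual-component idea: a wrapper around an assembly of data-flow components that pushes a test flow through and checks the emitted output. The component names, stages and assertion are illustrative and are not the authors' framework.

    ```python
    # Minimal sketch of built-in data-flow testing with a "virtual component":
    # a wrapper that runs a test flow through an assembly of components and
    # checks the output. Stages and the assertion are illustrative only.
    class VirtualComponent:
        def __init__(self, *stages):
            self.stages = stages            # each stage: callable(data) -> data

        def process(self, item):
            for stage in self.stages:
                item = stage(item)
            return item

        def built_in_test(self, test_inputs, expected_outputs):
            return all(self.process(i) == o
                       for i, o in zip(test_inputs, expected_outputs))

    # Two simple data-flow components assembled into one virtual component.
    normalize = lambda x: x.strip().lower()
    tokenize = lambda x: x.split()

    vc = VirtualComponent(normalize, tokenize)
    assert vc.built_in_test(["  Hello World "], [["hello", "world"]])
    ```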

  16. SPARQLGraph: a web-based platform for graphically querying biological Semantic Web databases.

    PubMed

    Schweiger, Dominik; Trajanoski, Zlatko; Pabinger, Stephan

    2014-08-15

    The Semantic Web has established itself as a framework for using and sharing data across applications and database boundaries. Here, we present a web-based platform for querying biological Semantic Web databases in a graphical way. SPARQLGraph offers an intuitive drag & drop query builder, which converts the visual graph into a query and executes it on a public endpoint. The tool integrates several publicly available Semantic Web databases, including the databases of the recently released EBI RDF platform. Furthermore, it provides several predefined template queries for answering biological questions. Users can easily create and save new query graphs, which can also be shared with other researchers. This new graphical way of creating queries for biological Semantic Web databases considerably facilitates usability as it removes the requirement of knowing specific query languages and database structures. The system is freely available at http://sparqlgraph.i-med.ac.at.
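
    For readers unfamiliar with the underlying mechanics, the sketch below shows how a SPARQL query can be executed against a public endpoint from Python, roughly the kind of query a visual builder generates behind the scenes. The endpoint and query are examples only (not SPARQLGraph's internals), and the SPARQLWrapper package is assumed to be installed.

    ```python
    # Illustrative execution of a SPARQL query against a public endpoint.
    # Endpoint and query are examples; requires the SPARQLWrapper package.
    from SPARQLWrapper import SPARQLWrapper, JSON

    endpoint = SPARQLWrapper("https://sparql.uniprot.org/sparql")  # example endpoint
    endpoint.setQuery("""
        PREFIX up: <http://purl.uniprot.org/core/>
        SELECT ?protein WHERE { ?protein a up:Protein } LIMIT 5
    """)
    endpoint.setReturnFormat(JSON)

    results = endpoint.query().convert()
    for row in results["results"]["bindings"]:
        print(row["protein"]["value"])
    ```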

  17. A Metadata Action Language

    NASA Technical Reports Server (NTRS)

    Golden, Keith; Clancy, Dan (Technical Monitor)

    2001-01-01

    The data management problem comprises data processing and data tracking. Data processing is the creation of new data based on existing data sources. Data tracking consists of storing metadata descriptions of available data. This paper addresses the data management problem by casting it as an AI planning problem. Actions are data-processing commands, plans are dataflow programs, and goals are metadata descriptions of desired data products. Data manipulation is simply plan generation and execution, and a key component of data tracking is inferring the effects of an observed plan. We introduce a new action language for data management domains, called ADILM. We discuss the connection between data processing and information integration and show how a language for the latter must be modified to support the former. The paper also discusses information gathering within a data-processing framework, and shows how ADILM metadata expressions are a generalization of Local Completeness.
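
    As a toy rendering of the "data processing as planning" idea (and not ADILM itself), the following Python sketch declares actions with metadata preconditions and effects and uses a naive forward search to find a dataflow that produces a goal description. Action names and metadata tokens are invented.

    ```python
    # Toy version of data processing as planning: actions declare metadata
    # preconditions/effects, and a naive forward search finds a dataflow that
    # produces the goal metadata. Illustration only, not the ADILM language.
    ACTIONS = {
        "ingest":    ({"raw_file"}, {"table"}),
        "calibrate": ({"table"}, {"calibrated_table"}),
        "plot":      ({"calibrated_table"}, {"figure"}),
    }

    def plan(state, goal, path=()):
        if goal <= state:
            return list(path)
        for name, (pre, eff) in ACTIONS.items():
            if pre <= state and not eff <= state:
                result = plan(state | eff, goal, path + (name,))
                if result is not None:
                    return result
        return None

    print(plan({"raw_file"}, {"figure"}))   # -> ['ingest', 'calibrate', 'plot']
    ```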

  18. Ground Operations Aerospace Language (GOAL) textbook

    NASA Technical Reports Server (NTRS)

    Dickison, L. R.

    1973-01-01

    The textbook provides a semantical explanation accompanying a complete set of GOAL syntax diagrams, system concepts, language component interaction, and general language concepts necessary for efficient language implementation/execution.

  19. Exceptional Lexical Skills but Executive Language Deficits in School Starters and Young Adults with Turner's Syndrome: Implications for X Chromosome Effects on Brain Function

    ERIC Educational Resources Information Center

    Temple, Christine M.; Shephard, Elizabeth E.

    2012-01-01

    TS school starters had enhanced receptive and expressive language on standardised assessment (CELF-P) and enhanced rhyme judgements, spoonerisms, and lexical decision, indicating enhanced phonological skills and word representations. There was a marginal but consistent advantage across lexico-semantic tasks. On executive tasks, speeded naming of…

  20. ERP evidence suggests executive dysfunction in ecstasy polydrug users.

    PubMed

    Roberts, C A; Fairclough, S H; Fisk, J E; Tames, F; Montgomery, C

    2013-08-01

    Deficits in executive functions such as access to semantic/long-term memory have been shown in ecstasy users in previous research. Equally, there have been many reports of equivocal findings in this area. The current study sought to further investigate behavioural and electro-physiological measures of this executive function in ecstasy users. Twenty ecstasy-polydrug users, 20 non-ecstasy-polydrug users and 20 drug-naïve controls were recruited. Participants completed background questionnaires about their drug use, sleep quality, fluid intelligence and mood state. Each individual also completed a semantic retrieval task whilst 64-channel Electroencephalography (EEG) measures were recorded. Analysis of Variance (ANOVA) revealed no between-group differences in behavioural performance on the task. Mixed ANOVA on event-related potential (ERP) components P2, N2 and P3 revealed significant between-group differences in the N2 component. Subsequent exploratory univariate ANOVAs on the N2 component revealed marginally significant between-group differences, generally showing greater negativity at occipito-parietal electrodes in ecstasy users compared to drug-naïve controls. Despite the absence of behavioural differences, differences in N2 magnitude are evidence of abnormal executive functioning in ecstasy-polydrug users.

  1. EIIS: An Educational Information Intelligent Search Engine Supported by Semantic Services

    ERIC Educational Resources Information Center

    Huang, Chang-Qin; Duan, Ru-Lin; Tang, Yong; Zhu, Zhi-Ting; Yan, Yong-Jian; Guo, Yu-Qing

    2011-01-01

    The semantic web brings a new opportunity for efficient information organization and search. To meet the special requirements of the educational field, this paper proposes an intelligent search engine enabled by educational semantic support service, where three kinds of searches are integrated into Educational Information Intelligent Search (EIIS)…

  2. A Fault Oblivious Extreme-Scale Execution Environment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McKie, Jim

    The FOX project, funded under the ASCR X-stack I program, developed systems software and runtime libraries for a new approach to the data and work distribution for massively parallel, fault oblivious application execution. Our work was motivated by the premise that exascale computing systems will provide a thousand-fold increase in parallelism and a proportional increase in failure rate relative to today’s machines. To deliver the capability of exascale hardware, the systems software must provide the infrastructure to support existing applications while simultaneously enabling efficient execution of new programming models that naturally express dynamic, adaptive, irregular computation; coupled simulations; and massive data analysis in a highly unreliable hardware environment with billions of threads of execution. Our OS research has prototyped new methods to provide efficient resource sharing, synchronization, and protection in a many-core compute node. We have experimented with alternative task/dataflow programming models and shown scalability in some cases to hundreds of thousands of cores. Much of our software is in active development through open source projects. Concepts from FOX are being pursued in next generation exascale operating systems. Our OS work focused on adaptive, application tailored OS services optimized for multi- to many-core processors. We developed a new operating system NIX that supports role-based allocation of cores to processes which was released to open source. We contributed to the IBM FusedOS project, which promoted the concept of latency-optimized and throughput-optimized cores. We built a task queue library based on distributed, fault tolerant key-value store and identified scaling issues. A second fault tolerant task parallel library was developed, based on the Linda tuple space model, that used low level interconnect primitives for optimized communication. We designed fault tolerance mechanisms for task parallel computations employing work stealing for load balancing that scaled to the largest existing supercomputers. Finally, we implemented the Elastic Building Blocks runtime, a library to manage object-oriented distributed software components. To support the research, we won two INCITE awards for time on Intrepid (BG/P) and Mira (BG/Q). Much of our work has had impact in the OS and runtime community through the ASCR Exascale OS/R workshop and report, leading to the research agenda of the Exascale OS/R program. Our project was, however, also affected by attrition of multiple PIs. While the PIs continued to participate and offer guidance as time permitted, losing these key individuals was unfortunate both for the project and for the DOE HPC community.
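
    To illustrate the Linda tuple-space programming model mentioned above, here is a toy Python sketch with out/rd/in operations and wildcard matching. It is only an illustration of the model: the FOX library implemented it over low-level interconnect primitives with fault tolerance, which this sketch does not attempt.

    ```python
    # Toy Linda-style tuple space: out() deposits tuples, rd()/in_() match a
    # template where None is a wildcard. Illustration of the model only.
    class TupleSpace:
        def __init__(self):
            self.tuples = []

        def out(self, tup):
            self.tuples.append(tup)

        def _find(self, template):
            for tup in self.tuples:
                if len(tup) == len(template) and all(
                        t is None or t == v for t, v in zip(template, tup)):
                    return tup
            return None

        def rd(self, template):              # read without removing
            return self._find(template)

        def in_(self, template):             # read and remove ("in" is a keyword)
            tup = self._find(template)
            if tup is not None:
                self.tuples.remove(tup)
            return tup

    ts = TupleSpace()
    ts.out(("task", 1, "simulate"))
    ts.out(("task", 2, "analyze"))
    print(ts.in_(("task", None, None)))      # -> ('task', 1, 'simulate')
    ```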

  3. Functional MRI evidence for the decline of word retrieval and generation during normal aging.

    PubMed

    Baciu, M; Boudiaf, N; Cousin, E; Perrone-Bertolotti, M; Pichat, C; Fournet, N; Chainay, H; Lamalle, L; Krainik, A

    2016-02-01

    This fMRI study aimed to explore the effect of normal aging on word retrieval and generation. The question addressed is whether the decline in lexical production is determined by a direct mechanism affecting language operations or is instead induced indirectly by a decline of executive functions. Indeed, the main hypothesis was that normal aging does not induce loss of lexical knowledge, but there is only a general slowdown in retrieval mechanisms involved in lexical processing, due to a possible decline of the executive functions. We used three tasks (verbal fluency, object naming, and semantic categorization). Two groups of participants were tested (Young, Y and Aged, A), without cognitive and psychiatric impairment and showing similar levels of vocabulary. Neuropsychological testing revealed that older participants had lower executive function scores, longer processing speeds, and tended to have lower verbal fluency scores. Additionally, older participants showed higher scores for verbal automatisms and overlearned information. In terms of behavioral data, older participants performed as accurately as younger adults, but they were significantly slower for semantic categorization and were less fluent on the verbal fluency task. Functional MRI analyses suggested that older adults did not simply activate fewer brain regions involved in word production, but they actually showed an atypical pattern of activation. Significant correlations between the BOLD (Blood Oxygen Level Dependent) signal of aging-related (A > Y) regions and cognitive scores suggested that this atypical pattern of activation may reveal several compensatory mechanisms: (a) to overcome the slowdown in retrieval due to the decline of executive functions and processing speed, and (b) to inhibit automatic verbal processes. The BOLD signal measured in some other aging-dependent regions did not correlate with the behavioral and neuropsychological scores, and the overactivation of these uncorrelated regions would simply reveal dedifferentiation that occurs with aging. Altogether, our results suggest that normal aging is associated with more difficult access to lexico-semantic operations and representations, caused by a slowdown in executive functions, without any conceptual loss.

  4. The processing of actions and action-words in amyotrophic lateral sclerosis patients.

    PubMed

    Papeo, Liuba; Cecchetto, Cinzia; Mazzon, Giulia; Granello, Giulia; Cattaruzza, Tatiana; Verriello, Lorenzo; Eleopra, Roberto; Rumiati, Raffaella I

    2015-03-01

    Amyotrophic lateral sclerosis (ALS) is a neurodegenerative disease with primary consequences for motor function and concomitant cognitive changes, most frequently in the domain of executive functions. Moreover, poorer performance with action-verbs versus object-nouns has been reported in ALS patients, raising the hypothesis that the motor dysfunction deteriorates the semantic representation of actions. Using action-verbs and manipulable-object nouns sharing a semantic relationship with the same motor representations, the verb-noun difference was assessed in a group of 21 ALS patients with severely impaired motor behavior, and compared with a normal sample's performance. The ALS group performed better on nouns than on verbs, both in production (action and object naming) and comprehension (word-picture matching). This observation implies that the interpretation of the verb-noun difference in ALS cannot be accounted for by the relatedness of verbs to motor representations, but has to consider the role of other semantic and/or morpho-phonological dimensions that distinctively define the two grammatical classes. Moreover, this difference in the ALS group was not greater than the noun-verb difference in the normal sample. The mental representation of actions also involves an executive-control component to organize, in logical/temporal order, the individual motor events (or sub-goals) that form a purposeful action. We assessed this ability with action sequencing tasks, requiring participants to re-construct a purposeful action from the scrambled presentation of its constitutive motor events, shown in the form of photographs or short sentences. In those tasks, the ALS group's performance was significantly poorer than the controls'. Thus, the executive dysfunction manifested in the sequencing deficit - but not the selective verb deficit - appears as a consistent feature of the cognitive profile associated with ALS. We suggest that ALS can offer a valuable model to study the relationship between (frontal) motor centers and the executive-control machinery housed in the frontal brain, and the implications of executive dysfunctions in tasks such as action processing. Copyright © 2014 Elsevier Ltd. All rights reserved.

  5. Capturing multidimensionality in stroke aphasia: mapping principal behavioural components to neural structures

    PubMed Central

    Butler, Rebecca A.

    2014-01-01

    Stroke aphasia is a multidimensional disorder in which patient profiles reflect variation along multiple behavioural continua. We present a novel approach to separating the principal aspects of chronic aphasic performance and isolating their neural bases. Principal components analysis was used to extract core factors underlying performance of 31 participants with chronic stroke aphasia on a large, detailed battery of behavioural assessments. The rotated principal components analysis revealed three key factors, which we labelled as phonology, semantic and executive/cognition, on the basis of the common elements in the tests that loaded most strongly on each component. The phonology factor explained the most variance, followed by the semantic factor and then the executive-cognition factor. The use of principal components analysis rendered participants’ scores on these three factors orthogonal and therefore ideal for use as simultaneous continuous predictors in a voxel-based correlational methodology analysis of high resolution structural scans. Phonological processing ability was uniquely related to left posterior perisylvian regions including Heschl’s gyrus, posterior middle and superior temporal gyri and superior temporal sulcus, as well as the white matter underlying the posterior superior temporal gyrus. The semantic factor was uniquely related to left anterior middle temporal gyrus and the underlying temporal stem. The executive-cognition factor was not correlated selectively with the structural integrity of any particular region, as might be expected in light of the widely-distributed and multi-functional nature of the regions that support executive functions. The identified phonological and semantic areas align well with those highlighted by other methodologies such as functional neuroimaging and neurostimulation. The use of principal components analysis allowed us to characterize the neural bases of participants’ behavioural performance more robustly and selectively than the use of raw assessment scores or diagnostic classifications because principal components analysis extracts statistically unique, orthogonal behavioural components of interest. As such, in addition to improving our understanding of lesion–symptom mapping in stroke aphasia, the same approach could be used to clarify brain–behaviour relationships in other neurological disorders. PMID:25348632
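
    For readers unfamiliar with the behavioural step described above, the following Python sketch shows the core idea of extracting a small number of orthogonal components from a participants-by-assessments score matrix. The data are synthetic placeholders; the study additionally applied a rotation step (not reproduced here) and used the component scores as predictors in a voxel-based correlational analysis.

    ```python
    # Sketch of the core behavioural step: extract orthogonal components from a
    # participants-by-assessments score matrix. Data are synthetic; the study
    # also rotated the solution and fed the scores into a lesion analysis.
    import numpy as np
    from sklearn.decomposition import PCA

    rng = np.random.default_rng(0)
    scores = rng.normal(size=(31, 20))        # 31 participants x 20 assessments

    pca = PCA(n_components=3)
    component_scores = pca.fit_transform(scores)   # orthogonal factor scores
    print(pca.explained_variance_ratio_)           # variance explained per factor
    print(component_scores.shape)                  # (31, 3)
    ```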

  6. Putting time into proof outlines

    NASA Technical Reports Server (NTRS)

    Schneider, Fred B.; Bloom, Bard; Marzullo, Keith

    1993-01-01

    A logic for reasoning about timing properties of concurrent programs is presented. The logic is based on Hoare-style proof outlines and can handle maximal parallelism as well as certain resource-constrained execution environments. The correctness proof for a mutual exclusion protocol that uses execution timings in a subtle way illustrates the logic in action. A soundness proof using structural operational semantics is outlined in the appendix.

  7. Designing Collaborative E-Learning Environments Based upon Semantic Wiki: From Design Models to Application Scenarios

    ERIC Educational Resources Information Center

    Li, Yanyan; Dong, Mingkai; Huang, Ronghuai

    2011-01-01

    The knowledge society requires life-long learning and flexible learning environment that enables fast, just-in-time and relevant learning, aiding the development of communities of knowledge, linking learners and practitioners with experts. Based upon semantic wiki, a combination of wiki and Semantic Web technology, this paper designs and develops…

  8. Retrieval and Monitoring Processes during Visual Working Memory: An ERP Study of the Benefit of Visual Semantics

    PubMed Central

    Orme, Elizabeth; Brown, Louise A.; Riby, Leigh M.

    2017-01-01

    In this study, we examined electrophysiological indices of episodic remembering whilst participants recalled novel shapes, with and without semantic content, within a visual working memory paradigm. The components of interest were the parietal episodic (PE; 400–800 ms) and late posterior negativity (LPN; 500–900 ms), as these have previously been identified as reliable markers of recollection and post-retrieval monitoring, respectively. Fifteen young adults completed a visual matrix patterns task, assessing memory for low and high semantic visual representations. Matrices with either low semantic or high semantic content (containing familiar visual forms) were briefly presented to participants for study (1500 ms), followed by a retention interval (6000 ms) and finally a same/different recognition phase. The event-related potentials of interest were tracked from the onset of the recognition test stimuli. Analyses revealed equivalent amplitude for the earlier PE effect for the processing of both low and high semantic stimulus types. However, the LPN was more negative-going for the processing of the low semantic stimuli. These data are discussed in terms of relatively ‘pure’ and complete retrieval of high semantic items, where support can readily be recruited from semantic memory. However, for the low semantic items additional executive resources, as indexed by the LPN, are recruited when memory monitoring and uncertainty exist in order to recall previously studied items more effectively. PMID:28725203

  9. Retrieval and Monitoring Processes during Visual Working Memory: An ERP Study of the Benefit of Visual Semantics.

    PubMed

    Orme, Elizabeth; Brown, Louise A; Riby, Leigh M

    2017-01-01

    In this study, we examined electrophysiological indices of episodic remembering whilst participants recalled novel shapes, with and without semantic content, within a visual working memory paradigm. The components of interest were the parietal episodic (PE; 400-800 ms) and late posterior negativity (LPN; 500-900 ms), as these have previously been identified as reliable markers of recollection and post-retrieval monitoring, respectively. Fifteen young adults completed a visual matrix patterns task, assessing memory for low and high semantic visual representations. Matrices with either low semantic or high semantic content (containing familiar visual forms) were briefly presented to participants for study (1500 ms), followed by a retention interval (6000 ms) and finally a same/different recognition phase. The event-related potentials of interest were tracked from the onset of the recognition test stimuli. Analyses revealed equivalent amplitude for the earlier PE effect for the processing of both low and high semantic stimulus types. However, the LPN was more negative-going for the processing of the low semantic stimuli. These data are discussed in terms of relatively 'pure' and complete retrieval of high semantic items, where support can readily be recruited from semantic memory. However, for the low semantic items additional executive resources, as indexed by the LPN, are recruited when memory monitoring and uncertainty exist in order to recall previously studied items more effectively.

  10. Does cognitive performance map to categorical diagnoses of schizophrenia, schizoaffective disorder and bipolar disorder? A discriminant functions analysis.

    PubMed

    Van Rheenen, Tamsyn E; Bryce, Shayden; Tan, Eric J; Neill, Erica; Gurvich, Caroline; Louise, Stephanie; Rossell, Susan L

    2016-03-01

    Despite known overlaps in the pattern of cognitive impairments in individuals with bipolar disorder (BD), schizophrenia (SZ) and schizoaffective disorder (SZA), few studies have examined the extent to which cognitive performance validates traditional diagnostic boundaries in these groups. Individuals with SZ (n=49), schizoaffective disorder (n=33) and BD (n=35) completed a battery of cognitive tests measuring the domains of processing speed, immediate memory, semantic memory, learning, working memory, executive function and sustained attention. A discriminant functions analysis revealed a significant function comprising semantic memory, immediate memory and processing speed that maximally separated patients with SZ from those with BD. Initial classification scores on the basis of this function showed modest diagnostic accuracy, owing in part to the misclassification of SZA patients as having SZ. When SZA patients were removed from the model, a second cross-validated classifier yielded slightly improved diagnostic accuracy and a single-function solution, on which semantic memory loaded most heavily. A cluster of non-executive cognitive processes appears to have some validity in mapping onto traditional nosological boundaries. However, since semantic memory performance was the primary driver of the discrimination between BD and SZ, it is possible that performance differences between the disorders in this cognitive domain in particular index separate underlying aetiologies. Copyright © 2015 Elsevier B.V. All rights reserved.
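
    The following Python sketch illustrates the kind of cross-validated discriminant functions analysis described above, on placeholder data. Group sizes and cognitive-domain scores are synthetic and are not the study's data.

    ```python
    # Sketch of a cross-validated discriminant analysis on cognitive domain
    # scores; data are synthetic placeholders, not the study's dataset.
    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(1)
    X = rng.normal(size=(84, 7))                   # 84 patients x 7 cognitive domains
    y = np.array(["SZ"] * 49 + ["BD"] * 35)        # diagnostic labels (SZA omitted here)

    lda = LinearDiscriminantAnalysis()
    accuracy = cross_val_score(lda, X, y, cv=5).mean()
    print(f"cross-validated classification accuracy: {accuracy:.2f}")
    ```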

  11. MENTOR: an enabler for interoperable intelligent systems

    NASA Astrophysics Data System (ADS)

    Sarraipa, João; Jardim-Goncalves, Ricardo; Steiger-Garcao, Adolfo

    2010-07-01

    A community whose knowledge organisation is based on ontologies can increase the computational intelligence of its information systems. However, due to the worldwide diversity of communities, a large number of knowledge representation elements that are not semantically coincident have appeared to represent the same segment of reality, which has become a barrier to business communication. Even when a domain community uses the same kind of technologies, such as ontologies, in its information systems, this does not resolve its semantic differences. In order to solve this interoperability problem, a solution is to use a reference ontology as an intermediary in the communications between the community enterprises and the outside, while allowing the enterprises to keep their own ontology and semantics unchanged internally. This work proposes MENTOR, a methodology to support the development of a common reference ontology for a group of organisations sharing the same business domain. This methodology is based on the mediator ontology (MO) concept, which supports the semantic transformations between each enterprise's ontology and the reference ontology. The MO enables each organisation to keep its own terminology, glossary and ontological structures, while providing seamless communication and interaction with the others.
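
    As a toy illustration of the mediator idea (not the MENTOR methodology itself), the Python sketch below translates between two enterprises' local terms through a shared reference term, so each side keeps its own vocabulary. All terms are invented.

    ```python
    # Toy "mediator ontology": each enterprise keeps its own terminology, and
    # the mediator translates through a shared reference term. Terms invented.
    REFERENCE_MAP = {
        # enterprise -> {local term: reference term}
        "EnterpriseA": {"ScrewM4": "ref:FasteningElement_M4"},
        "EnterpriseB": {"ParafusoM4": "ref:FasteningElement_M4"},
    }

    def translate(term, source, target):
        ref = REFERENCE_MAP[source].get(term)
        if ref is None:
            return None
        inverse = {v: k for k, v in REFERENCE_MAP[target].items()}
        return inverse.get(ref)

    print(translate("ScrewM4", "EnterpriseA", "EnterpriseB"))  # -> 'ParafusoM4'
    ```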

  12. Climate Model Diagnostic Analyzer

    NASA Technical Reports Server (NTRS)

    Lee, Seungwon; Pan, Lei; Zhai, Chengxing; Tang, Benyang; Kubar, Terry; Zhang, Zia; Wang, Wei

    2015-01-01

    The comprehensive and innovative evaluation of climate models with newly available global observations is critically needed for the improvement of climate model current-state representation and future-state predictability. A climate model diagnostic evaluation process requires physics-based multi-variable analyses that typically involve large-volume and heterogeneous datasets, making them both computation- and data-intensive. Given the exploratory nature of climate data analyses and the explosive growth of datasets and service tools, scientists struggle to keep track of their datasets, tools, and execution/study history, let alone share them with others. In response, we have developed a cloud-enabled, provenance-supported, web-service system called Climate Model Diagnostic Analyzer (CMDA). CMDA enables physics-based, multivariable model performance evaluation and diagnosis through the comprehensive and synergistic use of multiple observational data, reanalysis data, and model outputs. At the same time, CMDA provides a crowd-sourcing space where scientists can organize their work efficiently and share their work with others. CMDA builds on current state-of-the-art software packages for web services, provenance, and semantic search.

  13. Integrating clinical research with the Healthcare Enterprise: from the RE-USE project to the EHR4CR platform.

    PubMed

    El Fadly, AbdenNaji; Rance, Bastien; Lucas, Noël; Mead, Charles; Chatellier, Gilles; Lastic, Pierre-Yves; Jaulent, Marie-Christine; Daniel, Christel

    2011-12-01

    There are different approaches for repurposing clinical data collected in the Electronic Healthcare Record (EHR) for use in clinical research. Semantic integration of "siloed" applications across domain boundaries is the raison d'être of the standards-based profiles developed by the Integrating the Healthcare Enterprise (IHE) initiative - an initiative by healthcare professionals and industry promoting the coordinated use of established standards such as DICOM and HL7 to address specific clinical needs in support of optimal patient care. In particular, the combination of two IHE profiles - the integration profile "Retrieve Form for Data Capture" (RFD) and the IHE content profile "Clinical Research Document" (CRD) - offers a straightforward approach to repurposing EHR data by enabling the pre-population of the case report forms (eCRF) used for clinical research data capture by Clinical Data Management Systems (CDMS) with previously collected EHR data. The objective was to implement an alternative solution to the RFD-CRD integration profile, centered on two approaches: (i) use of the EHR as the single-source data-entry and persistence point in order to ensure that all the clinical data for a given patient could be found in a single source irrespective of the data collection context, i.e. patient care or clinical research; and (ii) maximization of the automatic pre-population process through the use of semantic interoperability services that identify duplicate or semantically-equivalent eCRF/EHR data elements as they were collected in the EHR context. The RE-USE architecture and associated profiles are focused on defining a set of scalable, standards-based, IHE-compliant profiles that can enable single-source data collection/entry and cross-system data reuse through semantic integration. Specifically, data reuse is realized through the semantic mapping of data collection fields in electronic Case Report Forms (eCRFs) to data elements previously defined as part of patient care-centric templates in the EHR context. The approach was evaluated in the context of a multi-center clinical trial conducted in a large, multi-disciplinary hospital with an installed EHR. Data elements of seven eCRFs used in a multi-center clinical trial were mapped to data elements of patient care-centric templates in use in the EHR at the George Pompidou hospital. 13.4% of the data elements of the eCRFs were found to be represented in EHR templates and were therefore candidates for pre-population. During the execution phase of the clinical study, the semantic mapping architecture enabled data persisted in the EHR context as part of clinical care to be used to pre-populate eCRFs without secondary data entry. To ensure that the pre-populated data is viable for use in the clinical research context, all pre-populated eCRF data needs to be first approved by a trial investigator prior to being persisted in a research data store within a CDMS. Single-source data entry in the clinical care context for use in the clinical research context - a process enabled through the use of the EHR as the single point of data entry - can, if demonstrated to be a viable strategy, not only significantly reduce data collection effort while increasing data collection accuracy by eliminating transcription or double-entry errors between the two contexts, but also ensure that all the clinical data for a given patient, irrespective of the data collection context, are available in the EHR for decision support and treatment planning.
The RE-USE approach used mapping algorithms to identify semantic coherence between clinical care and clinical research data elements and pre-populate eCRFs. The RE-USE project utilized SNOMED International v.3.5 as its "pivot reference terminology" to support EHR-to-eCRF mapping, a decision that likely enhanced the "recall" of the mapping algorithms. The RE-USE results demonstrate the difficult challenges involved in semantic integration between the clinical care and clinical research contexts. Copyright © 2011 Elsevier Inc. All rights reserved.
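
    Purely as an illustration of the pre-population step described above (not the RE-USE implementation), the Python sketch below fills eCRF fields whose data elements have been mapped to EHR template elements, here via shared terminology codes, and leaves unmapped fields for manual entry. Codes and field names are invented, and in RE-USE every pre-populated value still requires investigator approval.

    ```python
    # Toy eCRF pre-population: fields mapped to EHR elements (here via shared
    # terminology codes) are filled from the EHR; the rest stay empty for
    # manual entry. Codes and names are illustrative only.
    EHR_RECORD = {"SCT:271649006": 142, "SCT:386725007": 37.2}   # code -> value

    ECRF_FIELDS = {
        "systolic_bp": "SCT:271649006",      # mapped to an EHR element
        "body_temp": "SCT:386725007",        # mapped to an EHR element
        "smoking_history": None,             # no mapping -> manual entry
    }

    def prepopulate(ehr, fields):
        return {name: (ehr.get(code) if code else None)
                for name, code in fields.items()}

    print(prepopulate(EHR_RECORD, ECRF_FIELDS))
    ```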

  14. The Relationship Between Educational Years and Phonemic Verbal Fluency (PVF) and Semantic Verbal Fluency (SVF) Tasks in Spanish Patients Diagnosed With Schizophrenia, Bipolar Disorder, and Psychotic Bipolar Disorder

    PubMed Central

    García-Laredo, Eduardo; Maestú, Fernando; Castellanos, Miguel Ángel; Molina, Juan D.; Peréz-Moreno, Elisa

    2015-01-01

    Semantic and verbal fluency tasks are widely used as a measure of frontal capacities. It has been well described in the literature that patients affected by schizophrenic and bipolar disorders perform worse on these tasks. Some authors have also noted the importance of educational years. Our objective is to analyze whether the effect of cognitive malfunction caused by a pathology is superior to the expected effect of years of education in phonemic verbal fluency (PVF) and semantic verbal fluency (SVF) task execution. A total of 62 individuals took part in this study, out of which 23 were patients with schizophrenic paranoid disorder, 11 suffered from bipolar disorder with psychotic symptomatology, 13 suffered from bipolar disorder without psychotic symptomatology, and 15 participants were nonpathological individuals. All participants were evaluated with the PVF and SVF tests (animals and tools). The performance/execution results were analyzed with a mixed-model ANCOVA, with educational years as a covariable. The effect of education appears to be stronger for PVF FAS tests than for SVF tests. With PVF FAS tasks, the expected effect of pathology disappears when the covariable EDUCATION is introduced. With SVF tasks, the effect continues to be significant, even though the EDUCATION covariable dims this effect. These results suggest that SVF tests (animals category) are better evaluation tools, as they are less dependent on the patients’ education than PVF FAS tests. PMID:26426640

  15. The Relationship Between Educational Years and Phonemic Verbal Fluency (PVF) and Semantic Verbal Fluency (SVF) Tasks in Spanish Patients Diagnosed With Schizophrenia, Bipolar Disorder, and Psychotic Bipolar Disorder.

    PubMed

    García-Laredo, Eduardo; Maestú, Fernando; Castellanos, Miguel Ángel; Molina, Juan D; Peréz-Moreno, Elisa

    2015-09-01

    Semantic and verbal fluency tasks are widely used as a measure of frontal capacities. It has been well described in the literature that patients affected by schizophrenic and bipolar disorders perform worse on these tasks. Some authors have also noted the importance of educational years. Our objective is to analyze whether the effect of cognitive malfunction caused by a pathology is superior to the expected effect of years of education in phonemic verbal fluency (PVF) and semantic verbal fluency (SVF) task execution. A total of 62 individuals took part in this study, out of which 23 were patients with schizophrenic paranoid disorder, 11 suffered from bipolar disorder with psychotic symptomatology, 13 suffered from bipolar disorder without psychotic symptomatology, and 15 participants were nonpathological individuals. All participants were evaluated with the PVF and SVF tests (animals and tools). The performance/execution results were analyzed with a mixed-model ANCOVA, with educational years as a covariable. The effect of education appears to be stronger for PVF FAS tests than for SVF tests. With PVF FAS tasks, the expected effect of pathology disappears when the covariable EDUCATION is introduced. With SVF tasks, the effect continues to be significant, even though the EDUCATION covariable dims this effect. These results suggest that SVF tests (animals category) are better evaluation tools, as they are less dependent on the patients' education than PVF FAS tests.

  16. Parametric effects of syntactic-semantic conflict in Broca's area during sentence processing.

    PubMed

    Thothathiri, Malathi; Kim, Albert; Trueswell, John C; Thompson-Schill, Sharon L

    2012-03-01

    The hypothesized role of Broca's area in sentence processing ranges from domain-general executive function to computation specific to certain syntactic structures. We examined this issue by manipulating syntactic structure and conflict between syntactic and semantic cues in a sentence processing task. Functional neuroimaging revealed that activation within several Broca's area regions of interest reflected the parametric variation in syntactic-semantic conflict. These results suggest that Broca's area supports sentence processing by mediating between multiple incompatible constraints on sentence interpretation, consistent with this area's well-known role in conflict resolution in other linguistic and non-linguistic tasks. Copyright © 2011 Elsevier Inc. All rights reserved.

  17. A logical model of cooperating rule-based systems

    NASA Technical Reports Server (NTRS)

    Bailin, Sidney C.; Moore, John M.; Hilberg, Robert H.; Murphy, Elizabeth D.; Bahder, Shari A.

    1989-01-01

    A model is developed to assist in the planning, specification, development, and verification of space information systems involving distributed rule-based systems. The model is based on an analysis of possible uses of rule-based systems in control centers. This analysis is summarized as a data-flow model for a hypothetical intelligent control center. From this data-flow model, the logical model of cooperating rule-based systems is extracted. This model consists of four layers of increasing capability: (1) communicating agents, (2) belief-sharing knowledge sources, (3) goal-sharing interest areas, and (4) task-sharing job roles.

  18. Differentiation of perceptual and semantic subsequent memory effects using an orthographic paradigm.

    PubMed

    Kuo, Michael C C; Liu, Karen P Y; Ting, Kin Hung; Chan, Chetwyn C H

    2012-11-27

    This study aimed to differentiate perceptual and semantic encoding processes using subsequent memory effects (SMEs) elicited by the recognition of orthographs of single Chinese characters. Participants studied a series of Chinese characters perceptually (by inspecting orthographic components) or semantically (by determining the object making sounds), and then made studied or unstudied judgments during the recognition phase. Recognition performance, in terms of d-prime, was higher in the semantic condition than in the perceptual condition, although the difference was not significant. Differences in SMEs between the perceptual and semantic conditions at the P550 and late positive component latencies (700-1000 ms) were not significant in the frontal area. An additional analysis identified a larger SME in the semantic condition during 600-1000 ms in the frontal pole regions. These results indicate that coordination and incorporation of orthographic information into mental representation is essential to both task conditions. The differentiation was also revealed in earlier SMEs (perceptual > semantic) at the N3 latency (240-360 ms), which is a novel finding. The left-distributed N3 was interpreted as reflecting more efficient processing of meaning for semantically learned characters. Frontal pole SMEs indicated strategic processing by executive functions, which would further enhance memory. Copyright © 2012 Elsevier B.V. All rights reserved.

  19. Linked Registries: Connecting Rare Diseases Patient Registries through a Semantic Web Layer

    PubMed Central

    González-Castro, Lorena; Carta, Claudio; van der Horst, Eelke; Lopes, Pedro; Kaliyaperumal, Rajaram; Thompson, Mark; Thompson, Rachel; Queralt-Rosinach, Núria; Lopez, Estrella; Wood, Libby; Robertson, Agata; Lamanna, Claudia; Gilling, Mette; Orth, Michael; Merino-Martinez, Roxana; Taruscio, Domenica; Lochmüller, Hanns

    2017-01-01

    Patient registries are an essential tool to increase current knowledge regarding rare diseases. Understanding these data is a vital step to improve patient treatments and to create the most adequate tools for personalized medicine. However, the growing number of disease-specific patient registries also brings new technical challenges. Usually, these systems are developed as closed data silos, with independent formats and models, lacking comprehensive mechanisms to enable data sharing. To tackle these challenges, we developed a Semantic Web-based solution that allows connecting distributed and heterogeneous registries, enabling the federation of knowledge between multiple independent environments. This semantic layer creates a holistic view over a set of anonymised registries, supporting semantic data representation, integrated access, and querying. The implemented system gave us the opportunity to answer challenging questions across disparate rare disease patient registries. Interconnecting these registries with Semantic Web technologies allows our solution to query single or multiple instances according to our needs. The outcome is a unique semantic layer, connecting miscellaneous registries and delivering a lightweight holistic perspective over the wealth of knowledge stemming from linked rare disease patient registries. PMID:29214177

  20. Linked Registries: Connecting Rare Diseases Patient Registries through a Semantic Web Layer.

    PubMed

    Sernadela, Pedro; González-Castro, Lorena; Carta, Claudio; van der Horst, Eelke; Lopes, Pedro; Kaliyaperumal, Rajaram; Thompson, Mark; Thompson, Rachel; Queralt-Rosinach, Núria; Lopez, Estrella; Wood, Libby; Robertson, Agata; Lamanna, Claudia; Gilling, Mette; Orth, Michael; Merino-Martinez, Roxana; Posada, Manuel; Taruscio, Domenica; Lochmüller, Hanns; Robinson, Peter; Roos, Marco; Oliveira, José Luís

    2017-01-01

    Patient registries are an essential tool to increase current knowledge regarding rare diseases. Understanding these data is a vital step to improve patient treatments and to create the most adequate tools for personalized medicine. However, the growing number of disease-specific patient registries also brings new technical challenges. Usually, these systems are developed as closed data silos, with independent formats and models, lacking comprehensive mechanisms to enable data sharing. To tackle these challenges, we developed a Semantic Web-based solution that allows connecting distributed and heterogeneous registries, enabling the federation of knowledge between multiple independent environments. This semantic layer creates a holistic view over a set of anonymised registries, supporting semantic data representation, integrated access, and querying. The implemented system gave us the opportunity to answer challenging questions across disparate rare disease patient registries. Interconnecting these registries with Semantic Web technologies allows our solution to query single or multiple instances according to our needs. The outcome is a unique semantic layer, connecting miscellaneous registries and delivering a lightweight holistic perspective over the wealth of knowledge stemming from linked rare disease patient registries.
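
    The sketch below illustrates, in Python, how a query might span two registry endpoints using SPARQL 1.1 federation (the SERVICE keyword). The endpoint URLs and vocabulary are placeholders, the SPARQLWrapper package is assumed, and the primary endpoint must support federated queries; this is not the project's actual query layer.

    ```python
    # Illustrative federated query across two placeholder registry endpoints
    # using SPARQL 1.1 SERVICE; requires SPARQLWrapper and an endpoint that
    # supports federation. URLs and vocabulary are placeholders.
    from SPARQLWrapper import SPARQLWrapper, JSON

    endpoint = SPARQLWrapper("https://registry-a.example.org/sparql")
    endpoint.setQuery("""
        PREFIX ex: <http://example.org/registry#>
        SELECT ?patient ?phenotype WHERE {
            ?patient a ex:Patient ; ex:hasPhenotype ?phenotype .
            SERVICE <https://registry-b.example.org/sparql> {
                ?patient ex:enrolledIn ?trial .
            }
        } LIMIT 10
    """)
    endpoint.setReturnFormat(JSON)
    print(endpoint.query().convert())
    ```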

  1. Semantic and self-referential processing of positive and negative trait adjectives in older adults

    PubMed Central

    Glisky, Elizabeth L.; Marquine, Maria J.

    2008-01-01

    The beneficial effects of self-referential processing on memory have been demonstrated in numerous experiments with younger adults but have rarely been studied in older individuals. In the present study we tested young people, younger-older adults, and older-older adults in a self-reference paradigm, and compared self-referential processing to general semantic processing. Findings indicated that older adults over the age of 75 and those with below average episodic memory function showed a decreased benefit from both semantic and self-referential processing relative to a structural baseline condition. However, these effects appeared to be confined to the shared semantic processes for the two conditions, leaving the added advantage for self-referential processing unaffected. These results suggest that reference to the self engages qualitatively different processes compared to general semantic processing. These processes seem relatively impervious to age and to declining memory and executive function, suggesting that they might provide a particularly useful way for older adults to improve their memories. PMID:18608973

  2. A Framework for Modeling Workflow Execution by an Interdisciplinary Healthcare Team.

    PubMed

    Kezadri-Hamiaz, Mounira; Rosu, Daniela; Wilk, Szymon; Kuziemsky, Craig; Michalowski, Wojtek; Carrier, Marc

    2015-01-01

    The use of business workflow models in healthcare is limited because such models insufficiently capture the complexities associated with the behavior of the interdisciplinary healthcare teams that execute healthcare workflows. In this paper we present a novel framework that builds on the well-founded business workflow model formalism and related infrastructures and introduces a formal semantic layer that describes selected aspects of team dynamics and supports their real-time operationalization.

  3. Developing a Domain Ontology: the Case of Water Cycle and Hydrology

    NASA Astrophysics Data System (ADS)

    Gupta, H.; Pozzi, W.; Piasecki, M.; Imam, B.; Houser, P.; Raskin, R.; Ramachandran, R.; Martinez Baquero, G.

    2008-12-01

    A semantic web ontology enables semantic data integration and semantic smart searching. Several organizations have attempted to implement smart registration and integration or searching using ontologies. These include the NOESIS (NSF project: LEAD) and HydroSeek (NSF project: CUAHSI HIS) data discovery engines and the NSF project GEON. All three applications use ontologies to discover data from multiple sources and projects. The NASA WaterNet project was established to identify creative, innovative ways to bridge NASA research results to real-world applications, linking decision support needs to available data, observations, and modeling capability. WaterNet utilized the smart query tool NOESIS as a testbed to test whether different ontologies (and different catalog searches) could be combined to match resources with user needs. NOESIS contains the upper-level SWEET ontology, which accepts plug-in domain ontologies to refine user search queries, reducing the burden of multiple keyword searches. Another smart search interface, HydroSeek, was developed for CUAHSI; it uses a multi-layered concept search ontology, tagging variable names from any number of data sources to specific leaf and higher-level concepts on which the search is executed. This approach has proven to be quite successful in mitigating semantic heterogeneity, as the user does not need to know the semantic specifics of each data source system but just uses a set of common keywords to discover the data for a specific temporal and geospatial domain. This presentation will show that tests with NOESIS and HydroSeek led to the conclusion that the construction of a complex and highly heterogeneous water cycle ontology requires multiple ontology modules. To illustrate the complexity and heterogeneity of a water cycle ontology: HydroSeek successfully utilizes WaterOneFlow to integrate data across multiple different data collections, such as USGS NWIS. However, different methodologies are employed by the Earth science, hydrological, and hydraulic engineering communities, and each community employs models that require different input data. If a sub-domain ontology is created for each of these, describing water balance calculations, then the resulting structure of the semantic network describing these various terms can be rather complex, heterogeneous, and overlapping, and will require "mapping" between equivalent terms in the ontologies, along with the development of an upper-level conceptual or domain ontology to utilize and link to those already in existence.

  4. Parallel program debugging with flowback analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Choi, Jongdeok.

    1989-01-01

    This thesis describes the design and implementation of an integrated debugging system for parallel programs running on shared-memory multiprocessors. The goal of the debugging system is to present to the programmer a graphical view of the dynamic program dependences while keeping the execution-time overhead low. The author first describes the use of flowback analysis to provide information on causal relationships between events in a program's execution without re-executing the program for debugging. Execution-time overhead is kept low by recording only a small amount of trace during a program's execution. He uses semantic analysis and a technique called incremental tracing to keep the time and space overhead low. As part of the semantic analysis, he uses a static program dependence graph structure that reduces the amount of work done at compile time and takes advantage of the dynamic information produced during execution time. The cornerstone of the incremental tracing concept is to generate a coarse trace during execution and fill incrementally, during the interactive portion of the debugging session, the gap between the information gathered in the coarse trace and the information needed to do the flowback analysis using the coarse trace. Then, he describes how to extend the flowback analysis to parallel programs. The flowback analysis can span process boundaries; i.e., the most recent modification to a shared variable might be traced to a different process than the one that contains the current reference. The static and dynamic program dependence graphs of the individual processes are tied together with synchronization and data dependence information to form complete graphs that represent the entire program.
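
    As a minimal illustration of the bookkeeping behind flowback analysis (and not the thesis's incremental-tracing or multi-process machinery), the Python sketch below records, for each write event in a toy single-threaded trace, which earlier writes it read from, and then walks dependences backwards from a given event.

    ```python
    # Tiny illustration of flowback-style bookkeeping: each write records the
    # last writers of the variables it read, so dependences can be walked
    # backwards from a reference. Toy single-threaded trace only.
    last_writer = {}      # variable -> event id of its most recent write
    depends_on = {}       # event id -> set of event ids it read from

    def record(event_id, writes, reads):
        depends_on[event_id] = {last_writer[v] for v in reads if v in last_writer}
        for v in writes:
            last_writer[v] = event_id

    record("e1", writes=["a"], reads=[])
    record("e2", writes=["b"], reads=["a"])
    record("e3", writes=["c"], reads=["a", "b"])

    def flowback(event_id):
        """All events the given event transitively depends on."""
        seen, stack = set(), [event_id]
        while stack:
            for dep in depends_on.get(stack.pop(), ()):
                if dep not in seen:
                    seen.add(dep)
                    stack.append(dep)
        return seen

    print(flowback("e3"))   # -> {'e1', 'e2'}
    ```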

  5. Don’t be Too Strict with Yourself! Rigid Negative Self-Representation in Healthy Subjects Mimics the Neurocognitive Profile of Depression for Autobiographical Memory

    PubMed Central

    Sperduti, Marco; Martinelli, Pénélope; Kalenzaga, Sandrine; Devauchelle, Anne-Dominique; Lion, Stéphanie; Malherbe, Caroline; Gallarda, Thierry; Amado, Isabelle; Krebs, Marie-Odile; Oppenheim, Catherine; Piolino, Pascale

    2013-01-01

    Autobiographical memory (AM) comprises representations of both specific (episodic) and generic (semantic) personal information. Depression is characterized by a shift from episodic to semantic AM retrieval. According to theoretical models, this process (“overgeneralization”) would be linked to reduced executive resources. Moreover, “overgeneral” memories, accompanied by a negativity bias in depression, lead to a pervasive negative self-representation. As executive functions and AM specificity are also closely intertwined in “non-clinical” populations, “overgeneral” memories could result in depressive emotional responses. Consequently, our hypothesis was that the neurocognitive profile of healthy subjects showing a rigid negative self-image would mimic that of patients. Executive functions and self-image were measured, and brain activity was recorded by means of fMRI during episodic AM retrieval in young healthy subjects. The results show an inverse correlation: a more rigid and negative self-image is associated with lower performance on both executive and specific-memory measures. Moreover, a more negative self-image is associated with decreased activity in the left ventro-lateral prefrontal and anterior cingulate cortex, regions repeatedly shown to exhibit altered functioning in depression. Activity in these regions, on the contrary, correlates positively with executive and memory performance, in line with their role in executive functions and AM retrieval. These findings suggest that a rigid negative self-image could represent a marker or a vulnerability trait of depression by being linked to reduced executive function efficiency and episodic AM decline. These results are encouraging for psychotherapeutic approaches aimed at cognitive flexibility in depression and other psychiatric disorders. PMID:23734107

  6. The Semantic eScience Framework

    NASA Astrophysics Data System (ADS)

    McGuinness, Deborah; Fox, Peter; Hendler, James

    2010-05-01

    The goal of this effort is to design and implement a configurable and extensible semantic eScience framework (SESF). Configuration requires research into accommodating different levels of semantic expressivity and user requirements from use cases. Extensibility is being achieved through a modular approach to the semantic encodings (i.e., ontologies) performed in community settings, i.e., an ontology framework into which specific applications, all the way up to communities, can extend the semantics for their needs. We report on how we are accommodating the rapid advances in semantic technologies and tools and the sustainable software path for future (certain) technical advances. In addition to a generalization of the current data science interface, we will present plans for an upper-level interface suitable for use by clearinghouses, educational portals, digital libraries, and other disciplines. SESF builds upon previous work in the Virtual Solar-Terrestrial Observatory (VSTO). The VSTO utilizes leading-edge knowledge representation, query, and reasoning techniques to support knowledge-enhanced search, data access, integration, and manipulation. It encodes term meanings and their inter-relationships in ontologies and uses these ontologies and associated inference engines to semantically enable the data services. The Semantically-Enabled Science Data Integration (SESDI) project implemented data integration capabilities among three sub-disciplines (solar radiation, volcanic outgassing, and atmospheric structure) using extensions to existing modular ontologies, and used the VSTO data framework while adding smart faceted search and semantic data registration tools. The Semantic Provenance Capture in Data Ingest Systems (SPCDIS) project has added explanation provenance capabilities to an observational data ingest pipeline for images of the Sun, providing a set of tools to answer diverse end-user questions such as "Why does this image look bad?". http://tw.rpi.edu/portal/SESF

  7. The Semantic eScience Framework

    NASA Astrophysics Data System (ADS)

    Fox, P. A.; McGuinness, D. L.

    2009-12-01

    The goal of this effort is to design and implement a configurable and extensible semantic eScience framework (SESF). Configuration requires research into accommodating different levels of semantic expressivity and user requirements from use cases. Extensibility is being achieved through a modular approach to the semantic encodings (i.e., ontologies) performed in community settings, i.e., an ontology framework into which specific applications, all the way up to communities, can extend the semantics for their needs. We report on how we are accommodating the rapid advances in semantic technologies and tools and the sustainable software path for future (certain) technical advances. In addition to a generalization of the current data science interface, we will present plans for an upper-level interface suitable for use by clearinghouses, educational portals, digital libraries, and other disciplines. SESF builds upon previous work in the Virtual Solar-Terrestrial Observatory (VSTO). The VSTO utilizes leading-edge knowledge representation, query, and reasoning techniques to support knowledge-enhanced search, data access, integration, and manipulation. It encodes term meanings and their inter-relationships in ontologies and uses these ontologies and associated inference engines to semantically enable the data services. The Semantically-Enabled Science Data Integration (SESDI) project implemented data integration capabilities among three sub-disciplines (solar radiation, volcanic outgassing, and atmospheric structure) using extensions to existing modular ontologies, and used the VSTO data framework while adding smart faceted search and semantic data registration tools. The Semantic Provenance Capture in Data Ingest Systems (SPCDIS) project has added explanation provenance capabilities to an observational data ingest pipeline for images of the Sun, providing a set of tools to answer diverse end-user questions such as "Why does this image look bad?".

  8. Prototyping scalable digital signal processing systems for radio astronomy using dataflow models

    NASA Astrophysics Data System (ADS)

    Sane, N.; Ford, J.; Harris, A. I.; Bhattacharyya, S. S.

    2012-05-01

    There is a growing trend toward using high-level tools for the design and implementation of radio astronomy digital signal processing (DSP) systems. Such tools, for example those from the Collaboration for Astronomy Signal Processing and Electronics Research (CASPER), are usually platform-specific and lack high-level, platform-independent, portable, scalable application specifications. This limits the designer's ability to experiment with designs at a high level of abstraction and early in the development cycle. We address some of these issues using a model-based design approach employing dataflow models. We demonstrate this approach by applying it to the design of a tunable digital downconverter (TDD) used for narrow-bandwidth spectroscopy. Our design is targeted toward an FPGA platform, called the Interconnect Break-out Board (IBOB), that is available from CASPER. We use the term TDD to refer to a digital downconverter for which the decimation factor and center frequency can be reconfigured without the need for regenerating the hardware code. Such a design is currently not available in the CASPER DSP library. The work presented in this paper focuses on two aspects. First, we introduce and demonstrate a dataflow-based design approach using the dataflow interchange format (DIF) tool for high-level application specification, and we integrate this approach with the CASPER tool flow. Second, we explore the trade-off between the flexibility of TDD designs and the low hardware cost of fixed-configuration digital downconverter (FDD) designs that use the available CASPER DSP library. We further explore this trade-off in the context of a two-stage downconversion scheme employing a combination of TDD or FDD designs.
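
    The arithmetic that a tunable digital downconverter performs can be sketched in software: mix the input against a numerically controlled oscillator at the selected center frequency, low-pass filter, and decimate. The NumPy/SciPy sketch below only illustrates that signal path with arbitrary parameters; it is not the CASPER/DIF hardware design described in the paper.

    ```python
    # Illustrative software model of a tunable digital downconverter (TDD):
    # mix with an NCO, low-pass filter, then decimate. Parameters are arbitrary.
    import numpy as np
    from scipy import signal

    def tdd(x, fs, center_freq, decimation):
        """Shift `center_freq` to baseband and reduce the sample rate."""
        n = np.arange(len(x))
        nco = np.exp(-2j * np.pi * center_freq * n / fs)  # numerically controlled oscillator
        mixed = x * nco                                    # frequency shift to baseband
        cutoff = 0.4 * (fs / decimation)                   # keep only the narrowed band
        taps = signal.firwin(129, cutoff, fs=fs)           # FIR low-pass filter
        filtered = signal.lfilter(taps, 1.0, mixed)
        return filtered[::decimation]                      # decimate

    # Example: a 10 kHz tone sampled at 1 MHz, re-centred and decimated by 16.
    fs = 1_000_000
    t = np.arange(4096) / fs
    x = np.cos(2 * np.pi * 10_000 * t)
    y = tdd(x, fs, center_freq=10_000, decimation=16)
    print(len(x), "->", len(y), "samples")
    ```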

  9. Analysis and visualization of disease courses in a semantically-enabled cancer registry.

    PubMed

    Esteban-Gil, Angel; Fernández-Breis, Jesualdo Tomás; Boeker, Martin

    2017-09-29

    Regional and epidemiological cancer registries are important for cancer research and the quality management of cancer treatment. Many technological solutions are available to collect and analyse data for cancer registries nowadays. However, the lack of a well-defined common semantic model is a problem when user-defined analyses and data linking to external resources are required. The objectives of this study are: (1) design of a semantic model for local cancer registries; (2) development of a semantically-enabled cancer registry based on this model; and (3) semantic exploitation of the cancer registry for analysing and visualising disease courses. Our proposal is based on our previous results and experience working with semantic technologies. Data stored in a cancer registry database were transformed into RDF employing a process driven by OWL ontologies. The semantic representation of the data was then processed to extract semantic patient profiles, which were exploited by means of SPARQL queries to identify groups of similar patients and to analyse the disease timelines of patients. Based on the requirements analysis, we have produced a draft of an ontology that models the semantics of a local cancer registry in a pragmatic extensible way. We have implemented a Semantic Web platform that allows transforming and storing data from cancer registries in RDF. This platform also permits users to formulate incremental user-defined queries through a graphical user interface. The query results can be displayed in several customisable ways. The complex disease timelines of individual patients can be clearly represented. Different events, e.g. different therapies and disease courses, are presented according to their temporal and causal relations. The presented platform is an example of the parallel development of ontologies and applications that take advantage of semantic web technologies in the medical field. The semantic structure of the representation renders it easy to analyse key figures of the patients and their evolution at different granularity levels.
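
    The kind of SPARQL exploitation of semantic patient profiles described above can be given a flavour with a small in-memory sketch; the registry vocabulary used here (ex:Patient, ex:hasEvent, ex:date) is a made-up placeholder rather than the authors' ontology, and the rdflib library is assumed.

    ```python
    # Minimal sketch of querying a semantic patient record for a disease timeline.
    # The vocabulary (ex:Patient, ex:hasEvent, ...) is an invented placeholder.
    from rdflib import Graph, Namespace, Literal, RDF
    from rdflib.namespace import XSD

    EX = Namespace("http://example.org/registry#")
    g = Graph()
    g.bind("ex", EX)

    g.add((EX.patient42, RDF.type, EX.Patient))
    for event, kind, date in [
        (EX.ev1, EX.Diagnosis, "2015-03-02"),
        (EX.ev2, EX.Surgery, "2015-04-10"),
        (EX.ev3, EX.Relapse, "2016-01-20"),
    ]:
        g.add((EX.patient42, EX.hasEvent, event))
        g.add((event, RDF.type, kind))
        g.add((event, EX.date, Literal(date, datatype=XSD.date)))

    # A user-defined query: the ordered timeline of events for one patient.
    timeline = g.query("""
        PREFIX ex: <http://example.org/registry#>
        SELECT ?kind ?date WHERE {
            ex:patient42 ex:hasEvent ?e .
            ?e a ?kind ; ex:date ?date .
        } ORDER BY ?date
    """)
    for kind, date in timeline:
        print(date, kind)
    ```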

  10. COEUS: “semantic web in a box” for biomedical applications

    PubMed Central

    2012-01-01

    Background As the “omics” revolution unfolds, the growth in data quantity and diversity is bringing about the need for pioneering bioinformatics software, capable of significantly improving the research workflow. To cope with these computer science demands, biomedical software engineers are adopting emerging semantic web technologies that better suit the life sciences domain. The latter’s complex relationships are easily mapped into semantic web graphs, enabling a superior understanding of collected knowledge. Despite increased awareness of semantic web technologies in bioinformatics, their use is still limited. Results COEUS is a new semantic web framework, aiming at a streamlined application development cycle and following a “semantic web in a box” approach. The framework provides a single package including advanced data integration and triplification tools, base ontologies, a web-oriented engine and a flexible exploration API. Resources can be integrated from heterogeneous sources, including CSV and XML files or SQL and SPARQL query results, and mapped directly to one or more ontologies. Advanced interoperability features include REST services, a SPARQL endpoint and LinkedData publication. These enable the creation of multiple applications for web, desktop or mobile environments, and empower a new knowledge federation layer. Conclusions The platform, targeted at biomedical application developers, provides a complete skeleton ready for rapid application deployment, enhancing the creation of new semantic information systems. COEUS is available as open source at http://bioinformatics.ua.pt/coeus/. PMID:23244467

  11. COEUS: "semantic web in a box" for biomedical applications.

    PubMed

    Lopes, Pedro; Oliveira, José Luís

    2012-12-17

    As the "omics" revolution unfolds, the growth in data quantity and diversity is bringing about the need for pioneering bioinformatics software, capable of significantly improving the research workflow. To cope with these computer science demands, biomedical software engineers are adopting emerging semantic web technologies that better suit the life sciences domain. The latter's complex relationships are easily mapped into semantic web graphs, enabling a superior understanding of collected knowledge. Despite increased awareness of semantic web technologies in bioinformatics, their use is still limited. COEUS is a new semantic web framework, aiming at a streamlined application development cycle and following a "semantic web in a box" approach. The framework provides a single package including advanced data integration and triplification tools, base ontologies, a web-oriented engine and a flexible exploration API. Resources can be integrated from heterogeneous sources, including CSV and XML files or SQL and SPARQL query results, and mapped directly to one or more ontologies. Advanced interoperability features include REST services, a SPARQL endpoint and LinkedData publication. These enable the creation of multiple applications for web, desktop or mobile environments, and empower a new knowledge federation layer. The platform, targeted at biomedical application developers, provides a complete skeleton ready for rapid application deployment, enhancing the creation of new semantic information systems. COEUS is available as open source at http://bioinformatics.ua.pt/coeus/.

  12. The role of semantic memory in short-term recall: effect of strategic retrieval ability in an elderly population.

    PubMed

    Larigauderie, Pascale; Michaud, Aurelie; Vicente, Siobhan

    2011-03-01

    The present paper examines the relationship between two classic phenomena: semantic effects in short-term recall (STR) tasks, which are interpreted as indicating the involvement of long-term memory (LTM) in the functioning of short-term memory, on the one hand, and the existence of individual differences amongst elderly people in strategic retrieval ability (i.e., the ability to activate representations in LTM in a controlled way) on the other hand. Forty elderly participants completed a STR task under four different conditions which were thought to differentially involve LTM representations. Several executive functions, among which the strategic retrieval ability, were evaluated. The results showed that the participants who obtained the best performances in terms of strategic retrieval ability, and only in this executive ability, also exhibited better performances in the STR task, in particular when this task was performed under conditions which favored the use of LTM.

  13. Extension of Alvis compiler front-end

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wypych, Michał; Szpyrka, Marcin; Matyasik, Piotr, E-mail: mwypych@agh.edu.pl, E-mail: mszpyrka@agh.edu.pl, E-mail: ptm@agh.edu.pl

    2015-12-31

    Alvis is a formal modelling language that enables verification of distributed concurrent systems. The semantics of an Alvis model is expressed as an LTS (labelled transition system) graph. Execution of any language statement is expressed as a transition between formally defined states of such a model. An LTS graph is generated using a middle-stage Haskell representation of an Alvis model. Moreover, Haskell is used as a part of the Alvis language to define parameter types and operations on them. Thanks to the compiler's modular construction, many aspects of the compilation of an Alvis model may be modified. Providing new plugins for the Alvis Compiler that support languages like Java or C makes it possible to use these languages as a part of Alvis instead of Haskell. The paper presents the compiler's internal model and describes how the default specification language can be altered by new plugins.
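
    For readers unfamiliar with the underlying model, an LTS can be pictured as the set of states reachable under a labelled transition relation. The sketch below is a generic, toy state-space generator in Python; it is unrelated to the Haskell middle stage actually used by the Alvis compiler.

    ```python
    # Generic sketch of generating a labelled transition system (LTS) by
    # breadth-first exploration of reachable states. Not Alvis-specific.
    from collections import deque

    def successors(state):
        """Toy transition relation over a pair of counters bounded by 2."""
        a, b = state
        moves = []
        if a < 2:
            moves.append(("inc_a", (a + 1, b)))
        if b < 2:
            moves.append(("inc_b", (a, b + 1)))
        return moves

    def build_lts(initial):
        states, transitions = {initial}, []
        queue = deque([initial])
        while queue:
            s = queue.popleft()
            for label, t in successors(s):
                transitions.append((s, label, t))
                if t not in states:
                    states.add(t)
                    queue.append(t)
        return states, transitions

    states, transitions = build_lts((0, 0))
    print(len(states), "states,", len(transitions), "transitions")
    ```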

  14. A Semantically Enabled Portal for Facilitating the Public Service Provision

    NASA Astrophysics Data System (ADS)

    Loutas, Nikolaos; Giantsiou, Lemonia; Peristeras, Vassilios; Tarabanis, Konstantinos

    During the past years, governments have made significant efforts to improve both their internal processes and the services that they provide to citizens and businesses. These efforts led to several successful e-Government applications (e.g., see www.epractice.eu). Among the most popular tools used by governments to modernize their services and make them accessible are e-Government portals, e.g., (Drigas et al. 2005), (Fang 2002). The main goals of such portals are: to make available complete, easy to understand, and structured information about public services and public administration's modus operandi, which will assist citizens during the service provision process; and to facilitate the electronic execution of public services. Nevertheless, most such efforts did not succeed. Gartner argues that most e-Government strategies have not achieved their objectives and have failed to trigger sustainable government transformation to greater efficiency and citizen-centricity (DiMaio 2007).

  15. Parallel State Space Construction for a Model Checking Based on Maximality Semantics

    NASA Astrophysics Data System (ADS)

    El Abidine Bouneb, Zine; Saīdouni, Djamel Eddine

    2009-03-01

    The main limiting factor of the model checker integrated in the concurrency verification environment FOCOVE [1, 2], which uses the maximality-based labeled transition system (denoted MLTS) as a true concurrency model [3, 4], is currently the amount of available physical memory. Many techniques have been developed to reduce the size of a state space. An interesting technique among them is the alpha equivalence reduction. Distributed-memory execution environments offer yet another choice. The main contribution of the paper is to show that the parallel state space construction algorithm proposed in [5], which is based on interleaving semantics using the LTS as its semantic model, may be adapted easily to a distributed implementation of the alpha equivalence reduction for maximality-based labeled transition systems.

  16. Proceedings: Sisal `93

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Feo, J.T.

    1993-10-01

    This report contains papers on: Programmability and performance issues; The case of an iterative partial differential equation solver; Implementing the kernel of the Australian Region Weather Prediction Model in Sisal; Even and quarter-even prime length symmetric FFTs and their Sisal implementations; Top-down thread generation for Sisal; Overlapping communications and computations on NUMA architectures; Compiling technique based on dataflow analysis for functional programming language Valid; Copy elimination for true multidimensional arrays in Sisal 2.0; Increasing parallelism for an optimization that reduces copying in IF2 graphs; Caching in on Sisal; Cache performance of Sisal vs. FORTRAN; FFT algorithms on a shared-memory multiprocessor; A parallel implementation of nonnumeric search problems in Sisal; Computer vision algorithms in Sisal; Compilation of Sisal for a high-performance data-driven vector processor; Sisal on distributed memory machines; A virtual shared addressing system for distributed memory Sisal; Developing a high-performance FFT algorithm in Sisal for a vector supercomputer; Implementation issues for IF2 on a static data-flow architecture; and Systematic control of parallelism in array-based data-flow computation. Selected papers have been indexed separately for inclusion in the Energy Science and Technology Database.

  17. Performance enhancement of various real-time image processing techniques via speculative execution

    NASA Astrophysics Data System (ADS)

    Younis, Mohamed F.; Sinha, Purnendu; Marlowe, Thomas J.; Stoyenko, Alexander D.

    1996-03-01

    In real-time image processing, an application must satisfy a set of timing constraints while ensuring the semantic correctness of the system. Because of the natural structure of digital data, pure data and task parallelism have been used extensively in real-time image processing to accelerate the handling time of image data. These types of parallelism are based on splitting the execution load performed by a single processor across multiple nodes. However, execution of all parallel threads is mandatory for correctness of the algorithm. On the other hand, speculative execution is an optimistic execution of part(s) of the program based on assumptions on program control flow or variable values. Rollback may be required if the assumptions turn out to be invalid. Speculative execution can enhance average, and sometimes worst-case, execution time. In this paper, we target various image processing techniques to investigate applicability of speculative execution. We identify opportunities for safe and profitable speculative execution in image compression, edge detection, morphological filters, and blob recognition.
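
    The general pattern of speculative execution, optimistically computing the likely branch in parallel and discarding the work if the assumption turns out to be wrong, can be sketched generically as follows; the timings and task names are invented, and the sketch is not the authors' image-processing code.

    ```python
    # Generic illustration of speculative execution: start work on the
    # predicted outcome of a slow decision, keep it if the guess was right,
    # discard the wasted work otherwise. Not tied to any particular image code.
    from concurrent.futures import ThreadPoolExecutor
    import time

    def slow_decision():
        time.sleep(0.2)          # stands in for an expensive control-flow test
        return "edges"

    def process(mode):
        time.sleep(0.3)          # stands in for an expensive image transform
        return f"result computed with {mode} pipeline"

    with ThreadPoolExecutor() as pool:
        predicted = "edges"                            # assumed likely outcome
        speculative = pool.submit(process, predicted)  # start optimistically
        actual = slow_decision()

        if actual == predicted:
            result = speculative.result()              # speculation paid off
        else:
            speculative.cancel()                       # roll back: if already running,
            result = process(actual)                   # its output is simply ignored

    print(result)
    ```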

  18. Combined semantic and similarity search in medical image databases

    NASA Astrophysics Data System (ADS)

    Seifert, Sascha; Thoma, Marisa; Stegmaier, Florian; Hammon, Matthias; Kramer, Martin; Huber, Martin; Kriegel, Hans-Peter; Cavallaro, Alexander; Comaniciu, Dorin

    2011-03-01

    The current diagnostic process at hospitals is mainly based on reviewing and comparing images coming from multiple time points and modalities in order to monitor disease progression over a period of time. For ambiguous cases, however, the radiologist relies heavily on reference literature or a second opinion. Although a vast amount of acquired images is stored in PACS systems and could be reused for decision support, these data sets suffer from weak search capabilities. We therefore present a search methodology which enables the physician to carry out intelligent search scenarios on medical image databases, combining ontology-based semantic search with appearance-based similarity search. The approach eliminated 12% of the top-ten hits that would otherwise arise without taking the semantic context into account.
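
    The combination described, semantic filtering followed by appearance-based ranking, reduces to a simple two-stage query: restrict candidates by ontology-derived annotations, then order the survivors by feature distance. The annotations and feature vectors in the sketch below are invented placeholders.

    ```python
    # Sketch of combining semantic and similarity search: filter by semantic
    # annotations first, then rank the remaining images by feature distance.
    # Annotations and feature vectors are invented examples.
    import numpy as np

    images = {
        "img_001": {"tags": {"liver", "lesion"}, "features": np.array([0.9, 0.1, 0.3])},
        "img_002": {"tags": {"lung", "nodule"},  "features": np.array([0.8, 0.2, 0.4])},
        "img_003": {"tags": {"liver", "lesion"}, "features": np.array([0.2, 0.7, 0.9])},
    }

    def search(query_tags, query_features, top_k=5):
        # 1) semantic filter: keep images whose annotations cover the query concepts
        candidates = {k: v for k, v in images.items() if query_tags <= v["tags"]}
        # 2) appearance ranking: order candidates by Euclidean feature distance
        ranked = sorted(
            candidates,
            key=lambda k: float(np.linalg.norm(candidates[k]["features"] - query_features)),
        )
        return ranked[:top_k]

    print(search({"liver", "lesion"}, np.array([0.85, 0.15, 0.35])))
    ```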

  19. Semantic strategy training increases memory performance and brain activity in patients with prefrontal cortex lesions.

    PubMed

    Miotto, Eliane C; Savage, Cary R; Evans, Jonathan J; Wilson, Barbara A; Martin, Maria G M; Balardin, Joana B; Barros, Fabio G; Garrido, Griselda; Teixeira, Manoel J; Amaro Junior, Edson

    2013-03-01

    Memory deficit is a frequent cognitive disorder following acquired prefrontal cortex lesions. In the present study, we investigated the brain correlates of a short semantic strategy training and the memory performance of patients with distinct prefrontal cortex lesions using fMRI and cognitive tests. Twenty-one adult patients with post-acute prefrontal cortex (PFC) lesions, twelve with left dorsolateral PFC (LPFC) and nine with bilateral orbitofrontal cortex (BOFC) lesions, were assessed before and after a short cognitive semantic training using a verbal memory encoding paradigm during scanning and neuropsychological tests outside the scanner. After the semantic strategy training, both groups of patients showed significant behavioral improvement in verbal memory recall and use of semantic strategies. In the LPFC group, greater activity in the left inferior and medial frontal gyrus, precentral gyrus, and insula was found after training. For the BOFC group, greater activation was found in the left parietal cortex, right cingulate cortex, and precuneus after training. The activation of these specific areas in the memory and executive networks following cognitive training was associated with compensatory brain mechanisms and application of the semantic strategy. Copyright © 2012 Elsevier B.V. All rights reserved.

  20. Toward cognitive pipelines of medical assistance algorithms.

    PubMed

    Philipp, Patrick; Maleshkova, Maria; Katic, Darko; Weber, Christian; Götz, Michael; Rettinger, Achim; Speidel, Stefanie; Kämpgen, Benedikt; Nolden, Marco; Wekerle, Anna-Laura; Dillmann, Rüdiger; Kenngott, Hannes; Müller, Beat; Studer, Rudi

    2016-09-01

    Assistance algorithms for medical tasks have great potential to support physicians with their daily work. However, medicine is also one of the most demanding domains for computer-based support systems, since medical assistance tasks are complex and the practical experience of the physician is crucial. Recent developments in the area of cognitive computing appear to be well suited to tackle medicine as an application domain. We propose a system based on the idea of cognitive computing and consisting of auto-configurable medical assistance algorithms and their self-adapting combination. The system enables automatic execution of new algorithms, given that they are made available as Medical Cognitive Apps and are registered in a central semantic repository. Learning components can be added to the system to optimize the results in cases where numerous Medical Cognitive Apps are available for the same task. Our prototypical implementation is applied to the areas of surgical phase recognition based on sensor data and image processing for tumor progression mappings. Our results suggest that such assistance algorithms can be automatically configured in execution pipelines, candidate results can be automatically scored and combined, and the system can learn from experience. Furthermore, our evaluation shows that the Medical Cognitive Apps provide the correct results, as they did for local execution, and run in a reasonable amount of time. The proposed solution is applicable to a variety of medical use cases and effectively supports the automated and self-adaptive configuration of cognitive pipelines based on medical interpretation algorithms.
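
    One way to picture the registry-driven composition described here is a small task registry that runs every registered app, scores the candidate results, and keeps the best one. The sketch below is hypothetical (the task name, apps, and confidence-based combination policy are invented), not the authors' Medical Cognitive App API.

    ```python
    # Hypothetical sketch of a registry of "cognitive apps": every app registered
    # for a task is executed, candidate results are scored, and the best is kept.
    REGISTRY = {}   # task name -> list of registered apps (callables)

    def register(task):
        """Decorator that registers an app for a named task."""
        def wrap(fn):
            REGISTRY.setdefault(task, []).append(fn)
            return fn
        return wrap

    @register("phase_recognition")
    def threshold_app(sensor_data):
        # invented app: crude rule-based guess with low confidence
        return {"phase": "cutting", "confidence": 0.7}

    @register("phase_recognition")
    def model_based_app(sensor_data):
        # invented app: model-based guess with higher confidence
        return {"phase": "suturing", "confidence": 0.9}

    def run_task(task, data):
        """Run every registered app and keep the most confident candidate result."""
        candidates = [app(data) for app in REGISTRY.get(task, [])]
        return max(candidates, key=lambda c: c["confidence"])

    print(run_task("phase_recognition", data={"tool_usage": [0.1, 0.4, 0.8]}))
    ```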

  1. Effects of working memory load on processing of sounds and meanings of words in aphasia

    PubMed Central

    Martin, Nadine; Kohen, Francine; Kalinyak-Fliszar, Michelene; Soveri, Anna; Laine, Matti

    2011-01-01

    Background Language performance in aphasia can vary depending on several variables such as stimulus characteristics and task demands. This study focuses on the degree of verbal working memory (WM) load inherent in the language task and how this variable affects language performance by individuals with aphasia. Aims The first aim was to identify the effects of increased verbal WM load on the performance of judgments of semantic similarity (synonymy) and phonological similarity (rhyming). The second aim was to determine if any of the following abilities could modulate the verbal WM load effect: semantic or phonological access, semantic or phonological short-term memory (STM) and any of the following executive processing abilities: inhibition, verbal WM updating, and set shifting. Method and Procedures Thirty-one individuals with aphasia and 11 controls participated in this study. They were administered a synonymy judgment task and a rhyming judgment task under high and low verbal WM load conditions that were compared to each other. In a second set of analyses, multiple regression was used to identify which factors (as noted above) modulated the verbal WM load effect. Outcome and Results For participants with aphasia, increased verbal WM load significantly reduced accuracy of performance on synonymy and rhyming judgments. Better performance in the low verbal WM load conditions was evident even after correcting for chance. The synonymy task included concrete and abstract word triplets. When these were examined separately, the verbal WM load effect was significant for the abstract words, but not the concrete words. The same pattern was observed in the performance of the control participants. Additionally, the second set of analyses revealed that semantic STM and one executive function, inhibition ability, emerged as the strongest predictors of the verbal WM load effect in these judgment tasks for individuals with aphasia. Conclusions The results of this study have important implications for diagnosis and treatment of aphasia. As the roles of verbal STM capacity, executive functions and verbal WM load in language processing are better understood, measurements of these variables can be incorporated into our diagnostic protocols. Moreover, if cognitive abilities such as STM and executive functions support language processing and their impairment adversely affects language function, treating them directly in the context of language tasks should translate into improved language function. PMID:22544993

  2. Concept-oriented indexing of video databases: toward semantic sensitive retrieval and browsing.

    PubMed

    Fan, Jianping; Luo, Hangzai; Elmagarmid, Ahmed K

    2004-07-01

    Digital video now plays an important role in medical education, health care, telemedicine and other medical applications. Several content-based video retrieval (CBVR) systems have been proposed in the past, but they still suffer from the following challenging problems: semantic gap, semantic video concept modeling, semantic video classification, and concept-oriented video database indexing and access. In this paper, we propose a novel framework to make some advances toward the final goal to solve these problems. Specifically, the framework includes: 1) a semantic-sensitive video content representation framework by using principal video shots to enhance the quality of features; 2) semantic video concept interpretation by using flexible mixture model to bridge the semantic gap; 3) a novel semantic video-classifier training framework by integrating feature selection, parameter estimation, and model selection seamlessly in a single algorithm; and 4) a concept-oriented video database organization technique through a certain domain-dependent concept hierarchy to enable semantic-sensitive video retrieval and browsing.

  3. DI: An interactive debugging interpreter for applicative languages

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Skedzielewski, S.K.; Yates, R.K.; Oldehoeft, R.R.

    1987-03-12

    The DI interpreter is both a debugger and an interpreter of SISAL programs. Its use as a program interpreter is only a small part of its role; it is designed to be a tool for studying compilation techniques for applicative languages. DI interprets dataflow graphs expressed in the IF1 and IF2 languages, and is heavily instrumented to report dynamic storage activity, reference counting, and the copying and updating of structured data values. It also aids the SISAL language evaluation by providing an interim execution vehicle for SISAL programs. DI provides determinate, sequential interpretation of graph nodes for sequential and parallel operations in a canonical order. As a debugging aid, DI allows tracing, breakpointing, and interactive display of program data values. DI handles creation of SISAL and IF1 error values for each data type and propagates them according to a well-defined algebra. We have begun to implement IF1 optimizers and have measured the improvements with DI.

  4. Improving integrative searching of systems chemical biology data using semantic annotation.

    PubMed

    Chen, Bin; Ding, Ying; Wild, David J

    2012-03-08

    Systems chemical biology and chemogenomics are considered critical, integrative disciplines in modern biomedical research, but require data mining of large, integrated, heterogeneous datasets from chemistry and biology. We previously developed an RDF-based resource called Chem2Bio2RDF that enabled querying of such data using the SPARQL query language. Whilst this work has proved useful in its own right as one of the first major resources in these disciplines, its utility could be greatly improved by the application of an ontology for annotation of the nodes and edges in the RDF graph, enabling a much richer range of semantic queries to be issued. We developed a generalized chemogenomics and systems chemical biology OWL ontology called Chem2Bio2OWL that describes the semantics of chemical compounds, drugs, protein targets, pathways, genes, diseases and side-effects, and the relationships between them. The ontology also includes data provenance. We used it to annotate our Chem2Bio2RDF dataset, making it a rich semantic resource. Through a series of scientific case studies we demonstrate how this (i) simplifies the process of building SPARQL queries, (ii) enables useful new kinds of queries on the data and (iii) makes possible intelligent reasoning and semantic graph mining in chemogenomics and systems chemical biology. Chem2Bio2OWL is available at http://chem2bio2rdf.org/owl. The document is available at http://chem2bio2owl.wikispaces.com.

  5. OlyMPUS - The Ontology-based Metadata Portal for Unified Semantics

    NASA Astrophysics Data System (ADS)

    Huffer, E.; Gleason, J. L.

    2015-12-01

    The Ontology-based Metadata Portal for Unified Semantics (OlyMPUS), funded by the NASA Earth Science Technology Office Advanced Information Systems Technology program, is an end-to-end system designed to support data consumers and data providers, enabling the latter to register their data sets and provision them with the semantically rich metadata that drives the Ontology-Driven Interactive Search Environment for Earth Sciences (ODISEES). OlyMPUS leverages the semantics and reasoning capabilities of ODISEES to provide data producers with a semi-automated interface for producing the semantically rich metadata needed to support ODISEES' data discovery and access services. It integrates the ODISEES metadata search system with multiple NASA data delivery tools to enable data consumers to create customized data sets for download to their computers, or for NASA Advanced Supercomputing (NAS) facility registered users, directly to NAS storage resources for access by applications running on NAS supercomputers. A core function of NASA's Earth Science Division is research and analysis that uses the full spectrum of data products available in NASA archives. Scientists need to perform complex analyses that identify correlations and non-obvious relationships across all types of Earth System phenomena. Comprehensive analytics are hindered, however, by the fact that many Earth science data products are disparate and hard to synthesize. Variations in how data are collected, processed, gridded, and stored, create challenges for data interoperability and synthesis, which are exacerbated by the sheer volume of available data. Robust, semantically rich metadata can support tools for data discovery and facilitate machine-to-machine transactions with services such as data subsetting, regridding, and reformatting. Such capabilities are critical to enabling the research activities integral to NASA's strategic plans. However, as metadata requirements increase and competing standards emerge, metadata provisioning becomes increasingly burdensome to data producers. The OlyMPUS system helps data providers produce semantically rich metadata, making their data more accessible to data consumers, and helps data consumers quickly discover and download the right data for their research.

  6. C-Speak Aphasia Alternative Communication Program for People with Severe Aphasia: Importance of Executive Functioning and Semantic Knowledge

    PubMed Central

    Nicholas, Marjorie; Sinotte, Michele P.; Helm-Estabrooks, Nancy

    2011-01-01

    Learning how to use a computer-based communication system can be challenging for people with severe aphasia even if the system is not word-based. This study explored how cognitive and linguistic factors affected individual patients’ ability to communicate expressively using C-Speak Aphasia (CSA), an alternative communication computer program that is primarily picture-based. Ten individuals with severe non-fluent aphasia received at least six months of training with CSA. To assess carryover of training, untrained functional communication tasks (i.e., answering autobiographical questions, describing pictures, making telephone calls, describing a short video, and two writing tasks) were repeatedly probed in two conditions: 1) using CSA in addition to natural forms of communication, and 2) using only natural forms of communication, e.g., speaking, writing, gesturing, drawing. Four of the ten participants communicated more information on selected probe tasks using CSA than they did without the computer. Response to treatment was also examined in relation to baseline measures of non-linguistic executive function skills, pictorial semantic abilities, and auditory comprehension. Only non-linguistic executive function skills were significantly correlated with treatment response. PMID:21506045

  7. Integrated Japanese Dependency Analysis Using a Dialog Context

    NASA Astrophysics Data System (ADS)

    Ikegaya, Yuki; Noguchi, Yasuhiro; Kogure, Satoru; Itoh, Toshihiko; Konishi, Tatsuhiro; Kondo, Makoto; Asoh, Hideki; Takagi, Akira; Itoh, Yukihiro

    This paper describes how to perform syntactic parsing and semantic analysis in a dialog system. The paper especially deals with how to disambiguate potentially ambiguous sentences using the contextual information. Although syntactic parsing and semantic analysis are often studied independently of each other, correct parsing of a sentence often requires the semantic information on the input and/or the contextual information prior to the input. Accordingly, we merge syntactic parsing with semantic analysis, which enables syntactic parsing taking advantage of the semantic content of an input and its context. One of the biggest problems of semantic analysis is how to interpret dependency structures. We employ a framework for semantic representations that circumvents the problem. Within the framework, the meaning of any predicate is converted into a semantic representation which only permits a single type of predicate: an identifying predicate "aru". The semantic representations are expressed as sets of "attribute-value" pairs, and those semantic representations are stored in the context information. Our system disambiguates syntactic/semantic ambiguities of inputs referring to the attribute-value pairs in the context information. We have experimentally confirmed the effectiveness of our approach; specifically, the experiment confirmed high accuracy of parsing and correctness of generated semantic representations.
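
    The attribute-value style of semantic representation can be caricatured in a few lines: candidate interpretations of an ambiguous input are scored by how many of their attribute-value pairs are supported by the stored dialog context. The attributes, values, and scoring rule below are invented for illustration and are not the authors' analysis framework.

    ```python
    # Toy illustration of resolving an ambiguous parse against dialog context
    # stored as attribute-value pairs. Attributes and values are invented.
    context = [
        {"entity": "restaurant", "name": "Sakura", "location": "Shizuoka"},
    ]

    # Two candidate interpretations of an ambiguous input,
    # each expressed as a set of attribute-value pairs.
    candidates = [
        {"entity": "restaurant", "location": "Shizuoka"},   # "the one in Shizuoka"
        {"entity": "station", "location": "Shizuoka"},      # competing reading
    ]

    def score(candidate):
        """Count how many attribute-value pairs are supported by some context frame."""
        return max(
            sum(1 for k, v in candidate.items() if frame.get(k) == v)
            for frame in context
        )

    best = max(candidates, key=score)
    print("selected interpretation:", best)
    ```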

  8. A Formal Theory for Modular ERDF Ontologies

    NASA Astrophysics Data System (ADS)

    Analyti, Anastasia; Antoniou, Grigoris; Damásio, Carlos Viegas

    The success of the Semantic Web is impossible without some form of modularity, encapsulation, and access control. In an earlier paper, we extended RDF graphs with weak and strong negation, as well as derivation rules. The ERDF #n-stable model semantics of the extended RDF framework (ERDF) is defined, extending RDF(S) semantics. In this paper, we propose a framework for modular ERDF ontologies, called the modular ERDF framework, which enables collaborative reasoning over a set of ERDF ontologies, while support for hidden knowledge is also provided. In particular, the modular ERDF stable model semantics of modular ERDF ontologies is defined, extending the ERDF #n-stable model semantics. Our proposed framework supports local semantics and different points of view, local closed-world and open-world assumptions, and scoped negation-as-failure. Several complexity results are provided.

  9. Metadata based management and sharing of distributed biomedical data

    PubMed Central

    Vergara-Niedermayr, Cristobal; Liu, Peiya

    2014-01-01

    Biomedical research data sharing is becoming increasingly important for researchers to reuse experiments, pool expertise and validate approaches. However, there are many hurdles for data sharing, including the unwillingness to share, the lack of a flexible data model for providing context information, the difficulty of sharing syntactically and semantically consistent data across distributed institutions, and the high cost of providing tools to share the data. SciPort is a web-based collaborative biomedical data sharing platform to support data sharing across distributed organisations. SciPort provides a generic metadata model to flexibly customise and organise the data. To enable convenient data sharing, SciPort provides a central server based data sharing architecture with one-click data sharing from a local server. To enable consistency, SciPort provides collaborative distributed schema management across distributed sites. To enable semantic consistency, SciPort provides semantic tagging through controlled vocabularies. SciPort is lightweight and can be easily deployed for building data sharing communities. PMID:24834105

  10. Latency in Distributed Acquisition and Rendering for Telepresence Systems.

    PubMed

    Ohl, Stephan; Willert, Malte; Staadt, Oliver

    2015-12-01

    Telepresence systems use 3D techniques to create a more natural human-centered communication over long distances. This work concentrates on the analysis of latency in telepresence systems where acquisition and rendering are distributed. Keeping latency low is important to immerse users in the virtual environment. To better understand latency problems and to identify the source of such latency, we focus on the decomposition of system latency into sub-latencies. We contribute a model of latency and show how it can be used to estimate latencies in a complex telepresence dataflow network. To compare the estimates with real latencies in our prototype, we modify two common latency measurement methods. This presented methodology enables the developer to optimize the design, find implementation issues and gain deeper knowledge about specific sources of latency.
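
    A minimal version of such a latency model simply sums per-stage sub-latencies along the acquisition-to-rendering path of the dataflow network; the stage names and millisecond figures below are hypothetical, not measurements from the authors' prototype.

    ```python
    # Minimal latency model for a distributed acquisition/rendering pipeline:
    # end-to-end latency is the sum of sub-latencies along the dataflow path.
    # Stage names and millisecond figures are hypothetical.
    pipeline = [
        ("capture", 16.7),          # camera exposure + readout
        ("depth_estimation", 12.0),
        ("encode", 5.0),
        ("network", 20.0),
        ("decode", 4.0),
        ("render", 11.0),
    ]

    def end_to_end_latency(stages):
        return sum(ms for _, ms in stages)

    total = end_to_end_latency(pipeline)
    print(f"estimated end-to-end latency: {total:.1f} ms")
    for name, ms in pipeline:
        print(f"  {name:>18}: {ms:5.1f} ms ({100 * ms / total:4.1f}%)")
    ```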

  11. Building a biomedical semantic network in Wikipedia with Semantic Wiki Links

    PubMed Central

    Good, Benjamin M.; Clarke, Erik L.; Loguercio, Salvatore; Su, Andrew I.

    2012-01-01

    Wikipedia is increasingly used as a platform for collaborative data curation, but its current technical implementation has significant limitations that hinder its use in biocuration applications. Specifically, while editors can easily link between two articles in Wikipedia to indicate a relationship, there is no way to indicate the nature of that relationship in a way that is computationally accessible to the system or to external developers. For example, in addition to noting a relationship between a gene and a disease, it would be useful to differentiate the cases where genetic mutation or altered expression causes the disease. Here, we introduce a straightforward method that allows Wikipedia editors to embed computable semantic relations directly in the context of current Wikipedia articles. In addition, we demonstrate two novel applications enabled by the presence of these new relationships. The first is a dynamically generated information box that can be rendered on all semantically enhanced Wikipedia articles. The second is a prototype gene annotation system that draws its content from the gene-centric articles on Wikipedia and exposes the new semantic relationships to enable previously impossible, user-defined queries. Database URL: http://en.wikipedia.org/wiki/Portal:Gene_Wiki PMID:22434829

  12. Building a biomedical semantic network in Wikipedia with Semantic Wiki Links.

    PubMed

    Good, Benjamin M; Clarke, Erik L; Loguercio, Salvatore; Su, Andrew I

    2012-01-01

    Wikipedia is increasingly used as a platform for collaborative data curation, but its current technical implementation has significant limitations that hinder its use in biocuration applications. Specifically, while editors can easily link between two articles in Wikipedia to indicate a relationship, there is no way to indicate the nature of that relationship in a way that is computationally accessible to the system or to external developers. For example, in addition to noting a relationship between a gene and a disease, it would be useful to differentiate the cases where genetic mutation or altered expression causes the disease. Here, we introduce a straightforward method that allows Wikipedia editors to embed computable semantic relations directly in the context of current Wikipedia articles. In addition, we demonstrate two novel applications enabled by the presence of these new relationships. The first is a dynamically generated information box that can be rendered on all semantically enhanced Wikipedia articles. The second is a prototype gene annotation system that draws its content from the gene-centric articles on Wikipedia and exposes the new semantic relationships to enable previously impossible, user-defined queries. DATABASE URL: http://en.wikipedia.org/wiki/Portal:Gene_Wiki.

  13. Recommendation of standardized health learning contents using archetypes and semantic web technologies.

    PubMed

    Legaz-García, María del Carmen; Martínez-Costa, Catalina; Menárguez-Tortosa, Marcos; Fernández-Breis, Jesualdo Tomás

    2012-01-01

    Linking Electronic Healthcare Records (EHR) content to educational materials has been considered a key international recommendation to enable clinical engagement and to promote patient safety. This would direct citizens to reliable information available on the web and guide them properly. In this paper, we describe an approach in that direction, based on the use of dual-model EHR standards and standardized educational contents. The recommendation method will be based on the semantic coverage of the learning content repository for a particular archetype, which will be calculated by applying semantic web technologies such as ontologies and semantic annotations.
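
    The recommendation criterion named here, the semantic coverage of the learning-content repository for a particular archetype, can be sketched as a simple overlap measure between annotation sets; the concept identifiers below are invented placeholders rather than real archetype or terminology codes.

    ```python
    # Sketch of "semantic coverage": the fraction of an archetype's annotated
    # concepts that are also covered by a learning resource's annotations.
    # Concept identifiers are invented placeholders.
    archetype_concepts = {"blood_pressure", "systolic", "diastolic", "hypertension"}

    resources = {
        "leaflet_bp_basics": {"blood_pressure", "systolic", "diastolic"},
        "video_diabetes": {"glucose", "insulin"},
        "course_hypertension": {"hypertension", "blood_pressure", "lifestyle"},
    }

    def coverage(resource_concepts, target_concepts):
        """Share of the archetype's concepts covered by the resource's annotations."""
        return len(resource_concepts & target_concepts) / len(target_concepts)

    ranked = sorted(resources,
                    key=lambda r: coverage(resources[r], archetype_concepts),
                    reverse=True)
    for r in ranked:
        print(f"{r}: {coverage(resources[r], archetype_concepts):.2f}")
    ```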

  14. A logical approach to semantic interoperability in healthcare.

    PubMed

    Bird, Linda; Brooks, Colleen; Cheong, Yu Chye; Tun, Nwe Ni

    2011-01-01

    Singapore is in the process of rolling out a number of national e-health initiatives, including the National Electronic Health Record (NEHR). A critical enabler in the journey towards semantic interoperability is a Logical Information Model (LIM) that harmonises the semantics of the information structure with the terminology. The Singapore LIM uses a combination of international standards, including ISO 13606-1 (a reference model for electronic health record communication), ISO 21090 (healthcare datatypes), and SNOMED CT (healthcare terminology). The LIM is accompanied by a logical design approach, used to generate interoperability artifacts, and incorporates mechanisms for achieving unidirectional and bidirectional semantic interoperability.

  15. Semantically-enabled sensor plug & play for the sensor web.

    PubMed

    Bröring, Arne; Maúe, Patrick; Janowicz, Krzysztof; Nüst, Daniel; Malewski, Christian

    2011-01-01

    Environmental sensors have continuously improved by becoming smaller, cheaper, and more intelligent over the past years. As a consequence of these technological advancements, sensors are increasingly deployed to monitor our environment. The large variety of available sensor types with often incompatible protocols complicates the integration of sensors into observing systems. The standardized Web service interfaces and data encodings defined within OGC's Sensor Web Enablement (SWE) framework make sensors available over the Web and hide the heterogeneous sensor protocols from applications. So far, the SWE framework does not describe how to integrate sensors on-the-fly with minimal human intervention. The driver software which enables access to sensors has to be implemented, and the measured sensor data has to be manually mapped to the SWE models. In this article we introduce a Sensor Plug & Play infrastructure for the Sensor Web by combining (1) semantic matchmaking functionality, (2) a publish/subscribe mechanism underlying the Sensor Web, and (3) a model for the declarative description of sensor interfaces which serves as a generic driver mechanism. We implement and evaluate our approach by applying it to an oil spill scenario. The matchmaking is realized using existing ontologies and reasoning engines and provides a strong case for the semantic integration capabilities provided by Semantic Web research.
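
    A bare-bones view of the matchmaking step, in which a newly announced sensor's declared description is checked against what a subscription requires, might look like the following; real matchmaking reasons over ontologies rather than comparing strings, and the property names here are illustrative, not SWE or ontology terms.

    ```python
    # Bare-bones "matchmaking": does a newly announced sensor's declared
    # description satisfy a subscription's requirements? Terms are illustrative;
    # a real matchmaker would use ontological subsumption, not string equality.
    requirements = {
        "observed_property": "water_surface_hydrocarbon_concentration",
        "max_response_time_s": 60,
    }

    announced_sensors = [
        {"id": "buoy-17",
         "observed_property": "sea_surface_temperature",
         "response_time_s": 10},
        {"id": "sniffer-03",
         "observed_property": "water_surface_hydrocarbon_concentration",
         "response_time_s": 30},
    ]

    def matches(sensor, req):
        return (sensor["observed_property"] == req["observed_property"]
                and sensor["response_time_s"] <= req["max_response_time_s"])

    for sensor in announced_sensors:
        if matches(sensor, requirements):
            print("plug & play match:", sensor["id"])
    ```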

  16. Semantically-Enabled Sensor Plug & Play for the Sensor Web

    PubMed Central

    Bröring, Arne; Maúe, Patrick; Janowicz, Krzysztof; Nüst, Daniel; Malewski, Christian

    2011-01-01

    Environmental sensors have continuously improved by becoming smaller, cheaper, and more intelligent over the past years. As a consequence of these technological advancements, sensors are increasingly deployed to monitor our environment. The large variety of available sensor types with often incompatible protocols complicates the integration of sensors into observing systems. The standardized Web service interfaces and data encodings defined within OGC’s Sensor Web Enablement (SWE) framework make sensors available over the Web and hide the heterogeneous sensor protocols from applications. So far, the SWE framework does not describe how to integrate sensors on-the-fly with minimal human intervention. The driver software which enables access to sensors has to be implemented, and the measured sensor data has to be manually mapped to the SWE models. In this article we introduce a Sensor Plug & Play infrastructure for the Sensor Web by combining (1) semantic matchmaking functionality, (2) a publish/subscribe mechanism underlying the Sensor Web, as well as (3) a model for the declarative description of sensor interfaces which serves as a generic driver mechanism. We implement and evaluate our approach by applying it to an oil spill scenario. The matchmaking is realized using existing ontologies and reasoning engines and provides a strong case for the semantic integration capabilities provided by Semantic Web research. PMID:22164033

  17. Jointly Constructing Semantic Waves: Implications for Teacher Training

    ERIC Educational Resources Information Center

    Macnaught, Lucy; Maton, Karl; Martin, J. R.; Matruglio, Erika

    2013-01-01

    This paper addresses how teachers can be trained to enable cumulative knowledge-building. It focuses on the final intervention stage of the "Disciplinarity, Knowledge and Schooling" ("DISKS") project at the University of Sydney. In this special issue, Maton identifies "semantic waves" as a crucial characteristic of…

  18. Plan Execution Interchange Language (PLEXIL)

    NASA Technical Reports Server (NTRS)

    Estlin, Tara; Jonsson, Ari; Pasareanu, Corina; Simmons, Reid; Tso, Kam; Verma, Vandi

    2006-01-01

    Plan execution is a cornerstone of spacecraft operations, irrespective of whether the plans to be executed are generated on board the spacecraft or on the ground. Plan execution frameworks vary greatly, due to both different capabilities of the execution systems, and relations to associated decision-making frameworks. The latter dependency has made the reuse of execution and planning frameworks more difficult, and has all but precluded information sharing between different execution and decision-making systems. As a step in the direction of addressing some of these issues, a general plan execution language, called the Plan Execution Interchange Language (PLEXIL), is being developed. PLEXIL is capable of expressing concepts used by many high-level automated planners and hence provides an interface to multiple planners. PLEXIL includes a domain description that specifies command types, expansions, constraints, etc., as well as feedback to the higher-level decision-making capabilities. This document describes the grammar and semantics of PLEXIL. It includes a graphical depiction of this grammar and illustrative rover scenarios. It also outlines ongoing work on implementing a universal execution system, based on PLEXIL, using state-of-the-art rover functional interfaces and planners as test cases.

  19. A mechanism for efficient debugging of parallel programs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Miller, B.P.; Choi, J.D.

    1988-01-01

    This paper addresses the design and implementation of an integrated debugging system for parallel programs running on shared memory multi-processors (SMMP). The authors describe the use of flowback analysis to provide information on causal relationships between events in a program's execution without re-executing the program for debugging. The authors introduce a mechanism called incremental tracing that, by using semantic analyses of the debugged program, makes the flowback analysis practical with only a small amount of trace generated during execution. They extend flowback analysis to apply to parallel programs and describe a method to detect race conditions in the interactions of the co-operating processes.

  20. Lightweight Specifications for Parallel Correctness

    DTIC Science & Technology

    2012-12-05


  1. Controlled semantic cognition relies upon dynamic and flexible interactions between the executive 'semantic control' and hub-and-spoke 'semantic representation' systems.

    PubMed

    Chiou, Rocco; Humphreys, Gina F; Jung, JeYoung; Lambon Ralph, Matthew A

    2018-06-01

    Built upon a wealth of neuroimaging, neurostimulation, and neuropsychology data, a recent proposal set forth a framework termed controlled semantic cognition (CSC) to account for how the brain underpins the ability to flexibly use semantic knowledge (Lambon Ralph et al., 2017; Nature Reviews Neuroscience). In CSC, the 'semantic control' system, underpinned predominantly by the prefrontal cortex, dynamically monitors and modulates the 'semantic representation' system that consists of a 'hub' (anterior temporal lobe, ATL) and multiple 'spokes' (modality-specific areas). CSC predicts that unfamiliar and exacting semantic tasks should intensify communication between the 'control' and 'representation' systems, relative to familiar and less taxing tasks. In the present study, we used functional magnetic resonance imaging (fMRI) to test this hypothesis. Participants paired unrelated concepts by canonical colours (a less accustomed task - e.g., pairing ketchup with fire-extinguishers due to both being red) or paired well-related concepts by semantic relationship (a typical task - e.g., ketchup is related to mustard). We found the 'control' system was more engaged by atypical than typical pairing. While both tasks activated the ATL 'hub', colour pairing additionally involved occipitotemporal 'spoke' regions abutting areas of hue perception. Furthermore, we uncovered a gradient along the ventral temporal cortex, transitioning from the caudal 'spoke' zones preferring canonical colour processing to the rostral 'hub' zones preferring semantic relationship. Functional connectivity also differed between the tasks: Compared with semantic pairing, colour pairing relied more upon the inferior frontal gyrus, a key node of the control system, driving enhanced connectivity with occipitotemporal 'spoke'. Together, our findings characterise the interaction within the neural architecture of semantic cognition - the control system dynamically heightens its connectivity with relevant components of the representation system, in response to different semantic contents and difficulty levels. Copyright © 2018 The Author(s). Published by Elsevier Ltd. All rights reserved.

  2. Towards Text Copyright Detection Using Metadata in Web Applications

    ERIC Educational Resources Information Center

    Poulos, Marios; Korfiatis, Nikolaos; Bokos, George

    2011-01-01

    Purpose: This paper aims to present the semantic content identifier (SCI), a permanent identifier, computed through a linear-time onion-peeling algorithm that enables the extraction of semantic features from a text, and the integration of this information within the permanent identifier. Design/methodology/approach: The authors employ SCI to…

  3. SemantGeo: Powering Ecological and Environment Data Discovery and Search with Standards-Based Geospatial Reasoning

    NASA Astrophysics Data System (ADS)

    Seyed, P.; Ashby, B.; Khan, I.; Patton, E. W.; McGuinness, D. L.

    2013-12-01

    Recent efforts to create and leverage standards for geospatial data specification and inference include the GeoSPARQL standard, geospatial OWL ontologies (e.g., GAZ, Geonames), and RDF triple stores that support GeoSPARQL (e.g., AllegroGraph, Parliament) and use RDF instance data for geospatial features of interest. However, there remains a gap in how best to fuse software engineering best practices and GeoSPARQL within semantic web applications to enable flexible search driven by geospatial reasoning. In this abstract we introduce the SemantGeo module for the SemantEco framework that helps fill this gap, enabling scientists to find data using geospatial semantics and reasoning. SemantGeo provides multiple types of geospatial reasoning for SemantEco modules. The server-side implementation uses the Parliament SPARQL endpoint accessed via a Tomcat servlet. SemantGeo uses the Google Maps API for user-specified polygon construction and JsTree for providing containment and categorical hierarchies for search. SemantGeo uses GeoSPARQL for spatial reasoning alone and in concert with RDFS/OWL reasoning capabilities to determine, e.g., which geofeatures are within, partially overlap with, or lie within a certain distance from a given polygon. We also leverage qualitative relationships defined by the Gazetteer ontology that are composites of spatial relationships as well as administrative designations or geophysical phenomena. We provide multiple mechanisms for exploring data, such as polygon (map-based) and named-feature (hierarchy-based) selection, that enable flexible search constraints using Boolean combinations of selections. JsTree-based hierarchical search facets present named features and include a 'part of' hierarchy (e.g., measurement-site-01, Lake George, Adirondack Region, NY State) and type hierarchies (e.g., nodes in the hierarchy for WaterBody, Park, MeasurementSite), depending on the 'axis of choice' option selected. Using GeoSPARQL and the aforementioned ontology, these hierarchies are constrained based on polygon selection, and the corresponding polygons of the contained features are visually rendered to assist exploration. Once measurement sites are plotted based on an initial search, subsequent searches using JsTree selections can extend the previous search based on nearby waterbodies in some semantic relationship of interest. For example, 'tributary of' captures water bodies that flow into the current one, and extending the original search to include tributaries of the observed water body is useful to environmental scientists for isolating the source of characteristic levels, including pollutants. Ultimately, any SemantEco module can leverage SemantGeo's underlying APIs, which are already used in a deployment of SemantEco that combines EPA and USGS water quality data and in one customized for searching data available from the Darrin Freshwater Institute. Future work will address generating RDF geometry data from shape files, aligning RDF data sources to better leverage qualitative and spatial relationships, and validating that newly generated RDF data adhere to the GeoSPARQL standard.
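
    The containment search described above amounts to a GeoSPARQL query against the Parliament endpoint. As a minimal sketch (the endpoint URL and the MeasurementSite class URI are hypothetical placeholders, not the actual SemantEco vocabulary), a Python client might look like this:

```python
# Minimal sketch of a GeoSPARQL containment query.  The endpoint URL and the
# eco:MeasurementSite class are hypothetical, not the actual SemantEco setup.
from SPARQLWrapper import SPARQLWrapper, JSON

ENDPOINT = "http://example.org/parliament/sparql"   # hypothetical Parliament endpoint

QUERY = """
PREFIX geo:  <http://www.opengis.net/ont/geosparql#>
PREFIX geof: <http://www.opengis.net/def/function/geosparql/>
PREFIX eco:  <http://example.org/semanteco#>   # hypothetical vocabulary

SELECT ?site ?wkt WHERE {
  ?site a eco:MeasurementSite ;
        geo:hasGeometry ?g .
  ?g geo:asWKT ?wkt .
  # keep only sites whose geometry lies inside the user-drawn polygon
  FILTER(geof:sfWithin(?wkt,
    "POLYGON((-74.2 43.3, -73.4 43.3, -73.4 43.8, -74.2 43.8, -74.2 43.3))"^^geo:wktLiteral))
}
"""

sparql = SPARQLWrapper(ENDPOINT)
sparql.setQuery(QUERY)
sparql.setReturnFormat(JSON)
for row in sparql.query().convert()["results"]["bindings"]:
    print(row["site"]["value"], row["wkt"]["value"])
```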

  4. SAS- Semantic Annotation Service for Geoscience resources on the web

    NASA Astrophysics Data System (ADS)

    Elag, M.; Kumar, P.; Marini, L.; Li, R.; Jiang, P.

    2015-12-01

    There is a growing need for increased integration across the data and model resources that are disseminated on the web to advance their reuse across different earth science applications. Meaningful reuse of resources requires semantic metadata to realize the semantic web vision of pragmatic linkage and integration among resources. Semantic metadata associates standard metadata with resources to turn them into semantically-enabled resources on the web. However, the lack of a common standardized metadata framework, as well as the uncoordinated use of metadata fields across different geo-information systems, has led to a situation in which standards and related Standard Names abound. To address this need, we have designed SAS to provide a bridge between the core ontologies required to annotate resources and information systems, in order to enable queries and analysis over annotations from a single (web) environment. SAS is one of the services provided by the Geosemantic framework, a decentralized semantic framework that supports the integration between models and data and allows semantically heterogeneous resources to interact with minimal human intervention. Here we present the design of SAS and demonstrate its application for annotating data and models. First, we describe how predicates and their attributes are extracted from standards and ingested into the knowledge base of the Geosemantic framework. Then we illustrate the application of SAS in annotating data managed by SEAD and annotating simulation models that have a web interface. SAS is a step in a broader approach to raise the quality of geoscience data and models published on the web and to allow users to better search, access, and use existing resources based on standard vocabularies that are encoded and published using semantic technologies.
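
    At its core, this kind of annotation attaches standard-name predicates to a resource as RDF triples. The sketch below illustrates the pattern with rdflib; all namespaces, predicates, and the dataset URI are invented placeholders, not the actual SAS or Geosemantic vocabulary.

```python
# Sketch of semantic annotation as RDF triples.  All URIs below are
# illustrative assumptions, not the actual SAS / Geosemantic vocabulary.
from rdflib import Graph, Namespace, URIRef, Literal

CSN  = Namespace("http://example.org/standard-names#")   # hypothetical standard-name vocabulary
ANNO = Namespace("http://example.org/annotation#")       # hypothetical annotation predicates

g = Graph()
dataset = URIRef("http://example.org/data/streamflow-2015.nc")   # resource being annotated

# Attach a standard name and a provenance note to the dataset.
g.add((dataset, ANNO.hasVariable, CSN.water_volume_transport_in_river_channel))
g.add((dataset, ANNO.annotatedBy, Literal("SAS")))

print(g.serialize(format="turtle"))
```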

  5. More than a feeling: The bidirectional convergence of semantic visual object and somatosensory processing.

    PubMed

    Ekstrand, Chelsea; Neudorf, Josh; Lorentz, Eric; Gould, Layla; Mickleborough, Marla; Borowsky, Ron

    2017-11-01

    Prevalent theories of semantic processing assert that the sensorimotor system plays a functional role in the semantic processing of manipulable objects. While motor execution has been shown to impact object processing, involvement of the somatosensory system has remained relatively unexplored. Therefore, we developed two novel priming paradigms. In Experiment 1, participants received a vibratory hand prime (on half the trials) prior to viewing a picture of either an object interacted primarily with the hand (e.g., a cup) or the foot (e.g., a soccer ball) and reported how they would interact with it. In Experiment 2, the same objects became the prime and participants were required to identify whether the vibratory stimulation occurred to their hand or foot. In both experiments, somatosensory priming effects arose for the hand objects, while foot objects showed no priming benefits. These results suggest that object semantic knowledge bidirectionally converges with the somatosensory system. Copyright © 2017 Elsevier B.V. All rights reserved.

  6. Graph-Based Semantic Web Service Composition for Healthcare Data Integration.

    PubMed

    Arch-Int, Ngamnij; Arch-Int, Somjit; Sonsilphong, Suphachoke; Wanchai, Paweena

    2017-01-01

    Within the numerous and heterogeneous web services offered through different sources, automatic web services composition is the most convenient method for building complex business processes that permit invocation of multiple existing atomic services. The current solutions in functional web services composition lack autonomous queries of semantic matches within the parameters of web services, which are necessary in the composition of large-scale related services. In this paper, we propose a graph-based Semantic Web Services composition system consisting of two subsystems: management time and run time. The management-time subsystem is responsible for dependency graph preparation in which a dependency graph of related services is generated automatically according to the proposed semantic matchmaking rules. The run-time subsystem is responsible for discovering the potential web services and nonredundant web services composition of a user's query using a graph-based searching algorithm. The proposed approach was applied to healthcare data integration in different health organizations and was evaluated according to two aspects: execution time measurement and correctness measurement.

  7. Graph-Based Semantic Web Service Composition for Healthcare Data Integration

    PubMed Central

    2017-01-01

    Within the numerous and heterogeneous web services offered through different sources, automatic web services composition is the most convenient method for building complex business processes that permit invocation of multiple existing atomic services. The current solutions in functional web services composition lack autonomous queries of semantic matches within the parameters of web services, which are necessary in the composition of large-scale related services. In this paper, we propose a graph-based Semantic Web Services composition system consisting of two subsystems: management time and run time. The management-time subsystem is responsible for dependency graph preparation in which a dependency graph of related services is generated automatically according to the proposed semantic matchmaking rules. The run-time subsystem is responsible for discovering the potential web services and nonredundant web services composition of a user's query using a graph-based searching algorithm. The proposed approach was applied to healthcare data integration in different health organizations and was evaluated according to two aspects: execution time measurement and correctness measurement. PMID:29065602
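
    The dependency-graph idea can be pictured as a search over service input/output parameters. The sketch below is only a toy forward-chaining illustration with invented service descriptions; it is not the matchmaking rules or search algorithm proposed in the paper.

```python
# Toy forward-chaining composition over service input/output parameters.
# Service descriptions here are invented for illustration only.
services = {
    "PatientLookup":   {"inputs": {"patient_id"},  "outputs": {"national_id"}},
    "LabRecordFetch":  {"inputs": {"national_id"}, "outputs": {"lab_results"}},
    "ReportFormatter": {"inputs": {"lab_results"}, "outputs": {"summary_report"}},
}

def compose(available, goal):
    """Greedily chain services until the goal parameter becomes producible."""
    plan, known = [], set(available)
    changed = True
    while goal not in known and changed:
        changed = False
        for name, sig in services.items():
            if name not in plan and sig["inputs"] <= known:
                plan.append(name)
                known |= sig["outputs"]
                changed = True
    return plan if goal in known else None

print(compose({"patient_id"}, "summary_report"))
# -> ['PatientLookup', 'LabRecordFetch', 'ReportFormatter']
```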

  8. Approaching semantic interoperability in Health Level Seven

    PubMed Central

    Alschuler, Liora

    2010-01-01

    ‘Semantic Interoperability’ is a driving objective behind many of Health Level Seven's standards. The objective in this paper is to take a step back, and consider what semantic interoperability means, assess whether or not it has been achieved, and, if not, determine what concrete next steps can be taken to get closer. A framework for measuring semantic interoperability is proposed, using a technique called the ‘Single Logical Information Model’ framework, which relies on an operational definition of semantic interoperability and an understanding that interoperability improves incrementally. Whether semantic interoperability tomorrow will enable one computer to talk to another, much as one person can talk to another person, is a matter for speculation. It is assumed, however, that what gets measured gets improved, and in that spirit this framework is offered as a means to improvement. PMID:21106995

  9. PLAStiCC: Predictive Look-Ahead Scheduling for Continuous dataflows on Clouds

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kumbhare, Alok; Simmhan, Yogesh; Prasanna, Viktor K.

    2014-05-27

    Scalable stream processing and continuous dataflow systems are gaining traction with the rise of big data due to the need for processing high-velocity data in near real time. Unlike batch processing systems such as MapReduce and workflows, static scheduling strategies fall short for continuous dataflows due to the variations in the input data rates and the need for sustained throughput. The elastic resource provisioning of cloud infrastructure is valuable for meeting the changing resource needs of such continuous applications. However, multi-tenant cloud resources introduce yet another dimension of performance variability that impacts the application’s throughput. In this paper we propose PLAStiCC, an adaptive scheduling algorithm that balances resource cost and application throughput using a prediction-based look-ahead approach. It addresses not only variations in the input data rates but also variability in the underlying cloud infrastructure. In addition, we propose several simpler static scheduling heuristics that operate in the absence of an accurate performance prediction model. These static and adaptive heuristics are evaluated through extensive simulations using performance traces obtained from public and private IaaS clouds. Our results show an improvement of up to 20% in the overall profit as compared to the reactive adaptation algorithm.
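
    As a rough illustration of the prediction-based look-ahead idea (not the published PLAStiCC algorithm), the toy sketch below provisions, for each interval, enough VMs to cover the peak predicted input rate within a short look-ahead window:

```python
# Toy look-ahead scheduler: choose the cheapest number of VMs whose predicted
# throughput covers the predicted input rate over a look-ahead window.
# All numbers are invented for illustration; this is not the PLAStiCC algorithm.

PREDICTED_RATE = [800, 950, 1200, 1100, 700]   # msgs/sec per interval (forecast)
VM_THROUGHPUT  = 300                           # msgs/sec per VM (predicted)
VM_COST        = 0.10                          # $ per VM per interval
LOOKAHEAD      = 2                             # intervals to look ahead

def plan_vms(rates, horizon):
    plan = []
    for t in range(len(rates)):
        # Provision for the peak predicted rate within the look-ahead window,
        # so we do not have to react (and lose throughput) when the rate rises.
        window_peak = max(rates[t:t + horizon + 1])
        vms = -(-window_peak // VM_THROUGHPUT)   # ceiling division
        plan.append(vms)
    return plan

plan = plan_vms(PREDICTED_RATE, LOOKAHEAD)
print("VMs per interval:", plan)
print("total cost: $%.2f" % (sum(plan) * VM_COST))
```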

  10. Acute effects of exergames on cognitive function of institutionalized older persons: a single-blinded, randomized and controlled pilot study.

    PubMed

    Monteiro-Junior, Renato Sobral; da Silva Figueiredo, Luiz Felipe; Maciel-Pinheiro, Paulo de Tarso; Abud, Erick Lohan Rodrigues; Braga, Ana Elisa Mendes Montalvão; Barca, Maria Lage; Engedal, Knut; Nascimento, Osvaldo José M; Deslandes, Andrea Camaz; Laks, Jerson

    2017-06-01

    Improvements in balance, gait and cognition are some of the benefits of exergames. Few studies have investigated the cognitive effects of exergames in institutionalized older persons. The aim was to assess the acute effect of a single session of exergames on cognition of institutionalized older persons. Nineteen institutionalized older persons were randomly allocated to Wii (WG, n = 10, 86 ± 7 years, two males) or control groups (CG, n = 9, 86 ± 5 years, one male). The WG performed six exercises with virtual reality, whereas the CG performed six exercises without virtual reality. The verbal fluency test (VFT), digit span forward and digit span backward were used to evaluate semantic memory/executive function, short-term memory and working memory, respectively, before and after exergames, and Δ post- to pre-session (absolute) and Δ % (relative) were calculated. Parametric (independent t test) and nonparametric (Mann-Whitney test) statistics and effect sizes were used to test for efficacy. The VFT change was statistically significant within the WG (-3.07, df = 9, p = 0.013). We found no statistically significant differences between the two groups (p > 0.05). The between-group effect size for Δ % (median = 21 %) was moderate for the WG (0.63). Our data show a moderate improvement of semantic memory/executive function due to the exergames session. It is possible that cognitive brain areas are activated during exergames, increasing clinical response. A single session of exergames showed no significant improvement in short-term memory, working memory and semantic memory/executive function. The effect size for verbal fluency was promising, and future studies on this issue should be developed. RBR-6rytw2.

  11. Integrating semantic dimension into openEHR archetypes for the management of cerebral palsy electronic medical records.

    PubMed

    Ellouze, Afef Samet; Bouaziz, Rafik; Ghorbel, Hanen

    2016-10-01

    Integrating a semantic dimension into clinical archetypes is necessary when modeling medical records. First, it enables semantic interoperability, supports semantic activities on clinical data, and provides a higher design quality of Electronic Medical Record (EMR) systems. However, to obtain these advantages, designers need to use archetypes that cover the semantic features of the clinical concepts involved in their specific applications. In fact, most archetypes filed within open repositories are expressed in the Archetype Definition Language (ADL), which defines only the syntactic structure of clinical concepts, weakening semantic activities on the EMR content in the semantic web environment. This paper focuses on the modeling of an EMR prototype for infants affected by Cerebral Palsy (CP), using the dual-model approach and integrating semantic web technologies. Such modeling improves the delivery of quality of care and ensures semantic interoperability between all involved therapies' information systems. First, data to be documented are identified and collected from the involved therapies. Subsequently, data are analyzed and arranged into archetypes expressed in accordance with ADL. During this step, open archetype repositories are explored in order to find suitable archetypes. Then, ADL archetypes are transformed into archetypes expressed in OWL-DL (Web Ontology Language - Description Logic). Finally, we construct an ontological source related to these archetypes, enabling their annotation to facilitate data extraction and making it possible to exercise semantic activities on such archetypes. The result is a semantic dimension integrated into an EMR modeled in accordance with the archetype approach. The feasibility of our solution is shown through the development of a prototype, baptized "CP-SMS", which ensures semantic exploitation of the CP EMR. This prototype provides the following features: (i) creation of CP EMR instances and their checking against a knowledge base constructed through interviews with domain experts, (ii) translation of the initial CP ADL archetypes into CP OWL-DL archetypes, (iii) creation of an ontological source which we can use to annotate the obtained archetypes, and (iv) enrichment of the ontological source and integration of semantic relations, fueling the ontology with new concepts, ensuring consistency and eliminating ambiguity between concepts. The degree of semantic interoperability that could be reached between EMR systems depends strongly on the quality of the archetypes used. Thus, the integration of a semantic dimension into the archetype modeling process is crucial. By creating an ontological source and annotating archetypes, we create a supportive platform ensuring semantic interoperability between archetype-based EMR systems. Copyright © 2016. Published by Elsevier Inc.
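
    The ADL-to-OWL-DL step amounts to emitting OWL class and property axioms for each archetype node. A minimal sketch with rdflib follows; the archetype and property names are invented for illustration and are not taken from the CP-SMS prototype.

```python
# Minimal sketch: express an archetype node as an OWL-DL class with rdflib.
# Archetype and property names are invented for illustration.
from rdflib import Graph, Namespace, Literal, RDF, RDFS, OWL

EHR = Namespace("http://example.org/cp-archetypes#")   # hypothetical namespace

g = Graph()
g.bind("ehr", EHR)

# An OBSERVATION-style archetype for a gross-motor assessment, as an OWL class.
g.add((EHR.GrossMotorAssessment, RDF.type, OWL.Class))
g.add((EHR.GrossMotorAssessment, RDFS.subClassOf, EHR.Observation))
g.add((EHR.GrossMotorAssessment, RDFS.label, Literal("Gross motor function assessment")))

# A data element of the archetype becomes an OWL datatype property with a domain.
g.add((EHR.gmfcsLevel, RDF.type, OWL.DatatypeProperty))
g.add((EHR.gmfcsLevel, RDFS.domain, EHR.GrossMotorAssessment))

print(g.serialize(format="turtle"))
```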

  12. LDRD final report :

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brost, Randolph C.; McLendon, William Clarence,

    2013-01-01

    Modeling geospatial information with semantic graphs enables search for sites of interest based on relationships between features, without requiring strong a priori models of feature shape or other intrinsic properties. Geospatial semantic graphs can be constructed from raw sensor data with suitable preprocessing to obtain a discretized representation. This report describes initial work toward extending geospatial semantic graphs to include temporal information, and initial results applying semantic graph techniques to SAR image data. We describe an efficient graph structure that includes geospatial and temporal information, which is designed to support simultaneous spatial and temporal search queries. We also report a preliminary implementation of feature recognition, semantic graph modeling, and graph search based on input SAR data. The report concludes with lessons learned and suggestions for future improvements.
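
    A geospatial-temporal semantic graph of this kind can be pictured as an attributed graph over which spatial and temporal constraints are filtered together. The sketch below uses networkx with invented features and relations; it is only an illustration of the idea, not the report's graph structure.

```python
# Toy geospatial-temporal semantic graph: nodes are features with observation
# dates, edges carry spatial relations.  Feature names and relations are invented.
import networkx as nx

G = nx.MultiDiGraph()
G.add_node("building_17", kind="building", observed="2012-06-01")
G.add_node("road_3",      kind="road",     observed="2012-06-01")
G.add_node("vehicle_9",   kind="vehicle",  observed="2012-06-08")
G.add_edge("building_17", "road_3", relation="adjacent_to")
G.add_edge("vehicle_9",   "road_3", relation="on")

def search(graph, kind, relation, target, after):
    """Find nodes of a given kind, related to a target feature, observed after a date."""
    for u, v, data in graph.edges(data=True):
        if (data["relation"] == relation and v == target
                and graph.nodes[u]["kind"] == kind
                and graph.nodes[u]["observed"] >= after):
            yield u

print(list(search(G, "vehicle", "on", "road_3", after="2012-06-05")))
# -> ['vehicle_9']
```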

  13. What is in a contour map? A region-based logical formalization of contour semantics

    USGS Publications Warehouse

    Usery, E. Lynn; Hahmann, Torsten

    2015-01-01

    This paper analyses and formalizes contour semantics in a first-order logic ontology that forms the basis for enabling computational common sense reasoning about contour information. The elicited contour semantics comprises four key concepts – contour regions, contour lines, contour values, and contour sets – and their subclasses and associated relations, which are grounded in an existing qualitative spatial ontology. All concepts and relations are illustrated and motivated by physical-geographic features identifiable on topographic contour maps. The encoding of the semantics of contour concepts in first-order logic and a derived conceptual model as basis for an OWL ontology lay the foundation for fully automated, semantically-aware qualitative and quantitative reasoning about contours.
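
    For flavor, axioms in the spirit of such a first-order formalization (illustrative examples written for this summary, not quoted from the paper) might require every contour line to bound some contour region and to carry at most one contour value:

```latex
% Illustrative axioms only, in the spirit of a contour ontology;
% not quoted from the paper's formalization.
\forall l\, \bigl(\mathit{ContourLine}(l) \rightarrow
    \exists r\, (\mathit{ContourRegion}(r) \wedge \mathit{bounds}(l, r))\bigr)

\forall l\, \forall v_1\, \forall v_2\, \bigl(
    \mathit{hasValue}(l, v_1) \wedge \mathit{hasValue}(l, v_2) \rightarrow v_1 = v_2 \bigr)
```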

  14. The value of the Semantic Web in the laboratory.

    PubMed

    Frey, Jeremy G

    2009-06-01

    The Semantic Web is beginning to impact on the wider chemical and physical sciences, beyond the earlier adopted bio-informatics. While useful in large-scale data driven science with automated processing, these technologies can also help integrate the work of smaller scale laboratories producing diverse data. The semantics aid the discovery, reliable re-use of data, provide improved provenance and facilitate automated processing by increased resilience to changes in presentation and reduced ambiguity. The Semantic Web, its tools and collections are not yet competitive with well-established solutions to current problems. It is in the reduced cost of instituting solutions to new problems that the versatility of Semantic Web-enabled data and resources will make their mark once the more general-purpose tools are more available.

  15. Hybrid ontology for semantic information retrieval model using keyword matching indexing system.

    PubMed

    Uthayan, K R; Mala, G S Anandha

    2015-01-01

    Ontology is the process of growth and elucidation of the concepts of an information domain shared by a group of users. Incorporating ontology into information retrieval is a standard way to improve the retrieval of the relevant information users require. Matching keywords against a historical or domain-specific collection is significant in recent approaches for finding the best match for specific input queries. This research presents an improved querying mechanism for information retrieval which integrates ontology queries with keyword search. The ontology-based query is translated into a first-order predicate logic query, which is used for routing the query to the appropriate servers. Matching algorithms are an active area of research in computer science and artificial intelligence. In text matching, it is more reliable to model the semantics of both the query and the content when performing semantic matching. This research develops semantic matching between input queries and information in the ontology field. The contributed algorithm is a hybrid method based on matching instances extracted from the queries and the information field. The queries and the information domain are focused on semantic matching, to discover the best match and to improve the retrieval process. In conclusion, the hybrid ontology in the semantic web is sufficient to retrieve the relevant documents when compared with a standard ontology.
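
    The hybrid idea, keyword matching widened by ontology-derived terms, can be sketched as follows; the mini ontology and documents are invented, and this is only a toy illustration, not the paper's algorithm.

```python
# Toy hybrid retrieval: expand query keywords with ontology neighbours,
# then score documents by term overlap.  The mini "ontology" is invented.
ontology = {
    "heart attack": {"myocardial infarction", "cardiac arrest"},
    "aspirin":      {"acetylsalicylic acid"},
}

documents = {
    "doc1": "acetylsalicylic acid reduces risk after myocardial infarction",
    "doc2": "dietary fibre and cholesterol levels",
}

def expand(query_terms):
    expanded = set(query_terms)
    for term in query_terms:
        expanded |= ontology.get(term, set())
    return expanded

def retrieve(query):
    terms = expand(set(query.lower().split(", ")))
    scores = {d: sum(t in text for t in terms) for d, text in documents.items()}
    return sorted((s, d) for d, s in scores.items() if s > 0)[::-1]

print(retrieve("heart attack, aspirin"))
# -> [(2, 'doc1')] : doc1 matches via the ontology expansion even though no
# query keyword appears verbatim in the document.
```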

  16. [Semantic verbal fluency of animals in amnesia-type mild cognitive impairment].

    PubMed

    Lopez-Higes, Ramón; Prados, José M; del Rio, David; Galindo-Fuentes, Marta; Reinoso, Ana Isabel; Lozano-Ibanez, Montserrat

    2014-06-01

    The quantitative and qualitative analysis of the semantic verbal fluency task has revealed that people with dementia produced fewer words and smaller semantic clustering than people without dementia. However, in people with amnestic mild cognitive impairment (aMCI), research has shown conflicting results regarding the amount and number of semantic clusters that are made. The aim of this study was to provide new data to this controversial issue. Twenty-two older adults diagnosed with aMCI (8 men and 14 women) and 43 older adults (7 men and 36 women) with normal cognitive functioning that served as control group, participated in this study. All patients were evaluated at the Center for Prevention of Cognitive Decline of Madrid (Spain), completing the verbal fluency test (animals) besides other neuropsychological tests. As expected, animal production was lower in the aMCI group than in the control group, but no differences were observed either in the average size of the semantic clusters or the number of switches between them. The results are consistent with previous research suggesting aMCI is not only characterized by episodic memory and working memory deficits. Semantic memory decline is also present. However, the data do not clarify how strategic executive processes are involved, as seems to be in Alzheimer's disease.

  17. A coordinate-based ALE functional MRI meta-analysis of brain activation during verbal fluency tasks in healthy control subjects

    PubMed Central

    2014-01-01

    Background The processing of verbal fluency tasks relies on the coordinated activity of a number of brain areas, particularly in the frontal and temporal lobes of the left hemisphere. Recent studies using functional magnetic resonance imaging (fMRI) to study the neural networks subserving verbal fluency functions have yielded divergent results especially with respect to a parcellation of the inferior frontal gyrus for phonemic and semantic verbal fluency. We conducted a coordinate-based activation likelihood estimation (ALE) meta-analysis on brain activation during the processing of phonemic and semantic verbal fluency tasks involving 28 individual studies with 490 healthy volunteers. Results For phonemic as well as for semantic verbal fluency, the most prominent clusters of brain activation were found in the left inferior/middle frontal gyrus (LIFG/MIFG) and the anterior cingulate gyrus. BA 44 was only involved in the processing of phonemic verbal fluency tasks, BA 45 and 47 in the processing of phonemic and semantic fluency tasks. Conclusions Our comparison of brain activation during the execution of either phonemic or semantic verbal fluency tasks revealed evidence for spatially different activation in BA 44, but not other regions of the LIFG/LMFG (BA 9, 45, 47) during phonemic and semantic verbal fluency processing. PMID:24456150

  18. Hybrid Ontology for Semantic Information Retrieval Model Using Keyword Matching Indexing System

    PubMed Central

    Uthayan, K. R.; Anandha Mala, G. S.

    2015-01-01

    Ontology is the process of growth and elucidation of the concepts of an information domain shared by a group of users. Incorporating ontology into information retrieval is a standard way to improve the retrieval of the relevant information users require. Matching keywords against a historical or domain-specific collection is significant in recent approaches for finding the best match for specific input queries. This research presents an improved querying mechanism for information retrieval which integrates ontology queries with keyword search. The ontology-based query is translated into a first-order predicate logic query, which is used for routing the query to the appropriate servers. Matching algorithms are an active area of research in computer science and artificial intelligence. In text matching, it is more reliable to model the semantics of both the query and the content when performing semantic matching. This research develops semantic matching between input queries and information in the ontology field. The contributed algorithm is a hybrid method based on matching instances extracted from the queries and the information field. The queries and the information domain are focused on semantic matching, to discover the best match and to improve the retrieval process. In conclusion, the hybrid ontology in the semantic web is sufficient to retrieve the relevant documents when compared with a standard ontology. PMID:25922851

  19. Nodes on ropes: a comprehensive data and control flow for steering ensemble simulations.

    PubMed

    Waser, Jürgen; Ribičić, Hrvoje; Fuchs, Raphael; Hirsch, Christian; Schindler, Benjamin; Blöschl, Günther; Gröller, M Eduard

    2011-12-01

    Flood disasters are the most common natural risk and tremendous efforts are spent to improve their simulation and management. However, simulation-based investigation of actions that can be taken in case of flood emergencies is rarely done. This is in part due to the lack of a comprehensive framework which integrates and facilitates these efforts. In this paper, we tackle several problems which are related to steering a flood simulation. One issue is related to uncertainty. We need to account for uncertain knowledge about the environment, such as levee-breach locations. Furthermore, the steering process has to reveal how these uncertainties in the boundary conditions affect the confidence in the simulation outcome. Another important problem is that the simulation setup is often hidden in a black-box. We expose system internals and show that simulation steering can be comprehensible at the same time. This is important because the domain expert needs to be able to modify the simulation setup in order to include local knowledge and experience. In the proposed solution, users steer parameter studies through the World Lines interface to account for input uncertainties. The transport of steering information to the underlying data-flow components is handled by a novel meta-flow. The meta-flow is an extension to a standard data-flow network, comprising additional nodes and ropes to abstract parameter control. The meta-flow has a visual representation to inform the user about which control operations happen. Finally, we present the idea to use the data-flow diagram itself for visualizing steering information and simulation results. We discuss a case-study in collaboration with a domain expert who proposes different actions to protect a virtual city from imminent flooding. The key to choosing the best response strategy is the ability to compare different regions of the parameter space while retaining an understanding of what is happening inside the data-flow system. © 2011 IEEE
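
    A stripped-down reading of the meta-flow idea is a data-flow network whose extra control nodes rewrite parameters of downstream nodes before each run. The toy sketch below only illustrates that pattern; it is not the World Lines or meta-flow implementation.

```python
# Toy data-flow with a "meta" control node that injects parameter overrides
# (e.g., a levee-breach width) into a downstream simulation node.
# This only illustrates the pattern; it is not the World Lines / meta-flow code.

class SimulationNode:
    def __init__(self, name, params):
        self.name, self.params = name, dict(params)

    def run(self, inflow):
        # Stand-in for a flood solver: water level scales with inflow and breach width.
        return inflow * (1.0 + self.params["breach_width_m"] / 10.0)

class MetaNode:
    """Control node: applies steering overrides to target nodes before execution."""
    def __init__(self, overrides):
        self.overrides = overrides

    def steer(self, node):
        node.params.update(self.overrides.get(node.name, {}))

sim = SimulationNode("flood_run", {"breach_width_m": 0.0})
meta = MetaNode({"flood_run": {"breach_width_m": 25.0}})   # user-chosen scenario

meta.steer(sim)               # the meta-flow acts before the data-flow executes
print(sim.run(inflow=2.0))    # -> 7.0 (water level under the steered scenario)
```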

  20. Varieties of semantic ‘access’ deficit in Wernicke’s aphasia and semantic aphasia

    PubMed Central

    Robson, Holly; Lambon Ralph, Matthew A.; Jefferies, Elizabeth

    2015-01-01

    Comprehension deficits are common in stroke aphasia, including in cases with (i) semantic aphasia, characterized by poor executive control of semantic processing across verbal and non-verbal modalities; and (ii) Wernicke’s aphasia, associated with poor auditory–verbal comprehension and repetition, plus fluent speech with jargon. However, the varieties of these comprehension problems, and their underlying causes, are not well understood. Both patient groups exhibit some type of semantic ‘access’ deficit, as opposed to the ‘storage’ deficits observed in semantic dementia. Nevertheless, existing descriptions suggest that these patients might have different varieties of ‘access’ impairment—related to difficulty resolving competition (in semantic aphasia) versus initial activation of concepts from sensory inputs (in Wernicke’s aphasia). We used a case series design to compare patients with Wernicke’s aphasia and those with semantic aphasia on Warrington’s paradigmatic assessment of semantic ‘access’ deficits. In these verbal and non-verbal matching tasks, a small set of semantically-related items are repeatedly presented over several cycles so that the target on one trial becomes a distractor on another (building up interference and eliciting semantic ‘blocking’ effects). Patients with Wernicke’s aphasia and semantic aphasia were distinguished according to lesion location in the temporal cortex, but in each group, some individuals had additional prefrontal damage. Both of these aspects of lesion variability—one that mapped onto classical ‘syndromes’ and one that did not—predicted aspects of the semantic ‘access’ deficit. Both semantic aphasia and Wernicke’s aphasia cases showed multimodal semantic impairment, although as expected, the Wernicke’s aphasia group showed greater deficits on auditory-verbal than picture judgements. Distribution of damage in the temporal lobe was crucial for predicting the initially ‘beneficial’ effects of stimulus repetition: cases with Wernicke’s aphasia showed initial improvement with repetition of words and pictures, while in semantic aphasia, semantic access was initially good but declined in the face of competition from previous targets. Prefrontal damage predicted the ‘harmful’ effects of repetition: the ability to reselect both word and picture targets in the face of mounting competition was linked to left prefrontal damage in both groups. Therefore, patients with semantic aphasia and Wernicke’s aphasia have partially distinct impairment of semantic ‘access’ but, across these syndromes, prefrontal lesions produce declining comprehension with repetition in both verbal and non-verbal tasks. PMID:26454668

  1. Varieties of semantic 'access' deficit in Wernicke's aphasia and semantic aphasia.

    PubMed

    Thompson, Hannah E; Robson, Holly; Lambon Ralph, Matthew A; Jefferies, Elizabeth

    2015-12-01

    Comprehension deficits are common in stroke aphasia, including in cases with (i) semantic aphasia, characterized by poor executive control of semantic processing across verbal and non-verbal modalities; and (ii) Wernicke's aphasia, associated with poor auditory-verbal comprehension and repetition, plus fluent speech with jargon. However, the varieties of these comprehension problems, and their underlying causes, are not well understood. Both patient groups exhibit some type of semantic 'access' deficit, as opposed to the 'storage' deficits observed in semantic dementia. Nevertheless, existing descriptions suggest that these patients might have different varieties of 'access' impairment-related to difficulty resolving competition (in semantic aphasia) versus initial activation of concepts from sensory inputs (in Wernicke's aphasia). We used a case series design to compare patients with Wernicke's aphasia and those with semantic aphasia on Warrington's paradigmatic assessment of semantic 'access' deficits. In these verbal and non-verbal matching tasks, a small set of semantically-related items are repeatedly presented over several cycles so that the target on one trial becomes a distractor on another (building up interference and eliciting semantic 'blocking' effects). Patients with Wernicke's aphasia and semantic aphasia were distinguished according to lesion location in the temporal cortex, but in each group, some individuals had additional prefrontal damage. Both of these aspects of lesion variability-one that mapped onto classical 'syndromes' and one that did not-predicted aspects of the semantic 'access' deficit. Both semantic aphasia and Wernicke's aphasia cases showed multimodal semantic impairment, although as expected, the Wernicke's aphasia group showed greater deficits on auditory-verbal than picture judgements. Distribution of damage in the temporal lobe was crucial for predicting the initially 'beneficial' effects of stimulus repetition: cases with Wernicke's aphasia showed initial improvement with repetition of words and pictures, while in semantic aphasia, semantic access was initially good but declined in the face of competition from previous targets. Prefrontal damage predicted the 'harmful' effects of repetition: the ability to reselect both word and picture targets in the face of mounting competition was linked to left prefrontal damage in both groups. Therefore, patients with semantic aphasia and Wernicke's aphasia have partially distinct impairment of semantic 'access' but, across these syndromes, prefrontal lesions produce declining comprehension with repetition in both verbal and non-verbal tasks. © The Author (2015). Published by Oxford University Press on behalf of the Guarantors of Brain.

  2. Managing Risk in Mobile Applications with Formal Security Policies

    DTIC Science & Technology

    2013-04-01

    Alternatively, Breaux and Powers (2009) found the Business Process Modeling Notation (BPMN), a declarative language for describing business processes, to be...the Business Process Execution Language (BPEL), preferred as the candidate formal semantics for BPMN, only works for limited classes of BPMN models

  3. Semantic memory in developmental amnesia.

    PubMed

    Elward, Rachael L; Vargha-Khadem, Faraneh

    2018-04-30

    Patients with developmental amnesia resulting from bilateral hippocampal atrophy associated with neonatal hypoxia-ischaemia typically show relatively preserved semantic memory and factual knowledge about the natural world despite severe impairments in episodic memory. Understanding the neural and mnemonic processes that enable this context-free semantic knowledge to be acquired throughout development without the support of the contextualised episodic memory system is a serious challenge. This review describes the clinical presentation of patients with developmental amnesia, contrasts its features with those reported for adult-onset hippocampal amnesia, and analyses the effects of variables that influence the learning of new semantic information. Copyright © 2018 The Authors. Published by Elsevier B.V. All rights reserved.

  4. Neural correlates of pantomiming familiar and unfamiliar tools: action semantics versus mechanical problem solving?

    PubMed

    Vingerhoets, Guy; Vandekerckhove, Elisabeth; Honoré, Pieterjan; Vandemaele, Pieter; Achten, Eric

    2011-06-01

    This study aims to reveal the neural correlates of planning and executing tool use pantomimes and explores the brain's response to pantomiming the use of unfamiliar tools. Sixteen right-handed volunteers planned and executed pantomimes of equally graspable familiar and unfamiliar tools while undergoing fMRI. During the planning of these pantomimes, we found bilateral temporo-occipital and predominantly left hemispheric frontal and parietal activation. The execution of the pantomimes produced additional activation in frontal and sensorimotor regions. In the left posterior parietal region both familiar and unfamiliar tool pantomimes elicit peak activity in the anterior portion of the lateral bank of the intraparietal sulcus--A region associated with the representation of action goals. The cerebral activation during these pantomimes is remarkably similar for familiar and unfamiliar tools, and direct comparisons revealed only few differences. First, the left cuneus is significantly active during the planning of pantomimes of unfamiliar tools, reflecting increased visual processing of the novel objects. Second, executing (but not planning) familiar tool pantomimes showed significant activation on the convex portion of the inferior parietal lobule, a region believed to serve as a repository for skilled object-related gestures. Given the striking similarity in brain activation while pantomiming familiar and unfamiliar tools, we argue that normal subjects use both action semantics and function from structure inferences simultaneously and interactively to give rise to flexible object-to-goal directed behavior. Copyright © 2010 Wiley-Liss, Inc.

  5. The differential effects of ecstasy/polydrug use on executive components: shifting, inhibition, updating and access to semantic memory.

    PubMed

    Montgomery, Catharine; Fisk, John E; Newcombe, Russell; Murphy, Phillip N

    2005-10-01

    Recent theoretical models suggest that the central executive may not be a unified structure. The present study explored the nature of central executive deficits in ecstasy users. In study 1, 27 ecstasy users and 34 non-users were assessed using tasks to tap memory updating (computation span; letter updating) and access to long-term memory (a semantic fluency test and the Chicago Word Fluency Test). In study 2, 51 ecstasy users and 42 non-users completed tasks that assess mental set switching (number/letter and plus/minus) and inhibition (random letter generation). MANOVA revealed that ecstasy users performed worse on both tasks used to assess memory updating and on tasks to assess access to long-term memory (C- and S-letter fluency). However, notwithstanding the significant ecstasy group-related effects, indices of cocaine and cannabis use were also significantly correlated with most of the executive measures. Unexpectedly, in study 2, ecstasy users performed significantly better on the inhibition task, producing more letters than non-users. No group differences were observed on the switching tasks. Correlations between indices of ecstasy use and number of letters produced were significant. The present study provides further support for ecstasy/polydrug-related deficits in memory updating and in access to long-term memory. The surplus evident on the inhibition task should be treated with some caution, as this was limited to a single measure and has not been supported by our previous work.

  6. An Interactive Multimedia Learning Environment for VLSI Built with COSMOS

    ERIC Educational Resources Information Center

    Angelides, Marios C.; Agius, Harry W.

    2002-01-01

    This paper presents Bigger Bits, an interactive multimedia learning environment that teaches students about VLSI within the context of computer electronics. The system was built with COSMOS (Content Oriented semantic Modelling Overlay Scheme), which is a modelling scheme that we developed for enabling the semantic content of multimedia to be used…

  7. Time Travel: The Role of Temporality in Enabling Semantic Waves in Secondary School Teaching

    ERIC Educational Resources Information Center

    Matruglio, Erika; Maton, Karl; Martin, J. R.

    2013-01-01

    Based on the theoretical understandings from Legitimation Code Theory (Maton, 2013) and Systemic Functional Linguistics (Martin, 2013) underpinning the research discussed in this special issue, this paper focuses on classroom pedagogy to illustrate an important strategy for making semantic waves in History teaching, namely "temporal shifting". We…

  8. Supporting Student Research with Semantic Technologies and Digital Archives

    ERIC Educational Resources Information Center

    Martinez-Garcia, Agustina; Corti, Louise

    2012-01-01

    This article discusses how the idea of higher education students as producers of knowledge rather than consumers can be operationalised by means of student research projects, in which processes of research archiving and analysis are enabled through the use of semantic technologies. It discusses how existing digital repository frameworks can be…

  9. Abstraction and generalization in statistical learning: implications for the relationship between semantic types and episodic tokens

    PubMed Central

    2017-01-01

    Statistical approaches to emergent knowledge have tended to focus on the process by which experience of individual episodes accumulates into generalizable experience across episodes. However, there is a seemingly opposite, but equally critical, process that such experience affords: the process by which, from a space of types (e.g. onions—a semantic class that develops through exposure to individual episodes involving individual onions), we can perceive or create, on-the-fly, a specific token (a specific onion, perhaps one that is chopped) in the absence of any prior perceptual experience with that specific token. This article reviews a selection of statistical learning studies that lead to the speculation that this process—the generation, on the basis of semantic memory, of a novel episodic representation—is itself an instance of a statistical, in fact associative, process. The article concludes that the same processes that enable statistical abstraction across individual episodes to form semantic memories also enable the generation, from those semantic memories, of representations that correspond to individual tokens, and of novel episodic facts about those tokens. Statistical learning is a window onto these deeper processes that underpin cognition. This article is part of the themed issue ‘New frontiers for statistical learning in the cognitive sciences’. PMID:27872378

  10. Abstraction and generalization in statistical learning: implications for the relationship between semantic types and episodic tokens.

    PubMed

    Altmann, Gerry T M

    2017-01-05

    Statistical approaches to emergent knowledge have tended to focus on the process by which experience of individual episodes accumulates into generalizable experience across episodes. However, there is a seemingly opposite, but equally critical, process that such experience affords: the process by which, from a space of types (e.g. onions-a semantic class that develops through exposure to individual episodes involving individual onions), we can perceive or create, on-the-fly, a specific token (a specific onion, perhaps one that is chopped) in the absence of any prior perceptual experience with that specific token. This article reviews a selection of statistical learning studies that lead to the speculation that this process-the generation, on the basis of semantic memory, of a novel episodic representation-is itself an instance of a statistical, in fact associative, process. The article concludes that the same processes that enable statistical abstraction across individual episodes to form semantic memories also enable the generation, from those semantic memories, of representations that correspond to individual tokens, and of novel episodic facts about those tokens. Statistical learning is a window onto these deeper processes that underpin cognition.This article is part of the themed issue 'New frontiers for statistical learning in the cognitive sciences'. © 2016 The Author(s).

  11. A cloud-based semantic wiki for user training in healthcare process management.

    PubMed

    Papakonstantinou, D; Poulymenopoulou, M; Malamateniou, F; Vassilacopoulos, G

    2011-01-01

    Successful healthcare process design requires active participation of users who are familiar with the cooperative and collaborative nature of healthcare delivery, expressed in terms of healthcare processes. Hence, reusable, flexible, agile and adaptable training material is needed, with the objective of enabling users to instill their knowledge and expertise into healthcare process management and (re)configuration activities. To this end, social software such as a wiki could be used, as it supports cooperation and collaboration anytime and anywhere, combined with semantic web technology that enables structuring pieces of information for easy retrieval, reuse and exchange between different systems and tools. In this paper a semantic wiki is presented as a means for developing training material for healthcare providers regarding healthcare process management. The semantic wiki should act as a collective online memory containing training material that is accessible to authorized users, thus enhancing the training process with collaboration and cooperation capabilities. It is proposed that the wiki be stored in a secure virtual private cloud that is accessible from anywhere, even from an excessively open environment, while meeting the requirements of redundancy, high performance and autoscaling.

  12. Effects of perceptual and semantic cues on ERP modulations associated with prospective memory.

    PubMed

    Cousens, Ross; Cutmore, Timothy; Wang, Ya; Wilson, Jennifer; Chan, Raymond C K; Shum, David H K

    2015-10-01

    Prospective memory involves the formation and execution of intended actions and is essential for autonomous living. In this study (N=32), the effect of the nature of PM cues (semantic versus perceptual) on established event-related potentials (ERPs) elicited in PM tasks (N300 and prospective positivity) was investigated. PM cues defined by their perceptual features clearly elicited the N300 and prospective positivity whereas PM cues defined by semantic relatedness elicited prospective positivity. This calls into question the view that the N300 is a marker of general processes underlying detection of PM cues, but supports existing research showing that prospective positivity represents general post-retrieval processes that follow detection of PM cues. Continued refinement of ERP paradigms for understanding the neural correlates of PM is needed. Copyright © 2015 Elsevier B.V. All rights reserved.

  13. Compiler analysis for irregular problems in FORTRAN D

    NASA Technical Reports Server (NTRS)

    Vonhanxleden, Reinhard; Kennedy, Ken; Koelbel, Charles; Das, Raja; Saltz, Joel

    1992-01-01

    We developed a dataflow framework which provides a basis for rigorously defining strategies to make use of runtime preprocessing methods for distributed memory multiprocessors. In many programs, several loops access the same off-processor memory locations. Our runtime support gives us a mechanism for tracking and reusing copies of off-processor data. A key aspect of our compiler analysis strategy is to determine when it is safe to reuse copies of off-processor data. Another crucial function of the compiler analysis is to identify situations which allow runtime preprocessing overheads to be amortized. This dataflow analysis will make it possible to effectively use the results of interprocedural analysis in our efforts to reduce interprocessor communication and the need for runtime preprocessing.
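
    A much-simplified version of the "is it safe to reuse the gathered copy?" question can be phrased as an availability analysis over a sequence of loops: a previously gathered off-processor copy stays valid until some loop writes the underlying array. The sketch below illustrates that idea only and is not the FORTRAN D compiler analysis.

```python
# Simplified "available copies" analysis over a straight-line sequence of loops.
# Each loop declares which arrays it reads off-processor and which it writes.
# A gathered copy can be reused if the array has not been written since the gather.
# Illustration only; not the actual FORTRAN D interprocedural analysis.

loops = [
    {"name": "L1", "reads_offproc": {"x"},      "writes": set()},
    {"name": "L2", "reads_offproc": {"x", "y"}, "writes": {"y"}},
    {"name": "L3", "reads_offproc": {"x", "y"}, "writes": set()},
]

available = set()          # arrays whose off-processor copies are still valid
for loop in loops:
    need_gather = loop["reads_offproc"] - available
    print(f'{loop["name"]}: gather {sorted(need_gather) or "nothing"} '
          f'(reuse {sorted(loop["reads_offproc"] & available) or "nothing"})')
    available |= loop["reads_offproc"]      # copies now exist for these arrays
    available -= loop["writes"]             # writes invalidate the gathered copies
```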

  14. Artificial Intelligence-Based Semantic Internet of Things in a User-Centric Smart City

    PubMed Central

    Guo, Kun; Lu, Yueming; Gao, Hui; Cao, Ruohan

    2018-01-01

    Smart city (SC) technologies can provide appropriate services according to citizens’ demands. One of the key enablers in an SC is the Internet of Things (IoT) technology, which enables a massive number of devices to connect with each other. However, these devices usually come from different manufacturers with different product standards, which gives rise to interactive control problems. Moreover, these devices will produce large amounts of data, and efficiently analyzing these data to provide intelligent services is a challenge. In this paper, we propose a novel artificial intelligence-based semantic IoT (AI-SIoT) hybrid service architecture to integrate heterogeneous IoT devices to support intelligent services. In particular, the proposed architecture is empowered by semantic and AI technologies, which enable flexible connections among heterogeneous devices. The AI technology supports efficient data analysis and accurate decisions on service provision of various kinds. Furthermore, we present several practical use cases of the proposed AI-SIoT architecture, and the opportunities and challenges of implementing the proposed AI-SIoT for future SCs are also discussed. PMID:29701679

  15. Artificial Intelligence-Based Semantic Internet of Things in a User-Centric Smart City.

    PubMed

    Guo, Kun; Lu, Yueming; Gao, Hui; Cao, Ruohan

    2018-04-26

    Smart city (SC) technologies can provide appropriate services according to citizens’ demands. One of the key enablers in an SC is the Internet of Things (IoT) technology, which enables a massive number of devices to connect with each other. However, these devices usually come from different manufacturers with different product standards, which gives rise to interactive control problems. Moreover, these devices will produce large amounts of data, and efficiently analyzing these data to provide intelligent services is a challenge. In this paper, we propose a novel artificial intelligence-based semantic IoT (AI-SIoT) hybrid service architecture to integrate heterogeneous IoT devices to support intelligent services. In particular, the proposed architecture is empowered by semantic and AI technologies, which enable flexible connections among heterogeneous devices. The AI technology supports efficient data analysis and accurate decisions on service provision of various kinds. Furthermore, we present several practical use cases of the proposed AI-SIoT architecture, and the opportunities and challenges of implementing the proposed AI-SIoT for future SCs are also discussed.

  16. Linking DICOM pixel data with radiology reports using automatic semantic annotation

    NASA Astrophysics Data System (ADS)

    Pathak, Sayan D.; Kim, Woojin; Munasinghe, Indeera; Criminisi, Antonio; White, Steve; Siddiqui, Khan

    2012-02-01

    Improved access to DICOM studies to both physicians and patients is changing the ways medical imaging studies are visualized and interpreted beyond the confines of radiologists' PACS workstations. While radiologists are trained for viewing and image interpretation, a non-radiologist physician relies on the radiologists' reports. Consequently, patients historically have been typically informed about their imaging findings via oral communication with their physicians, even though clinical studies have shown that patients respond to physician's advice significantly better when the individual patients are shown their own actual data. Our previous work on automated semantic annotation of DICOM Computed Tomography (CT) images allows us to further link radiology report with the corresponding images, enabling us to bridge the gap between image data with the human interpreted textual description of the corresponding imaging studies. The mapping of radiology text is facilitated by natural language processing (NLP) based search application. When combined with our automated semantic annotation of images, it enables navigation in large DICOM studies by clicking hyperlinked text in the radiology reports. An added advantage of using semantic annotation is the ability to render the organs to their default window level setting thus eliminating another barrier to image sharing and distribution. We believe such approaches would potentially enable the consumer to have access to their imaging data and navigate them in an informed manner.
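
    The report-to-image linking can be pictured as a join between NLP-extracted anatomy mentions and the semantic labels attached to slice ranges. The sketch below is a purely illustrative toy with an invented annotation table and report; it is not the paper's NLP or annotation pipeline.

```python
# Toy join between anatomy terms found in a radiology report and
# semantically annotated slice ranges of a CT series.  The annotation table
# and report text are invented; this is not the paper's pipeline.

slice_annotations = {          # organ label -> (first slice, last slice)
    "liver":  (41, 78),
    "spleen": (63, 85),
    "kidney": (70, 110),
}

report = "Mild hepatomegaly of the liver. The spleen is unremarkable."

def hyperlink_report(text, annotations):
    links = {}
    for organ, (lo, hi) in annotations.items():
        if organ in text.lower():
            # In a viewer this would become a hyperlink that jumps to slice `lo`
            # and applies the organ's default window/level preset.
            links[organ] = f"series/1/slices/{lo}-{hi}"
    return links

print(hyperlink_report(report, slice_annotations))
# -> {'liver': 'series/1/slices/41-78', 'spleen': 'series/1/slices/63-85'}
```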

  17. Mediation, Alignment, and Information Services for Semantic interoperability (MAISSI): A Trade Study

    DTIC Science & Technology

    2007-06-01

    Modeling Notation (BPMN) • Business Process Definition Metamodel (BPDM) A Business Process (BP) is a defined sequence of steps to be executed in...enterprise applications, to evaluate the capabilities of suppliers, and to compare against the competition. BPMN standardizes flowchart diagrams that

  18. iSMART: Ontology-based Semantic Query of CDA Documents

    PubMed Central

    Liu, Shengping; Ni, Yuan; Mei, Jing; Li, Hanyu; Xie, Guotong; Hu, Gang; Liu, Haifeng; Hou, Xueqiao; Pan, Yue

    2009-01-01

    The Health Level 7 Clinical Document Architecture (CDA) is widely accepted as the format for electronic clinical document. With the rich ontological references in CDA documents, the ontology-based semantic query could be performed to retrieve CDA documents. In this paper, we present iSMART (interactive Semantic MedicAl Record reTrieval), a prototype system designed for ontology-based semantic query of CDA documents. The clinical information in CDA documents will be extracted into RDF triples by a declarative XML to RDF transformer. An ontology reasoner is developed to infer additional information by combining the background knowledge from SNOMED CT ontology. Then an RDF query engine is leveraged to enable the semantic queries. This system has been evaluated using the real clinical documents collected from a large hospital in southern China. PMID:20351883
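
    The extract-to-RDF-then-query pattern can be shown with rdflib; the tiny graph and the diagnosis code below are placeholders for illustration, not actual iSMART output, and the SNOMED CT subsumption reasoning step is omitted.

```python
# Sketch of the extract-to-RDF-then-query pattern with rdflib.
# The triples and code below are invented placeholders, not real iSMART output.
from rdflib import Graph, Namespace, Literal, RDF

EX = Namespace("http://example.org/cda#")      # hypothetical namespace

g = Graph()
g.bind("ex", EX)
# Triples as they might be produced by an XML-to-RDF transform of a CDA document.
g.add((EX.doc42, RDF.type, EX.ClinicalDocument))
g.add((EX.doc42, EX.hasDiagnosisCode, Literal("73211009")))   # placeholder code
g.add((EX.doc42, EX.patientAge, Literal(57)))

# Ontology-aware retrieval would widen this query via SNOMED CT subsumption;
# here we show only the plain SPARQL step.
results = g.query("""
    PREFIX ex: <http://example.org/cda#>
    SELECT ?doc WHERE {
        ?doc a ex:ClinicalDocument ;
             ex:hasDiagnosisCode "73211009" .
    }
""")
for (doc,) in results:
    print(doc)
```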

  19. Text-Content-Analysis based on the Syntactic Correlations between Ontologies

    NASA Astrophysics Data System (ADS)

    Tenschert, Axel; Kotsiopoulos, Ioannis; Koller, Bastian

    The work presented in this chapter is concerned with the analysis of semantic knowledge structures, represented in the form of Ontologies, through which Service Level Agreements (SLAs) are enriched with new semantic data. The objective of the enrichment process is to enable SLA negotiation in a way that is much more convenient for Service Users. For this purpose, the deployment of an SLA-Management-System as well as the development of an analyzing procedure for Ontologies is required. This chapter refers to the BREIN, FinGrid and LarKC projects. The analyzing procedure examines the syntactic correlations of several Ontologies whose focus lies in the field of mechanical engineering. A method of analyzing text and content is developed as part of this procedure. In order to do so, we introduce a formalism as well as a method for understanding content. The analysis and methods are integrated into an SLA-Management-System which enables a Service User to interact with the system as a service, negotiating the user's requests and incorporating the semantic knowledge. Through negotiation between Service User and Service Provider, the analysis procedure takes the user's requests into account by extending the SLAs with semantic knowledge. In this way, the economic value of an SLA-Management-System is increased by the enhancement of SLAs with semantic knowledge structures. The main focus of this chapter is the analyzing procedure, that is, the Text-Content-Analysis, which provides the mentioned semantic knowledge structures.

  20. Convergence of semantics and emotional expression within the IFG pars orbitalis.

    PubMed

    Belyk, Michel; Brown, Steven; Lim, Jessica; Kotz, Sonja A

    2017-08-01

    Humans communicate through a combination of linguistic and emotional channels, including propositional speech, writing, sign language, music, but also prosodic, facial, and gestural expression. These channels can be interpreted separately or they can be integrated to multimodally convey complex meanings. Neural models of the perception of semantics and emotion include nodes for both functions in the inferior frontal gyrus pars orbitalis (IFGorb). However, it is not known whether this convergence involves a common functional zone or instead specialized subregions that process semantics and emotion separately. To address this, we performed Kernel Density Estimation meta-analyses of published neuroimaging studies of the perception of semantics or emotion that reported activation in the IFGorb. The results demonstrated that the IFGorb contains two zones with distinct functional profiles. A lateral zone, situated immediately ventral to Broca's area, was implicated in both semantics and emotion. Another zone, deep within the ventral frontal operculum, was engaged almost exclusively by studies of emotion. Follow-up analysis using Meta-Analytic Connectivity Modeling demonstrated that both zones were frequently co-activated with a common network of sensory, motor, and limbic structures, although the lateral zone had a greater association with prefrontal cortical areas involved in executive function. The status of the lateral IFGorb as a point of convergence between the networks for processing semantic and emotional content across modalities of communication is intriguing since this structure is preserved across primates with limited semantic abilities. Hence, the IFGorb may have initially evolved to support the comprehension of emotional signals, being later co-opted to support semantic communication in humans by forming new connections with brain regions that formed the human semantic network. Copyright © 2017 Elsevier Inc. All rights reserved.

  1. The effects of age and education on executive functioning and oral naming performance in Greek Cypriot adults: the neurocognitive study for the aging.

    PubMed

    Constantinidou, Fofi; Christodoulou, Marianna; Prokopiou, Juliana

    2012-01-01

    Age, educational experiences, language and culture can affect linguistic-cognitive performance. This is the first systematic study investigating linguistic-cognitive aging in Greek Cypriot adults focusing on executive functioning (EF) and oral naming performance. Three hundred and fifty-nine participants were included, a group of young-old, aged 60-75 years (n = 231), and a group of old-old participants, aged 76 years and older (n = 128). Participants in each age group were divided into three education groups: 0-4 years (n = 50), 5-9 years (n = 198), and 10 years of education and higher (n = 111). Participants were administered 5 measures of EF along with measures of receptive vocabulary and confrontational naming. There was a significant relationship between the EF composite score and all language measures. MANOVA (α = 0.05) indicated significant age and education effects on most measures of EF and language. Performance on receptive vocabulary and cognitive shift remained stable across age groups, but was mediated by education. Education plays a significant role in all measures requiring semantic organization, speed of information processing, cognitive shift, mental flexibility, receptive vocabulary and confrontational naming. Furthermore, strategic thinking has a role in semantic knowledge, word retrieval and semantic access in healthy aging. We conclude with clinical implications and assessment considerations in aphasia. Copyright © 2012 S. Karger AG, Basel.

  2. The effect of motion content in action naming by Parkinson's disease patients.

    PubMed

    Herrera, Elena; Rodríguez-Ferreiro, Javier; Cuetos, Fernando

    2012-07-01

    The verb-specific impairment present in patients with motion-related neurological diseases has been argued to support the hypothesis that the processing of words referring to motion depends on neural activity in regions involved in motor planning and execution. We presented a group of Parkinson's disease (PD) patients with an action-naming task in order to test whether the prevalence of motion-related semantic content in different verbs influences their accuracy. Forty-nine PD patients and 19 healthy seniors participated in the study. All PD participants underwent a neurological and neuropsychological assessment to rule out dementia. Subjective ratings of the motion content level of 100 verbs were obtained from 14 young volunteers. Then, pictures corresponding to two subsets of 25 verbs with significantly different degrees of motor component were selected to be used in an action-naming task. Stimuli lists were matched on visual and psycholinguistic characteristics. An ANOVA revealed differences between groups. PD patients obtained poorer results in response to pictures with high motor content compared to those with low motor association. Nevertheless, this effect did not appear in the control group. The general linear mixed model analytic approach was applied to explore the influence of the degree of motion-related semantic content of each verb on the accuracy scores of the participants. The performance of PD patients appeared to be negatively affected by the level of motion-related semantic content associated with each verb. Our results provide compelling evidence of the relevance of brain areas related to planning and execution of movements in the retrieval of motion-related semantic content. Copyright © 2010 Elsevier Srl. All rights reserved.

  3. Ontology-based geospatial data query and integration

    USGS Publications Warehouse

    Zhao, T.; Zhang, C.; Wei, M.; Peng, Z.-R.

    2008-01-01

    Geospatial data sharing is an increasingly important subject as large amounts of data are produced by a variety of sources, stored in incompatible formats, and accessible through different GIS applications. Past efforts to enable sharing have produced standardized data formats such as GML and data access protocols such as Web Feature Service (WFS). While these standards help enable client applications to gain access to heterogeneous data stored in different formats from diverse sources, the usability of the access is limited due to the lack of data semantics encoded in the WFS feature types. Past research has used ontology languages to describe the semantics of geospatial data, but ontology-based queries cannot be applied directly to legacy data stored in databases or shapefiles, or to feature data in WFS services. This paper presents a method to enable ontology-based queries on spatial data available from WFS services and on data stored in databases. We do not create ontology instances explicitly and thus avoid the problems of data replication. Instead, user queries are rewritten into WFS getFeature requests and SQL queries to the database. The method also has the benefit of being able to utilize existing database, WFS, and GML tools while enabling queries based on ontology semantics. © 2008 Springer-Verlag Berlin Heidelberg.
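
    To make the query-rewriting idea concrete, the following is a minimal sketch (not the paper's actual rewrite rules): it maps a hypothetical ontology class and property onto a WFS feature type and a database table, then emits a getFeature URL and a parameterized SQL statement. The mapping table, vocabulary names, endpoint URL, and the CQL-style filter parameter are all illustrative assumptions.

    ```python
    """Sketch: rewrite a simple ontology-level filter into a WFS getFeature
    request and an SQL query.  Mappings, URIs, and the CQL-style filter
    parameter are hypothetical placeholders, not the paper's rewrite rules."""

    from urllib.parse import urlencode

    # Hypothetical mapping from ontology terms to a WFS feature type and a DB table.
    MAPPINGS = {
        "hydro:River": {
            "wfs_type": "topp:rivers",
            "table": "rivers",
            "props": {"hydro:name": "name", "hydro:lengthKm": "length_km"},
        },
    }

    def rewrite(onto_class, onto_prop, op, value, wfs_endpoint="http://example.org/wfs"):
        m = MAPPINGS[onto_class]
        col = m["props"][onto_prop]
        # WFS GetFeature request; the filter syntax here is a vendor-style CQL filter.
        params = urlencode({
            "service": "WFS", "version": "1.1.0", "request": "GetFeature",
            "typeName": m["wfs_type"], "CQL_FILTER": f"{col} {op} {value}",
        })
        wfs_url = f"{wfs_endpoint}?{params}"
        # Equivalent parameterized SQL against the legacy relational store.
        sql = (f"SELECT * FROM {m['table']} WHERE {col} {op} %s", (value,))
        return wfs_url, sql

    if __name__ == "__main__":
        url, (stmt, args) = rewrite("hydro:River", "hydro:lengthKm", ">", 100)
        print(url)
        print(stmt, args)
    ```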

  4. A semantically rich and standardised approach enhancing discovery of sensor data and metadata

    NASA Astrophysics Data System (ADS)

    Kokkinaki, Alexandra; Buck, Justin; Darroch, Louise

    2016-04-01

    The marine environment plays an essential role in the earth's climate. To enhance the ability to monitor the health of this important system, innovative sensors are being produced and combined with state-of-the-art sensor technology. As the number of sensors deployed is continually increasing, it is a challenge for data users to find the data that meet their specific needs. Furthermore, users need to integrate diverse ocean datasets originating from the same or even different systems. Standards provide a solution to the above mentioned challenges. The Open Geospatial Consortium (OGC) has created Sensor Web Enablement (SWE) standards that enable different sensor networks to establish syntactic interoperability. When combined with widely accepted controlled vocabularies, they become semantically rich and semantic interoperability is achievable. In addition, Linked Data is the recommended best practice for exposing, sharing and connecting information on the Semantic Web using Uniform Resource Identifiers (URIs), the Resource Description Framework (RDF) and the RDF Query Language (SPARQL). As part of the EU-funded SenseOCEAN project, the British Oceanographic Data Centre (BODC) is working on the standardisation of sensor metadata enabling 'plug and play' sensor integration. Our approach combines standards, controlled vocabularies and persistent URIs to publish sensor descriptions, their data and associated metadata as 5-star Linked Data and as OGC SWE (SensorML, Observations & Measurements) standards. Thus sensors become readily discoverable, accessible and usable via the web. Content- and context-based searching is also enabled, since sensor descriptions are understood by machines. Additionally, sensor data can be combined with other sensor or Linked Data datasets to form knowledge. This presentation will describe the work done in BODC to achieve syntactic and semantic interoperability in the sensor domain. It will illustrate the reuse and extension of the Semantic Sensor Network (SSN) ontology to the Linked Sensor Ontology (LSO) and the steps taken to combine OGC SWE with the Linked Data approach through alignment and embodiment of other ontologies. It will then explain how data and models were annotated with controlled vocabularies to establish unambiguous semantics and interconnect them with data from different sources. Finally, it will introduce the RDF triple store where the sensor descriptions and metadata are stored and can be queried through the standard query language SPARQL. Providing different flavours of machine-readable interpretations of sensors, sensor data and metadata enhances discoverability but, most importantly, allows seamless aggregation of information from different networks that will finally produce knowledge.
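
    To illustrate how such Linked Data sensor descriptions can be queried with SPARQL, here is a small self-contained sketch using rdflib; the vocabulary terms, URIs, and sensor details are simplified placeholders, not the actual BODC or Linked Sensor Ontology terms.

    ```python
    """Sketch: query a small, in-memory Linked Data description of a sensor
    with SPARQL via rdflib.  The URIs and property names are simplified
    placeholders, not the actual BODC / Linked Sensor Ontology terms."""

    from rdflib import Graph

    TURTLE = """
    @prefix ex:   <http://example.org/sensor/> .
    @prefix ssn:  <http://www.w3.org/ns/ssn/> .
    @prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .

    ex:ctd42 a ssn:System ;
        rdfs:label "CTD profiler #42" ;
        ex:observes ex:SeaWaterTemperature ;
        ex:manufacturer "ExampleCo" .
    """

    QUERY = """
    PREFIX ex:   <http://example.org/sensor/>
    PREFIX ssn:  <http://www.w3.org/ns/ssn/>
    PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
    SELECT ?sensor ?label WHERE {
        ?sensor a ssn:System ;
                rdfs:label ?label ;
                ex:observes ex:SeaWaterTemperature .
    }
    """

    g = Graph()
    g.parse(data=TURTLE, format="turtle")
    for row in g.query(QUERY):
        # Prints each matching sensor URI and its human-readable label.
        print(row.sensor, row.label)
    ```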

  5. Using Semantic Templates to Study Vulnerabilities Recorded in Large Software Repositories

    ERIC Educational Resources Information Center

    Wu, Yan

    2011-01-01

    Software vulnerabilities allow an attacker to reduce a system's Confidentiality, Availability, and Integrity by exposing information, executing malicious code, and undermining system functionalities that contribute to the overall system purpose and need. With new vulnerabilities discovered every day in a variety of applications and user environments,…

  6. The Episodic Buffer in Children with Intellectual Disabilities: An Exploratory Study

    ERIC Educational Resources Information Center

    Henry, Lucy A.

    2010-01-01

    Performance on three verbal measures (story recall, paired-associate learning, category fluency) designed to assess the integration of long-term semantic and linguistic knowledge, phonological working memory and executive resources within the proposed "episodic buffer" of working memory (Baddeley, 2007) was assessed in children with intellectual…

  7. A Semantic Grid Oriented to E-Tourism

    NASA Astrophysics Data System (ADS)

    Zhang, Xiao Ming

    With the increasing complexity of tourism business models and tasks, there is a clear need for a next-generation e-Tourism infrastructure to support flexible automation, integration, computation, storage, and collaboration. Currently, several enabling technologies such as the Semantic Web, Web services, agents and grid computing have been applied in different e-Tourism applications; however, there is no unified framework able to integrate all of them. This paper therefore presents a promising e-Tourism framework based on the emerging semantic grid, in which a number of key design issues are discussed, including architecture, ontology structure, semantic reconciliation, service and resource discovery, role-based authorization and intelligent agents. The paper finally describes the implementation of the framework.

  8. Automatic translation of MPI source into a latency-tolerant, data-driven form

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nguyen, Tan; Cicotti, Pietro; Bylaska, Eric

    Hiding communication behind useful computation is an important performance programming technique but remains an inscrutable programming exercise even for the expert. We present Bamboo, a code transformation framework that can realize communication overlap in applications written in MPI without the need to intrusively modify the source code. We reformulate MPI source into a task dependency graph representation, which partially orders the tasks, enabling the program to execute in a data-driven fashion under the control of an external runtime system. Experimental results demonstrate that Bamboo significantly reduces communication delays while requiring only modest amounts of programmer annotation for a variety of applications and platforms, including those employing co-processors and accelerators. Moreover, Bamboo's performance meets or exceeds that of labor-intensive hand coding. As a result, the translator is more than a means of hiding communication costs automatically; it demonstrates the utility of semantic level optimization against a well-known library.

  9. Automatic translation of MPI source into a latency-tolerant, data-driven form

    DOE PAGES

    Nguyen, Tan; Cicotti, Pietro; Bylaska, Eric; ...

    2017-03-06

    Hiding communication behind useful computation is an important performance programming technique but remains an inscrutable programming exercise even for the expert. We present Bamboo, a code transformation framework that can realize communication overlap in applications written in MPI without the need to intrusively modify the source code. We reformulate MPI source into a task dependency graph representation, which partially orders the tasks, enabling the program to execute in a data-driven fashion under the control of an external runtime system. Experimental results demonstrate that Bamboo significantly reduces communication delays while requiring only modest amounts of programmer annotation for a variety of applications and platforms, including those employing co-processors and accelerators. Moreover, Bamboo's performance meets or exceeds that of labor-intensive hand coding. As a result, the translator is more than a means of hiding communication costs automatically; it demonstrates the utility of semantic level optimization against a well-known library.

  10. Automatic translation of MPI source into a latency-tolerant, data-driven form

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nguyen, Tan; Cicotti, Pietro; Bylaska, Eric

    Hiding communication behind useful computation is an important performance programming technique but remains an inscrutable programming exercise even for the expert. We present Bamboo, a code transformation framework that can realize communication overlap in applications written in MPI without the need to intrusively modify the source code. Bamboo reformulates MPI source into the form of a task dependency graph that expresses a partial ordering among tasks, enabling the program to execute in a data-driven fashion under the control of an external runtime system. Experimental results demonstrate that Bamboo significantly reduces communication delays while requiring only modest amounts of programmer annotation for a variety of applications and platforms, including those employing co-processors and accelerators. Moreover, Bamboo's performance meets or exceeds that of labor-intensive hand coding. The translator is more than a means of hiding communication costs automatically; it demonstrates the utility of semantic level optimization against a well-known library.
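
    The three records above describe turning MPI code into a task dependency graph that executes in a message-driven fashion. The sketch below illustrates only that execution model in miniature (a task fires once all of its inputs have arrived); it is not Bamboo's runtime or API, and the task and message names are invented for illustration.

    ```python
    """Sketch of a message-driven task graph: a task runs only when every one
    of its inputs has arrived.  Illustrative only -- not Bamboo's runtime."""

    from collections import defaultdict

    class TaskGraph:
        def __init__(self):
            self.deps = {}                       # task -> input names still missing
            self.actions = {}                    # task -> callable(inputs dict)
            self.consumers = defaultdict(list)   # input name -> tasks waiting on it
            self.inbox = defaultdict(dict)       # task -> received inputs
            self.results = {}                    # task -> produced output

        def add_task(self, name, inputs, action):
            self.deps[name] = set(inputs)
            self.actions[name] = action
            for i in inputs:
                self.consumers[i].append(name)

        def deliver(self, message_name, payload):
            # A message arrival may make one or more tasks runnable (data-driven).
            for task in self.consumers[message_name]:
                self.inbox[task][message_name] = payload
                self.deps[task].discard(message_name)
                if not self.deps[task]:
                    result = self.actions[task](self.inbox[task])
                    self.results[task] = result
                    # The task's output is itself a message for downstream tasks.
                    self.deliver(task, result)

    g = TaskGraph()
    g.add_task("halo_exchange", ["west", "east"], lambda m: m["west"] + m["east"])
    g.add_task("stencil", ["halo_exchange"], lambda m: m["halo_exchange"] * 0.5)
    g.deliver("west", 1.0)
    g.deliver("east", 3.0)           # completes halo_exchange, which triggers stencil
    print(g.results["stencil"])      # 2.0
    ```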

  11. Crowd-Sourcing (Semantically) Structured Multilingual Educational Content (CoSMEC)

    ERIC Educational Resources Information Center

    Tarasowa, Darya; Auer, Sören; Khalili, Ali; Unbehauen, Jörg

    2014-01-01

    The support of multilingual content becomes crucial for educational platforms due to the benefits it offers. In this paper we propose a concept that allows content authors to use the power of the crowd to create (semantically) structured multilingual educational content out of their material. To enable the collaboration of the crowd, we expand our…

  12. Academic Libraries and the Semantic Web: What the Future May Hold for Research-Supporting Library Catalogues

    ERIC Educational Resources Information Center

    Campbell, D. Grant; Fast, Karl V.

    2004-01-01

    This paper examines how future metadata capabilities could enable academic libraries to exploit information on the emerging Semantic Web in their library catalogues. Whereas current metadata architectures treat the Web as a simple means of interchanging bibliographic data that have been created by libraries, this paper suggests that academic…

  13. Legacy2Drupal - Conversion of an existing oceanographic relational database to a semantically enabled Drupal content management system

    NASA Astrophysics Data System (ADS)

    Maffei, A. R.; Chandler, C. L.; Work, T.; Allen, J.; Groman, R. C.; Fox, P. A.

    2009-12-01

    Content Management Systems (CMSs) provide powerful features that can be of use to oceanographic (and other geo-science) data managers. However, in many instances, geo-science data management offices have previously designed customized schemas for their metadata. The WHOI Ocean Informatics initiative and the NSF-funded Biological and Chemical Oceanography Data Management Office (BCO-DMO) have jointly sponsored a project to port an existing relational database containing oceanographic metadata, along with an existing interface coded in Cold Fusion middleware, to a Drupal6 Content Management System. The goal was to translate all the existing database tables, input forms, website reports, and other features present in the existing system to employ Drupal CMS features. The replacement features include Drupal content types, CCK node-reference fields, themes, RDB, SPARQL, workflow, and a number of other supporting modules. Strategic use of some Drupal6 CMS features enables three separate but complementary interfaces that provide access to oceanographic research metadata via the MySQL database: 1) a Drupal6-powered front-end; 2) a standard SQL port (used to provide a Mapserver interface to the metadata and data); and 3) a SPARQL port (feeding a new faceted search capability being developed). Future plans include the creation of science ontologies, by scientist/technologist teams, that will drive semantically-enabled faceted search capabilities planned for the site. Incorporation of semantic technologies included in the future Drupal 7 core release is also anticipated. Using a public domain CMS as opposed to proprietary middleware, and taking advantage of the many features of Drupal 6 that are designed to support semantically-enabled interfaces, will help prepare the BCO-DMO database for interoperability with other ecosystem databases.

  14. Automated Predictive Big Data Analytics Using Ontology Based Semantics.

    PubMed

    Nural, Mustafa V; Cotterell, Michael E; Peng, Hao; Xie, Rui; Ma, Ping; Miller, John A

    2015-10-01

    Predictive analytics in the big data era is taking on an increasingly important role. Issues related to the choice of modeling technique, estimation procedure (or algorithm) and efficient execution can present significant challenges. For example, selection of appropriate and optimal models for big data analytics often requires careful investigation and considerable expertise which might not always be readily available. In this paper, we propose to use semantic technology to assist data analysts and data scientists in selecting appropriate modeling techniques and building specific models, as well as the rationale for the techniques and models selected. To formally describe the modeling techniques, models and results, we developed the Analytics Ontology, which supports inferencing for semi-automated model selection. The SCALATION framework, which currently supports over thirty modeling techniques for predictive big data analytics, is used as a testbed for evaluating the use of semantic technology.

  15. Automated Predictive Big Data Analytics Using Ontology Based Semantics

    PubMed Central

    Nural, Mustafa V.; Cotterell, Michael E.; Peng, Hao; Xie, Rui; Ma, Ping; Miller, John A.

    2017-01-01

    Predictive analytics in the big data era is taking on an increasingly important role. Issues related to the choice of modeling technique, estimation procedure (or algorithm) and efficient execution can present significant challenges. For example, selection of appropriate and optimal models for big data analytics often requires careful investigation and considerable expertise which might not always be readily available. In this paper, we propose to use semantic technology to assist data analysts and data scientists in selecting appropriate modeling techniques and building specific models, as well as the rationale for the techniques and models selected. To formally describe the modeling techniques, models and results, we developed the Analytics Ontology, which supports inferencing for semi-automated model selection. The SCALATION framework, which currently supports over thirty modeling techniques for predictive big data analytics, is used as a testbed for evaluating the use of semantic technology. PMID:29657954
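
    The two records above describe semi-automated model selection driven by declarative knowledge about modeling techniques. The toy sketch below illustrates the general idea only: techniques declare the problem profiles they suit, and a matcher suggests candidates. The descriptors, rules, and technique names are hypothetical stand-ins; the actual Analytics Ontology and SCALATION's technique catalogue are not reproduced.

    ```python
    """Toy sketch of semi-automated model selection from declarative technique
    descriptions.  Descriptors and rules are hypothetical stand-ins, not the
    Analytics Ontology or the SCALATION framework."""

    # Each technique declares the kind of problem it suits -- roughly the sort
    # of fact an analytics ontology would encode.
    TECHNIQUES = [
        {"name": "SimpleRegression",   "response": "continuous", "predictors": "single",   "linear": True},
        {"name": "MultipleRegression", "response": "continuous", "predictors": "multiple", "linear": True},
        {"name": "LogisticRegression", "response": "binary",     "predictors": "multiple", "linear": True},
        {"name": "RegressionTree",     "response": "continuous", "predictors": "multiple", "linear": False},
    ]

    def suggest(profile):
        """Return techniques whose declared capabilities match the dataset profile."""
        out = []
        for t in TECHNIQUES:
            if t["response"] != profile["response"]:
                continue
            if t["predictors"] != profile["predictors"]:
                continue
            # If the analyst asserts linearity, restrict to linear models.
            if profile.get("assume_linear") and not t["linear"]:
                continue
            out.append(t["name"])
        return out

    profile = {"response": "continuous", "predictors": "multiple", "assume_linear": False}
    print(suggest(profile))   # ['MultipleRegression', 'RegressionTree'] -> candidates to rank further
    ```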

  16. Dynamic User Interfaces for Service Oriented Architectures in Healthcare.

    PubMed

    Schweitzer, Marco; Hoerbst, Alexander

    2016-01-01

    Electronic Health Records (EHRs) play a crucial role in healthcare today. Considering a data-centric view, EHRs are very advanced as they provide and share healthcare data in a cross-institutional and patient-centered way, adhering to high syntactic and semantic interoperability. However, the EHR functionalities available to end users are few and hence often limited to basic document query functions. Future EHR use requires letting users define, for a given situation, which data they need and how these data should be processed. Workflow and semantic modelling approaches as well as Web services provide means to fulfil such a goal. This thesis develops concepts for dynamic interfaces between EHR end users and a service-oriented eHealth infrastructure, which allow users to model their flexible EHR needs in a dynamic and formal way. These models are used to discover, compose and execute the right Semantic Web services.

  17. Mechanisms of remembering the past and imagining the future--new data from autobiographical memory tasks in a lifespan approach.

    PubMed

    Abram, M; Picard, L; Navarro, B; Piolino, P

    2014-10-01

    We investigated the episodic/semantic distinction in remembering the past and imagining the future and explored cognitive mechanisms predicting events' specificity throughout the lifespan. Eighty-three 6- to 81-year-old participants, divided into 5 age groups, underwent past, present and future episodic (events' evocation) and semantic (self-descriptions) autobiographical tasks and a complementary cognitive test battery (executive functions, working and episodic memory). The main results showed age effects on episodic events' evocation indicating an inverted U function (i.e., developmental progression from 6 to 21years and aging decline). By contrast, age effects were slighter on self-descriptions while self-defining events' evocation increased with age. Furthermore, age effects on episodic events' evocation were mainly mediated by age effects on cognitive functions and personal semantics. These new findings indicate a developmental and aging episodic/semantic distinction for both remembering the past and imagining the future, and suggest that above similarities, these abilities could have a fundamentally different basis. Copyright © 2014 Elsevier Inc. All rights reserved.

  18. Monitoring Data-Structure Evolution in Distributed Message-Passing Programs

    NASA Technical Reports Server (NTRS)

    Sarukkai, Sekhar R.; Beers, Andrew; Woodrow, Thomas S. (Technical Monitor)

    1996-01-01

    Monitoring the evolution of data structures in parallel and distributed programs is critical for debugging their semantics and performance. However, the current state of the art in tracking and presenting data-structure information on parallel and distributed environments is cumbersome and does not scale. In this paper we present a methodology that automatically tracks memory bindings (not the actual contents) of static and dynamic data structures of message-passing C programs, using PVM. With the help of a number of examples we show that, in addition to determining the impact of memory allocation overheads on program performance, graphical views can help in debugging the semantics of program execution. Scalable animations of virtual address bindings of source-level data structures are used for debugging the semantics of parallel programs across all processors. In conjunction with lightweight core files, this technique can be used to complement traditional debuggers on single processors. Detailed information (such as data-structure contents) on specific nodes can be determined using traditional debuggers after the data-structure evolution leading to the semantic error is observed graphically.

  19. The Impact of Alerting Designs on Air Traffic Controller's Eye Movement Patterns and Situation Awareness.

    PubMed

    Kearney, Peter; Li, Wen-Chin; Yu, Chung-San; Braithwaite, Graham

    2018-06-26

    This research investigated controllers' situation awareness by comparing COOPANS's acoustic alerts with newly designed semantic alerts. The results demonstrate that ATCOs' visual scan patterns differed significantly between the acoustic and semantic designs. ATCOs established different eye-movement patterns in fixation number, fixation duration and saccade velocity. Effective decision support systems require human-centred design with effective stimuli to direct the ATCO's attention to critical events. It is necessary to provide ATCOs with specific alerting information that reflects the nature of the critical situation in order to minimize the side effects of startle and inattentional deafness. Consequently, the design of a semantic alert can significantly reduce ATCOs' response time, therefore providing valuable extra time in a time-limited situation to formulate and execute resolution strategies in critical air safety events. The findings of this research indicate that the context-specific design of semantic alerts could improve ATCOs' situational awareness and significantly reduce response time in the event of Short Term Conflict Alert activation, which alerts to two aircraft having less than the required lateral or vertical separation.

  20. Thread safe astronomy

    NASA Astrophysics Data System (ADS)

    Seaman, R.

    2008-03-01

    Observational astronomy is the beneficiary of an ancient chain of apprenticeship. Kepler's laws required Tycho's data. As the pace of discoveries has increased over the centuries, so has the cadence of tutelage (literally, "watching over"). Naked eye astronomy is thousands of years old, the telescope hundreds, digital imaging a few decades, but today's undergraduates will use instrumentation yet unbuilt - and thus, unfamiliar to their professors - to complete their doctoral dissertations. Not only has the quickening cadence of astronomical data-taking overrun the apprehension of the science within, but the contingent pace of experimental design threatens our capacity to learn new techniques and apply them productively. Virtual technologies are necessary to accelerate our human processes of perception and comprehension to keep up with astronomical instrumentation and pipelined dataflows. Necessary, but not sufficient. Computers can confuse us as efficiently as they illuminate. Rather, as with neural pathways evolved to meet competitive ecological challenges, astronomical software and data must become organized into ever more coherent `threads' of execution. These are the same threaded constructs as understood by computer science. No datum is an island.

  1. Semantic markup of sensor capabilities: how simple is too simple?

    NASA Astrophysics Data System (ADS)

    Rueda-Velasquez, C. A.; Janowicz, K.; Fredericks, J.

    2016-12-01

    Semantics plays a key role for the publication, retrieval, integration, and reuse of observational data across the geosciences. In most cases, one can safely assume that the providers of such data, e.g., individual scientists, understand the observation context in which their data are collected, e.g., the observation procedure used, the sampling strategy, the feature of interest being studied, and so forth. However, can we expect that the same is true for the technical details of the used sensors, and especially the nuanced changes that can impact observations in often unpredictable ways? Should the burden of annotating the sensor capabilities, firmware, operation ranges, and so forth really be part of a scientist's responsibility? Ideally, semantic annotations should be provided by the parties that understand these details and have a vested interest in maintaining these data. With manufacturers providing semantically-enabled metadata for their sensors and instruments, observations could more easily be annotated and thereby enriched using this information. Unfortunately, today's sensor ontologies and tool chains developed for the Semantic Web community require expertise beyond the knowledge and interest of most manufacturers. Consequently, knowledge engineers need to better understand the sweet spot between simple ontologies/vocabularies and sufficient expressivity, as well as the tools required to enable manufacturers to share data about their sensors. Here, we report on the current results of EarthCube's X-Domes project that aims to address the questions outlined above.

  2. The Semantic Network at Work and Rest: Differential Connectivity of Anterior Temporal Lobe Subregions.

    PubMed

    Jackson, Rebecca L; Hoffman, Paul; Pobric, Gorana; Lambon Ralph, Matthew A

    2016-02-03

    The anterior temporal lobe (ATL) makes a critical contribution to semantic cognition. However, the functional connectivity of the ATL and the functional network underlying semantic cognition has not been elucidated. In addition, subregions of the ATL have distinct functional properties and thus the potential differential connectivity between these subregions requires investigation. We explored these aims using both resting-state and active semantic task data in humans in combination with a dual-echo gradient echo planar imaging (EPI) paradigm designed to ensure signal throughout the ATL. In the resting-state analysis, the ventral ATL (vATL) and anterior middle temporal gyrus (MTG) were shown to connect to areas responsible for multimodal semantic cognition, including bilateral ATL, inferior frontal gyrus, medial prefrontal cortex, angular gyrus, posterior MTG, and medial temporal lobes. In contrast, the anterior superior temporal gyrus (STG)/superior temporal sulcus was connected to a distinct set of auditory and language-related areas, including bilateral STG, precentral and postcentral gyri, supplementary motor area, supramarginal gyrus, posterior temporal cortex, and inferior and middle frontal gyri. Complementary analyses of functional connectivity during an active semantic task were performed using a psychophysiological interaction (PPI) analysis. The PPI analysis highlighted the same semantic regions suggesting a core semantic network active during rest and task states. This supports the necessity for semantic cognition in internal processes occurring during rest. The PPI analysis showed additional connectivity of the vATL to regions of occipital and frontal cortex. These areas strongly overlap with regions found to be sensitive to executively demanding, controlled semantic processing. Previous studies have shown that semantic cognition depends on subregions of the anterior temporal lobe (ATL). However, the network of regions functionally connected to these subregions has not been demarcated. Here, we show that these ventrolateral anterior temporal subregions form part of a network responsible for semantic processing during both rest and an explicit semantic task. This demonstrates the existence of a core functional network responsible for multimodal semantic cognition regardless of state. Distinct connectivity is identified in the superior ATL, which is connected to auditory and language areas. Understanding the functional connectivity of semantic cognition allows greater understanding of how this complex process may be performed and the role of distinct subregions of the anterior temporal cortex. Copyright © 2016 Jackson et al.

  3. The Semantic Network at Work and Rest: Differential Connectivity of Anterior Temporal Lobe Subregions

    PubMed Central

    Jackson, Rebecca L.; Hoffman, Paul; Pobric, Gorana

    2016-01-01

    The anterior temporal lobe (ATL) makes a critical contribution to semantic cognition. However, the functional connectivity of the ATL and the functional network underlying semantic cognition has not been elucidated. In addition, subregions of the ATL have distinct functional properties and thus the potential differential connectivity between these subregions requires investigation. We explored these aims using both resting-state and active semantic task data in humans in combination with a dual-echo gradient echo planar imaging (EPI) paradigm designed to ensure signal throughout the ATL. In the resting-state analysis, the ventral ATL (vATL) and anterior middle temporal gyrus (MTG) were shown to connect to areas responsible for multimodal semantic cognition, including bilateral ATL, inferior frontal gyrus, medial prefrontal cortex, angular gyrus, posterior MTG, and medial temporal lobes. In contrast, the anterior superior temporal gyrus (STG)/superior temporal sulcus was connected to a distinct set of auditory and language-related areas, including bilateral STG, precentral and postcentral gyri, supplementary motor area, supramarginal gyrus, posterior temporal cortex, and inferior and middle frontal gyri. Complementary analyses of functional connectivity during an active semantic task were performed using a psychophysiological interaction (PPI) analysis. The PPI analysis highlighted the same semantic regions suggesting a core semantic network active during rest and task states. This supports the necessity for semantic cognition in internal processes occurring during rest. The PPI analysis showed additional connectivity of the vATL to regions of occipital and frontal cortex. These areas strongly overlap with regions found to be sensitive to executively demanding, controlled semantic processing. SIGNIFICANCE STATEMENT Previous studies have shown that semantic cognition depends on subregions of the anterior temporal lobe (ATL). However, the network of regions functionally connected to these subregions has not been demarcated. Here, we show that these ventrolateral anterior temporal subregions form part of a network responsible for semantic processing during both rest and an explicit semantic task. This demonstrates the existence of a core functional network responsible for multimodal semantic cognition regardless of state. Distinct connectivity is identified in the superior ATL, which is connected to auditory and language areas. Understanding the functional connectivity of semantic cognition allows greater understanding of how this complex process may be performed and the role of distinct subregions of the anterior temporal cortex. PMID:26843633

  4. Common world model for unmanned systems

    NASA Astrophysics Data System (ADS)

    Dean, Robert Michael S.

    2013-05-01

    The Robotic Collaborative Technology Alliance (RCTA) seeks to provide adaptive robot capabilities which move beyond traditional metric algorithms to include cognitive capabilities. Key to this effort is the Common World Model, which moves beyond the state-of-the-art by representing the world using metric, semantic, and symbolic information. It joins these layers of information to define objects in the world. These objects may be reasoned upon jointly using traditional geometric, symbolic cognitive algorithms and new computational nodes formed by the combination of these disciplines. The Common World Model must understand how these objects relate to each other. Our world model includes the concept of Self-Information about the robot. By encoding current capability, component status, task execution state, and histories we track information which enables the robot to reason and adapt its performance using Meta-Cognition and Machine Learning principles. The world model includes models of how aspects of the environment behave, which enable prediction of future world states. To manage complexity, we adopted a phased implementation approach to the world model. We discuss the design of "Phase 1" of this world model, and interfaces by tracing perception data through the system from the source to the meta-cognitive layers provided by ACT-R and SS-RICS. We close with lessons learned from implementation and how the design relates to Open Architecture.
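
    As a rough illustration of joining metric, semantic, and symbolic layers in a single world-model entry, here is a minimal sketch; the field names and example values are assumptions for illustration, not the RCTA Common World Model's actual representation.

    ```python
    """Sketch of a world-model entry that joins metric, semantic, and symbolic
    layers.  Field names and values are illustrative only."""

    from dataclasses import dataclass, field

    @dataclass
    class WorldObject:
        # Metric layer: where the object is and how big it is (map frame, metres).
        position: tuple
        extent: tuple
        # Semantic layer: what kind of thing it is.
        category: str
        attributes: dict = field(default_factory=dict)
        # Symbolic layer: facts a cognitive planner can reason over.
        predicates: set = field(default_factory=set)

    door = WorldObject(
        position=(12.4, 3.1, 0.0),
        extent=(0.9, 0.1, 2.1),
        category="door",
        attributes={"state": "closed"},
        predicates={("closed", "door_7"), ("connects", "room_a", "room_b")},
    )

    # A symbolic planner queries the predicate layer, while a metric planner
    # uses position/extent for path planning on the same object.
    print(("closed", "door_7") in door.predicates)   # True
    ```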

  5. The Yin and the Yang of Prediction: An fMRI Study of Semantic Predictive Processing

    PubMed Central

    Weber, Kirsten; Lau, Ellen F.; Stillerman, Benjamin; Kuperberg, Gina R.

    2016-01-01

    Probabilistic prediction plays a crucial role in language comprehension. When predictions are fulfilled, the resulting facilitation allows for fast, efficient processing of ambiguous, rapidly-unfolding input; when predictions are not fulfilled, the resulting error signal allows us to adapt to broader statistical changes in this input. We used functional Magnetic Resonance Imaging to examine the neuroanatomical networks engaged in semantic predictive processing and adaptation. We used a relatedness proportion semantic priming paradigm, in which we manipulated the probability of predictions while holding local semantic context constant. Under conditions of higher (versus lower) predictive validity, we replicate previous observations of reduced activity to semantically predictable words in the left anterior superior/middle temporal cortex, reflecting facilitated processing of targets that are consistent with prior semantic predictions. In addition, under conditions of higher (versus lower) predictive validity we observed significant differences in the effects of semantic relatedness within the left inferior frontal gyrus and the posterior portion of the left superior/middle temporal gyrus. We suggest that together these two regions mediated the suppression of unfulfilled semantic predictions and lexico-semantic processing of unrelated targets that were inconsistent with these predictions. Moreover, under conditions of higher (versus lower) predictive validity, a functional connectivity analysis showed that the left inferior frontal and left posterior superior/middle temporal gyrus were more tightly interconnected with one another, as well as with the left anterior cingulate cortex. The left anterior cingulate cortex was, in turn, more tightly connected to superior lateral frontal cortices and subcortical regions—a network that mediates rapid learning and adaptation and that may have played a role in switching to a more predictive mode of processing in response to the statistical structure of the wider environmental context. Together, these findings highlight close links between the networks mediating semantic prediction, executive function and learning, giving new insights into how our brains are able to flexibly adapt to our environment. PMID:27010386

  6. The Yin and the Yang of Prediction: An fMRI Study of Semantic Predictive Processing.

    PubMed

    Weber, Kirsten; Lau, Ellen F; Stillerman, Benjamin; Kuperberg, Gina R

    2016-01-01

    Probabilistic prediction plays a crucial role in language comprehension. When predictions are fulfilled, the resulting facilitation allows for fast, efficient processing of ambiguous, rapidly-unfolding input; when predictions are not fulfilled, the resulting error signal allows us to adapt to broader statistical changes in this input. We used functional Magnetic Resonance Imaging to examine the neuroanatomical networks engaged in semantic predictive processing and adaptation. We used a relatedness proportion semantic priming paradigm, in which we manipulated the probability of predictions while holding local semantic context constant. Under conditions of higher (versus lower) predictive validity, we replicate previous observations of reduced activity to semantically predictable words in the left anterior superior/middle temporal cortex, reflecting facilitated processing of targets that are consistent with prior semantic predictions. In addition, under conditions of higher (versus lower) predictive validity we observed significant differences in the effects of semantic relatedness within the left inferior frontal gyrus and the posterior portion of the left superior/middle temporal gyrus. We suggest that together these two regions mediated the suppression of unfulfilled semantic predictions and lexico-semantic processing of unrelated targets that were inconsistent with these predictions. Moreover, under conditions of higher (versus lower) predictive validity, a functional connectivity analysis showed that the left inferior frontal and left posterior superior/middle temporal gyrus were more tightly interconnected with one another, as well as with the left anterior cingulate cortex. The left anterior cingulate cortex was, in turn, more tightly connected to superior lateral frontal cortices and subcortical regions-a network that mediates rapid learning and adaptation and that may have played a role in switching to a more predictive mode of processing in response to the statistical structure of the wider environmental context. Together, these findings highlight close links between the networks mediating semantic prediction, executive function and learning, giving new insights into how our brains are able to flexibly adapt to our environment.

  7. System Definition Document

    DOT National Transportation Integrated Search

    1996-06-12

    The Gary-Chicago-Milwaukee (GCM) Corridor Transportation Information Center (C-TIC) System Definition Document describes the C-TIC concept and defines the high-level processes and dataflows. The Requirements Specification together with the Inte...

  8. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yan, Rui; Praggastis, Brenda L.; Smith, William P.

    While streaming data have become increasingly more popular in business and research communities, semantic models and processing software for streaming data have not kept pace. Traditional semantic solutions have not addressed transient data streams. Semantic web languages (e.g., RDF, OWL) have typically addressed static data settings and linked data approaches have predominantly addressed static or growing data repositories. Streaming data settings have some fundamental differences; in particular, data are consumed on the fly and data may expire. Stream reasoning, a combination of stream processing and semantic reasoning, has emerged with the vision of providing "smart" processing of streaming data. C-SPARQL is a prominent stream reasoning system that handles semantic (RDF) data streams. Many stream reasoning systems including C-SPARQL use a sliding window and use data arrival time to evict data. For data streams that include expiration times, a simple arrival time scheme is inadequate if the window size does not match the expiration period. In this paper, we propose a cache-enabled, order-aware, ontology-based stream reasoning framework. This framework consumes RDF streams with expiration timestamps assigned by the streaming source. Our framework utilizes both arrival and expiration timestamps in its cache eviction policies. In addition, we introduce the notion of "semantic importance" which aims to address the relevance of data to the expected reasoning, thus enabling the eviction algorithms to be more context- and reasoning-aware when choosing what data to maintain for question answering. We evaluate this framework by implementing three different prototypes and utilizing five metrics. The trade-offs of deploying the proposed framework are also discussed.
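
    A minimal sketch of the kind of eviction policy described above, combining arrival timestamps, expiration timestamps, and a crude "semantic importance" score. The scoring function and weights are invented for illustration and are not taken from the paper.

    ```python
    """Sketch of a cache eviction policy for streamed triples that considers
    arrival time, expiration time, and a crude 'semantic importance' score.
    The weights are hypothetical, not the paper's."""

    import time

    class StreamCache:
        def __init__(self, capacity):
            self.capacity = capacity
            self.items = {}   # triple -> (arrival, expiration, importance)

        def insert(self, triple, expiration, importance, now=None):
            now = time.time() if now is None else now
            self._expire(now)
            if len(self.items) >= self.capacity:
                self._evict_one(now)
            self.items[triple] = (now, expiration, importance)

        def _expire(self, now):
            # Data carrying an expiration timestamp is dropped once stale.
            self.items = {t: v for t, v in self.items.items() if v[1] > now}

        def _evict_one(self, now):
            # Evict the highest-scoring entry: older, sooner-to-expire, and less
            # important triples go first (the weights below are arbitrary).
            def score(v):
                arrival, expiration, importance = v
                return 0.5 * (now - arrival) - 0.3 * (expiration - now) - 0.2 * importance
            victim = max(self.items, key=lambda t: score(self.items[t]))
            del self.items[victim]

    cache = StreamCache(capacity=2)
    t0 = 100.0
    cache.insert(("s1", "p", "o"), expiration=t0 + 30, importance=0.9, now=t0)
    cache.insert(("s2", "p", "o"), expiration=t0 + 5,  importance=0.1, now=t0 + 1)
    cache.insert(("s3", "p", "o"), expiration=t0 + 60, importance=0.5, now=t0 + 2)
    print(list(cache.items))   # s2 was evicted: expiring soon and unimportant
    ```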

  9. Selective Audiovisual Semantic Integration Enabled by Feature-Selective Attention.

    PubMed

    Li, Yuanqing; Long, Jinyi; Huang, Biao; Yu, Tianyou; Wu, Wei; Li, Peijun; Fang, Fang; Sun, Pei

    2016-01-13

    An audiovisual object may contain multiple semantic features, such as the gender and emotional features of the speaker. Feature-selective attention and audiovisual semantic integration are two brain functions involved in the recognition of audiovisual objects. Humans often selectively attend to one or several features while ignoring the other features of an audiovisual object. Meanwhile, the human brain integrates semantic information from the visual and auditory modalities. However, how these two brain functions correlate with each other remains to be elucidated. In this functional magnetic resonance imaging (fMRI) study, we explored the neural mechanism by which feature-selective attention modulates audiovisual semantic integration. During the fMRI experiment, the subjects were presented with visual-only, auditory-only, or audiovisual dynamical facial stimuli and performed several feature-selective attention tasks. Our results revealed that a distribution of areas, including heteromodal areas and brain areas encoding attended features, may be involved in audiovisual semantic integration. Through feature-selective attention, the human brain may selectively integrate audiovisual semantic information from attended features by enhancing functional connectivity and thus regulating information flows from heteromodal areas to brain areas encoding the attended features.

  10. [Effects of punctuation on the processing of syntactically ambiguous Japanese sentences with a semantic bias].

    PubMed

    Niikuni, Keiyu; Muramoto, Toshiaki

    2014-06-01

    This study explored the effects of a comma on the processing of structurally ambiguous Japanese sentences with a semantic bias. A previous study has shown that a comma which is incompatible with an ambiguous sentence's semantic bias affects the processing of the sentence, but the effects of a comma that is compatible with the bias are unclear. In the present study, we examined the role of a comma compatible with the sentence's semantic bias using the self-paced reading method, which enabled us to determine the reading times for the region of the sentence where readers would be expected to solve the ambiguity using semantic information (the "target region"). The results show that a comma significantly increases the reading time of the punctuated word but decreases the reading time in the target region. We concluded that even if the semantic information provided might be sufficient for disambiguation, the insertion of a comma would affect the processing cost of the ambiguity, indicating that readers use both the comma and semantic information in parallel for sentence processing.

  11. A semantic model for multimodal data mining in healthcare information systems.

    PubMed

    Iakovidis, Dimitris; Smailis, Christos

    2012-01-01

    Electronic health records (EHRs) are representative examples of multimodal/multisource data collections, including measurements, images and free texts. The diversity of such information sources and the increasing amounts of medical data produced by healthcare institutes annually pose significant challenges in data mining. In this paper we present a novel semantic model that describes knowledge extracted from the lowest level of a data mining process, where information is represented by multiple features, i.e. measurements or numerical descriptors extracted from measurements, images, texts or other medical data, forming multidimensional feature spaces. Knowledge collected by manual annotation or extracted by unsupervised data mining from one or more feature spaces is modeled through generalized qualitative spatial semantics. This model enables a unified representation of knowledge across multimodal data repositories. It contributes to bridging the semantic gap by enabling direct links between low-level features and higher-level concepts, e.g. describing body parts, anatomies and pathological findings. The proposed model has been developed in the Web Ontology Language based on description logics (OWL-DL) and can be applied to a variety of data mining tasks in medical informatics. Its utility is demonstrated for automatic annotation of medical data.
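
    As a loose illustration of linking low-level feature spaces to higher-level concepts (the "semantic gap" bridging described above), the sketch below tags feature vectors with concepts attached to labelled regions of a feature space. The concept names and axis-aligned regions are hypothetical simplifications of the paper's qualitative spatial semantics.

    ```python
    """Sketch: annotate low-level feature vectors with higher-level concepts
    by testing membership in labelled regions of a feature space.  Concept
    names and regions are hypothetical; this is not the paper's OWL-DL model."""

    # Each concept is associated with an axis-aligned region of a 2-D feature
    # space (e.g. mean intensity, texture score), given as (lo, hi) per axis.
    CONCEPT_REGIONS = {
        "ex:HealthyTissue": ((0.0, 0.4), (0.0, 0.5)),
        "ex:Lesion":        ((0.6, 1.0), (0.4, 1.0)),
    }

    def annotate(feature_vector):
        """Return the concepts whose region contains the feature vector."""
        hits = []
        for concept, bounds in CONCEPT_REGIONS.items():
            if all(lo <= x <= hi for x, (lo, hi) in zip(feature_vector, bounds)):
                hits.append(concept)
        return hits

    print(annotate((0.75, 0.8)))   # ['ex:Lesion']
    print(annotate((0.2, 0.3)))    # ['ex:HealthyTissue']
    ```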

  12. Mind wandering minimizes mind numbing: Reducing semantic-satiation effects through absorptive lapses of attention.

    PubMed

    Mooneyham, Benjamin W; Schooler, Jonathan W

    2016-08-01

    Mind wandering is associated with perceptual decoupling: the disengagement of attention from perception. This decoupling is deleterious to performance in many situations; however, we sought to determine whether it might occur in the service of performance in certain circumstances. In two studies, we examined the role of mind wandering in a test of "semantic satiation," a phenomenon in which the repeated presentation of a word reduces semantic priming for a subsequently presented semantic associate. We posited that the attentional and perceptual decoupling associated with mind wandering would reduce the amount of satiation in the semantic representations of repeatedly presented words, thus leading to a reduced semantic-satiation effect. Our results supported this hypothesis: Self-reported mind-wandering episodes (Study 1) and behavioral indices of decoupled attention (Study 2) were both predictive of maintained semantic priming in situations predicted to induce semantic satiation. Additionally, our results suggest that moderate inattention to repetitive stimuli is not sufficient to enable "dishabituation": the refreshment of cognitive performance that results from diverting attention away from the task at hand. Rather, full decoupling is necessary to reap the benefits of mind wandering and to minimize mind numbing.

  13. Linking Somatic and Symbolic Representation in Semantic Memory: The Dynamic Multilevel Reactivation Framework

    PubMed Central

    Reilly, Jamie; Peelle, Jonathan E; Garcia, Amanda; Crutch, Sebastian J

    2016-01-01

    Biological plausibility is an essential constraint for any viable model of semantic memory. Yet, we have only the most rudimentary understanding of how the human brain conducts abstract symbolic transformations that underlie word and object meaning. Neuroscience has evolved a sophisticated arsenal of techniques for elucidating the architecture of conceptual representation. Nevertheless, theoretical convergence remains elusive. Here we describe several contrastive approaches to the organization of semantic knowledge, and in turn we offer our own perspective on two recurring questions in semantic memory research: 1) to what extent are conceptual representations mediated by sensorimotor knowledge (i.e., to what degree is semantic memory embodied)? 2) How might an embodied semantic system represent abstract concepts such as modularity, symbol, or proposition? To address these questions, we review the merits of sensorimotor (i.e., embodied) and amodal (i.e., disembodied) semantic theories and address the neurobiological constraints underlying each. We conclude that the shortcomings of both perspectives in their extreme forms necessitate a hybrid middle ground. We accordingly propose the Dynamic Multilevel Reactivation Framework, an integrative model premised upon flexible interplay between sensorimotor and amodal symbolic representations mediated by multiple cortical hubs. We discuss applications of the Dynamic Multilevel Reactivation Framework to abstract and concrete concept representation and describe how a multidimensional conceptual topography based on emotion, sensation, and magnitude can successfully frame a semantic space containing meanings for both abstract and concrete words. The consideration of ‘abstract conceptual features’ does not diminish the role of logical and/or executive processing in activating, manipulating and using information stored in conceptual representations. Rather, it proposes that the material on which these processes operate necessarily combine pure sensorimotor information and higher-order cognitive dimensions involved in symbolic representation. PMID:27294419

  14. Linking somatic and symbolic representation in semantic memory: the dynamic multilevel reactivation framework.

    PubMed

    Reilly, Jamie; Peelle, Jonathan E; Garcia, Amanda; Crutch, Sebastian J

    2016-08-01

    Biological plausibility is an essential constraint for any viable model of semantic memory. Yet, we have only the most rudimentary understanding of how the human brain conducts abstract symbolic transformations that underlie word and object meaning. Neuroscience has evolved a sophisticated arsenal of techniques for elucidating the architecture of conceptual representation. Nevertheless, theoretical convergence remains elusive. Here we describe several contrastive approaches to the organization of semantic knowledge, and in turn we offer our own perspective on two recurring questions in semantic memory research: (1) to what extent are conceptual representations mediated by sensorimotor knowledge (i.e., to what degree is semantic memory embodied)? (2) How might an embodied semantic system represent abstract concepts such as modularity, symbol, or proposition? To address these questions, we review the merits of sensorimotor (i.e., embodied) and amodal (i.e., disembodied) semantic theories and address the neurobiological constraints underlying each. We conclude that the shortcomings of both perspectives in their extreme forms necessitate a hybrid middle ground. We accordingly propose the Dynamic Multilevel Reactivation Framework-an integrative model predicated upon flexible interplay between sensorimotor and amodal symbolic representations mediated by multiple cortical hubs. We discuss applications of the dynamic multilevel reactivation framework to abstract and concrete concept representation and describe how a multidimensional conceptual topography based on emotion, sensation, and magnitude can successfully frame a semantic space containing meanings for both abstract and concrete words. The consideration of 'abstract conceptual features' does not diminish the role of logical and/or executive processing in activating, manipulating and using information stored in conceptual representations. Rather, it proposes that the materials upon which these processes operate necessarily combine pure sensorimotor information and higher-order cognitive dimensions involved in symbolic representation.

  15. What we talk about when we talk about access deficits

    PubMed Central

    Mirman, Daniel; Britt, Allison E.

    2014-01-01

    Semantic impairments have been divided into storage deficits, in which the semantic representations themselves are damaged, and access deficits, in which the representations are intact but access to them is impaired. The behavioural phenomena that have been associated with access deficits include sensitivity to cueing, sensitivity to presentation rate, performance inconsistency, negative serial position effects, sensitivity to number and strength of competitors, semantic blocking effects, disordered selection between strong and weak competitors, correlation between semantic deficits and executive function deficits and reduced word frequency effects. Four general accounts have been proposed for different subsets of these phenomena: abnormal refractoriness, too much activation, impaired competitive selection and deficits of semantic control. A combination of abnormal refractoriness and impaired competitive selection can account for most of the behavioural phenomena, but there remain several open questions. In particular, it remains unclear whether access deficits represent a single syndrome, a syndrome with multiple subtypes or a variable collection of phenomena, whether the underlying deficit is domain-general or domain-specific, whether it is owing to disorders of inhibition, activation or selection, and the nature of the connection (if any) between access phenomena in aphasia and in neurologically intact controls. Computational models offer a promising approach to answering these questions. PMID:24324232

  16. Performance of Brazilian children on phonemic and semantic verbal fluency tasks

    PubMed Central

    Charchat-Fichman, Helenice; Oliveira, Rosinda Martins; da Silva, Andreza Morais

    2011-01-01

    The most commonly used verbal fluency paradigms are semantic and letter fluency tasks. Studies suggest that these paradigms access semantic memory and executive function and are sensitive to frontal lobe disturbances. There are few studies in Brazilian samples on these paradigms. Objective The present study investigated performance, and the effects of age, on verbal fluency tasks in Brazilian children. The results were compared with those of other studies, and the consistency of the scoring criteria data is presented. Methods A sample of 119 children (7 to 10 years old) completed three phonemic fluency (F, A, M) tasks and three semantic fluency (animals, clothes, fruits) tasks. The results of thirty subjects were scored by two independent examiners. Results A significant positive correlation was found between the scores calculated by the two independent examiners. Significant positive correlations were found between performance on the semantic fluency task and the phonemic fluency task. The effect of age was significant for both tasks, and a significant difference was found between the 7- and 9-year-old subjects and between the 7- and 10-year-old subjects. The 8-year-old group did not differ from any of the other age groups. Conclusion The pattern of results was similar to that observed in previous Brazilian and international studies. PMID:29213727

  17. Biotea: RDFizing PubMed Central in support for the paper as an interface to the Web of Data

    PubMed Central

    2013-01-01

    Background The World Wide Web has become a dissemination platform for scientific and non-scientific publications. However, most of the information remains locked up in discrete documents that are not always interconnected or machine-readable. The connectivity tissue provided by RDF technology has not yet been widely used to support the generation of self-describing, machine-readable documents. Results In this paper, we present our approach to the generation of self-describing machine-readable scholarly documents. We understand the scientific document as an entry point and interface to the Web of Data. We have semantically processed the full-text, open-access subset of PubMed Central. Our RDF model and resulting dataset make extensive use of existing ontologies and semantic enrichment services. We expose our model, services, prototype, and datasets at http://biotea.idiginfo.org/ Conclusions The semantic processing of biomedical literature presented in this paper embeds documents within the Web of Data and facilitates the execution of concept-based queries against the entire digital library. Our approach delivers a flexible and adaptable set of tools for metadata enrichment and semantic processing of biomedical documents. Our model delivers a semantically rich and highly interconnected dataset with self-describing content so that software can make effective use of it. PMID:23734622
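
    For illustration only, the following is a minimal sketch, with invented identifiers and a placeholder vocabulary rather than Biotea's actual RDF model, of how a semantically annotated article can be queried by concept using rdflib:

```python
# Sketch: annotate an article with an ontology concept and run a concept-based
# SPARQL query. All URIs below are placeholders, not Biotea's real namespaces.
from rdflib import Graph, Namespace, URIRef, RDF

EX = Namespace("http://example.org/biotea/")              # placeholder namespace
BIBO = Namespace("http://purl.org/ontology/bibo/")

g = Graph()
article = EX["article/PMC0000001"]                        # hypothetical article identifier
concept = URIRef("http://example.org/onto/Diabetes")      # hypothetical ontology concept

g.add((article, RDF.type, BIBO.AcademicArticle))
g.add((article, EX.mentionsConcept, concept))             # semantic enrichment of the full text

# Concept-based query over the digital library: which articles mention the concept?
q = """
SELECT ?article WHERE {
  ?article a <http://purl.org/ontology/bibo/AcademicArticle> ;
           <http://example.org/biotea/mentionsConcept> ?concept .
}
"""
for row in g.query(q, initBindings={"concept": concept}):
    print(row.article)
```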

  18. Principal Component Analysis Study of Visual and Verbal Metaphoric Comprehension in Children with Autism and Learning Disabilities

    ERIC Educational Resources Information Center

    Mashal, Nira; Kasirer, Anat

    2012-01-01

    This research extends previous studies regarding the metaphoric competence of autistic and learning disabled children on different measures of visual and verbal non-literal language comprehension, as well as cognitive abilities that include semantic knowledge, executive functions, similarities, and reading fluency. Thirty-seven children with…

  19. A Comparison of Five FMRI Protocols for Mapping Speech Comprehension Systems

    PubMed Central

    Binder, Jeffrey R.; Swanson, Sara J.; Hammeke, Thomas A.; Sabsevitz, David S.

    2008-01-01

    Aims Many fMRI protocols for localizing speech comprehension have been described, but there has been little quantitative comparison of these methods. We compared five such protocols in terms of areas activated, extent of activation, and lateralization. Methods FMRI BOLD signals were measured in 26 healthy adults during passive listening and active tasks using words and tones. Contrasts were designed to identify speech perception and semantic processing systems. Activation extent and lateralization were quantified by counting activated voxels in each hemisphere for each participant. Results Passive listening to words produced bilateral superior temporal activation. After controlling for pre-linguistic auditory processing, only a small area in the left superior temporal sulcus responded selectively to speech. Active tasks engaged an extensive, bilateral attention and executive processing network. Optimal results (consistent activation and strongly lateralized pattern) were obtained by contrasting an active semantic decision task with a tone decision task. There was striking similarity between the network of brain regions activated by the semantic task and the network of brain regions that showed task-induced deactivation, suggesting that semantic processing occurs during the resting state. Conclusions FMRI protocols for mapping speech comprehension systems differ dramatically in pattern, extent, and lateralization of activation. Brain regions involved in semantic processing were identified only when an active, non-linguistic task was used as a baseline, supporting the notion that semantic processing occurs whenever attentional resources are not controlled. Identification of these lexical-semantic regions is particularly important for predicting language outcome in patients undergoing temporal lobe surgery. PMID:18513352

  20. Holding a manual response sequence in memory can disrupt vocal responses that share semantic features with the manual response.

    PubMed

    Fournier, Lisa Renee; Wiediger, Matthew D; McMeans, Ryan; Mattson, Paul S; Kirkwood, Joy; Herzog, Theibot

    2010-07-01

    Holding an action plan in memory for later execution can delay execution of another action if the actions share a similar (compatible) feature. This compatibility interference (CI) occurs for actions that share the same response modality (e.g., manual response). We investigated whether CI can generalize to actions that utilize different response modalities (manual and vocal). In three experiments, participants planned and withheld a sequence of key-presses with the left or right hand based on the visual identity of the first stimulus, and then immediately executed a speeded, vocal response ('left' or 'right') to a second visual stimulus. The vocal response was based on discriminating stimulus color (Experiment 1), reading a written word (Experiment 2), or reporting the antonym of a written word (Experiment 3). Results showed that CI occurred when the manual response hand (e.g., left) was compatible with the identity of the vocal response (e.g., 'left') in Experiments 1 and 3, but not in Experiment 2. This suggests that partial overlap of semantic codes is sufficient to obtain CI unless the intervening action can be accessed automatically (Experiment 2). These findings are consistent with the code occupation hypothesis and the general framework of the theory of event coding (Behav Brain Sci 24:849-878, 2001a; Behav Brain Sci 24:910-937, 2001b).

  1. Neurocognitive disorder in hypertensive patients. Heart-Brain Study.

    PubMed

    Vicario, A; Cerezo, G H; Del Sueldo, M; Zilberman, J; Pawluk, S M; Lódolo, N; De Cerchio, A E; Ruffa, R M; Plunkett, R; Giuliano, M E; Forcada, P; Hauad, S; Flores, R

    2018-02-15

    The relation between hypertension and cognitive impairment is an indisputable fact. The aims of this study were to determine the prevalence of cognitive impairment in hypertensive patients, to identify the most affected cognitive domain, and to observe the association with different parameters of hypertension and other vascular risk factors. A multicentre study was carried out, and 1281 hypertensive patients of both genders and ≥21 years of age were included. Data on the following parameters were obtained: cognitive status (Minimal Cognitive Examination), behavioural status (Hospital Anxiety and Depression Scale), blood pressure, anthropometry, and biochemical profile. The average age was 60.2±13.5 years (71% female), and the educational level was 9.9±5.1 years. Global cognitive impairment was seen in 22.1%, executive dysfunction in 36.2%, and semantic memory impairment in 48.9%. Cognitive impairment was higher in males (36.8% vs. 30.06%), both in the 70-79-year-old and in the ≥80-year-old (50% vs. 40%) age groups. Abnormal Clock Drawing Test results were related to high pulse pressure (p<0.0036), and abnormal Mini-Boston Naming Test results to both high systolic blood pressure (p<0.052) and pulse pressure (p<0.001). The treated/uncontrolled hypertensive group showed abnormal results in both the Mini-Mental State Examination (OR, 0.73; p=0.036) and the Mini-Boston Naming Test (OR, 1.36; p=0.021). Among patients without cognitive impairment (MMSE >24), 29.4% presented executive dysfunction, and 41.5% semantic memory impairment. Cognitive impairment was higher in hypertensive patients than in the general population. Executive functions and semantic memory were the most affected cognitive domains. High systolic blood pressure and pulse pressure were associated with abnormal results in cognitive tests. Copyright © 2018 SEH-LELHA. Published by Elsevier España, S.L.U. All rights reserved.

  2. Atypical performance patterns on Delis-Kaplan Executive Functioning System Color-Word Interference Test: Cognitive switching and learning ability in older adults.

    PubMed

    Berg, Jody-Lynn; Swan, Natasha M; Banks, Sarah J; Miller, Justin B

    2016-09-01

    Cognitive set shifting requires flexible application of lower level processes. The Delis-Kaplan Executive Functioning System (DKEFS) Color-Word Interference Test (CWIT) is commonly used to clinically assess cognitive set shifting. An atypical pattern of performance has been observed on the CWIT; a subset of individuals perform faster, with equal or fewer errors, on the more difficult inhibition/switching than the inhibition trial. This study seeks to explore the cognitive underpinnings of this atypical pattern. It is hypothesized that atypical patterns on CWIT will be associated with better performance on underlying cognitive measures of attention, working memory, and learning when compared to typical CWIT patterns. Records from 239 clinical referrals (age: M = 68.09 years, SD = 10.62; education: M = 14.87 years, SD = 2.73) seen for a neuropsychological evaluation as part of diagnostic work up in an outpatient dementia and movement disorders clinic were sampled. The standard battery of tests included measures of attention, learning, fluency, executive functioning, and working memory. Analyses of variance (ANOVAs) were conducted to compare the cognitive performance of those with typical versus atypical CWIT patterns. An atypical pattern of performance was confirmed in 23% of our sample. Analyses revealed a significant group difference in acquisition of information on both nonverbal (Brief Visuospatial Memory Test-Revised, BVMT-R total recall), F(1, 213) = 16.61, p < .001, and verbal (Hopkins Verbal Learning Test-Revised, HVLT-R total recall) learning tasks, F(1, 181) = 6.43, p < .01, and semantic fluency (Animal Naming), F(1, 232) = 7.57, p = .006, with the atypical group performing better on each task. Effect sizes were larger for nonverbal (Cohen's d = 0.66) than verbal learning (Cohen's d = 0.47) and semantic fluency (Cohen's d = 0.43). Individuals demonstrating an atypical pattern of performance on the CWIT inhibition/switching trial also demonstrated relative strengths in semantic fluency and learning.

  3. The HARNESS Workbench: Unified and Adaptive Access to Diverse HPC Platforms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sunderam, Vaidy S.

    2012-03-20

    The primary goal of the Harness WorkBench (HWB) project is to investigate innovative software environments that will help enhance the overall productivity of applications science on diverse HPC platforms. Two complementary frameworks were designed: one, a virtualized command toolkit for application building, deployment, and execution that provides a common view across diverse HPC systems, in particular the DOE leadership computing platforms (Cray, IBM, SGI, and clusters); and two, a unified runtime environment that consolidates access to runtime services via an adaptive framework for execution-time and post-processing activities. A prototype of the first was developed based on the concept of a 'system-call virtual machine' (SCVM), to enhance portability of the HPC application deployment process across heterogeneous high-end machines. The SCVM approach to portable builds is based on the insertion of toolkit-interpretable directives into original application build scripts. Modifications resulting from these directives preserve the semantics of the original build instruction flow. The execution of the build script is controlled by our toolkit, which intercepts build script commands in a manner transparent to the end-user. We have applied this approach to a scientific production code (Gamess-US) on the Cray-XT5 machine. The second facet, termed Unibus, aims to facilitate provisioning and aggregation of multifaceted resources from resource providers' and end-users' perspectives. To achieve that, Unibus proposes a Capability Model and mediators (resource drivers) to virtualize access to diverse resources, and soft and successive conditioning to enable automatic and user-transparent resource provisioning. A proof-of-concept implementation has demonstrated the viability of this approach on high-end machines, grid systems and computing clouds.
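
    As a loose illustration of the directive idea only (the directive syntax, platform table, and wrapper below are invented and are not the actual HWB toolkit), a build-script line marked with a comment-style directive can be rewritten for a target platform while the remaining instruction flow passes through untouched:

```python
# Sketch: interpret hypothetical toolkit directives embedded in a build script.
PLATFORM_COMPILERS = {"cray-xt5": "cc", "generic-cluster": "mpicc"}   # invented table

def interpret(script_lines, platform):
    out = []
    for line in script_lines:
        if line.strip().startswith("#HWB compiler"):       # hypothetical directive
            out.append(f"CC={PLATFORM_COMPILERS[platform]}")
        else:
            out.append(line)                                # original flow preserved
    return out

build_script = ["#HWB compiler", "$CC -O2 gamess.f -o gamess"]
print("\n".join(interpret(build_script, "cray-xt5")))
```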

  4. A model for the control mode man-computer interface dialogue

    NASA Technical Reports Server (NTRS)

    Chafin, R. L.

    1981-01-01

    A four-stage model is presented for the control mode man-computer interface dialogue. It consists of context development, semantic development, syntactic development, and command execution. Each stage is discussed in terms of the operator skill levels (naive, novice, competent, and expert) and pertinent human factors issues. These issues are human problem solving, human memory, and schemata. The execution stage is discussed in terms of the operator's typing skills. This model provides an understanding of the human process in command mode activity for computer systems and a foundation for relating system characteristics to operator characteristics.

  5. OntoADR a semantic resource describing adverse drug reactions to support searching, coding, and information retrieval.

    PubMed

    Souvignet, Julien; Declerck, Gunnar; Asfari, Hadyl; Jaulent, Marie-Christine; Bousquet, Cédric

    2016-10-01

    Efficient searching and coding in databases that use terminological resources requires that they support efficient data retrieval. The Medical Dictionary for Regulatory Activities (MedDRA) is a reference terminology for several countries and organizations to code adverse drug reactions (ADRs) for pharmacovigilance. Ontologies that are available in the medical domain provide several advantages such as reasoning to improve data retrieval. The field of pharmacovigilance does not yet benefit from a fully operational ontology to formally represent the MedDRA terms. Our objective was to build a semantic resource based on formal description logic to improve MedDRA term retrieval and aid the generation of on-demand custom groupings by appropriately and efficiently selecting terms: OntoADR. The method consists of the following steps: (1) mapping between MedDRA terms and SNOMED-CT, (2) generation of semantic definitions using semi-automatic methods, (3) storage of the resource and (4) manual curation by pharmacovigilance experts. We built a semantic resource for ADRs enabling a new type of semantics-based term search. OntoADR adds new search capabilities relative to previous approaches, overcoming the usual limitations of computation using lightweight description logic, such as the intractability of unions or negation queries, bringing it closer to user needs. Our automated approach for defining MedDRA terms enabled the association of at least one defining relationship with 67% of preferred terms. The curation work performed on our sample showed an error level of 14% for this automated approach. We tested OntoADR in practice, which allowed us to build custom groupings for several medical topics of interest. The methods we describe in this article could be adapted and extended to other terminologies which do not benefit from a formal semantic representation, thus enabling better data retrieval performance. Our custom groupings of MedDRA terms were used while performing signal detection, which suggests that the graphical user interface we are currently implementing to process OntoADR could be usefully integrated into specialized pharmacovigilance software that relies on MedDRA. Copyright © 2016 Elsevier Inc. All rights reserved.
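
    A minimal sketch of the idea behind step (2) and the on-demand custom groupings, using rdflib with invented MedDRA and SNOMED CT identifiers rather than OntoADR's actual OWL axioms:

```python
# Sketch: attach a SNOMED CT-style defining relationship to a MedDRA preferred
# term and build a simple custom grouping. All identifiers are hypothetical.
from rdflib import Graph, Namespace, Literal, RDF, RDFS

EX = Namespace("http://example.org/ontoadr/")      # placeholder namespace
g = Graph()

pt = EX["meddra/PT0000001"]                        # hypothetical MedDRA preferred term
site = EX["snomedct/LiverStructure"]               # hypothetical SNOMED CT concept

g.add((pt, RDF.type, EX.PreferredTerm))
g.add((pt, RDFS.label, Literal("Hepatic failure")))    # illustrative label only
g.add((pt, EX.hasFindingSite, site))                   # semantic definition of the term

# On-demand custom grouping: all preferred terms whose finding site matches `site`.
grouping = sorted(g.subjects(EX.hasFindingSite, site))
print(grouping)
```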

  6. The inferior, anterior temporal lobes and semantic memory clarified: novel evidence from distortion-corrected fMRI.

    PubMed

    Visser, M; Embleton, K V; Jefferies, E; Parker, G J; Ralph, M A Lambon

    2010-05-01

    The neural basis of semantic memory generates considerable debate. Semantic dementia results from bilateral anterior temporal lobe (ATL) atrophy and gives rise to a highly specific impairment of semantic memory, suggesting that this region is a critical neural substrate for semantic processing. Recent rTMS experiments with neurologically-intact participants also indicate that the ATL are a necessary substrate for semantic memory. Exactly which regions within the ATL are important for semantic memory is difficult to determine from these methods (because the damage in SD covers a large part of the ATL). Functional neuroimaging might provide important clues about which specific areas exhibit activation that correlates with normal semantic performance. Neuroimaging studies, however, have not consistently found anterior temporal lobe activation in semantic tasks. A recent meta-analysis indicates that this inconsistency may be due to a collection of technical limitations associated with previous studies, including a reduced field-of-view and magnetic susceptibility artefacts associated with standard gradient echo fMRI. We conducted an fMRI study of semantic memory using a combination of techniques which improve sensitivity to ATL activations whilst preserving whole-brain coverage. As expected from SD patients and ATL rTMS experiments, this method revealed bilateral temporal activation extending from the inferior temporal lobe along the fusiform gyrus to the anterior temporal regions, bilaterally. We suggest that the inferior, anterior temporal lobe region makes a crucial contribution to semantic cognition and utilising this version of fMRI will enable further research on the semantic role of the ATL. 2010 Elsevier Ltd. All rights reserved.

  7. Geospatial-temporal semantic graph representations of trajectories from remote sensing and geolocation data

    DOEpatents

    Perkins, David Nikolaus; Brost, Randolph; Ray, Lawrence P.

    2017-08-08

    Various technologies for facilitating analysis of large remote sensing and geolocation datasets to identify features of interest are described herein. A search query can be submitted to a computing system that executes searches over a geospatial temporal semantic (GTS) graph to identify features of interest. The GTS graph comprises nodes corresponding to objects described in the remote sensing and geolocation datasets, and edges that indicate geospatial or temporal relationships between pairs of the nodes. Trajectory information is encoded in the GTS graph by the inclusion of movable nodes to facilitate searches for features of interest in the datasets relative to moving objects such as vehicles.
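
    The following sketch, using networkx with invented node names and attributes, only illustrates the kind of data structure described (nodes, movable nodes, and geospatial/temporal edges); it is not the patented system itself:

```python
# Sketch: a tiny geospatial-temporal semantic graph with one movable node.
import networkx as nx

G = nx.MultiDiGraph()
G.add_node("building_17", kind="structure", lat=35.08, lon=-106.62)
G.add_node("vehicle_3", kind="vehicle", movable=True)      # trajectory-bearing node

# Temporal edge: the movable node was observed at the structure during an interval.
G.add_edge("vehicle_3", "building_17", relation="observed_at",
           t_start="2017-06-01T10:00Z", t_end="2017-06-01T10:20Z")

# Search: which movable objects have an "observed_at" relationship to building_17?
hits = [u for u, v, d in G.in_edges("building_17", data=True)
        if G.nodes[u].get("movable") and d.get("relation") == "observed_at"]
print(hits)
```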

  8. Improvements to the Ontology-based Metadata Portal for Unified Semantics (OlyMPUS)

    NASA Astrophysics Data System (ADS)

    Linsinbigler, M. A.; Gleason, J. L.; Huffer, E.

    2016-12-01

    The Ontology-based Metadata Portal for Unified Semantics (OlyMPUS), funded by the NASA Earth Science Technology Office Advanced Information Systems Technology program, is an end-to-end system designed to support Earth Science data consumers and data providers, enabling the latter to register data sets and provision them with the semantically rich metadata that drives the Ontology-Driven Interactive Search Environment for Earth Sciences (ODISEES). OlyMPUS complements the ODISEES data discovery system with an intelligent tool to enable data producers to auto-generate semantically enhanced metadata and upload it to the metadata repository that drives ODISEES. Like ODISEES, the OlyMPUS metadata provisioning tool leverages robust semantics, a NoSQL database and query engine, an automated reasoning engine that performs first- and second-order deductive inferencing, and a controlled vocabulary to support data interoperability and automated analytics. The ODISEES data discovery portal leverages this metadata to provide a seamless data discovery and access experience for data consumers who are interested in comparing and contrasting the multiple Earth science data products available across NASA data centers. OlyMPUS will support scientists with services and tools for performing complex analyses and identifying correlations and non-obvious relationships across all types of Earth System phenomena, using the full spectrum of NASA Earth Science data available. By providing an intelligent discovery portal that supplies users, both human users and machines, with detailed information about data products, their contents and their structure, ODISEES will reduce the level of effort required to identify and prepare large volumes of data for analysis. This poster will explain how OlyMPUS leverages deductive reasoning and other technologies to create an integrated environment for generating and exploiting semantically rich metadata.

  9. From perceptual to lexico-semantic analysis--cortical plasticity enabling new levels of processing.

    PubMed

    Schlaffke, Lara; Rüther, Naima N; Heba, Stefanie; Haag, Lauren M; Schultz, Thomas; Rosengarth, Katharina; Tegenthoff, Martin; Bellebaum, Christian; Schmidt-Wilcke, Tobias

    2015-11-01

    Certain kinds of stimuli can be processed on multiple levels. While the neural correlates of different levels of processing (LOPs) have been investigated to some extent, most of the studies involve skills and/or knowledge already present when performing the task. In this study we specifically sought to identify neural correlates of an evolving skill that allows the transition from perceptual to a lexico-semantic stimulus analysis. Eighteen participants were trained to decode 12 letters of Morse code that were presented acoustically inside and outside of the scanner environment. Morse code was presented in trains of three letters while brain activity was assessed with fMRI. Participants either attended to the stimulus length (perceptual analysis), or evaluated its meaning, distinguishing words from nonwords (lexico-semantic analysis). Perceptual and lexico-semantic analyses shared a mutual network comprising the left premotor cortex, the supplementary motor area (SMA) and the inferior parietal lobule (IPL). Perceptual analysis was associated with a strong brain activation in the SMA and the superior temporal gyrus bilaterally (STG), which remained unaltered from pre- to post-training. In the lexico-semantic analysis post learning, study participants showed additional activation in the left inferior frontal cortex (IFC) and in the left occipitotemporal cortex (OTC), regions known to be critically involved in lexical processing. Our data provide evidence for cortical plasticity evolving with a learning process enabling the transition from perceptual to lexico-semantic stimulus analysis. Importantly, the activation pattern remains dependent on the task-related LOP and is thus the result of a decision process as to which LOP to engage in. © 2015 The Authors. Human Brain Mapping Published by Wiley Periodicals, Inc.

  10. From perceptual to lexico‐semantic analysis—cortical plasticity enabling new levels of processing

    PubMed Central

    Schlaffke, Lara; Rüther, Naima N.; Heba, Stefanie; Haag, Lauren M.; Schultz, Thomas; Rosengarth, Katharina; Tegenthoff, Martin; Bellebaum, Christian

    2015-01-01

    Certain kinds of stimuli can be processed on multiple levels. While the neural correlates of different levels of processing (LOPs) have been investigated to some extent, most of the studies involve skills and/or knowledge already present when performing the task. In this study we specifically sought to identify neural correlates of an evolving skill that allows the transition from perceptual to a lexico‐semantic stimulus analysis. Eighteen participants were trained to decode 12 letters of Morse code that were presented acoustically inside and outside of the scanner environment. Morse code was presented in trains of three letters while brain activity was assessed with fMRI. Participants either attended to the stimulus length (perceptual analysis), or evaluated its meaning, distinguishing words from nonwords (lexico‐semantic analysis). Perceptual and lexico‐semantic analyses shared a mutual network comprising the left premotor cortex, the supplementary motor area (SMA) and the inferior parietal lobule (IPL). Perceptual analysis was associated with a strong brain activation in the SMA and the superior temporal gyrus bilaterally (STG), which remained unaltered from pre- to post-training. In the lexico‐semantic analysis post learning, study participants showed additional activation in the left inferior frontal cortex (IFC) and in the left occipitotemporal cortex (OTC), regions known to be critically involved in lexical processing. Our data provide evidence for cortical plasticity evolving with a learning process enabling the transition from perceptual to lexico‐semantic stimulus analysis. Importantly, the activation pattern remains dependent on the task‐related LOP and is thus the result of a decision process as to which LOP to engage in. Hum Brain Mapp 36:4512–4528, 2015. © 2015 The Authors. Human Brain Mapping Published by Wiley Periodicals, Inc. PMID:26304153

  11. A health analytics semantic ETL service for obesity surveillance.

    PubMed

    Poulymenopoulou, M; Papakonstantinou, D; Malamateniou, F; Vassilacopoulos, G

    2015-01-01

    The increasingly large amount of data produced in healthcare (e.g. data collected through health information systems such as electronic medical records (EMRs), or through novel data sources such as personal health records (PHRs), social media and web resources) enables the creation of detailed records about people's health, sentiments and activities (e.g. physical activity, diet, sleep quality) that can be used in the public health area, among others. However, despite the transformative potential of big data in public health surveillance, there are several challenges in integrating big data. In this paper, the interoperability challenge is tackled and a semantic Extract Transform Load (ETL) service is proposed that seeks to semantically annotate big data to turn them into valuable data for analysis. This service is considered as part of a health analytics engine on the cloud that interacts with existing healthcare information exchange networks, like the Integrating the Healthcare Enterprise (IHE), PHRs, sensors, mobile applications, and other web resources to retrieve patient health, behavioral and daily activity data. The semantic ETL service aims at semantically integrating big data for use by analytic mechanisms. An illustrative implementation of the service on big data potentially relevant to human obesity enables the use of appropriate analytic techniques (e.g. machine learning, text mining) that are expected to assist in identifying patterns and contributing factors (e.g. genetic background, social, environmental) for this social phenomenon and, hence, drive health policy changes and promote healthy behaviors where residents live, work, learn, shop and play.
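
    A minimal sketch of a semantic ETL step, with invented field names and a placeholder vocabulary in place of the actual service, using rdflib to annotate an extracted activity record and load it as RDF:

```python
# Sketch: extract a raw activity record, transform it by annotating fields with
# ontology terms, and load the result into an RDF graph for downstream analytics.
from rdflib import Graph, Namespace, Literal, RDF, XSD

EX = Namespace("http://example.org/obesity/")          # placeholder vocabulary

def extract():
    # Stand-in for reading from a PHR, sensor feed, or social-media API.
    return [{"person": "p001", "steps": 8500, "date": "2015-03-02"}]

def transform_load(records, graph):
    for r in records:
        obs = EX[f"observation/{r['person']}/{r['date']}"]
        graph.add((obs, RDF.type, EX.PhysicalActivityObservation))   # semantic annotation
        graph.add((obs, EX.stepCount, Literal(r["steps"], datatype=XSD.integer)))
        graph.add((obs, EX.observationDate, Literal(r["date"], datatype=XSD.date)))
    return graph

g = transform_load(extract(), Graph())
print(g.serialize(format="turtle"))
```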

  12. Semantic message oriented middleware for publish/subscribe networks

    NASA Astrophysics Data System (ADS)

    Li, Han; Jiang, Guofei

    2004-09-01

    The publish/subscribe paradigm of Message Oriented Middleware provides a loosely coupled communication model between distributed applications. Traditional publish/subscribe middleware uses keywords to match advertisements and subscriptions and does not support deep semantic matching. To address this limitation, we designed and implemented a Semantic Message Oriented Middleware system to provide such capabilities for semantic description and matching. We adopted the DARPA Agent Markup Language and Ontology Inference Layer, a formal knowledge representation language for expressing sophisticated classifications and enabling automated inference, as the topic description language in our middleware system. A simple description logic inference system was implemented to handle the matching process between the subscriptions of subscribers and the advertisements of publishers. Moreover, our middleware system has a security architecture to support secure communication and user privilege control.
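
    To illustrate subsumption-based matching in general terms (the toy class hierarchy below stands in for the DAML+OIL ontology and the description logic inference system described above):

```python
# Sketch: semantic matching of a subscription against an advertisement by
# walking a toy subclass hierarchy instead of using keyword equality.
SUBCLASS_OF = {                      # child -> parent (invented ontology)
    "TemperatureReading": "SensorReading",
    "SensorReading": "Message",
}

def subsumes(general, specific):
    """True if `specific` is the same as, or a descendant of, `general`."""
    while specific is not None:
        if specific == general:
            return True
        specific = SUBCLASS_OF.get(specific)
    return False

advertisement = "TemperatureReading"          # publisher's topic class
subscription = "SensorReading"                # subscriber wants any sensor reading

# Keyword matching would fail here; semantic matching succeeds via subsumption.
print(subsumes(subscription, advertisement))  # True
```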

  13. Semantic framework for mapping object-oriented model to semantic web languages

    PubMed Central

    Ježek, Petr; Mouček, Roman

    2015-01-01

    The article deals with and discusses two main approaches to building semantic structures for electrophysiological metadata. It is the use of conventional data structures, repositories, and programming languages on one hand and the use of formal representations of ontologies, known from knowledge representation, such as description logics or semantic web languages on the other hand. Although knowledge engineering offers languages supporting richer semantic means of expression and technologically advanced approaches, conventional data structures and repositories are still popular among developers, administrators and users because of their simplicity, overall intelligibility, and lower demands on technical equipment. The choice of conventional data resources and repositories, however, raises the question of how and where to add semantics that cannot be naturally expressed using them. As one of the possible solutions, this semantics can be added into the structures of the programming language that accesses and processes the underlying data. To support this idea we introduced a software prototype that enables its users to add semantically richer expressions into a Java object-oriented code. This approach does not burden users with additional demands on the programming environment since reflective Java annotations were used as an entry for these expressions. Moreover, additional semantics need not be written by the programmer directly in the code, but it can be collected from non-programmers using a graphic user interface. The mapping that allows the transformation of the semantically enriched Java code into the Semantic Web language OWL was proposed and implemented in a library named the Semantic Framework. This approach was validated by the integration of the Semantic Framework in the EEG/ERP Portal and by the subsequent registration of the EEG/ERP Portal in the Neuroscience Information Framework. PMID:25762923

  14. Semantic framework for mapping object-oriented model to semantic web languages.

    PubMed

    Ježek, Petr; Mouček, Roman

    2015-01-01

    The article deals with and discusses two main approaches to building semantic structures for electrophysiological metadata. It is the use of conventional data structures, repositories, and programming languages on one hand and the use of formal representations of ontologies, known from knowledge representation, such as description logics or semantic web languages on the other hand. Although knowledge engineering offers languages supporting richer semantic means of expression and technologically advanced approaches, conventional data structures and repositories are still popular among developers, administrators and users because of their simplicity, overall intelligibility, and lower demands on technical equipment. The choice of conventional data resources and repositories, however, raises the question of how and where to add semantics that cannot be naturally expressed using them. As one of the possible solutions, this semantics can be added into the structures of the programming language that accesses and processes the underlying data. To support this idea we introduced a software prototype that enables its users to add semantically richer expressions into a Java object-oriented code. This approach does not burden users with additional demands on the programming environment since reflective Java annotations were used as an entry for these expressions. Moreover, additional semantics need not be written by the programmer directly in the code, but it can be collected from non-programmers using a graphic user interface. The mapping that allows the transformation of the semantically enriched Java code into the Semantic Web language OWL was proposed and implemented in a library named the Semantic Framework. This approach was validated by the integration of the Semantic Framework in the EEG/ERP Portal and by the subsequent registration of the EEG/ERP Portal in the Neuroscience Information Framework.
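
    The original work relies on Java reflective annotations; the following is only a loose Python analogue, with an invented decorator and namespace, showing the general idea of harvesting semantic annotations from object-oriented code and emitting OWL with rdflib:

```python
# Sketch: attach a semantic annotation to a class, then export the annotated
# classes as OWL classes. The decorator and IRIs are hypothetical.
from rdflib import Graph, Namespace, RDF, RDFS, Literal
from rdflib.namespace import OWL

EX = Namespace("http://example.org/eeg/")      # placeholder ontology namespace

def semantic_class(iri_fragment):
    """Decorator that records the OWL class an object-oriented class maps to."""
    def wrap(cls):
        cls._owl_class = EX[iri_fragment]
        return cls
    return wrap

@semantic_class("Experiment")
class Experiment:
    """Domain object whose semantics are not expressible in the data model alone."""
    pass

def export_owl(classes):
    g = Graph()
    for cls in classes:
        g.add((cls._owl_class, RDF.type, OWL.Class))
        g.add((cls._owl_class, RDFS.label, Literal(cls.__name__)))
    return g

print(export_owl([Experiment]).serialize(format="turtle"))
```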

  15. Semantic Fluency in Aphasia: Clustering and Switching in the Course of 1 Minute

    ERIC Educational Resources Information Center

    Bose, Arpita; Wood, Rosalind; Kiran, Swathi

    2017-01-01

    Background: Verbal fluency tasks are included in a broad range of aphasia assessments. It is well documented that people with aphasia (PWA) produce fewer items in these tasks. Successful performance on verbal fluency relies on the integrity of both linguistic and executive control abilities. It remains unclear if limited output in aphasia is…

  16. Event-Related Potentials Discriminate Familiar and Unusual Goal Outcomes in 5-Month-Olds and Adults

    ERIC Educational Resources Information Center

    Michel, Christine; Kaduk, Katharina; Ní Choisdealbha, Áine; Reid, Vincent M.

    2017-01-01

    Previous event-related potential (ERP) work has indicated that the neural processing of action sequences develops with age. Although adults and 9-month-olds use a semantic processing system, perceiving actions activates attentional processes in 7-month-olds. However, presenting a sequence of action context, action execution and action conclusion…

  17. Service composition towards increasing end-user accessibility.

    PubMed

    Kaklanis, Nikolaos; Votis, Konstantinos; Tzovaras, Dimitrios

    2015-01-01

    This paper presents the Cloud4all Service Synthesizer Tool, a framework that enables efficient orchestration of accessibility services, as well as their combination into complex forms, providing more advanced functionalities towards increasing the accessibility of end-users with various types of functional limitations. The supported services are described formally within an ontology, thus enabling semantic service composition. The proposed service composition approach is based on semantic matching between service specifications on the one hand and user needs/preferences and the current context of use on the other. The use of automatic composition of accessibility services can significantly enhance end-users' accessibility, especially in cases where assistive solutions are not available on their device.

  18. Selective Audiovisual Semantic Integration Enabled by Feature-Selective Attention

    PubMed Central

    Li, Yuanqing; Long, Jinyi; Huang, Biao; Yu, Tianyou; Wu, Wei; Li, Peijun; Fang, Fang; Sun, Pei

    2016-01-01

    An audiovisual object may contain multiple semantic features, such as the gender and emotional features of the speaker. Feature-selective attention and audiovisual semantic integration are two brain functions involved in the recognition of audiovisual objects. Humans often selectively attend to one or several features while ignoring the other features of an audiovisual object. Meanwhile, the human brain integrates semantic information from the visual and auditory modalities. However, how these two brain functions correlate with each other remains to be elucidated. In this functional magnetic resonance imaging (fMRI) study, we explored the neural mechanism by which feature-selective attention modulates audiovisual semantic integration. During the fMRI experiment, the subjects were presented with visual-only, auditory-only, or audiovisual dynamical facial stimuli and performed several feature-selective attention tasks. Our results revealed that a distribution of areas, including heteromodal areas and brain areas encoding attended features, may be involved in audiovisual semantic integration. Through feature-selective attention, the human brain may selectively integrate audiovisual semantic information from attended features by enhancing functional connectivity and thus regulating information flows from heteromodal areas to brain areas encoding the attended features. PMID:26759193

  19. Information-Systems Data-Flow Diagram

    NASA Technical Reports Server (NTRS)

    Blosiu, J. O.

    1983-01-01

    Single form presents clear picture of entire system. Form giving relational review of data flow well suited to information system planning, analysis, engineering, and management. Used to review data flow for developing system or one already in use.

  20. Levodopa enhances explicit new-word learning in healthy adults: a preliminary study.

    PubMed

    Shellshear, Leanne; MacDonald, Anna D; Mahoney, Jeffrey; Finch, Emma; McMahon, Katie; Silburn, Peter; Nathan, Pradeep J; Copland, David A

    2015-09-01

    While the role of dopamine in modulating executive function, working memory and associative learning has been established, its role in word learning and language processing more generally is not clear. This preliminary study investigated the impact of increased synaptic dopamine levels on new-word learning ability in healthy young adults using an explicit learning paradigm. A double-blind, placebo-controlled, between-groups design was used. Participants completed five learning sessions over 1 week with levodopa or placebo administered at each session (five doses, 100 mg). Each session involved a study phase followed by a test phase. Test phases involved recall and recognition tests of the new (non-word) names previously paired with unfamiliar objects (half with semantic descriptions) during the study phase. The levodopa group showed superior recall accuracy for new words over five learning sessions compared with the placebo group and better recognition accuracy at a 1-month follow-up for words learnt with a semantic description. These findings suggest that dopamine boosts initial lexical acquisition and enhances longer-term consolidation of words learnt with semantic information, consistent with dopaminergic enhancement of semantic salience. Copyright © 2015 John Wiley & Sons, Ltd.

  1. Single Sided Messaging v. 0.6.6

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Curry, Matthew Leon; Farmer, Matthew Shane; Hassani, Amin

    Single-Sided Messaging (SSM) is a portable, multitransport networking library that enables applications to leverage potential one-sided capabilities of underlying network transports. It also provides desirable semantics that services for high-performance, massively parallel computers can leverage, such as an explicit cancel operation for pending transmissions, as well as enhanced matching semantics favoring large numbers of buffers attached to a single match entry. This release supports TCP/IP, shared memory, and InfiniBand.

  2. Frequency Monitoring: A Methodology for Assessing the Organization of Information

    DTIC Science & Technology

    1988-08-01

    Memory & Cognition, 6, 410-415. Tulving, E. Episodic and semantic memory (1972). In E. Tulving & W. Donaldson (Ed.), Organization and memory . New...are stored in episodic memory (Tulving, 1972). These global-level memory units enable people to make important decisions about such significant... semantically similar. However, as indicated earlier, an advantage of frequency-estimation tests of memory is that they do not require the presentation of

  3. Semantic and syntactic interoperability in online processing of big Earth observation data.

    PubMed

    Sudmanns, Martin; Tiede, Dirk; Lang, Stefan; Baraldi, Andrea

    2018-01-01

    The challenge of enabling syntactic and semantic interoperability for comprehensive and reproducible online processing of big Earth observation (EO) data is still unsolved. Supporting both types of interoperability is one of the requirements to efficiently extract valuable information from the large amount of available multi-temporal gridded data sets. The proposed system wraps world models (semantic interoperability) into OGC Web Processing Services (syntactic interoperability) for semantic online analyses. World models describe spatio-temporal entities and their relationships in a formal way. The proposed system serves as an enabler for (1) technical interoperability using a standardised interface to be used by all types of clients and (2) allowing experts from different domains to develop complex analyses together as a collaborative effort. Users connect the world models online to the data, which are maintained in centralised storage as 3D spatio-temporal data cubes. The system also allows non-experts to extract valuable information from EO data because data management, low-level interactions or specific software issues can be ignored. We discuss the concept of the proposed system, provide a technical implementation example and describe three use cases for extracting changes from EO images, and demonstrate the usability also for non-EO, gridded, multi-temporal data sets (CORINE land cover).

  4. Semantic and syntactic interoperability in online processing of big Earth observation data

    PubMed Central

    Sudmanns, Martin; Tiede, Dirk; Lang, Stefan; Baraldi, Andrea

    2018-01-01

    The challenge of enabling syntactic and semantic interoperability for comprehensive and reproducible online processing of big Earth observation (EO) data is still unsolved. Supporting both types of interoperability is one of the requirements to efficiently extract valuable information from the large amount of available multi-temporal gridded data sets. The proposed system wraps world models (semantic interoperability) into OGC Web Processing Services (syntactic interoperability) for semantic online analyses. World models describe spatio-temporal entities and their relationships in a formal way. The proposed system serves as an enabler for (1) technical interoperability using a standardised interface to be used by all types of clients and (2) allowing experts from different domains to develop complex analyses together as a collaborative effort. Users connect the world models online to the data, which are maintained in centralised storage as 3D spatio-temporal data cubes. The system also allows non-experts to extract valuable information from EO data because data management, low-level interactions or specific software issues can be ignored. We discuss the concept of the proposed system, provide a technical implementation example and describe three use cases for extracting changes from EO images, and demonstrate the usability also for non-EO, gridded, multi-temporal data sets (CORINE land cover). PMID:29387171
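
    As a minimal illustration of evaluating a simple world-model-style rule over a 3D spatio-temporal data cube (synthetic data and an invented change rule, not the proposed system's actual semantics):

```python
# Sketch: apply a toy "change" rule over a (time, y, x) data cube.
import numpy as np

rng = np.random.default_rng(0)
cube = rng.random((4, 100, 100))          # stand-in for a multi-temporal EO stack

# World-model-style rule: "changed" means the value differs by more than 0.5
# between the first and last time step (purely illustrative semantics).
changed = np.abs(cube[-1] - cube[0]) > 0.5

print("changed pixels:", int(changed.sum()), "of", changed.size)
```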

  5. Semantic Agent-Based Service Middleware and Simulation for Smart Cities

    PubMed Central

    Liu, Ming; Xu, Yang; Hu, Haixiao; Mohammed, Abdul-Wahid

    2016-01-01

    With the development of Machine-to-Machine (M2M) technology, a variety of embedded and mobile devices are integrated to interact via the platform of the Internet of Things, especially in the domain of smart cities. One of the primary challenges is that selecting the appropriate services or service combination for upper-layer applications is hard, owing to the absence of a unified semantic service description pattern as well as of a service selection mechanism. In this paper, we define a semantic service representation model from four key properties: Capability (C), Deployment (D), Resource (R) and IOData (IO). Based on this model, an agent-based middleware is built to support semantic service enablement. In this middleware, we present an efficient semantic service discovery and matching approach for the service combination process, which calculates the semantic similarity between services, and a heuristic algorithm to search the service candidates for a specific service request. Based on this design, we propose a simulation of virtual urban fire fighting, and the experimental results demonstrate the feasibility and efficiency of our design. PMID:28009818

  6. Semantic Agent-Based Service Middleware and Simulation for Smart Cities.

    PubMed

    Liu, Ming; Xu, Yang; Hu, Haixiao; Mohammed, Abdul-Wahid

    2016-12-21

    With the development of Machine-to-Machine (M2M) technology, a variety of embedded and mobile devices are integrated to interact via the platform of the Internet of Things, especially in the domain of smart cities. One of the primary challenges is that selecting the appropriate services or service combination for upper-layer applications is hard, owing to the absence of a unified semantic service description pattern as well as of a service selection mechanism. In this paper, we define a semantic service representation model from four key properties: Capability (C), Deployment (D), Resource (R) and IOData (IO). Based on this model, an agent-based middleware is built to support semantic service enablement. In this middleware, we present an efficient semantic service discovery and matching approach for the service combination process, which calculates the semantic similarity between services, and a heuristic algorithm to search the service candidates for a specific service request. Based on this design, we propose a simulation of virtual urban fire fighting, and the experimental results demonstrate the feasibility and efficiency of our design.
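
    A minimal sketch of scoring candidate services against a request across the four properties (C, D, R, IO); the descriptors, weights and Jaccard measure below are invented for illustration and are not the paper's actual similarity calculation:

```python
# Sketch: weighted property-wise similarity between a service request and candidates.
def property_similarity(requested, offered):
    """Jaccard overlap of the terms describing one property."""
    requested, offered = set(requested), set(offered)
    return len(requested & offered) / len(requested | offered) if requested | offered else 1.0

WEIGHTS = {"C": 0.4, "D": 0.2, "R": 0.2, "IO": 0.2}   # illustrative weighting

def service_similarity(request, service):
    return sum(w * property_similarity(request[p], service[p]) for p, w in WEIGHTS.items())

request = {"C": ["detect_fire"], "D": ["rooftop"], "R": ["camera"], "IO": ["video"]}
candidates = {
    "svc_a": {"C": ["detect_fire"], "D": ["rooftop"], "R": ["camera"], "IO": ["video"]},
    "svc_b": {"C": ["detect_smoke"], "D": ["street"], "R": ["camera"], "IO": ["video"]},
}
# Pick the best-matching candidate for the request (svc_a in this toy example).
print(max(candidates, key=lambda s: service_similarity(request, candidates[s])))
```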

  7. Effects of semantic relatedness on age-related associative memory deficits: the role of theta oscillations.

    PubMed

    Crespo-Garcia, Maite; Cantero, Jose L; Atienza, Mercedes

    2012-07-16

    Growing evidence suggests that age-related deficits in associative memory are alleviated when the to-be-associated items are semantically related. Here we investigate whether this beneficial effect of semantic relatedness is paralleled by spatio-temporal changes in cortical EEG dynamics during incidental encoding. Young and older adults were presented with faces at a particular spatial location preceded by a biographical cue that was either semantically related or unrelated. As expected, automatic encoding of face-location associations benefited from semantic relatedness in both age groups. This effect correlated with increased power of theta oscillations over medial and anterior lateral regions of the prefrontal cortex (PFC) and lateral regions of the posterior parietal cortex (PPC) in both groups. But better-performing elders also showed increased brain-behavior correlation in the theta band over the right inferior frontal gyrus (IFG) as compared to young adults. Semantic relatedness was, however, insufficient to fully eliminate age-related differences in associative memory. In line with this finding, poorer-performing elders relative to young adults showed significant reductions of theta power in the left IFG that were further predictive of behavioral impairment in the recognition task. Altogether, these results suggest that older adults benefit less than young adults from executive processes during encoding mainly due to neural inefficiency over regions of the left ventrolateral prefrontal cortex (VLPFC). But this associative deficit may be partially compensated for by engaging preexistent semantic knowledge, which likely leads to an efficient recruitment of attentional and integration processes supported by the left PPC and left anterior PFC respectively, together with neural compensatory mechanisms governed by the right VLPFC. Copyright © 2012 Elsevier Inc. All rights reserved.

  8. From the Bench to the Bedside: The Role of Semantic Web and Translational Medicine for Enabling the Next Generation Healthcare Enterprise

    NASA Astrophysics Data System (ADS)

    Kashyap, Vipul

    New innovations and technologies are very often disruptive in nature. At the same time, they enable novel next-generation infrastructures and solutions. These solutions introduce efficiencies in the form of streamlined processes and the ability to create, organize, share and manage knowledge effectively, and at the same time provide crucial enablers for proposing and realizing new visions. In this paper, we propose a new vision of the next-generation healthcare enterprise and discuss how Translational Medicine, which aims to improve communication between the basic and clinical sciences, is a key requirement for achieving this vision. In this way, therapeutic insights may be derived from new scientific ideas, and vice versa. Translational research goes from bench to bedside, where theories emerging from preclinical experimentation are tested on disease-affected human subjects, and from bedside to bench, where information obtained from preliminary human experimentation can be used to refine our understanding of the biological principles underpinning the heterogeneity of human disease and polymorphism(s). Informatics, and semantic technologies in particular, has a big role to play in making this a reality. We identify critical requirements, namely data integration, clinical decision support, and knowledge maintenance and provenance, and illustrate semantics-based solutions with respect to example scenarios and use cases.

  9. Enabling interoperability in planetary sciences and heliophysics: The case for an information model

    NASA Astrophysics Data System (ADS)

    Hughes, J. Steven; Crichton, Daniel J.; Raugh, Anne C.; Cecconi, Baptiste; Guinness, Edward A.; Isbell, Christopher E.; Mafi, Joseph N.; Gordon, Mitchell K.; Hardman, Sean H.; Joyner, Ronald S.

    2018-01-01

    The Planetary Data System has developed the PDS4 Information Model to enable interoperability across diverse science disciplines. The Information Model is based on an integration of International Organization for Standardization (ISO) level standards for trusted digital archives, information model development, and metadata registries. Whereas controlled vocabularies provide a basic level of interoperability through a common set of terms for communication between both machines and humans, the Information Model improves interoperability by means of an ontology that provides semantic information, or additional related context, for the terms. The Information Model was defined by a team of computer scientists and science experts from each of the diverse disciplines in the Planetary Science community, including Atmospheres, Geosciences, Cartography and Imaging Sciences, Navigational and Ancillary Information, Planetary Plasma Interactions, Ring-Moon Systems, and Small Bodies. The model was designed to be extensible beyond the Planetary Science community; for example, there are overlaps between certain PDS disciplines and the Heliophysics and Astrophysics disciplines. "Interoperability" can apply to many aspects of both the developer and the end-user experience, for example agency-to-agency, semantic-level, and application-level interoperability. We define these types of interoperability and focus on semantic-level interoperability, the type of interoperability most directly enabled by an information model.

  10. Enhancing Users' Participation in Business Process Modeling through Ontology-Based Training

    NASA Astrophysics Data System (ADS)

    Macris, A.; Malamateniou, F.; Vassilacopoulos, G.

    Successful business process design requires active participation of users who are familiar with organizational activities and business process modelling concepts. Hence, there is a need to provide users with reusable, flexible, agile and adaptable training material in order to enable them to instil their knowledge and expertise in business process design and automation activities. Knowledge reusability is of paramount importance in designing training material on process modelling since it enables users to participate actively in process design/redesign activities stimulated by the changing business environment. This paper presents a prototype approach for the design and use of training material that provides significant advantages to both the designer (knowledge and content reusability, and semantic web enablement) and the user (semantic search, knowledge navigation and knowledge dissemination). The approach is based on externalizing domain knowledge in the form of ontology-based knowledge networks (i.e. training scenarios serving specific training needs) so that it is made reusable.

  11. Synergistic Instance-Level Subspace Alignment for Fine-Grained Sketch-Based Image Retrieval.

    PubMed

    Li, Ke; Pang, Kaiyue; Song, Yi-Zhe; Hospedales, Timothy M; Xiang, Tao; Zhang, Honggang

    2017-08-25

    We study the problem of fine-grained sketch-based image retrieval. By performing instance-level (rather than category-level) retrieval, it embodies a timely and practical application, particularly with the ubiquitous availability of touchscreens. Three factors contribute to the challenging nature of the problem: (i) free-hand sketches are inherently abstract and iconic, making visual comparisons with photos difficult, (ii) sketches and photos are in two different visual domains, i.e. black and white lines vs. color pixels, and (iii) fine-grained distinctions are especially challenging when executed across domain and abstraction-level. To address these challenges, we propose to bridge the image-sketch gap both at the high-level via parts and attributes, as well as at the low-level, via introducing a new domain alignment method. More specifically, (i) we contribute a dataset with 304 photos and 912 sketches, where each sketch and image is annotated with its semantic parts and associated part-level attributes. With the help of this dataset, we investigate (ii) how strongly-supervised deformable part-based models can be learned that subsequently enable automatic detection of part-level attributes, and provide pose-aligned sketch-image comparisons. To reduce the sketch-image gap when comparing low-level features, we also (iii) propose a novel method for instance-level domain-alignment, that exploits both subspace and instance-level cues to better align the domains. Finally (iv) these are combined in a matching framework integrating aligned low-level features, mid-level geometric structure and high-level semantic attributes. Extensive experiments conducted on our new dataset demonstrate effectiveness of the proposed method.

  12. Automated geospatial Web Services composition based on geodata quality requirements

    NASA Astrophysics Data System (ADS)

    Cruz, Sérgio A. B.; Monteiro, Antonio M. V.; Santos, Rafael

    2012-10-01

    Service-Oriented Architecture and Web Services technologies improve the performance of activities involved in geospatial analysis on a distributed computing architecture. However, the design of the geospatial analysis process on this platform, by combining component Web Services, presents some open issues. The automated construction of these compositions represents an important research topic. Some approaches to solving this problem are based on AI planning methods coupled with semantic service descriptions. This work presents a new approach using AI planning methods to improve the robustness of the produced geospatial Web Services composition. For this purpose, we use semantic descriptions of geospatial data quality requirements in a rule-based form. These rules allow the semantic annotation of geospatial data and, coupled with the conditional planning method, represent more precisely the situations of nonconformity with geodata quality that may occur during the execution of the Web Service composition. The service compositions produced by this method are more robust, thus improving process reliability when working with a composition of chained geospatial Web Services.
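
    A minimal sketch of a rule-based geodata quality check that could guard one step of a chained composition; the field names and thresholds are invented, and the actual approach encodes such rules as semantic annotations consumed by a conditional planner:

```python
# Sketch: gate a service invocation on simple geodata quality rules, branching
# to a fallback when a nonconformity is detected.
QUALITY_RULES = [
    ("cloud_cover", lambda v: v <= 0.2),            # at most 20% cloud cover
    ("positional_accuracy_m", lambda v: v <= 30.0), # at most 30 m positional error
]

def conforms(metadata):
    return all(rule(metadata.get(field)) for field, rule in QUALITY_RULES
               if metadata.get(field) is not None)

scene = {"cloud_cover": 0.35, "positional_accuracy_m": 12.0}   # hypothetical scene metadata

if conforms(scene):
    print("invoke next service in the composition on this scene")
else:
    print("nonconformity detected: plan a fallback branch (e.g. request another scene)")
```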

  13. N400 ERPs for actions: building meaning in context

    PubMed Central

    Amoruso, Lucía; Gelormini, Carlos; Aboitiz, Francisco; Alvarez González, Miguel; Manes, Facundo; Cardona, Juan F.; Ibanez, Agustín

    2013-01-01

    Converging neuroscientific evidence suggests the existence of close links between language and sensorimotor cognition. Accordingly, during the comprehension of meaningful actions, our brain would recruit semantic-related operations similar to those associated with the processing of language information. Consistent with this view, electrophysiological findings show that the N400 component, traditionally linked to the semantic processing of linguistic material, can also be elicited by action-related material. This review outlines recent data from N400 studies that examine the understanding of action events. We focus on three specific domains, including everyday action comprehension, co-speech gesture integration, and the semantics involved in motor planning and execution. Based on the reviewed findings, we suggest that both negativities (the N400 and the action-N400) reflect a common neurocognitive mechanism involved in the construction of meaning through the expectancies created by previous experiences and current contextual information. To shed light on how this process is instantiated in the brain, a testable contextual fronto-temporo-parietal model is proposed. PMID:23459873

  14. The effect of the COMT val(158)met polymorphism on neural correlates of semantic verbal fluency.

    PubMed

    Krug, Axel; Markov, Valentin; Sheldrick, Abigail; Krach, Sören; Jansen, Andreas; Zerres, Klaus; Eggermann, Thomas; Stöcker, Tony; Shah, N Jon; Kircher, Tilo

    2009-12-01

    Variation in the val(158)met polymorphism of the COMT gene has been found to be associated with cognitive performance. In functional neuroimaging studies, this dysfunction has been linked to signal changes in prefrontal areas. Given the complex modulation and functional heterogeneity of frontal lobe systems, further specification of COMT gene-related phenotypes differing in prefrontally mediated cognitive performance is of major interest. Eighty healthy individuals (54 men, 26 women; mean age 23.3 years) performed an overt semantic verbal fluency task while brain activation was measured with functional magnetic resonance imaging (fMRI). COMT val(158)met genotype was determined and correlated with brain activation measured with fMRI during the task. Although there were no differences in performance, brain activation in the left inferior frontal gyrus (Brodmann area 10) was positively correlated with the number of val alleles in the COMT gene. COMT val(158)met status modulates brain activation during language production at the semantic level in an area related to executive functions.

  15. Degenerative jargon aphasia: unusual progression of logopenic/phonological progressive aphasia?

    PubMed

    Caffarra, Paolo; Gardini, Simona; Cappa, Stefano; Dieci, Francesca; Concari, Letizia; Barocco, Federica; Ghetti, Caterina; Ruffini, Livia; Prati, Guido Dalla Rosa

    2013-01-01

    Primary progressive aphasia (PPA) corresponds to the gradual degeneration of language which can occur as nonfluent/agrammatic PPA, semantic variant PPA or logopenic variant PPA. We describe the clinical evolution of a patient with PPA presenting jargon aphasia as a late feature. At the onset of the disease (ten years ago) the patient showed anomia and executive deficits, followed later on by phonemic paraphasias and neologisms, and deficits in verbal short-term memory, naming, and verbal and semantic fluency. At a recent follow-up the patient had developed an unintelligible jargon with both semantic and neologistic errors, as well as a severe comprehension deficit that precluded any further neuropsychological assessment. Compared to healthy controls, FDG-PET showed a hypometabolism in the left angular and middle temporal gyri, precuneus, caudate, posterior cingulate, middle frontal gyrus, and bilaterally in the superior temporal and inferior frontal gyri. The clinical and neuroimaging profile seems to support the hypothesis that the patient developed a late feature of logopenic variant PPA characterized by jargon aphasia and associated with superior temporal and parietal dysfunction.

  16. Problems of teaching students to use the featured technologies in the area of semantic web

    NASA Astrophysics Data System (ADS)

    Klimov, V. V.; Chernyshov, A. A.; Balandina, A. I.; Kostkina, A. D.

    2017-01-01

    This paper describes current technologies in the areas of Web service development, service-oriented architecture and the Semantic Web. It analyses the most popular and widespread technologies and methods in the Semantic Web area that are used in the educational course we have developed. We also describe the problem of teaching students to use these technologies, specify the conditions for creating the learning and development course, and describe the main exercises for independent work and the skills that all students taking the course are expected to gain. Moreover, we address the problem of the software that students will use while taking the course. To solve this problem, we introduce the system under development that will support the laboratory work. At present the system supports only the fourth laboratory exercise, but we plan to extend it to cover the remaining exercises.

  17. Mapping interference resolution across task domains: A shared control process in left inferior frontal gyrus

    PubMed Central

    Nelson, James K.; Reuter-Lorenz, Patricia A.; Persson, Jonas; Sylvester, Ching-Yune C.; Jonides, John

    2009-01-01

    Work in functional neuroimaging has mapped interference resolution processing onto left inferior frontal regions for both verbal working memory and a variety of semantic processing tasks. The proximity of the identified regions from these different tasks suggests the existence of a common, domain-general interference resolution mechanism. The current research specifically tests this idea in a within-subject design using fMRI to assess the activation associated with variable selection requirements in a semantic retrieval task (verb generation) and a verbal working memory task with a trial-specific proactive interference manipulation (recent-probes). High interference trials on both tasks were associated with activity in the midventrolateral region of the left inferior frontal gyrus, and the regions activated in each task strongly overlapped. The results indicate that an elemental component of executive control associated with interference resolution during retrieval from working memory and from semantic memory can be mapped to a common portion of the left inferior frontal gyrus. PMID:19111526

  18. Designing Class Methods from Dataflow Diagrams

    NASA Astrophysics Data System (ADS)

    Shoval, Peretz; Kabeli-Shani, Judith

    A method for designing the class methods of an information system is described. The method is part of FOOM - Functional and Object-Oriented Methodology. In the analysis phase of FOOM, two models defining the users' requirements are created: a conceptual data model - an initial class diagram; and a functional model - hierarchical OO-DFDs (object-oriented dataflow diagrams). Based on these models, a well-defined process of methods design is applied. First, the OO-DFDs are converted into transactions, i.e., system processes that support a user task. The components and the process logic of each transaction are described in detail, using pseudocode. Then, each transaction is decomposed, according to well-defined rules, into class methods of various types: basic methods, application-specific methods and main transaction (control) methods. Each method is attached to a proper class; messages between methods express the process logic of each transaction. The methods are defined using pseudocode or message charts.
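
    A toy illustration of that decomposition, written in Python rather than FOOM's pseudocode or message charts: the transaction's process logic lives in a main (control) method that sends messages to basic and application-specific methods attached to the participating classes. The class and method names are invented for the example.

    ```python
    # Toy illustration (not FOOM's actual notation) of the three method types.

    class Customer:
        def __init__(self, customer_id, name):
            self.customer_id, self.name = customer_id, name

        # Basic method: elementary read of class data.
        def get_name(self):
            return self.name

    class Order:
        def __init__(self):
            self.lines = []

        # Basic method: elementary update of class data.
        def add_line(self, product, quantity):
            self.lines.append((product, quantity))

        # Application-specific method: logic particular to this transaction.
        def total_items(self):
            return sum(q for _, q in self.lines)

    class PlaceOrderTransaction:
        # Main (control) method: realises the transaction's process logic by
        # sending messages to the basic and application-specific methods.
        def execute(self, customer, order, items):
            for product, quantity in items:
                order.add_line(product, quantity)
            return f"{customer.get_name()} ordered {order.total_items()} items"

    print(PlaceOrderTransaction().execute(Customer(1, "Ada"), Order(),
                                          [("widget", 2), ("gadget", 1)]))
    ```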

  19. Study of Thread Level Parallelism in a Video Encoding Application for Chip Multiprocessor Design

    NASA Astrophysics Data System (ADS)

    Debes, Eric; Kaine, Greg

    2002-11-01

    In media applications there is a high level of available thread level parallelism (TLP). In this paper we study the intra TLP in a video encoder. We show that a well-distributed, highly optimized encoder running on a symmetric multiprocessor (SMP) system can run 3.2 times faster on a 4-way SMP machine than on a single processor. The multithreaded encoder running on an SMP system is then used to understand the requirements of a chip multiprocessor (CMP) architecture, which is one possible architectural direction to better exploit TLP. In the framework of this study, we use a software approach to evaluate the dataflow between processors for the video encoder running on an SMP system. An estimation of the dataflow is done with L2 cache miss event counters using the Intel VTune performance analyzer. The experimental measurements are compared to theoretical results.
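
    A back-of-the-envelope version of that dataflow estimate: each L2 miss implies at most one cache line transferred, so the miss count times the line size bounds the traffic. The counter values, the 64-byte line size, and the encoding time below are made-up illustrative numbers, not the paper's measurements.

    ```python
    # Rough dataflow estimate from cache-miss event counts: every L2 miss moves
    # (at most) one cache line from memory or another processor's cache.
    L2_LINE_BYTES = 64          # assumed cache-line size
    encode_time_s = 12.5        # assumed wall-clock time for the encoded sequence

    l2_misses_per_thread = {    # illustrative counter readings, not measured data
        "motion_estimation": 8.2e7,
        "dct_quant": 3.1e7,
        "entropy_coding": 1.4e7,
    }

    for thread, misses in l2_misses_per_thread.items():
        traffic_mb = misses * L2_LINE_BYTES / 1e6
        bandwidth_mb_s = traffic_mb / encode_time_s
        print(f"{thread:>17}: ~{traffic_mb:7.1f} MB moved, ~{bandwidth_mb_s:6.1f} MB/s")
    ```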

  20. Assessing semantic similarity of texts - Methods and algorithms

    NASA Astrophysics Data System (ADS)

    Rozeva, Anna; Zerkova, Silvia

    2017-12-01

    Assessing the semantic similarity of texts is an important part of different text-related applications like educational systems, information retrieval, text summarization, etc. This task is performed by sophisticated analysis, which implements text-mining techniques. Text mining involves several pre-processing steps, which provide a structured, representative model of the documents in a corpus by extracting and selecting the features that characterize their content. Generally the model is vector-based and enables further analysis with knowledge discovery approaches. Algorithms and measures are used for assessing texts at the syntactic and semantic levels. An important text-mining method and similarity measure is latent semantic analysis (LSA). It reduces the dimensionality of the document vector space and better captures the text semantics. The mathematical background of LSA for deriving the meaning of the words in a given text by exploring their co-occurrence is examined. The algorithm for obtaining the vector representation of words and their corresponding latent concepts in a reduced multidimensional space, as well as the similarity calculation, is presented.
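
    A minimal LSA sketch, assuming scikit-learn is available (the paper does not prescribe a particular library): build a TF-IDF term-document matrix, truncate its SVD to a low-dimensional latent space, and compare documents by cosine similarity there.

    ```python
    # Minimal latent semantic analysis: TF-IDF matrix, truncated SVD to a
    # low-dimensional latent concept space, cosine similarity in that space.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.decomposition import TruncatedSVD
    from sklearn.metrics.pairwise import cosine_similarity

    docs = [
        "students submit essays to the educational system",
        "the system scores essays by semantic similarity",
        "the weather was sunny on the day of the hike",
    ]

    tfidf = TfidfVectorizer().fit_transform(docs)      # documents x terms
    lsa = TruncatedSVD(n_components=2, random_state=0)
    doc_vectors = lsa.fit_transform(tfidf)             # documents x latent concepts

    print(cosine_similarity(doc_vectors))              # pairwise semantic similarity
    ```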

  1. Semantic distance as a critical factor in icon design for in-car infotainment systems.

    PubMed

    Silvennoinen, Johanna M; Kujala, Tuomo; Jokinen, Jussi P P

    2017-11-01

    In-car infotainment systems require icons that enable fluent cognitive information processing and safe interaction while driving. An important issue is how to find an optimised set of icons for different functions in terms of semantic distance. In an optimised icon set, every icon needs to be semantically as close as possible to the function it visually represents and semantically as far as possible from the other functions represented concurrently. In three experiments (N = 21 each), semantic distances of 19 icons to four menu functions were studied with preference rankings, verbal protocols, and the primed product comparisons method. The results show that the primed product comparisons method can be efficiently utilised for finding an optimised set of icons for time-critical applications out of a larger set of icons. The findings indicate the benefits of the novel methodological perspective into the icon design for safety-critical contexts in general. Copyright © 2017 Elsevier Ltd. All rights reserved.

  2. Postoperative cognitive changes after total knee arthroplasty under regional anesthesia

    PubMed Central

    Jeon, Young-Tae; Kim, Byung-Gun; Park, Young Ho; Sohn, Hye-Min; Kim, Jungeun; Kim, Seung Chan; An, Seong Soo; Kim, SangYun

    2016-01-01

    Abstract Background: The type of postoperative cognitive decline after surgery under spinal anesthesia is unknown. We investigated the type of postoperative cognitive decline after total knee arthroplasty (TKA). Neuropsychological testing was conducted and the changes in cerebrospinal fluid (CSF) biomarkers after surgery were evaluated. Methods: Fifteen patients who required bilateral TKA at a 1-week interval under spinal anesthesia were included. Neuropsychological tests were performed twice, once the day before the first operation and just before the second operation (usually 1 week after the first test) to determine cognitive decline. Validated neuropsychological tests were used to examine 4 types of cognitive decline: memory, frontal-executive, language-semantic, and others. Concentrations of CSF amyloid peptide, tau protein, and S100B were measured twice during spinal anesthesia at a 1-week interval. The patients showed poor performance in frontal-executive function (forward digit span, semantic fluency, letter-phonemic fluency, and Stroop color reading) at the second compared to the first neuropsychological assessment. Results: S100B concentration decreased significantly 1 week after the operation compared to the basal value (638 ± 178 vs 509 ± 167 pg/mL) (P = 0.019). Amyloid protein β1–42, total tau, and phosphorylated tau concentrations tended to decrease but the changes were not significant. Conclusion: Our results suggest that frontal-executive function declined 1 week after TKA under spinal anesthesia. The CSF biomarker analysis indicated that TKA under regional anesthesia might not cause neuronal damage. PMID:28033253

  3. The functional neuroanatomy of autobiographical memory: A meta-analysis

    PubMed Central

    Svoboda, Eva; McKinnon, Margaret C.; Levine, Brian

    2007-01-01

    Autobiographical memory (AM) entails a complex set of operations, including episodic memory, self-reflection, emotion, visual imagery, attention, executive functions, and semantic processes. The heterogeneous nature of AM poses significant challenges in capturing its behavioral and neuroanatomical correlates. Investigators have recently turned their attention to the functional neuroanatomy of AM. We used the effect-location method of meta-analysis to analyze data from 24 functional imaging studies of AM. The results indicated a core neural network of left-lateralized regions, including the medial and ventrolateral prefrontal, medial and lateral temporal and retrosplenial/posterior cingulate cortices, the temporoparietal junction and the cerebellum. Secondary and tertiary regions, less frequently reported in imaging studies of AM, are also identified. We examined the neural correlates of putative component processes in AM, including executive functions, self-reflection, episodic remembering and visuospatial processing. We also separately analyzed the effect of select variables on the AM network across individual studies, including memory age, qualitative factors (personal significance, level of detail and vividness), semantic and emotional content, and the effect of reference conditions. We found that memory age effects on medial temporal lobe structures may be modulated by qualitative aspects of memory. Studies using rest as a control task masked process-specific components of the AM neural network. Our findings support a neural distinction between episodic and semantic memory in AM. Finally, emotional events produced a shift in lateralization of the AM network with activation observed in emotion-centered regions and deactivation (or lack of activation) observed in regions associated with cognitive processes. PMID:16806314

  4. A system for environmental model coupling and code reuse: The Great Rivers Project

    NASA Astrophysics Data System (ADS)

    Eckman, B.; Rice, J.; Treinish, L.; Barford, C.

    2008-12-01

    As part of the Great Rivers Project, IBM is collaborating with The Nature Conservancy and the Center for Sustainability and the Global Environment (SAGE) at the University of Wisconsin, Madison to build a Modeling Framework and Decision Support System (DSS) designed to help policy makers and a variety of stakeholders (farmers, fish & wildlife managers, hydropower operators, et al.) to assess, come to consensus, and act on land use decisions representing effective compromises between human use and ecosystem preservation/restoration. Initially focused on Brazil's Paraguay-Parana, China's Yangtze, and the Mississippi Basin in the US, the DSS integrates data and models from a wide variety of environmental sectors, including water balance, water quality, carbon balance, crop production, hydropower, and biodiversity. In this presentation we focus on the modeling framework aspect of this project. In our approach to these and other environmental modeling projects, we see a flexible, extensible modeling framework infrastructure for defining and running multi-step analytic simulations as critical. In this framework, we divide monolithic models into atomic components with clearly defined semantics encoded via rich metadata representation. Once models and their semantics and composition rules have been registered with the system by their authors or other experts, non-expert users may construct simulations as workflows of these atomic model components. A model composition engine enforces rules/constraints for composing model components into simulations, to avoid the creation of Frankenmodels, models that execute but produce scientifically invalid results. A common software environment and common representations of data and models are required, as well as an adapter strategy for code written in, e.g., Fortran or Python, that still enables efficient simulation runs, including parallelization. Since each new simulation, as a new composition of model components, requires calibration of parameters (fudge factors) to produce scientifically valid results, we are also developing an autocalibration engine. Finally, visualization is a key element of this modeling framework strategy, both to convey complex scientific data effectively and to enable non-expert users to make full use of the relevant features of the framework. We are developing a visualization environment with a strong data model, to enable visualizations, model results, and data all to be handled similarly.
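
    One way such a composition engine could reject a Frankenmodel is to compare the declared semantics of component interfaces before allowing a chain. The metadata vocabulary and the components below are illustrative assumptions, not the project's actual metadata representation.

    ```python
    # Hypothetical composition check: each atomic model component declares the
    # semantic type and units of its inputs and outputs, and the engine refuses
    # to chain components whose interfaces do not match.

    components = {
        "water_balance":   {"inputs": {"precip": ("depth", "mm/day")},
                            "outputs": {"runoff": ("flux", "m3/s")}},
        "water_quality":   {"inputs": {"runoff": ("flux", "m3/s")},
                            "outputs": {"nitrate": ("concentration", "mg/L")}},
        "crop_production": {"inputs": {"runoff": ("flux", "mm/day")},  # mismatched units
                            "outputs": {"yield": ("mass", "t/ha")}},
    }

    def can_chain(upstream, downstream):
        """A downstream component is composable if every input it needs is
        produced upstream with the same semantic type and units."""
        provided = components[upstream]["outputs"]
        needed = components[downstream]["inputs"]
        return all(provided.get(name) == spec for name, spec in needed.items())

    print(can_chain("water_balance", "water_quality"))    # True: valid composition
    print(can_chain("water_balance", "crop_production"))  # False: would be a Frankenmodel
    ```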

  5. Friction in Command and Control: Sources of Conflict in Military Doctrine

    DTIC Science & Technology

    2011-06-01

    Colonel Michael W. Kometer (Date); Dr. S. Michael Pavelec (Date) ... central beliefs. This chapter will attempt to clarify these terms. Unity of Command: Martin Van Creveld defines command as a function that has to ... execution. Control: The semantic demarcation between Air Force and Marine command and control doctrines comes with the different approaches to ...

  6. Formal thought disorder, neuropsychology and insight in schizophrenia.

    PubMed

    Barrera, Alvaro; McKenna, Peter J; Berrios, German E

    2009-01-01

    Information provided by patients with schizophrenia and their respective carers is used to study the descriptive psychopathology and neuropsychology of formal thought disorder (FTD). Relatively intellectually preserved schizophrenia patients (n = 31) exhibiting from no to severe positive FTD completed a self-report scale of FTD, a scale of insight as well as several tests of executive and semantic function. The patients' carers completed another scale of FTD to assess the patients' speech. FTD as self-reported by patients was significantly associated with the synonyms test performance and severity of the reality distortion dimension. FTD as assessed by a clinician and by the patients' carers was significantly associated with executive test performance and performance in a test of associative semantics. Overall insight was significantly associated with severity of the reality distortion dimension and graded naming test performance, but was not associated with self-reported FTD or severity of FTD as assessed by the clinician or carers. The self-reported experience of FTD has different clinical and neuropsychological correlates from those of FTD as assessed by clinicians and carers. The assessment of FTD by patients and carers used along with the clinician's assessment may further the study of this group of symptoms. 2009 S. Karger AG, Basel.

  7. Flexible goal attribution in early mindreading.

    PubMed

    Michael, John; Christensen, Wayne

    2016-03-01

    The 2-systems theory developed by Apperly and Butterfill (2009; Butterfill & Apperly, 2013) is an influential approach to explaining the success of infants and young children on implicit false-belief tasks. There is extensive empirical and theoretical work examining many aspects of this theory, but little attention has been paid to the way in which it characterizes goal attribution. We argue here that this aspect of the theory is inadequate. Butterfill and Apperly's characterization of goal attribution is designed to show how goals could be ascribed by infants without representing them as related to other psychological states, and the minimal mindreading system is supposed to operate without employing flexible semantic-executive cognitive processes. But research on infant goal attribution reveals that infants exhibit a high degree of situational awareness that is strongly suggestive of flexible semantic-executive cognitive processing, and infants appear moreover to be sensitive to interrelations between goals, preferences, and beliefs. Further, close attention to the structure of implicit mindreading tasks (for which the theory was specifically designed) indicates that flexible goal attribution is required to succeed. We conclude by suggesting 2 approaches to resolving these problems. (c) 2016 APA, all rights reserved.

  8. A Proposed Neurological Interpretation of Language Evolution.

    PubMed

    Ardila, Alfredo

    2015-01-01

    Since the very beginning of the aphasia history it has been well established that there are two major aphasic syndromes (Wernicke's-type and Broca's-type aphasia); each one of them is related to the disturbance at a specific linguistic level (lexical/semantic and grammatical) and associated with a particular brain damage localization (temporal and frontal-subcortical). It is proposed that three stages in language evolution could be distinguished: (a) primitive communication systems similar to those observed in other animals, including nonhuman primates; (b) initial communication systems using sound combinations (lexicon) but without relationships among the elements (grammar); and (c) advanced communication systems including word-combinations (grammar). It is proposed that grammar probably originated from the internal representation of actions, resulting in the creation of verbs; this is an ability that depends on the so-called Broca's area and related brain networks. It is suggested that grammar is the basic ability for the development of so-called metacognitive executive functions. It is concluded that while the lexical/semantic language system (vocabulary) probably appeared during human evolution long before the contemporary man (Homo sapiens sapiens), the grammatical language historically represents a recent acquisition and is correlated with the development of complex cognition (metacognitive executive functions).

  9. A Proposed Neurological Interpretation of Language Evolution

    PubMed Central

    2015-01-01

    Since the very beginning of the aphasia history it has been well established that there are two major aphasic syndromes (Wernicke's-type and Broca's-type aphasia); each one of them is related to the disturbance at a specific linguistic level (lexical/semantic and grammatical) and associated with a particular brain damage localization (temporal and frontal-subcortical). It is proposed that three stages in language evolution could be distinguished: (a) primitive communication systems similar to those observed in other animals, including nonhuman primates; (b) initial communication systems using sound combinations (lexicon) but without relationships among the elements (grammar); and (c) advanced communication systems including word-combinations (grammar). It is proposed that grammar probably originated from the internal representation of actions, resulting in the creation of verbs; this is an ability that depends on the so-called Broca's area and related brain networks. It is suggested that grammar is the basic ability for the development of so-called metacognitive executive functions. It is concluded that while the lexical/semantic language system (vocabulary) probably appeared during human evolution long before the contemporary man (Homo sapiens sapiens), the grammatical language historically represents a recent acquisition and is correlated with the development of complex cognition (metacognitive executive functions). PMID:26124540

  10. Neural basis for generalized quantifier comprehension.

    PubMed

    McMillan, Corey T; Clark, Robin; Moore, Peachie; Devita, Christian; Grossman, Murray

    2005-01-01

    Generalized quantifiers like "all cars" are semantically well understood, yet we know little about their neural representation. Our model of quantifier processing includes a numerosity device, operations that combine number elements and working memory. Semantic theory posits two types of quantifiers: first-order quantifiers identify a number state (e.g. "at least 3") and higher-order quantifiers additionally require maintaining a number state actively in working memory for comparison with another state (e.g. "less than half"). We used BOLD fMRI to test the hypothesis that all quantifiers recruit inferior parietal cortex associated with numerosity, while only higher-order quantifiers recruit prefrontal cortex associated with executive resources like working memory. Our findings showed that first-order and higher-order quantifiers both recruit right inferior parietal cortex, suggesting that a numerosity component contributes to quantifier comprehension. Moreover, only probes of higher-order quantifiers recruited right dorsolateral prefrontal cortex, suggesting involvement of executive resources like working memory. We also observed activation of thalamus and anterior cingulate that may be associated with selective attention. Our findings are consistent with a large-scale neural network centered in frontal and parietal cortex that supports comprehension of generalized quantifiers.

  11. Toward Agent Programs with Circuit Semantics

    NASA Technical Reports Server (NTRS)

    Nilsson, Nils J.

    1992-01-01

    New ideas are presented for computing and organizing actions for autonomous agents in dynamic environments, that is, environments in which the agent's current situation cannot always be accurately discerned and in which the effects of actions cannot always be reliably predicted. The notion of 'circuit semantics' for programs based on 'teleo-reactive trees' is introduced. Program execution builds a combinational circuit which receives sensory inputs and controls actions. These formalisms embody a high degree of inherent conditionality and thus yield programs that are suitably reactive to their environments. At the same time, the actions computed by the programs are guided by the overall goals of the agent. The paper also speculates about how programs using these ideas could be automatically generated by artificial intelligence planning systems and adapted by learning methods.
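
    Teleo-reactive programs are ordered lists of condition-action rules that are continuously re-evaluated against the current world model; the first rule whose condition holds determines the action. A minimal interpreter sketch under that reading, with an invented toy world and rule set:

    ```python
    # Minimal teleo-reactive step: rules are ordered (condition, action) pairs;
    # on every cycle the first rule whose condition is true in the current
    # world model determines the action, keeping execution reactive to change.

    def tr_step(rules, world):
        for condition, action in rules:
            if condition(world):
                return action(world)
        raise RuntimeError("no applicable rule")

    # Toy goal: have the block in hand.  World state and actions are assumptions.
    rules = [
        (lambda w: w["holding_block"], lambda w: "idle (goal achieved)"),
        (lambda w: w["at_block"],      lambda w: "grasp block"),
        (lambda w: True,               lambda w: "move toward block"),
    ]

    world = {"holding_block": False, "at_block": False}
    print(tr_step(rules, world))                        # move toward block
    print(tr_step(rules, {**world, "at_block": True}))  # grasp block
    ```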

  12. Towards a Consistent and Scientifically Accurate Drug Ontology.

    PubMed

    Hogan, William R; Hanna, Josh; Joseph, Eric; Brochhausen, Mathias

    2013-01-01

    Our use case for comparative effectiveness research requires an ontology of drugs that enables querying National Drug Codes (NDCs) by active ingredient, mechanism of action, physiological effect, and therapeutic class of the drug products they represent. We conducted an ontological analysis of drugs from the realist perspective, and evaluated existing drug terminology, ontology, and database artifacts from (1) the technical perspective, (2) the perspective of pharmacology and medical science, (3) the perspective of description logic semantics (if they were available in Web Ontology Language or OWL), and (4) the perspective of our realism-based analysis of the domain. No existing resource was sufficient. Therefore, we built the Drug Ontology (DrOn) in OWL, which we populated with NDCs and other classes from RxNorm using only content created by the National Library of Medicine. We also built an application that uses DrOn to query for NDCs as outlined above, available at: http://ingarden.uams.edu/ingredients. The application uses an OWL-based description logic reasoner to execute end-user queries. DrOn is available at http://code.google.com/p/dr-on.
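
    In spirit, such a query asks for the NDCs of products that contain a given active ingredient. The sketch below runs a SPARQL query with rdflib over a local ontology extract; the file name, namespace, and property names are placeholders, not DrOn's actual identifiers, and no description logic reasoner is involved here.

    ```python
    # Hedged sketch: query an ontology of drug products for NDCs by ingredient.
    from rdflib import Graph

    g = Graph()
    g.parse("dron_subset.owl")  # assumed local extract of the ontology (placeholder)

    query = """
    PREFIX ex: <http://example.org/drug-ontology#>
    SELECT ?ndc ?product WHERE {
        ?product ex:hasNationalDrugCode ?ndc ;
                 ex:hasActiveIngredient ?ingredient .
        ?ingredient ex:label "acetaminophen" .
    }
    """
    for ndc, product in g.query(query):
        print(ndc, product)
    ```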

  13. Semantically enabled and statistically supported biological hypothesis testing with tissue microarray databases

    PubMed Central

    2011-01-01

    Background Although many biological databases are applying semantic web technologies, meaningful biological hypothesis testing cannot be easily achieved. Database-driven high throughput genomic hypothesis testing requires both the capability of obtaining semantically relevant experimental data and that of performing relevant statistical testing on the retrieved data. Tissue Microarray (TMA) data are semantically rich and contain many biologically important hypotheses waiting for high throughput conclusions. Methods An application-specific ontology was developed for managing TMA and DNA microarray databases with semantic web technologies. Data were represented as Resource Description Framework (RDF) according to the framework of the ontology. Applications for hypothesis testing (Xperanto-RDF) for TMA data were designed and implemented by (1) formulating the syntactic and semantic structures of the hypotheses derived from TMA experiments, (2) formulating SPARQL queries to reflect the semantic structures of the hypotheses, and (3) performing statistical tests with the result sets returned by the queries. Results When a user designs a hypothesis in Xperanto-RDF and submits it, the hypothesis can be tested against TMA experimental data stored in Xperanto-RDF. When we evaluated four previously validated hypotheses as an illustration, all the hypotheses were supported by Xperanto-RDF. Conclusions We demonstrated the utility of high throughput biological hypothesis testing. We believe that preliminary investigation before performing highly controlled experiments can benefit from this approach. PMID:21342584
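
    The query-then-test pattern can be sketched as follows: pull two groups of observations out of a SPARQL endpoint and run a statistical test on them. The endpoint URL, the graph vocabulary, and the choice of the Mann-Whitney test are illustrative assumptions, not Xperanto-RDF's actual interface.

    ```python
    # Hedged sketch of SPARQL-then-statistics hypothesis testing.
    from SPARQLWrapper import SPARQLWrapper, JSON
    from scipy.stats import mannwhitneyu

    sparql = SPARQLWrapper("http://example.org/xperanto/sparql")  # hypothetical endpoint
    sparql.setReturnFormat(JSON)
    sparql.setQuery("""
    PREFIX ex: <http://example.org/tma#>
    SELECT ?grade ?score WHERE {
        ?core ex:tumorGrade ?grade ;
              ex:stainingScore ?score .
    }
    """)
    rows = sparql.query().convert()["results"]["bindings"]

    low  = [float(r["score"]["value"]) for r in rows if r["grade"]["value"] == "low"]
    high = [float(r["score"]["value"]) for r in rows if r["grade"]["value"] == "high"]

    # Test the hypothesis that staining differs between low- and high-grade tumours.
    stat, p = mannwhitneyu(low, high, alternative="two-sided")
    print(f"U={stat:.1f}, p={p:.4f}")
    ```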

  14. Lifting Events in RDF from Interactions with Annotated Web Pages

    NASA Astrophysics Data System (ADS)

    Stühmer, Roland; Anicic, Darko; Sen, Sinan; Ma, Jun; Schmidt, Kay-Uwe; Stojanovic, Nenad

    In this paper we present a method and an implementation for creating and processing semantic events from interaction with Web pages, which opens possibilities to build event-driven applications for the (Semantic) Web. Events, simple or complex, are models for things that happen, e.g., when a user interacts with a Web page. Events are consumed in some meaningful way, e.g., for monitoring reasons or to trigger actions such as responses. In order for receiving parties to understand events, e.g., to comprehend what has led to an event, we propose a general event schema using RDFS. In this schema we cover the composition of complex events and event-to-event relationships. These events can then be used to route semantic information about an occurrence to different recipients, helping to make the Semantic Web active. Additionally, we present an architecture for detecting and composing events in Web clients. For the contents of events we show how they are enriched with semantic information about the context in which they occurred. The paper is presented in conjunction with the use case of Semantic Advertising, which extends traditional clickstream analysis by introducing semantic short-term profiling, enabling discovery of the current interest of a Web user and therefore supporting advertisement providers in responding with more relevant advertisements.
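
    A small rdflib sketch of what an RDF description of a click event might look like. The namespace, class, and property names are placeholders, not the event schema proposed in the paper.

    ```python
    # Hedged sketch: representing a user-interaction event in RDF with rdflib.
    from rdflib import Graph, Namespace, Literal, RDF, URIRef
    from rdflib.namespace import XSD

    EV = Namespace("http://example.org/events#")  # placeholder schema namespace
    g = Graph()
    g.bind("ev", EV)

    event = URIRef("http://example.org/events/click-42")
    g.add((event, RDF.type, EV.ClickEvent))
    g.add((event, EV.onElement, URIRef("http://example.org/page#buyButton")))
    g.add((event, EV.occurredAt, Literal("2024-05-01T12:00:00", datatype=XSD.dateTime)))
    g.add((event, EV.partOf, URIRef("http://example.org/events/checkout-session-7")))

    print(g.serialize(format="turtle"))
    ```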

  15. Serial and semantic encoding of lists of words in schizophrenia patients with visual hallucinations.

    PubMed

    Brébion, Gildas; Ohlsen, Ruth I; Pilowsky, Lyn S; David, Anthony S

    2011-03-30

    Previous research has suggested that visual hallucinations in schizophrenia are associated with abnormal salience of visual mental images. Since visual imagery is used as a mnemonic strategy to learn lists of words, increased visual imagery might impede the other commonly used strategies of serial and semantic encoding. We had previously published data on the serial and semantic strategies implemented by patients when learning lists of concrete words with different levels of semantic organisation (Brébion et al., 2004). In this paper we present a re-analysis of these data, aiming at investigating the associations between learning strategies and visual hallucinations. Results show that the patients with visual hallucinations presented less serial clustering in the non-organisable list than the other patients. In the semantically organisable list with typical instances, they presented both less serial and less semantic clustering than the other patients. Thus, patients with visual hallucinations demonstrate reduced use of serial and semantic encoding in the lists made up of fairly familiar concrete words, which enable the formation of mental images. Although these results are preliminary, we propose that this different processing of the lists stems from the abnormal salience of the mental images such patients experience from the word stimuli. Copyright © 2010 Elsevier Ireland Ltd. All rights reserved.

  16. Improving life sciences information retrieval using semantic web technology.

    PubMed

    Quan, Dennis

    2007-05-01

    The ability to retrieve relevant information is at the heart of every aspect of research and development in the life sciences industry. Information is often distributed across multiple systems and recorded in a way that makes it difficult to piece together the complete picture. Differences in data formats, naming schemes and network protocols amongst information sources, both public and private, must be overcome, and user interfaces not only need to be able to tap into these diverse information sources but must also assist users in filtering out extraneous information and highlighting the key relationships hidden within an aggregated set of information. The Semantic Web community has made great strides in proposing solutions to these problems, and many efforts are underway to apply Semantic Web techniques to the problem of information retrieval in the life sciences space. This article gives an overview of the principles underlying a Semantic Web-enabled information retrieval system: creating a unified abstraction for knowledge using the RDF semantic network model; designing semantic lenses that extract contextually relevant subsets of information; and assembling semantic lenses into powerful information displays. Furthermore, concrete examples of how these principles can be applied to life science problems including a scenario involving a drug discovery dashboard prototype called BioDash are provided.

  17. TopFed: TCGA tailored federated query processing and linking to LOD.

    PubMed

    Saleem, Muhammad; Padmanabhuni, Shanmukha S; Ngomo, Axel-Cyrille Ngonga; Iqbal, Aftab; Almeida, Jonas S; Decker, Stefan; Deus, Helena F

    2014-01-01

    The Cancer Genome Atlas (TCGA) is a multidisciplinary, multi-institutional effort to catalogue genetic mutations responsible for cancer using genome analysis techniques. One of the aims of this project is to create a comprehensive and open repository of cancer related molecular analysis, to be exploited by bioinformaticians towards advancing cancer knowledge. However, devising bioinformatics applications to analyse such a large dataset is still challenging, as it often requires downloading large archives and parsing the relevant text files; this makes it difficult to enable virtual data integration in order to collect the critical co-variates necessary for analysis. We address these issues by transforming the TCGA data into the Semantic Web standard Resource Description Framework (RDF), linking it to relevant datasets in the Linked Open Data (LOD) cloud, and further proposing an efficient data distribution strategy to host the resulting 20.4 billion triples via several SPARQL endpoints. Having the TCGA data distributed across multiple SPARQL endpoints, we enable biomedical scientists to query and retrieve information from these SPARQL endpoints by proposing a TCGA tailored federated SPARQL query processing engine named TopFed. We compare TopFed with a well established federation engine, FedX, in terms of source selection and query execution time by using 10 different federated SPARQL queries with varying requirements. Our evaluation results show that TopFed selects on average less than half of the sources (with 100% recall) with query execution time equal to one third of that of FedX. With TopFed, we aim to offer biomedical scientists a single point of access through which distributed TCGA data can be accessed in unison. We believe the proposed system can greatly help researchers in the biomedical domain to carry out their research effectively with TCGA, as the amount and diversity of data exceeds the ability of local resources to handle its retrieval and parsing.
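
    For readers unfamiliar with federation, the SERVICE keyword is how a federated SPARQL query delegates triple patterns to remote endpoints. The sketch below issues such a query through one endpoint; the endpoint URLs and predicates are placeholders, not TopFed's configuration or source-selection logic.

    ```python
    # Hedged sketch of a federated SPARQL query: part of the pattern is
    # delegated to a second endpoint with SERVICE.
    from SPARQLWrapper import SPARQLWrapper, JSON

    sparql = SPARQLWrapper("http://example.org/tcga-endpoint-1/sparql")  # placeholder
    sparql.setReturnFormat(JSON)
    sparql.setQuery("""
    PREFIX ex: <http://example.org/tcga#>
    SELECT ?patient ?expression ?drug WHERE {
        ?patient ex:geneExpression ?expression .          # answered locally
        SERVICE <http://example.org/tcga-endpoint-2/sparql> {
            ?patient ex:administeredDrug ?drug .          # answered remotely
        }
    }
    LIMIT 10
    """)
    for row in sparql.query().convert()["results"]["bindings"]:
        print(row["patient"]["value"], row["drug"]["value"])
    ```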

  18. Developing an Ontology for Ocean Biogeochemistry Data

    NASA Astrophysics Data System (ADS)

    Chandler, C. L.; Allison, M. D.; Groman, R. C.; West, P.; Zednik, S.; Maffei, A. R.

    2010-12-01

    Semantic Web technologies offer great promise for enabling new and better scientific research. However, significant challenges must be met before the promise of the Semantic Web can be realized for a discipline as diverse as oceanography. Evolving expectations for open access to research data combined with the complexity of global ecosystem science research themes present a significant challenge, and one that is best met through an informatics approach. The Biological and Chemical Oceanography Data Management Office (BCO-DMO) is funded by the National Science Foundation Division of Ocean Sciences to work with ocean biogeochemistry researchers to improve access to data resulting from their respective programs. In an effort to improve data access, BCO-DMO staff members are collaborating with researchers from the Tetherless World Constellation (Rensselaer Polytechnic Institute) to develop an ontology that formally describes the concepts and relationships in the data managed by the BCO-DMO. The project required transforming a legacy system of human-readable, flat files of metadata to well-ordered controlled vocabularies to a fully developed ontology. To improve semantic interoperability, terms from the BCO-DMO controlled vocabularies are being mapped to controlled vocabulary terms adopted by other oceanographic data management organizations. While the entire process has proven to be difficult, time-consuming and labor-intensive, the work has been rewarding and is a necessary prerequisite for the eventual incorporation of Semantic Web tools. From the beginning of the project, development of the ontology has been guided by a use case based approach. The use cases were derived from data access related requests received from members of the research community served by the BCO-DMO. The resultant ontology satisfies the requirements of the use cases and reflects the information stored in the metadata database. The BCO-DMO metadata database currently contains information that powers several different user and machine-to-machine interfaces to the BCO-DMO data repositories. One goal of the ontology development project is to enable subsequent development of semantically-enabled components (e.g. faceted search) to enhance the power of those interfaces. Addition of semantic capabilities to the existing data interfaces will improve data access through enhanced data discovery. In addition to sharing the ontology, we will describe the challenges encountered thus far in the project, the technologies currently being used, and the strategies associated with the use case based informatics approach.

  19. Automatic and Controlled Semantic Retrieval: TMS Reveals Distinct Contributions of Posterior Middle Temporal Gyrus and Angular Gyrus

    PubMed Central

    Davey, James; Cornelissen, Piers L.; Thompson, Hannah E.; Sonkusare, Saurabh; Hallam, Glyn; Smallwood, Jonathan

    2015-01-01

    Semantic retrieval involves both (1) automatic spreading activation between highly related concepts and (2) executive control processes that tailor this activation to suit the current context or goals. Two structures in left temporoparietal cortex, angular gyrus (AG) and posterior middle temporal gyrus (pMTG), are thought to be crucial to semantic retrieval and are often recruited together during semantic tasks; however, they show strikingly different patterns of functional connectivity at rest (coupling with the “default mode network” and “frontoparietal control system,” respectively). Here, transcranial magnetic stimulation (TMS) was used to establish a causal yet dissociable role for these sites in semantic cognition in human volunteers. TMS to AG disrupted thematic judgments particularly when the link between probe and target was strong (e.g., a picture of an Alsatian with a bone), and impaired the identification of objects at a specific but not a superordinate level (for the verbal label “Alsatian” not “animal”). In contrast, TMS to pMTG disrupted thematic judgments for weak but not strong associations (e.g., a picture of an Alsatian with razor wire), and impaired identity matching for both superordinate and specific-level labels. Thus, stimulation to AG interfered with the automatic retrieval of specific concepts from the semantic store while stimulation of pMTG impaired semantic cognition when there was a requirement to flexibly shape conceptual activation in line with the task requirements. These results demonstrate that AG and pMTG make a dissociable contribution to automatic and controlled aspects of semantic retrieval. SIGNIFICANCE STATEMENT We demonstrate a novel functional dissociation between the angular gyrus (AG) and posterior middle temporal gyrus (pMTG) in conceptual processing. These sites are often coactivated during neuroimaging studies using semantic tasks, but their individual contributions are unclear. Using transcranial magnetic stimulation and tasks designed to assess different aspects of semantics (item identity and thematic matching), we tested two alternative theoretical accounts. Neither site showed the pattern expected for a “thematic hub” (i.e., a site storing associations between concepts) since stimulation disrupted both tasks. Instead, the data indicated that pMTG contributes to the controlled retrieval of conceptual knowledge, while AG is critical for the efficient automatic retrieval of specific semantic information. PMID:26586812

  20. Developing A Web-based User Interface for Semantic Information Retrieval

    NASA Technical Reports Server (NTRS)

    Berrios, Daniel C.; Keller, Richard M.

    2003-01-01

    While there are now a number of languages and frameworks that enable computer-based systems to search stored data semantically, the optimal design for effective user interfaces for such systems is still unclear. Such interfaces should mask unnecessary query detail from users, yet still allow them to build queries of arbitrary complexity without significant restrictions. We developed a user interface supporting semantic query generation for SemanticOrganizer, a tool used by scientists and engineers at NASA to construct networks of knowledge and data. Through this interface users can select node types, node attributes and node links to build ad-hoc semantic queries for searching the SemanticOrganizer network.

  1. Interoperable cross-domain semantic and geospatial framework for automatic change detection

    NASA Astrophysics Data System (ADS)

    Kuo, Chiao-Ling; Hong, Jung-Hong

    2016-01-01

    With the increasingly diverse types of geospatial data established over the last few decades, semantic interoperability in integrated applications has attracted much interest in the field of Geographic Information System (GIS). This paper proposes a new strategy and framework to process cross-domain geodata at the semantic level. This framework leverages the semantic equivalence of concepts between domains through a bridge ontology and facilitates the integrated use of different domain data, which has long been considered an essential strength of GIS but is impeded by the lack of understanding of the semantics implicitly hidden in the data. We choose the task of change detection to demonstrate how the introduction of the ontology concept can effectively make the integration possible. We analyze the common properties of geodata and change detection factors, then construct rules and summarize possible change scenarios for making final decisions. The use of topographic map data to detect changes in land use shows promising success in improving efficiency and the level of automation. We believe the ontology-oriented approach will enable a new way of integrating data across different domains from the perspective of semantic interoperability, and may even open a new dimension for future GIS.

  2. Verification and Planning Based on Coinductive Logic Programming

    NASA Technical Reports Server (NTRS)

    Bansal, Ajay; Min, Richard; Simon, Luke; Mallya, Ajay; Gupta, Gopal

    2008-01-01

    Coinduction is a powerful technique for reasoning about unfounded sets, unbounded structures, infinite automata, and interactive computations [6]. Whereas induction corresponds to least fixed point semantics, coinduction corresponds to greatest fixed point semantics. Recently coinduction has been incorporated into logic programming and an elegant operational semantics developed for it [11, 12]. This operational semantics is the greatest fixed point counterpart of SLD resolution (SLD resolution imparts operational semantics to least fixed point based computations) and is termed co-SLD resolution. In co-SLD resolution, a predicate goal p(t) succeeds if it unifies with one of its ancestor calls. In addition, rational infinite terms are allowed as arguments of predicates. Infinite terms are represented as solutions to unification equations and the occurs check is omitted during the unification process. Coinductive Logic Programming (Co-LP) and co-SLD resolution can be used to elegantly perform model checking and planning. A combined SLD and co-SLD resolution based LP system forms the common basis for planning, scheduling, verification, model checking, and constraint solving [9, 4]. This is achieved by amalgamating SLD resolution, co-SLD resolution, and constraint logic programming [13] in a single logic programming system. Given that parallelism in logic programs can be implicitly exploited [8], complex, compute-intensive applications (planning, scheduling, model checking, etc.) can be executed in parallel on multi-core machines. Parallel execution can result in speed-ups as well as in larger instances of the problems being solved. In the remainder we elaborate on (i) how planning can be elegantly and efficiently performed under real-time constraints, (ii) how real-time systems can be elegantly and efficiently model-checked, as well as (iii) how hybrid systems can be verified in a combined system with both co-SLD and SLD resolution. Implementations of co-SLD resolution as well as preliminary implementations of the planning and verification applications have been developed [4]. Co-LP and Model Checking: The vast majority of properties that are to be verified can be classified into safety properties and liveness properties. It is well known within model checking that safety properties can be verified by reachability analysis, i.e., if a counter-example to the property exists, it can be finitely determined by enumerating all the reachable states of the Kripke structure.
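
    The coinductive success rule (a goal succeeds if it unifies with an ancestor call) can be sketched in a propositional setting that leaves out unification and rational infinite terms. This is a simplification for illustration, not an implementation of co-SLD resolution.

    ```python
    # Propositional sketch of co-SLD's coinductive success rule: a goal succeeds
    # if it coincides with an ancestor call (greatest-fixed-point reading), or if
    # some clause for it has a body that succeeds.

    def co_solve(goal, clauses, ancestors=()):
        if goal in ancestors:          # coinductive hypothesis: a cycle means success
            return True
        return any(all(co_solve(g, clauses, ancestors + (goal,)) for g in body)
                   for head, body in clauses if head == goal)

    # p :- q, p.   q.   Under least-fixed-point (inductive) semantics the goal p
    # loops forever; under the coinductive reading it succeeds.
    clauses = [("p", ("q", "p")), ("q", ())]
    print(co_solve("p", clauses))   # True
    ```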

  3. Representational constraints on children's suggestibility.

    PubMed

    Ceci, Stephen J; Papierno, Paul B; Kulkofsky, Sarah

    2007-06-01

    In a multistage experiment, twelve 4- and 9-year-old children participated in a triad rating task. Their ratings were mapped with multidimensional scaling, from which euclidean distances were computed to operationalize semantic distance between items in target pairs. These children and age-mates then participated in an experiment that employed these target pairs in a story, which was followed by a misinformation manipulation. Analyses linked individual and developmental differences in suggestibility to children's representations of the target items. Semantic proximity was a strong predictor of differences in suggestibility: The closer a suggested distractor was to the original item's representation, the greater was the distractor's suggestive influence. The triad participants' semantic proximity subsequently served as the basis for correctly predicting memory performance in the larger group. Semantic proximity enabled a priori counterintuitive predictions of reverse age-related trends to be confirmed whenever the distance between representations of items in a target pair was greater for younger than for older children.

  4. Reliable Adaptive Video Streaming Driven by Perceptual Semantics for Situational Awareness

    PubMed Central

    Pimentel-Niño, M. A.; Saxena, Paresh; Vazquez-Castro, M. A.

    2015-01-01

    A novel cross-layer optimized video adaptation driven by perceptual semantics is presented. The design target is streamed live video to enhance situational awareness in challenging communications conditions. Conventional solutions for recreational applications are inadequate, and a novel quality of experience (QoE) framework is proposed that allows fully controlled adaptation and enables perceptual semantic feedback. The framework relies on temporal/spatial abstraction for video applications serving beyond recreational purposes. An underlying cross-layer optimization technique takes into account feedback on network congestion (time) and erasures (space) to best distribute the available (scarce) bandwidth. Systematic random linear network coding (SRNC) adds reliability while preserving perceptual semantics. Objective metrics of the perceptual features in QoE show homogeneous high performance when using the proposed scheme. Finally, the proposed scheme is in line with content-aware trends, by complying with the information-centric networking philosophy and architecture. PMID:26247057

  5. Mediator infrastructure for information integration and semantic data integration environment for biomedical research.

    PubMed

    Grethe, Jeffrey S; Ross, Edward; Little, David; Sanders, Brian; Gupta, Amarnath; Astakhov, Vadim

    2009-01-01

    This paper presents current progress in the development of a semantic data integration environment which is a part of the Biomedical Informatics Research Network (BIRN; http://www.nbirn.net) project. BIRN is sponsored by the National Center for Research Resources (NCRR), a component of the National Institutes of Health (NIH). A goal is the development of a cyberinfrastructure for biomedical research that supports advanced data acquisition, data storage, data management, data integration, data mining, data visualization, and other computing and information processing services over the Internet. Each participating institution maintains storage of their experimental or computationally derived data. A mediator-based data integration system performs semantic integration over the databases to enable researchers to perform analyses based on larger and broader datasets than would be available from any single institution's data. This paper describes a recent revision of the system architecture, implementation, and capabilities of the semantically based data integration environment for BIRN.

  6. High-performance analysis of filtered semantic graphs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Buluc, Aydin; Fox, Armando; Gilbert, John R.

    2012-01-01

    High performance is a crucial consideration when executing a complex analytic query on a massive semantic graph. In a semantic graph, vertices and edges carry "attributes" of various types. Analytic queries on semantic graphs typically depend on the values of these attributes; thus, the computation must either view the graph through a filter that passes only those individual vertices and edges of interest, or else must first materialize a subgraph or subgraphs consisting of only the vertices and edges of interest. The filtered approach is superior due to its generality, ease of use, and memory efficiency, but may carry a performance cost. In the Knowledge Discovery Toolbox (KDT), a Python library for parallel graph computations, the user writes filters in a high-level language, but those filters result in relatively low performance due to the bottleneck of having to call into the Python interpreter for each edge. In this work, we use the Selective Embedded JIT Specialization (SEJITS) approach to automatically translate filters defined by programmers into a lower-level efficiency language, bypassing the upcall into Python. We evaluate our approach by comparing it with the high-performance C++/MPI Combinatorial BLAS engine, and show that the productivity gained by using a high-level filtering language comes without sacrificing performance.
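
    The filtered view can be pictured as a predicate applied on the fly during traversal rather than a materialized subgraph; in KDT the cost comes from evaluating that predicate in Python for every edge, which SEJITS compiles away. A toy, single-process sketch of the idea (the adjacency-dict representation is an illustration, not KDT's distributed data model):

    ```python
    # Hedged sketch of a "filtered" traversal: the edge-attribute predicate is
    # applied on the fly, so no filtered subgraph is materialised.
    from collections import deque

    graph = {
        "a": [("b", {"type": "cites", "year": 2011}),
              ("c", {"type": "mentions", "year": 2009})],
        "b": [("d", {"type": "cites", "year": 2012})],
        "c": [],
        "d": [],
    }

    def filtered_bfs(start, edge_ok):
        seen, order, frontier = {start}, [start], deque([start])
        while frontier:
            u = frontier.popleft()
            for v, attrs in graph[u]:
                if edge_ok(attrs) and v not in seen:   # filter without materialising
                    seen.add(v)
                    order.append(v)
                    frontier.append(v)
        return order

    # Only traverse "cites" edges from 2010 onwards.
    print(filtered_bfs("a", lambda e: e["type"] == "cites" and e["year"] >= 2010))
    ```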

  7. A service-oriented distributed semantic mediator: integrating multiscale biomedical information.

    PubMed

    Mora, Oscar; Engelbrecht, Gerhard; Bisbal, Jesus

    2012-11-01

    Biomedical research continuously generates large amounts of heterogeneous and multimodal data spread over multiple data sources. These data, if appropriately shared and exploited, could dramatically improve the research practice itself, and ultimately the quality of health care delivered. This paper presents DISMED (DIstributed Semantic MEDiator), an open source semantic mediator that provides a unified view of a federated environment of multiscale biomedical data sources. DISMED is a Web-based software application to query and retrieve information distributed over a set of registered data sources, using semantic technologies. It also offers a user-friendly interface specifically designed to simplify the usage of these technologies by non-expert users. Although the architecture of the software mediator is generic and domain independent, in the context of this paper, DISMED has been evaluated for managing biomedical environments and facilitating research with respect to the handling of scientific data distributed in multiple heterogeneous data sources. As part of this contribution, a quantitative evaluation framework has been developed. It consists of a benchmarking scenario and the definition of five realistic use-cases. This framework, created entirely with public datasets, has been used to compare the performance of DISMED against other available mediators. It is also available to the scientific community in order to evaluate progress in the domain of semantic mediation, in a systematic and comparable manner. The results show an average improvement in the execution time by DISMED of 55% compared to the second best alternative in four out of the five use-cases of the experimental evaluation.

  8. Potential role of monkey inferior parietal neurons coding action semantic equivalences as precursors of parts of speech.

    PubMed

    Yamazaki, Yumiko; Yokochi, Hiroko; Tanaka, Michio; Okanoya, Kazuo; Iriki, Atsushi

    2010-01-01

    The anterior portion of the inferior parietal cortex possesses comprehensive representations of actions embedded in behavioural contexts. Mirror neurons, which respond to both self-executed and observed actions, exist in this brain region in addition to those originally found in the premotor cortex. We found that parietal mirror neurons responded differentially to identical actions embedded in different contexts. Another type of parietal mirror neuron represents an inverse and complementary property of responding equally to dissimilar actions made by itself and others for an identical purpose. Here, we propose a hypothesis that these sets of inferior parietal neurons constitute a neural basis for encoding the semantic equivalence of various actions across different agents and contexts. The neurons have mirror neuron properties, and they encoded generalization of agents, differentiation of outcomes, and categorization of actions that led to common functions. By integrating the activities of these mirror neurons with various codings, we further suggest that in the ancestral primates' brains, these various representations of meaningful action led to the gradual establishment of equivalence relations among the different types of actions, by sharing common action semantics. Such differential codings of the components of actions might represent precursors to the parts of protolanguage, such as gestural communication, which are shared among various members of a society. Finally, we suggest that the inferior parietal cortex serves as an interface between this action semantics system and other higher semantic systems, through common structures of action representation that mimic language syntax.

  9. Potential role of monkey inferior parietal neurons coding action semantic equivalences as precursors of parts of speech

    PubMed Central

    Yamazaki, Yumiko; Yokochi, Hiroko; Tanaka, Michio; Okanoya, Kazuo; Iriki, Atsushi

    2010-01-01

    The anterior portion of the inferior parietal cortex possesses comprehensive representations of actions embedded in behavioural contexts. Mirror neurons, which respond to both self-executed and observed actions, exist in this brain region in addition to those originally found in the premotor cortex. We found that parietal mirror neurons responded differentially to identical actions embedded in different contexts. Another type of parietal mirror neuron represents an inverse and complementary property of responding equally to dissimilar actions made by itself and others for an identical purpose. Here, we propose a hypothesis that these sets of inferior parietal neurons constitute a neural basis for encoding the semantic equivalence of various actions across different agents and contexts. The neurons have mirror neuron properties, and they encoded generalization of agents, differentiation of outcomes, and categorization of actions that led to common functions. By integrating the activities of these mirror neurons with various codings, we further suggest that in the ancestral primates' brains, these various representations of meaningful action led to the gradual establishment of equivalence relations among the different types of actions, by sharing common action semantics. Such differential codings of the components of actions might represent precursors to the parts of protolanguage, such as gestural communication, which are shared among various members of a society. Finally, we suggest that the inferior parietal cortex serves as an interface between this action semantics system and other higher semantic systems, through common structures of action representation that mimic language syntax. PMID:20119879

  10. Speech-associated gestures, Broca’s area, and the human mirror system

    PubMed Central

    Skipper, Jeremy I.; Goldin-Meadow, Susan; Nusbaum, Howard C.; Small, Steven L

    2009-01-01

    Speech-associated gestures are hand and arm movements that not only convey semantic information to listeners but are themselves actions. Broca’s area has been assumed to play an important role both in semantic retrieval or selection (as part of a language comprehension system) and in action recognition (as part of a “mirror” or “observation–execution matching” system). We asked whether the role that Broca’s area plays in processing speech-associated gestures is consistent with the semantic retrieval/selection account (predicting relatively weak interactions between Broca’s area and other cortical areas because the meaningful information that speech-associated gestures convey reduces semantic ambiguity and thus reduces the need for semantic retrieval/selection) or the action recognition account (predicting strong interactions between Broca’s area and other cortical areas because speech-associated gestures are goal-directed actions that are “mirrored”). We compared the functional connectivity of Broca’s area with other cortical areas when participants listened to stories while watching meaningful speech-associated gestures, speech-irrelevant self-grooming hand movements, or no hand movements. A network analysis of neuroimaging data showed that interactions involving Broca’s area and other cortical areas were weakest when spoken language was accompanied by meaningful speech-associated gestures, and strongest when spoken language was accompanied by self-grooming hand movements or by no hand movements at all. Results are discussed with respect to the role that the human mirror system plays in processing speech-associated movements. PMID:17533001

  11. Addressing the Challenges of Multi-Domain Data Integration with the SemantEco Framework

    NASA Astrophysics Data System (ADS)

    Patton, E. W.; Seyed, P.; McGuinness, D. L.

    2013-12-01

    Data integration across multiple domains will continue to be a challenge with the proliferation of big data in the sciences. Data origination issues and how data are manipulated are critical to enable scientists to understand and consume disparate datasets as research becomes more multidisciplinary. We present the SemantEco framework as an exemplar for designing an integrative portal for data discovery, exploration, and interpretation that uses best practice W3C Recommendations. We use the Resource Description Framework (RDF) with extensible ontologies described in the Web Ontology Language (OWL) to provide graph-based data representation. Furthermore, SemantEco ingests data via the software package csv2rdf4lod, which generates data provenance using the W3C provenance recommendation (PROV). Our presentation will discuss benefits and challenges of semantic integration, their effect on runtime performance, and how the SemantEco framework assisted in identifying performance issues and improved query performance across multiple domains by an order of magnitude. SemantEco benefits from a semantic approach that provides an 'open world', which allows data to incrementally change just as it does in the real world. SemantEco modules may load new ontologies and data using the W3C's SPARQL Protocol and RDF Query Language via HTTP. Modules may also provide user interface elements for applications and query capabilities to support new use cases. Modules can associate with domains, which are first-class objects in SemantEco. This enables SemantEco to perform integration and reasoning both within and across domains on module-provided data. The SemantEco framework has been used to construct a web portal for environmental and ecological data. The portal includes water and air quality data from the U.S. Geological Survey (USGS) and Environmental Protection Agency (EPA) and species observation counts for birds and fish from the Avian Knowledge Network and the Santa Barbara Long Term Ecological Research, respectively. We provide regulation ontologies using OWL2 datatype facets to detect out-of-range measurements for environmental standards set by the EPA, among others. Users adjust queries using module-defined facets and a map presents the resulting measurement sites. Custom icons identify sites that violate regulations, making them easy to locate. Selecting a site gives the option of charting spatially proximate data from different domains over time. Our portal currently provides 1.6 billion triples of scientific data in RDF. We segment data by ZIP code, and reasoning over 2157 measurements with our EPA regulation ontology, which contains 131 regulations, takes 2.5 seconds on a 2.4 GHz Intel Core 2 Quad with 8 GB of RAM. SemantEco's modular design and reasoning capabilities make it an exemplar for building multidisciplinary data integration tools that provide data access to scientists and the general population alike. Its provenance tracking provides accountability and its reasoning services can assist users in interpreting data. Future work includes support for geographical queries using the Open Geospatial Consortium's GeoSPARQL standard.
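
    As an illustration of the kind of query-and-filter step described above, the sketch below uses rdflib to flag out-of-range measurements with a plain SPARQL FILTER. The namespace, predicate names, and threshold are hypothetical; SemantEco itself encodes regulation limits as OWL2 datatype facets and relies on a reasoner rather than hand-written filters.

```python
# Minimal sketch (not SemantEco code): flag water-quality measurements that
# exceed a regulation threshold. Predicates and the 10.0 limit are made up.
from rdflib import Graph, Namespace, Literal
from rdflib.namespace import RDF, XSD

EX = Namespace("http://example.org/semanteco/")  # hypothetical namespace
g = Graph()
# Two toy measurements; a real deployment would load RDF produced by csv2rdf4lod.
for site, value in [("site1", 4.2), ("site2", 11.7)]:
    m = EX[f"measurement/{site}"]
    g.add((m, RDF.type, EX.Measurement))
    g.add((m, EX.site, Literal(site)))
    g.add((m, EX.value, Literal(value, datatype=XSD.double)))

query = """
PREFIX ex: <http://example.org/semanteco/>
SELECT ?site ?value WHERE {
  ?m a ex:Measurement ; ex:site ?site ; ex:value ?value .
  FILTER (?value > 10.0)
}
"""
for row in g.query(query):
    print(f"Site {row.site} exceeds the limit: {row.value}")
```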

  12. BioPortal: An Open-Source Community-Based Ontology Repository

    NASA Astrophysics Data System (ADS)

    Noy, N.; NCBO Team

    2011-12-01

    Advances in computing power and new computational techniques have changed the way researchers approach science. In many fields, one of the most fruitful approaches has been to use semantically aware software to break down the barriers among disparate domains, systems, data sources, and technologies. Such software facilitates data aggregation, improves search, and ultimately allows the detection of new associations that were previously not detectable. Achieving these analyses requires software systems that take advantage of the semantics and that can intelligently negotiate domains and knowledge sources, identifying commonality across systems that use different and conflicting vocabularies, while understanding apparent differences that may be concealed by the use of superficially similar terms. An ontology, a semantically rich vocabulary for a domain of interest, is the cornerstone of software for bridging systems, domains, and resources. However, as ontologies become the foundation of all semantic technologies in e-science, we must develop an infrastructure for sharing ontologies, finding and evaluating them, integrating and mapping among them, and using ontologies in applications that help scientists process their data. BioPortal [1] is an open-source on-line community-based ontology repository that has been used as a critical component of semantic infrastructure in several domains, including biomedicine and bio-geochemical data. BioPortal uses social approaches in the Web 2.0 style to bring structure and order to the collection of biomedical ontologies. It enables users to provide and discuss a wide array of knowledge components, from submitting the ontologies themselves, to commenting on and discussing classes in the ontologies, to reviewing ontologies in the context of their own ontology-based projects, to creating mappings between overlapping ontologies and discussing and critiquing the mappings. Critically, it provides web-service access to all its content, enabling its integration in semantically enriched applications. [1] Noy, N.F., Shah, N.H., et al., BioPortal: ontologies and integrated data resources at the click of a mouse. Nucleic Acids Res, 2009. 37(Web Server issue): p. W170-3.
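
    A brief sketch of consuming BioPortal's web-service access from a script is shown below. The endpoint, the apikey parameter, and the response fields (collection, prefLabel, @id) reflect the publicly documented REST API as commonly described, but should be verified against the current BioPortal documentation before use; the API key is a placeholder.

```python
# Hedged sketch of a term search against the BioPortal REST API.
import requests

API_KEY = "YOUR_BIOPORTAL_API_KEY"  # placeholder; obtain one from a BioPortal account

resp = requests.get(
    "https://data.bioontology.org/search",
    params={"q": "sea ice", "apikey": API_KEY},
    timeout=30,
)
resp.raise_for_status()
for result in resp.json().get("collection", []):
    print(result.get("prefLabel"), "-", result.get("@id"))
```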

  13. Analyzing structural changes in SNOMED CT's Bacterial infectious diseases using a visual semantic delta.

    PubMed

    Ochs, Christopher; Case, James T; Perl, Yehoshua

    2017-03-01

    Thousands of changes are applied to SNOMED CT's concepts during each release cycle. These changes are the result of efforts to improve or expand the coverage of health domains in the terminology. Understanding which concepts changed, how they changed, and the overall impact of a set of changes is important for editors and end users. Each SNOMED CT release comes with delta files, which identify all of the individual additions and removals of concepts and relationships. These files typically contain tens of thousands of individual entries, overwhelming users. They also do not identify the editorial processes that were applied to individual concepts and they do not capture the overall impact of a set of changes on a subhierarchy of concepts. In this paper we introduce a methodology and accompanying software tool called a SNOMED CT Visual Semantic Delta ("semantic delta" for short) to enable a comprehensive review of changes in SNOMED CT. The semantic delta displays a graphical list of editing operations that provides semantics and context to the additions and removals in the delta files. However, there may still be thousands of editing operations applied to a set of concepts. To address this issue, a semantic delta includes a visual summary of changes that affected sets of structurally and semantically similar concepts. The software tool for creating semantic deltas offers views of various granularities, allowing a user to control how much change information they view. In this tool a user can select a set of structurally and semantically similar concepts and review the editing operations that affected their modeling. The semantic delta methodology is demonstrated on SNOMED CT's Bacterial infectious disease subhierarchy, which has undergone a significant remodeling effort over the last two years. Copyright © 2017 Elsevier Inc. All rights reserved.
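
    The core bookkeeping behind a delta can be pictured as a set difference over stated relationships, as in the toy sketch below; the concepts and relationships are invented, and the actual semantic delta goes much further by grouping structurally similar concepts and recovering editing operations.

```python
# Toy sketch: diff the stated relationships of a concept between two releases.
OLD = {("Bacterial pneumonia", "is_a", "Pneumonia"),
       ("Bacterial pneumonia", "causative_agent", "Bacterium")}
NEW = {("Bacterial pneumonia", "is_a", "Infectious pneumonia"),
       ("Bacterial pneumonia", "causative_agent", "Bacterium")}

added, removed = NEW - OLD, OLD - NEW
print("Added:", sorted(added))
print("Removed:", sorted(removed))
```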

  14. Clever generation of rich SPARQL queries from annotated relational schema: application to Semantic Web Service creation for biological databases.

    PubMed

    Wollbrett, Julien; Larmande, Pierre; de Lamotte, Frédéric; Ruiz, Manuel

    2013-04-15

    In recent years, a large amount of "-omics" data have been produced. However, these data are stored in many different species-specific databases that are managed by different institutes and laboratories. Biologists often need to find and assemble data from disparate sources to perform certain analyses. Searching for these data and assembling them is a time-consuming task. The Semantic Web helps to facilitate interoperability across databases. A common approach involves the development of wrapper systems that map a relational database schema onto existing domain ontologies. However, few attempts have been made to automate the creation of such wrappers. We developed a framework, named BioSemantic, for the creation of Semantic Web Services that are applicable to relational biological databases. This framework makes use of both Semantic Web and Web Services technologies and can be divided into two main parts: (i) the generation and semi-automatic annotation of an RDF view; and (ii) the automatic generation of SPARQL queries and their integration into Semantic Web Services backbones. We have used our framework to integrate genomic data from different plant databases. BioSemantic is a framework that was designed to speed integration of relational databases. We present how it can be used to speed the development of Semantic Web Services for existing relational biological databases. Currently, it creates and annotates RDF views that enable the automatic generation of SPARQL queries. Web Services are also created and deployed automatically, and the semantic annotations of our Web Services are added automatically using SAWSDL attributes. BioSemantic is downloadable at http://southgreen.cirad.fr/?q=content/Biosemantic.
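
    To make the "automatic generation of SPARQL queries" step concrete, here is a toy sketch that builds a SELECT query from a relational table annotated with ontology properties. The class and property URIs are hypothetical; BioSemantic derives its queries from the annotated RDF view rather than from a simple dictionary like this.

```python
# Toy sketch: generate a SPARQL query from column-to-ontology-property annotations.
def build_sparql(table_class: str, column_to_property: dict) -> str:
    """Map an annotated table to a SELECT query over its RDF view."""
    select_vars = " ".join(f"?{col}" for col in column_to_property)
    patterns = " ".join(
        f"?row <{prop}> ?{col} ." for col, prop in column_to_property.items()
    )
    return f"SELECT {select_vars} WHERE {{ ?row a <{table_class}> . {patterns} }}"

print(build_sparql(
    "http://example.org/onto#Gene",
    {"name": "http://example.org/onto#geneName",
     "chromosome": "http://example.org/onto#locatedOn"},
))
```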

  15. Analyzing Structural Changes in SNOMED CT’s Bacterial Infectious Diseases Using a Visual Semantic Delta

    PubMed Central

    Ochs, Christopher; Case, James T.; Perl, Yehoshua

    2017-01-01

    Thousands of changes are applied to SNOMED CT’s concepts during each release cycle. These changes are the result of efforts to improve or expand the coverage of health domains in the terminology. Understanding which concepts changed, how they changed, and the overall impact of a set of changes is important for editors and end users. Each SNOMED CT release comes with delta files, which identify all of the individual additions and removals of concepts and relationships. These files typically contain tens of thousands of individual entries, overwhelming users. They also do not identify the editorial processes that were applied to individual concepts and they do not capture the overall impact of a set of changes on a subhierarchy of concepts. In this paper we introduce a methodology and accompanying software tool called a SNOMED CT Visual Semantic Delta (“semantic delta” for short) to enable a comprehensive review of changes in SNOMED CT. The semantic delta displays a graphical list of editing operations that provides semantics and context to the additions and removals in the delta files. However, there may still be thousands of editing operations applied to a set of concepts. To address this issue, a semantic delta includes a visual summary of changes that affected sets of structurally and semantically similar concepts. The software tool for creating semantic deltas offers views of various granularities, allowing a user to control how much change information they view. In this tool a user can select a set of structurally and semantically similar concepts and review the editing operations that affected their modeling. The semantic delta methodology is demonstrated on SNOMED CT’s Bacterial infectious disease subhierarchy, which has undergone a significant remodeling effort over the last two years. PMID:28215561

  16. Clever generation of rich SPARQL queries from annotated relational schema: application to Semantic Web Service creation for biological databases

    PubMed Central

    2013-01-01

    Background In recent years, a large amount of “-omics” data have been produced. However, these data are stored in many different species-specific databases that are managed by different institutes and laboratories. Biologists often need to find and assemble data from disparate sources to perform certain analyses. Searching for these data and assembling them is a time-consuming task. The Semantic Web helps to facilitate interoperability across databases. A common approach involves the development of wrapper systems that map a relational database schema onto existing domain ontologies. However, few attempts have been made to automate the creation of such wrappers. Results We developed a framework, named BioSemantic, for the creation of Semantic Web Services that are applicable to relational biological databases. This framework makes use of both Semantic Web and Web Services technologies and can be divided into two main parts: (i) the generation and semi-automatic annotation of an RDF view; and (ii) the automatic generation of SPARQL queries and their integration into Semantic Web Services backbones. We have used our framework to integrate genomic data from different plant databases. Conclusions BioSemantic is a framework that was designed to speed integration of relational databases. We present how it can be used to speed the development of Semantic Web Services for existing relational biological databases. Currently, it creates and annotates RDF views that enable the automatic generation of SPARQL queries. Web Services are also created and deployed automatically, and the semantic annotations of our Web Services are added automatically using SAWSDL attributes. BioSemantic is downloadable at http://southgreen.cirad.fr/?q=content/Biosemantic. PMID:23586394

  17. Quality evaluation of value sets from cancer study common data elements using the UMLS semantic groups

    PubMed Central

    Solbrig, Harold R; Chute, Christopher G

    2012-01-01

    Objective The objective of this study is to develop an approach to evaluate the quality of terminological annotations on the value set (ie, enumerated value domain) components of the common data elements (CDEs) in the context of clinical research using both unified medical language system (UMLS) semantic types and groups. Materials and methods The CDEs of the National Cancer Institute (NCI) Cancer Data Standards Repository, the NCI Thesaurus (NCIt) concepts and the UMLS semantic network were integrated using a semantic web-based framework for a SPARQL-enabled evaluation. First, the set of CDE-permissible values with corresponding meanings in external controlled terminologies were isolated. The corresponding value meanings were then evaluated against their NCI- or UMLS-generated semantic network mapping to determine whether all of the meanings fell within the same semantic group. Results Of the enumerated CDEs in the Cancer Data Standards Repository, 3093 (26.2%) had elements drawn from more than one UMLS semantic group. A random sample (n=100) of this set of elements indicated that 17% of them were likely to have been misclassified. Discussion The use of existing semantic web tools can support a high-throughput mechanism for evaluating the quality of large CDE collections. This study demonstrates that the involvement of multiple semantic groups in an enumerated value domain of a CDE is an effective anchor to trigger an auditing point for quality evaluation activities. Conclusion This approach produces a useful quality assurance mechanism for a clinical study CDE repository. PMID:22511016
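
    The auditing trigger described above reduces to a simple check once value meanings have been mapped to semantic groups; the sketch below illustrates it with an invented mapping table (the study obtained these mappings via SPARQL over the integrated NCIt/UMLS resources).

```python
# Illustrative sketch: flag a CDE whose permissible-value meanings span more
# than one UMLS semantic group. The mapping below is toy data.
SEMANTIC_GROUP = {
    "Melanoma": "DISO",         # Disorders
    "Adenocarcinoma": "DISO",
    "Unknown": "CONC",          # Concepts & Ideas
}

def audit_cde(value_meanings):
    groups = {SEMANTIC_GROUP.get(v, "UNMAPPED") for v in value_meanings}
    return len(groups) > 1, groups   # True => candidate for manual review

flagged, groups = audit_cde(["Melanoma", "Adenocarcinoma", "Unknown"])
print("Needs review:", flagged, groups)
```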

  18. Semantic and Syntactic Interference in Sentence Comprehension: A Comparison of Working Memory Models

    PubMed Central

    Tan, Yingying; Martin, Randi C.; Van Dyke, Julie A.

    2017-01-01

    This study investigated the nature of the underlying working memory system supporting sentence processing through examining individual differences in sensitivity to retrieval interference effects during sentence comprehension. Interference effects occur when readers incorrectly retrieve sentence constituents which are similar to those required during integrative processes. We examined interference arising from a partial match between distracting constituents and syntactic and semantic cues, and related these interference effects to performance on working memory, short-term memory (STM), vocabulary, and executive function tasks. For online sentence comprehension, as measured by self-paced reading, the magnitude of individuals' syntactic interference effects was predicted by general WM capacity and the relation remained significant when partialling out vocabulary, indicating that the effects were not due to verbal knowledge. For offline sentence comprehension, as measured by responses to comprehension questions, both general WM capacity and vocabulary knowledge interacted with semantic interference for comprehension accuracy, suggesting that both general WM capacity and the quality of semantic representations played a role in determining how well interference was resolved offline. For comprehension question reaction times, a measure of semantic STM capacity interacted with semantic but not syntactic interference. However, a measure of phonological capacity (digit span) and a general measure of resistance to response interference (Stroop effect) did not predict individuals' interference resolution abilities in either online or offline sentence comprehension. The results are discussed in relation to the multiple capacities account of working memory (e.g., Martin and Romani, 1994; Martin and He, 2004), and the cue-based retrieval parsing approach (e.g., Lewis et al., 2006; Van Dyke et al., 2014). While neither approach was fully supported, a possible means of reconciling the two approaches and directions for future research are proposed. PMID:28261133

  19. Disrupting the brain to validate hypotheses on the neurobiology of language

    PubMed Central

    Papeo, Liuba; Pascual-Leone, Alvaro; Caramazza, Alfonso

    2013-01-01

    Comprehension of words is an important part of the language faculty, involving the joint activity of frontal and temporo-parietal brain regions. Transcranial Magnetic Stimulation (TMS) enables the controlled perturbation of brain activity, and thus offers a unique tool to test specific predictions about the causal relationship between brain regions and language understanding. This potential has been exploited to better define the role of regions that are classically accepted as part of the language-semantic network. For instance, TMS has contributed to establish the semantic relevance of the left anterior temporal lobe, or to solve the ambiguity between the semantic vs. phonological function assigned to the left inferior frontal gyrus (LIFG). We consider, more closely, the results from studies where the same technique, similar paradigms (lexical-semantic tasks) and materials (words) have been used to assess the relevance of regions outside the classically-defined language-semantic network—i.e., precentral motor regions—for the semantic analysis of words. This research shows that different aspects of the left precentral gyrus (primary motor and premotor sites) are sensitive to the action-non action distinction of words' meanings. However, the behavioral changes due to TMS over these sites are incongruent with what is expected after perturbation of a task-relevant brain region. Thus, the relationship between motor activity and language-semantic behavior remains far from clear. A better understanding of this issue could be guaranteed by investigating functional interactions between motor sites and semantically-relevant regions. PMID:23630480

  20. The perirhinal cortex and conceptual processing: Effects of feature-based statistics following damage to the anterior temporal lobes.

    PubMed

    Wright, Paul; Randall, Billi; Clarke, Alex; Tyler, Lorraine K

    2015-09-01

    The anterior temporal lobe (ATL) plays a prominent role in models of semantic knowledge, although it remains unclear how the specific subregions within the ATL contribute to semantic memory. Patients with neurodegenerative diseases, like semantic dementia, have widespread damage to the ATL, thus making inferences about the relationship between anatomy and cognition problematic. Here we take a detailed anatomical approach to ask which substructures within the ATL contribute to conceptual processing, with the prediction that the perirhinal cortex (PRc) will play a critical role for concepts that are more semantically confusable. We tested two patient groups, those with and without damage to the PRc, across two behavioural experiments - picture naming and word-picture matching. For both tasks, we manipulated the degree of semantic confusability of the concepts. By contrasting the performance of the two groups, along with healthy controls, we show that damage to the PRc results in worse performance in processing concepts with higher semantic confusability across both experiments. Further, by correlating the degree of damage across anatomically defined regions of interest with performance, we find that PRc damage is related to performance for concepts with increased semantic confusability. Our results show that the PRc supports a necessary and crucial neurocognitive function that enables fine-grained conceptual processes to take place through the resolution of semantic confusability. Copyright © 2015 The Authors. Published by Elsevier Ltd. All rights reserved.

  1. Informatics in radiology: radiology gamuts ontology: differential diagnosis for the Semantic Web.

    PubMed

    Budovec, Joseph J; Lam, Cesar A; Kahn, Charles E

    2014-01-01

    The Semantic Web is an effort to add semantics, or "meaning," to empower automated searching and processing of Web-based information. The overarching goal of the Semantic Web is to enable users to more easily find, share, and combine information. Critical to this vision are knowledge models called ontologies, which define a set of concepts and formalize the relations between them. Ontologies have been developed to manage and exploit the large and rapidly growing volume of information in biomedical domains. In diagnostic radiology, lists of differential diagnoses of imaging observations, called gamuts, provide an important source of knowledge. The Radiology Gamuts Ontology (RGO) is a formal knowledge model of differential diagnoses in radiology that includes 1674 differential diagnoses, 19,017 terms, and 52,976 links between terms. Its knowledge is used to provide an interactive, freely available online reference of radiology gamuts ( www.gamuts.net ). A Web service allows its content to be discovered and consumed by other information systems. The RGO integrates radiologic knowledge with other biomedical ontologies as part of the Semantic Web. © RSNA, 2014.

  2. Understanding metaphors and idioms: a single-case neuropsychological study in a person with Down syndrome.

    PubMed

    Papagno, C; Vallar, G

    2001-05-01

    The ability of subject F.F., diagnosed with Down syndrome, to appreciate nonliteral (interpreting metaphors and idioms) and literal (vocabulary knowledge, including highly specific and unusual items) aspects of language was investigated. F.F. was impaired in understanding both metaphors and idioms, while her phonological, syntactic and lexical-semantic skills were largely preserved. By contrast, some aspects of F.F.'s executive functions and many visuospatial abilities were defective. The suggestion is made that the interpretation of metaphors and idioms is largely independent of that of literal language, preserved in F.F., and that some executive aspects of working memory and visuospatial and imagery processes may play a role.

  3. Natural Language Processing (NLP), Machine Learning (ML), and Semantics in Polar Science

    NASA Astrophysics Data System (ADS)

    Duerr, R.; Ramdeen, S.

    2017-12-01

    One of the interesting features of Polar Science is that it historically has been extremely interdisciplinary, encompassing all of the physical and social sciences. Given the ubiquity of specialized terminology in each field, enabling researchers to find, understand, and use all of the heterogeneous data needed for polar research continues to be a bottleneck. Within the informatics community, semantics has been broadly accepted as a solution to these problems, yet progress in developing reusable semantic resources has been slow. The NSF-funded ClearEarth project has been adapting the methods and tools from other communities such as Biomedicine to the Earth sciences with the goal of enhancing progress and the rate at which the needed semantic resources can be created. One of the outcomes of the project has been a better understanding of the differences in the way linguists and physical scientists understand disciplinary text. One example of these differences is the tendency for each discipline and often disciplinary subfields to expend effort in creating discipline-specific glossaries where individual terms are often composed of more than one word (e.g., first-year sea ice). Often each term in a glossary is imbued with substantial contextual or physical meaning - meanings which are rarely explicitly called out within disciplinary texts; meanings which are therefore not immediately accessible to those outside that discipline or subfield; meanings which can often be represented semantically. Here we show how recognition of these differences and the use of glossaries can speed up the annotation processes endemic to NLP and enable inter-community recognition and possible reconciliation of terminology differences. A number of processes and tools will be described, as will progress towards semi-automated generation of ontology structures.
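
    One simple way a glossary can speed up annotation, in the spirit of the abstract above, is to pre-mark occurrences of multi-word terms so that annotators only confirm or correct them; the glossary entries in this sketch are illustrative, not ClearEarth's.

```python
# Minimal sketch: pre-annotate glossary terms (including multi-word terms) in text.
import re

GLOSSARY = {"first-year sea ice", "multi-year ice", "polynya"}

def preannotate(text: str):
    spans = []
    for term in sorted(GLOSSARY, key=len, reverse=True):  # prefer longer matches
        for m in re.finditer(re.escape(term), text, flags=re.IGNORECASE):
            spans.append((m.start(), m.end(), term))
    return sorted(spans)

print(preannotate("Thick first-year sea ice formed near the polynya."))
```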

  4. Bilingualism and increased attention to speech: Evidence from event-related potentials.

    PubMed

    Kuipers, Jan Rouke; Thierry, Guillaume

    2015-10-01

    A number of studies have shown that from an early age, bilinguals outperform their monolingual peers on executive control tasks. We previously found that bilingual children and adults also display greater attention to unexpected language switches within speech. Here, we investigated the effect of a bilingual upbringing on speech perception in one language. We recorded monolingual and bilingual toddlers' event-related potentials (ERPs) to spoken words preceded by pictures. Words matching the picture prime elicited an early frontal positivity in bilingual participants only, whereas later ERP amplitudes associated with semantic processing did not differ between groups. These results add to the growing body of evidence that bilingualism increases overall attention during speech perception whilst semantic integration is unaffected. Copyright © 2015 The Authors. Published by Elsevier Inc. All rights reserved.

  5. Resting-state functional connectivity and pitch identification ability in non-musicians

    PubMed Central

    Hou, Jiancheng; Chen, Chuansheng; Dong, Qi

    2015-01-01

    Previous studies have used task-related fMRI to investigate the neural basis of pitch identification (PI), but no study has examined the associations between resting-state functional connectivity (RSFC) and PI ability. Using a large sample of Chinese non-musicians (N = 320, with 56 having prior musical training), the current study examined the associations among musical training, PI ability, and RSFC. Results showed that musical training was associated with increased RSFC within the networks for multiple cognitive functions (such as vision, phonology, semantics, auditory encoding, and executive functions). PI ability was associated with RSFC with regions for perceptual and auditory encoding for participants with musical training, and with RSFC with regions for short-term memory, semantics, and phonology for participants without musical training. PMID:25717289

  6. A Pilot Study on Modeling of Diagnostic Criteria Using OWL and SWRL.

    PubMed

    Hong, Na; Jiang, Guoqian; Pathak, Jyotishiman; Chute, Christopher G

    2015-01-01

    The objective of this study is to describe our efforts in a pilot study on modeling diagnostic criteria using a Semantic Web-based approach. We reused the basic framework of the ICD-11 content model and refined it into an operational model in the Web Ontology Language (OWL). The refinement is based on a bottom-up analysis method, in which we analyzed data elements (including value sets) in a collection (n=20) of randomly selected diagnostic criteria. We also performed a case study to formalize rule logic in the diagnostic criteria of metabolic syndrome using the Semantic Web Rule Language (SWRL). The results demonstrated that it is feasible to use OWL and SWRL to formalize the diagnostic criteria knowledge, and to execute the rules through reasoning.
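
    The rule logic formalized in SWRL can be illustrated in plain Python: metabolic syndrome is commonly diagnosed when at least three of five criteria are met. The thresholds below are illustrative only and are not taken from the paper or from a clinical reference.

```python
# Plain-Python sketch of "at least 3 of 5 criteria" rule logic (illustrative thresholds).
def metabolic_syndrome(p: dict) -> bool:
    criteria = [
        p["waist_cm"] > 102,                                   # elevated waist circumference
        p["triglycerides_mgdl"] >= 150,                        # elevated triglycerides
        p["hdl_mgdl"] < 40,                                    # reduced HDL cholesterol
        p["systolic_bp"] >= 130 or p["diastolic_bp"] >= 85,    # elevated blood pressure
        p["fasting_glucose_mgdl"] >= 100,                      # elevated fasting glucose
    ]
    return sum(criteria) >= 3

print(metabolic_syndrome({"waist_cm": 110, "triglycerides_mgdl": 180, "hdl_mgdl": 35,
                          "systolic_bp": 120, "diastolic_bp": 80,
                          "fasting_glucose_mgdl": 95}))
```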

  7. Semantic representation of CDC-PHIN vocabulary using Simple Knowledge Organization System.

    PubMed

    Zhu, Min; Mirhaji, Parsa

    2008-11-06

    PHIN Vocabulary Access and Distribution System (VADS) promotes the use of standards based vocabulary within CDC information systems. However, the current PHIN vocabulary representation hinders its wide adoption. Simple Knowledge Organization System (SKOS) is a W3C draft specification to support the formal representation of Knowledge Organization Systems (KOS) within the framework of the Semantic Web. We present a method of adopting SKOS to represent PHIN vocabulary in order to enable automated information sharing and integration.
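
    The sketch below shows what a SKOS rendering of a single vocabulary concept looks like using rdflib; the concept URI, labels, and scheme are invented for illustration and are not actual PHIN VADS content.

```python
# Sketch: one vocabulary concept expressed in SKOS with rdflib.
from rdflib import Graph, Namespace, Literal
from rdflib.namespace import RDF, SKOS

PHIN = Namespace("http://example.org/phin-vads/")  # hypothetical namespace
g = Graph()
c = PHIN["concept/Pertussis"]
g.add((c, RDF.type, SKOS.Concept))
g.add((c, SKOS.prefLabel, Literal("Pertussis", lang="en")))
g.add((c, SKOS.altLabel, Literal("Whooping cough", lang="en")))
g.add((c, SKOS.inScheme, PHIN["scheme/NotifiableConditions"]))
print(g.serialize(format="turtle"))
```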

  8. Alternating-Offers Protocol for Multi-issue Bilateral Negotiation in Semantic-Enabled Marketplaces

    NASA Astrophysics Data System (ADS)

    Ragone, Azzurra; di Noia, Tommaso; di Sciascio, Eugenio; Donini, Francesco M.

    We present a semantic-based approach to multi-issue bilateral negotiation for e-commerce. We use Description Logics to model advertisements, and relations among issues as axioms in a TBox. We then introduce a logic-based alternating-offers protocol, able to handle conflicting information, that merges non-standard reasoning services in Description Logics with utility theory to find the most suitable agreements. We illustrate and motivate the theoretical framework, the logical language, and the negotiation protocol.

  9. Knowledge-driven enhancements for task composition in bioinformatics.

    PubMed

    Sutherland, Karen; McLeod, Kenneth; Ferguson, Gus; Burger, Albert

    2009-10-01

    A key application area of semantic technologies is the fast-developing field of bioinformatics. Sealife was a project within this field with the aim of creating semantics-based web browsing capabilities for the Life Sciences. This includes meaningfully linking significant terms from the text of a web page to executable web services. It also involves the semantic mark-up of biological terms, linking them to biomedical ontologies, then discovering and executing services based on terms that interest the user. A system was produced which allows a user to identify terms of interest on a web page and subsequently connects these to a choice of web services which can make use of these inputs. Elements of Artificial Intelligence Planning build on this to present a choice of higher level goals, which can then be broken down to construct a workflow. An Argumentation System was implemented to evaluate the results produced by three different gene expression databases. An evaluation of these modules was carried out on users from a variety of backgrounds. Users with little knowledge of web services were able to achieve tasks that used several services in much less time than they would have taken to do this manually. The Argumentation System was also considered a useful resource and feedback was collected on the best way to present results. Overall the system represents a move forward in helping users to both construct workflows and analyse results by incorporating specific domain knowledge into the software. It also provides a mechanism by which web pages can be linked to web services. However, this work covers a specific domain and much co-ordinated effort is needed to make all web services available for use in such a way, i.e. the integration of underlying knowledge is a difficult but essential task.
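
    The planning layer's job of chaining services from available inputs toward a goal can be pictured with a toy forward-chaining sketch; the service registry and input/output names below are invented, not Sealife's.

```python
# Toy sketch: compose services by matching available data to service inputs.
SERVICES = {
    "get_gene_id":     {"in": {"gene_name"},           "out": {"gene_id"}},
    "get_expression":  {"in": {"gene_id"},             "out": {"expression_profile"}},
    "plot_expression": {"in": {"expression_profile"},  "out": {"plot"}},
}

def plan(available: set, goal: str):
    steps = []
    while goal not in available:
        step = next((name for name, s in SERVICES.items()
                     if s["in"] <= available and not s["out"] <= available), None)
        if step is None:
            return None                      # no service can make further progress
        available |= SERVICES[step]["out"]
        steps.append(step)
    return steps

print(plan({"gene_name"}, "plot"))  # ['get_gene_id', 'get_expression', 'plot_expression']
```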

  10. Specific cognitive functions and depressive symptoms as predictors of activities of daily living in older adults with heterogeneous cognitive backgrounds

    PubMed Central

    de Paula, Jonas J.; Diniz, Breno S.; Bicalho, Maria A.; Albuquerque, Maicon Rodrigues; Nicolato, Rodrigo; de Moraes, Edgar N.; Romano-Silva, Marco A.; Malloy-Diniz, Leandro F.

    2015-01-01

    Cognitive functioning influences activities of daily living (ADL). However, studies reporting the association between ADL and neuropsychological performance show inconsistent results regarding what specific cognitive domains are related to each specific functional domain. Additionally, whether depressive symptoms are associated with a worse functional performance in older adults is still underexplored. We investigated if specific cognitive domains and depressive symptoms would affect different aspects of ADL. Participants were 274 older adults (96 normal aging participants, 85 patients with mild cognitive impairment, and 93 patients with probable mild Alzheimer’s disease dementia) with low formal education (∼4 years). Measures of ADL included three complexity levels: Self-care, Instrumental-Domestic, and Instrumental-Complex. The specific cognitive functions were evaluated through a factorial strategy resulting in four cognitive domains: Executive Functions, Language/Semantic Memory, Episodic Memory, and Visuospatial Abilities. The Geriatric Depression Scale measured depressive symptoms. Multiple linear regression analysis showed executive functions and episodic memory as significant predictors of Instrumental-Domestic ADL, and executive functions, episodic memory and language/semantic memory as predictors of Instrumental-Complex ADL (22 and 28% of explained variance, respectively). Ordinal regression analysis showed the influence of specific cognitive functions and depressive symptoms on each one of the instrumental ADL. We observed a heterogeneous pattern of association with explained variance ranging from 22 to 38%. Different instrumental ADL had specific cognitive predictors and depressive symptoms were predictive of ADL involving social contact. Our results suggest a specific pattern of influence depending on the specific instrumental daily living activity. PMID:26257644

  11. Centrality-based Selection of Semantic Resources for Geosciences

    NASA Astrophysics Data System (ADS)

    Cerba, Otakar; Jedlicka, Karel

    2017-04-01

    Semantic questions arise in almost all disciplines dealing with geographic data and information, because relevant semantics is crucial for communication and interaction among humans as well as among machines. However, the existence of a large number of different semantic resources (various thesauri, controlled vocabularies, knowledge bases and ontologies) makes implementing semantics more difficult, because in many cases users are not able to find the most suitable resource for their purposes. The research presented in this paper introduces a methodology consisting of an analysis of identical relations in the Linked Data space, which covers a majority of semantic resources, to find a suitable resource of semantic information. Identical links interconnect representations of an object or a concept in various semantic resources. This type of relation is therefore considered crucial from the view of Linked Data, because these links provide additional information, including various views on one concept based on different cultural or regional aspects (the so-called social role of Linked Data). For these reasons, one reasonable criterion for selecting semantic resources for almost all domains, including geosciences, is their position in a network of interconnected semantic resources and their level of linking to other knowledge bases and similar products. The presented methodology is based on searching for mutual connections between various instances of one concept using a "follow your nose" approach. The extracted data on interconnections between semantic resources are arranged into directed graphs and processed with various centrality metrics (degree, closeness or betweenness centrality). Semantic resources recommended by the research could be used for providing semantically described keywords for metadata records or as names of items in data models. Such an approach enables much more efficient data harmonization, integration, sharing and exploitation. * * * * This publication was supported by the project LO1506 of the Czech Ministry of Education, Youth and Sports. This publication was supported by project Data-Driven Bioeconomy (DataBio) from the ICT-15-2016-2017, Big Data PPP call.
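
    The centrality computation described above can be sketched with networkx over a toy graph of identity links between resources; the links below are invented and do not reflect measured Linked Data connectivity.

```python
# Sketch: rank semantic resources by centrality over their identity links.
import networkx as nx

g = nx.DiGraph()
g.add_edges_from([("DBpedia", "Wikidata"), ("Wikidata", "DBpedia"),
                  ("GeoNames", "DBpedia"), ("AGROVOC", "DBpedia"),
                  ("AGROVOC", "Wikidata"), ("GEMET", "AGROVOC")])

metrics = {
    "degree": nx.degree_centrality(g),
    "closeness": nx.closeness_centrality(g),
    "betweenness": nx.betweenness_centrality(g),
}
for name, values in metrics.items():
    best = max(values, key=values.get)
    print(f"{name}: most central resource is {best} ({values[best]:.2f})")
```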

  12. Electrophysiological difference between the representations of causal judgment and associative judgment in semantic memory.

    PubMed

    Chen, Qingfei; Liang, Xiuling; Lei, Yi; Li, Hong

    2015-05-01

    Causally related concepts like "virus" and "epidemic" and general associatively related concepts like "ring" and "emerald" are represented and accessed separately. The Evoked Response Potential (ERP) procedure was used to examine the representations of causal judgment and associative judgment in semantic memory. Participants were required to remember a task cue (causal or associative) presented at the beginning of each trial, and assess whether the relationship between subsequently presented words matched the initial task cue. The ERP data showed that an N400 effect (250-450 ms) was more negative for unrelated words than for all related words. Furthermore, the N400 effect elicited by causal relations was more positive than for associative relations in causal cue condition, whereas no significant difference was found in the associative cue condition. The centrally distributed late ERP component (650-750 ms) elicited by the causal cue condition was more positive than for the associative cue condition. These results suggested that the processing of causal judgment and associative judgment in semantic memory recruited different degrees of attentional and executive resources. Copyright © 2015 Elsevier B.V. All rights reserved.

  13. Effect of episodic and working memory impairments on semantic and cognitive procedural learning at alcohol treatment entry.

    PubMed

    Pitel, Anne Lise; Witkowski, Thomas; Vabret, François; Guillery-Girard, Bérengère; Desgranges, Béatrice; Eustache, Francis; Beaunieux, Hélène

    2007-02-01

    Chronic alcoholism is known to impair the functioning of episodic and working memory, which may consequently reduce the ability to learn complex novel information. Nevertheless, semantic and cognitive procedural learning have not been properly explored at alcohol treatment entry, despite its potential clinical relevance. The goal of the present study was therefore to determine whether alcoholic patients, immediately after the weaning phase, are cognitively able to acquire complex new knowledge, given their episodic and working memory deficits. Twenty alcoholic inpatients with episodic memory and working memory deficits at alcohol treatment entry and a control group of 20 healthy subjects underwent a protocol of semantic acquisition and cognitive procedural learning. The semantic learning task consisted of the acquisition of 10 novel concepts, while subjects were administered the Tower of Toronto task to measure cognitive procedural learning. Analyses showed that although alcoholic subjects were able to acquire the category and features of the semantic concepts, albeit slowly, they presented impaired label learning. In the control group, executive functions and episodic memory predicted semantic learning in the first and second halves of the protocol, respectively. In addition to the cognitive processes involved in the learning strategies invoked by controls, alcoholic subjects seem to attempt to compensate for their impaired cognitive functions, invoking capacities of short-term passive storage. Regarding cognitive procedural learning, although the patients eventually achieved the same results as the controls, they failed to automate the procedure. Contrary to the control group, the alcoholic groups' learning performance was predicted by controlled cognitive functions throughout the protocol. At alcohol treatment entry, alcoholic patients with neuropsychological deficits have difficulty acquiring novel semantic and cognitive procedural knowledge. Compared with controls, they seem to use more costly learning strategies, which are nonetheless less efficient. These learning disabilities need to be considered when treatment requiring the acquisition of complex novel information is envisaged.

  14. A High-Speed Design of Montgomery Multiplier

    NASA Astrophysics Data System (ADS)

    Fan, Yibo; Ikenaga, Takeshi; Goto, Satoshi

    With the increase of key lengths used in public-key cryptographic algorithms such as RSA and ECC, the speed of Montgomery multiplication becomes a bottleneck. This paper proposes a high-speed design of a Montgomery multiplier. Firstly, a modified scalable high-radix Montgomery algorithm is proposed to reduce the critical path. Secondly, a high-radix clock-saving dataflow is proposed to support high-radix operation with a one-clock-cycle delay in the dataflow. Finally, a hardware-reused architecture is proposed to reduce hardware cost, and a parallel radix-16 design of the data path is proposed to accelerate the speed. Using the HHNEC 0.25 μm standard cell library, the implementation results show that the total cost of the Montgomery multiplier is 130 KGates, the clock frequency is 180 MHz, and the throughput of 1024-bit RSA encryption is 352 kbps. This design is suitable for high-speed RSA or ECC encryption/decryption. As a scalable design, it supports encryption/decryption with any key length up to the size of the on-chip memory.
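
    For reference, the operation the hardware accelerates is Montgomery multiplication, sketched below in software as the textbook algorithm; this is not the paper's radix-16 pipelined architecture, just the arithmetic it implements.

```python
# Software reference sketch of Montgomery multiplication (needs Python 3.8+ for pow(n, -1, r)).
def montgomery_setup(n: int, k: int):
    r = 1 << k                       # R = 2^k with R > n and n odd, so gcd(R, n) = 1
    n_prime = (-pow(n, -1, r)) % r   # n' = -n^{-1} mod R
    return r, n_prime

def mont_mul(a: int, b: int, n: int, r: int, n_prime: int) -> int:
    """Return a*b*R^{-1} mod n, for a and b already in Montgomery form."""
    t = a * b
    m = (t * n_prime) % r
    u = (t + m * n) >> (r.bit_length() - 1)   # exact division by R
    return u - n if u >= n else u

# Example: compute 7*11 mod 13 through the Montgomery domain.
n, k = 13, 8
r, n_prime = montgomery_setup(n, k)
a_bar, b_bar = (7 * r) % n, (11 * r) % n      # convert to Montgomery form
prod_bar = mont_mul(a_bar, b_bar, n, r, n_prime)
print(mont_mul(prod_bar, 1, n, r, n_prime))   # convert back: prints 12 (= 77 mod 13)
```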

  15. Applying a visual language for image processing as a graphical teaching tool in medical imaging

    NASA Astrophysics Data System (ADS)

    Birchman, James J.; Tanimoto, Steven L.; Rowberg, Alan H.; Choi, Hyung-Sik; Kim, Yongmin

    1992-05-01

    Typical user interaction in image processing is with command line entries, pull-down menus, or text menu selections from a list, and as such is not generally graphical in nature. Although applying these interactive methods to construct more sophisticated algorithms from a series of simple image processing steps may be clear to engineers and programmers, it may not be clear to clinicians. A solution to this problem is to implement a visual programming language using visual representations to express image processing algorithms. Visual representations promote a more natural and rapid understanding of image processing algorithms by providing more visual insight into what the algorithms do than the interactive methods mentioned above can provide. Individuals accustomed to dealing with images will be more likely to understand an algorithm that is represented visually. This is especially true of referring physicians, such as surgeons in an intensive care unit. With the increasing acceptance of picture archiving and communications system (PACS) workstations and the trend toward increasing clinical use of image processing, referring physicians will need to learn more sophisticated concepts than simply image access and display. If the procedures that they perform commonly, such as window width and window level adjustment and image enhancement using unsharp masking, are depicted visually in an interactive environment, it will be easier for them to learn and apply these concepts. The software described in this paper is a visual programming language for image processing, which has been implemented on the NeXT computer using NeXTstep user interface development tools and other tools in an object-oriented environment. The concept is based upon the description of a visual language titled `Visualization of Vision Algorithms' (VIVA). Iconic representations of simple image processing steps are placed into a workbench screen and connected together into a dataflow path by the user. As the user creates and edits a dataflow path, more complex algorithms can be built on the screen. Once the algorithm is built, it can be executed, its results can be reviewed, and operator parameters can be interactively adjusted until an optimized output is produced. The optimized algorithm can then be saved and added to the system as a new operator. This system has been evaluated as a graphical teaching tool for window width and window level adjustment, image enhancement using unsharp masking, and other techniques.
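
    The dataflow idea of chaining simple operators can be mimicked in a few lines of NumPy/SciPy, using the two operations the abstract mentions (window/level adjustment and unsharp masking); this is only a stand-in for the iconic operators, not the VIVA/NeXTstep implementation.

```python
# Sketch: chain image-processing operators into a simple dataflow pipeline.
import numpy as np
from scipy.ndimage import gaussian_filter

def window_level(img, window, level):
    lo, hi = level - window / 2, level + window / 2
    return np.clip((img - lo) / (hi - lo), 0.0, 1.0)

def unsharp_mask(img, sigma=2.0, amount=1.0):
    blurred = gaussian_filter(img, sigma)
    return np.clip(img + amount * (img - blurred), 0.0, 1.0)

def run_pipeline(img, operators):
    for op in operators:              # each node consumes the previous node's output
        img = op(img)
    return img

ct_slice = np.random.default_rng(0).integers(0, 4096, (128, 128)).astype(float)
result = run_pipeline(ct_slice, [
    lambda im: window_level(im, window=400, level=1040),
    lambda im: unsharp_mask(im, sigma=2.0, amount=0.8),
])
print(result.shape, float(result.min()), float(result.max()))
```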

  16. Assessing the Potential Value of Semantic Web Technologies in Support of Military Operations

    DTIC Science & Technology

    2003-09-01

    Teleconference). Deitel, P. J. (2002). Java, How to Program, Fourth Edition. Upper Saddle River, New Jersey: Prentice-Hall, Inc. Description Logics... how clients connect with each other to form an impromptu community. Jini™ lets programs use services in a network without knowing anything about the...another runtime program (execution engine) to determine how the computer should do it. Declarative programming is very different from the traditional

  17. Artificial Intelligence - Research and Applications

    DTIC Science & Technology

    1975-05-01

    G, aln H, Harrow A, Brain B, Deutsch P, Duda R, Flues T, Garvey P. Hart G, Hendrix 0, Lynch B. Meyer M. Pattner C. Sacerdoti D ...System a. The Procedural Net b. Task-Specific Knowledge c. The Planning Algorithm d. The Execution Algorithm 3. The Semantics of Assembly and...101 3. Querying State Description Models 103 a. Truth Values 103 b. Generators Instead of Backtracking 104 c. The Query Functions 107 d

  18. A novel architecture for information retrieval system based on semantic web

    NASA Astrophysics Data System (ADS)

    Zhang, Hui

    2011-12-01

    Nowadays, the web has enabled an explosive growth of information sharing (there are currently over 4 billion pages covering most areas of human endeavor), and it now faces the new challenge of information overload. The challenge before us is not only to help people locate relevant information precisely, but also to access and aggregate a variety of information from different resources automatically. Current web documents are in human-oriented formats suitable for presentation, but machines cannot understand their meaning. To address this issue, Berners-Lee proposed the concept of the semantic web. With semantic web technology, web information can be understood and processed by machines, which opens new possibilities for automatic web information processing. A main problem of semantic web information retrieval is that when the system does not have enough knowledge, it returns a large number of meaningless results to users. In this paper, we present the architecture of an information retrieval system based on the semantic web. In addition, our system employs an inference engine to check whether a query should be posed to the keyword-based search engine or to the semantic search engine.
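
    The routing decision mentioned above can be caricatured as a coverage test: if the ontology knows enough of the query's terms, hand the query to the semantic engine, otherwise fall back to keyword search. The term set and threshold below are hypothetical, not taken from the paper.

```python
# Illustrative sketch: route a query to the semantic or keyword engine.
ONTOLOGY_TERMS = {"glacier", "sea", "ice", "permafrost", "albedo"}

def route_query(query: str, coverage_threshold: float = 0.5) -> str:
    terms = query.lower().split()
    covered = sum(t in ONTOLOGY_TERMS for t in terms)
    if terms and covered / len(terms) >= coverage_threshold:
        return "semantic"
    return "keyword"

print(route_query("glacier albedo trends"))   # -> semantic
print(route_query("cheap flights to oslo"))   # -> keyword
```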

  19. Architecture for WSN Nodes Integration in Context Aware Systems Using Semantic Messages

    NASA Astrophysics Data System (ADS)

    Larizgoitia, Iker; Muguira, Leire; Vazquez, Juan Ignacio

    Wireless sensor networks (WSN) are becoming extremely popular in the development of context aware systems. Traditionally WSN have been focused on capturing data, which was later analyzed and interpreted in a server with more computational power. In this kind of scenario the problem of representing the sensor information needs to be addressed. Every node in the network might have different sensors attached; therefore their correspondent packet structures will be different. The server has to be aware of the meaning of every single structure and data in order to be able to interpret them. Multiple sensors, multiple nodes, multiple packet structures (and not following a standard format) is neither scalable nor interoperable. Context aware systems have solved this problem with the use of semantic technologies. They provide a common framework to achieve a standard definition of any domain. Nevertheless, these representations are computationally expensive, so a WSN cannot afford them. The work presented in this paper tries to bridge the gap between the sensor information and its semantic representation, by defining a simple architecture that enables the definition of this information natively in a semantic way, achieving the integration of the semantic information in the network packets. This will have several benefits, the most important being the possibility of promoting every WSN node to a real semantic information source.
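
    A minimal way to picture "semantic information in the network packets" is to agree on small integer codes for ontology concepts so that a reading fits in a few bytes, as sketched below; the code table and packet layout are invented for illustration.

```python
# Sketch: pack a semantically tagged sensor reading into a 7-byte packet.
import struct

CONCEPT = {"Temperature": 1, "Humidity": 2, "CO2": 3}   # shared, pre-agreed code table
CONCEPT_NAME = {v: k for k, v in CONCEPT.items()}

def pack_reading(node_id: int, concept: str, value: float) -> bytes:
    return struct.pack("!HBf", node_id, CONCEPT[concept], value)

def unpack_reading(packet: bytes):
    node_id, code, value = struct.unpack("!HBf", packet)
    return node_id, CONCEPT_NAME[code], value

print(unpack_reading(pack_reading(42, "Temperature", 21.5)))
```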

  20. Hybrid Semantic Analysis for Mapping Adverse Drug Reaction Mentions in Tweets to Medical Terminology.

    PubMed

    Emadzadeh, Ehsan; Sarker, Abeed; Nikfarjam, Azadeh; Gonzalez, Graciela

    2017-01-01

    Social networks, such as Twitter, have become important sources for active monitoring of user-reported adverse drug reactions (ADRs). Automatic extraction of ADR information can be crucial for healthcare providers, drug manufacturers, and consumers. However, because of the non-standard nature of social media language, automatically extracted ADR mentions need to be mapped to standard forms before they can be used by operational pharmacovigilance systems. We propose a modular natural language processing pipeline for mapping (normalizing) colloquial mentions of ADRs to their corresponding standardized identifiers. We seek to accomplish this task and enable customization of the pipeline so that distinct unlabeled free text resources can be incorporated to use the system for other normalization tasks. Our approach, which we call Hybrid Semantic Analysis (HSA), sequentially employs rule-based and semantic matching algorithms for mapping user-generated mentions to concept IDs in the Unified Medical Language System vocabulary. The semantic matching component of HSA is adaptive in nature and uses a regression model to combine various measures of semantic relatedness and resources to optimize normalization performance on the selected data source. On a publicly available corpus, our normalization method achieves 0.502 recall and 0.823 precision (F-measure: 0.624). Our proposed method outperforms a baseline based on latent semantic analysis and another that uses MetaMap.
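
    The two-stage flavor of the approach (rules first, then semantic matching) can be pictured with the toy sketch below; the lexicon entries and CUIs are illustrative and should be verified, and the string-similarity fallback is only a stand-in for HSA's semantic-relatedness regression model.

```python
# Toy sketch: rule-based exact lookup, then a similarity-based fallback.
from difflib import SequenceMatcher

LEXICON = {                      # colloquial mention -> UMLS CUI (illustrative entries)
    "headache": "C0018681",
    "can't sleep": "C0917801",   # sleeplessness
    "threw up": "C0042963",      # vomiting
}

def normalize(mention: str) -> str:
    mention = mention.lower().strip()
    if mention in LEXICON:                       # stage 1: rule-based exact match
        return LEXICON[mention]
    best = max(LEXICON, key=lambda k: SequenceMatcher(None, mention, k).ratio())
    return LEXICON[best]                         # stage 2: similarity fallback

print(normalize("Headache"), normalize("could not sleep"))
```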

  1. A Rewriting-Based Approach to Trace Analysis

    NASA Technical Reports Server (NTRS)

    Havelund, Klaus; Rosu, Grigore; Clancy, Daniel (Technical Monitor)

    2002-01-01

    We present a rewriting-based algorithm for efficiently evaluating future time Linear Temporal Logic (LTL) formulae on finite execution traces online. While the standard models of LTL are infinite traces, finite traces appear naturally when testing and/or monitoring real applications that only run for limited time periods. The presented algorithm is implemented in the Maude executable specification language and essentially consists of a set of equations establishing an executable semantics of LTL using a simple formula transforming approach. The algorithm is further improved to build automata on-the-fly from formulae, using memoization. The result is a very efficient and small Maude program that can be used to monitor program executions. We furthermore present an alternative algorithm for synthesizing provably minimal observer finite state machines (or automata) from LTL formulae, which can be used to analyze execution traces without the need for a rewriting system, and can hence be used by observers written in conventional programming languages. The presented work is part of an ambitious runtime verification and monitoring project at NASA Ames, called PATHEXPLORER, and demonstrates that rewriting can be a tractable and attractive means for experimenting and implementing program monitoring logics.
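
    A highly simplified version of the formula-rewriting idea is to "progress" an LTL formula through each event of a finite trace and check what remains at the end; the sketch below handles only a few operators and is not the Maude-based algorithm itself.

```python
# Minimal sketch of LTL monitoring by formula rewriting over a finite trace.
def progress(f, event):
    """Rewrite formula f given the current event (a set of true propositions)."""
    op = f[0]
    if op == "prop":
        return ("true",) if f[1] in event else ("false",)
    if op in ("true", "false"):
        return f
    if op == "and":
        return simplify(("and", progress(f[1], event), progress(f[2], event)))
    if op == "next":
        return f[1]
    if op == "eventually":        # F p  ==  p or X(F p)
        return simplify(("or", progress(f[1], event), f))
    raise ValueError(f"unsupported operator: {op}")

def simplify(f):
    if f[0] in ("and", "or"):
        a, b = f[1], f[2]
        absorber = ("false",) if f[0] == "and" else ("true",)
        neutral = ("true",) if f[0] == "and" else ("false",)
        if absorber in (a, b): return absorber
        if a == neutral: return b
        if b == neutral: return a
    return f

def monitor(formula, trace):
    for event in trace:
        formula = progress(formula, event)
    return formula == ("true",)    # the residue must be discharged by trace end

# Does "eventually shutdown" hold on this finite trace?
print(monitor(("eventually", ("prop", "shutdown")), [{"start"}, {"work"}, {"shutdown"}]))
```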

  2. A DNA-based semantic fusion model for remote sensing data.

    PubMed

    Sun, Heng; Weng, Jian; Yu, Guangchuang; Massawe, Richard H

    2013-01-01

    Semantic technology plays a key role in various domains, from conversation understanding to algorithm analysis. As the most efficient semantic tool, ontology can represent, process and manage the widespread knowledge. Nowadays, many researchers use ontology to collect and organize data's semantic information in order to maximize research productivity. In this paper, we firstly describe our work on the development of a remote sensing data ontology, with a primary focus on semantic fusion-driven research for big data. Our ontology is made up of 1,264 concepts and 2,030 semantic relationships. However, the growth of big data is straining the capacities of current semantic fusion and reasoning practices. Considering the massive parallelism of DNA strands, we propose a novel DNA-based semantic fusion model. In this model, a parallel strategy is developed to encode the semantic information in DNA for a large volume of remote sensing data. The semantic information is read in a parallel and bit-wise manner and an individual bit is converted to a base. By doing so, a considerable amount of conversion time can be saved, i.e., the cluster-based multi-processes program can reduce the conversion time from 81,536 seconds to 4,937 seconds for 4.34 GB source data files. Moreover, the size of result file recording DNA sequences is 54.51 GB for parallel C program compared with 57.89 GB for sequential Perl. This shows that our parallel method can also reduce the DNA synthesis cost. In addition, data types are encoded in our model, which is a basis for building type system in our future DNA computer. Finally, we describe theoretically an algorithm for DNA-based semantic fusion. This algorithm enables the process of integration of the knowledge from disparate remote sensing data sources into a consistent, accurate, and complete representation. This process depends solely on ligation reaction and screening operations instead of the ontology.
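
    The bit-wise encoding step described above can be pictured with the tiny sketch below; which nucleotides stand for 0 and 1 is an assumption here, and the real pipeline operates in parallel over cluster processes rather than a single Python loop.

```python
# Toy sketch: convert each bit of a (binary) semantic payload into one base.
BASE_FOR_BIT = {0: "A", 1: "T"}   # assumed assignment; the paper's scheme may differ

def encode_bits(data: bytes) -> str:
    return "".join(
        BASE_FOR_BIT[(byte >> shift) & 1]
        for byte in data
        for shift in range(7, -1, -1)   # most-significant bit first
    )

print(encode_bits("hasSensor".encode("utf-8")))
```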

  3. A DNA-Based Semantic Fusion Model for Remote Sensing Data

    PubMed Central

    Sun, Heng; Weng, Jian; Yu, Guangchuang; Massawe, Richard H.

    2013-01-01

    Semantic technology plays a key role in various domains, from conversation understanding to algorithm analysis. As the most efficient semantic tool, ontology can represent, process and manage the widespread knowledge. Nowadays, many researchers use ontology to collect and organize data's semantic information in order to maximize research productivity. In this paper, we firstly describe our work on the development of a remote sensing data ontology, with a primary focus on semantic fusion-driven research for big data. Our ontology is made up of 1,264 concepts and 2,030 semantic relationships. However, the growth of big data is straining the capacities of current semantic fusion and reasoning practices. Considering the massive parallelism of DNA strands, we propose a novel DNA-based semantic fusion model. In this model, a parallel strategy is developed to encode the semantic information in DNA for a large volume of remote sensing data. The semantic information is read in a parallel and bit-wise manner and an individual bit is converted to a base. By doing so, a considerable amount of conversion time can be saved, i.e., the cluster-based multi-processes program can reduce the conversion time from 81,536 seconds to 4,937 seconds for 4.34 GB source data files. Moreover, the size of result file recording DNA sequences is 54.51 GB for parallel C program compared with 57.89 GB for sequential Perl. This shows that our parallel method can also reduce the DNA synthesis cost. In addition, data types are encoded in our model, which is a basis for building type system in our future DNA computer. Finally, we describe theoretically an algorithm for DNA-based semantic fusion. This algorithm enables the process of integration of the knowledge from disparate remote sensing data sources into a consistent, accurate, and complete representation. This process depends solely on ligation reaction and screening operations instead of the ontology. PMID:24116207

  4. Digital patient: Personalized and translational data management through the MyHealthAvatar EU project.

    PubMed

    Kondylakis, Haridimos; Spanakis, Emmanouil G; Sfakianakis, Stelios; Sakkalis, Vangelis; Tsiknakis, Manolis; Marias, Kostas; Xia Zhao; Hong Qing Yu; Feng Dong

    2015-08-01

    The advancements in healthcare practice have brought to the fore the need for flexible access to health-related information and created an ever-growing demand for the design and the development of data management infrastructures for translational and personalized medicine. In this paper, we present the data management solution implemented for the MyHealthAvatar EU research project, a project that attempts to create a digital representation of a patient's health status. The platform is capable of aggregating several knowledge sources relevant for the provision of individualized personal services. To this end, state of the art technologies are exploited, such as ontologies to model all available information, semantic integration to enable data and query translation and a variety of linking services to allow connecting to external sources. All original information is stored in a NoSQL database for reasons of efficiency and fault tolerance. Then it is semantically uplifted through a semantic warehouse which enables efficient access to it. All different technologies are combined to create a novel web-based platform allowing seamless user interaction through APIs that support personalized, granular and secure access to the relevant information.

  5. TOPTRAC: Topical Trajectory Pattern Mining

    PubMed Central

    Kim, Younghoon; Han, Jiawei; Yuan, Cangzhou

    2015-01-01

    With the increasing use of GPS-enabled mobile phones, geo-tagging, which refers to adding GPS information to media such as micro-blogging messages or photos, has seen a surge in popularity recently. This enables us to not only browse information based on locations, but also discover patterns in the location-based behaviors of users. Many techniques have been developed to find the patterns of people's movements using GPS data, but latent topics in text messages posted with local contexts have not been utilized effectively. In this paper, we present a latent topic-based clustering algorithm to discover patterns in the trajectories of geo-tagged text messages. We propose a novel probabilistic model to capture the semantic regions where people post messages with a coherent topic as well as the patterns of movement between the semantic regions. Based on the model, we develop an efficient inference algorithm to calculate model parameters. By exploiting the estimated model, we next devise a clustering algorithm to find the significant movement patterns that appear frequently in data. Our experiments on real-life data sets show that the proposed algorithm finds diverse and interesting trajectory patterns and identifies the semantic regions in a finer granularity than the traditional geographical clustering methods. PMID:26709365

  6. Situational Awareness Applied to Geology Field Mapping using Integration of Semantic Data and Visualization Techniques

    NASA Astrophysics Data System (ADS)

    Houser, P. I. Q.

    2017-12-01

    21st century earth science is data-intensive, characterized by heterogeneous, sometimes voluminous collections representing phenomena at different scales collected for different purposes and managed in disparate ways. However, much of the earth's surface still requires boots-on-the-ground, in-person fieldwork in order to detect the subtle variations from which humans can infer complex structures and patterns. Nevertheless, field experiences can and should be enabled and enhanced by a variety of emerging technologies. The goal of the proposed research project is to pilot test emerging data integration, semantic and visualization technologies for evaluation of their potential usefulness in the field sciences, particularly in the context of field geology. The proposed project will investigate new techniques for data management and integration enabled by semantic web technologies, along with new techniques for augmented reality that can operate on such integrated data to enable in situ visualization in the field. The research objectives include: Develop new technical infrastructure that applies target technologies to field geology; Test, evaluate, and assess the technical infrastructure in a pilot field site; Evaluate the capabilities of the systems for supporting and augmenting field science; and Assess the generality of the system for implementation in new and different types of field sites. Our hypothesis is that these technologies will enable what we call "field science situational awareness" - a cognitive state formerly attained only through long experience in the field - that is highly desirable but difficult to achieve in time- and resource-limited settings. Expected outcomes include elucidation of how, and in what ways, these technologies are beneficial in the field; enumeration of the steps and requirements to implement these systems; and cost/benefit analyses that evaluate under what conditions the investments of time and resources are advisable to construct such a system.

  7. Ontology Reuse in Geoscience Semantic Applications

    NASA Astrophysics Data System (ADS)

    Mayernik, M. S.; Gross, M. B.; Daniels, M. D.; Rowan, L. R.; Stott, D.; Maull, K. E.; Khan, H.; Corson-Rikert, J.

    2015-12-01

    The tension between local ontology development and wider ontology connections is fundamental to the Semantic Web. It is often unclear, however, what the key decision points should be for new semantic web applications in deciding when to reuse existing ontologies and when to develop original ontologies. In addition, with the growth of semantic web ontologies and applications, new semantic web applications can struggle to efficiently and effectively identify and select ontologies to reuse. This presentation will describe the ontology comparison, selection, and consolidation effort within the EarthCollab project. UCAR, Cornell University, and UNAVCO are collaborating on the EarthCollab project to use semantic web technologies to enable the discovery of the research output from a diverse array of projects. The EarthCollab project is using the VIVO Semantic Web software suite to increase discoverability of research information and data related to the following two geoscience-based communities: (1) the Bering Sea Project, an interdisciplinary field program whose data archive is hosted by NCAR's Earth Observing Laboratory (EOL), and (2) diverse research projects informed by geodesy through the UNAVCO geodetic facility and consortium. This presentation will outline EarthCollab use cases and provide an overview of key ontologies being used, including the VIVO-Integrated Semantic Framework (VIVO-ISF), Global Change Information System (GCIS), and Data Catalog (DCAT) ontologies. We will discuss issues related to bringing these ontologies together to provide a robust ontological structure to support the EarthCollab use cases. It is rare that a single pre-existing ontology meets all of a new application's needs. New projects need to stitch ontologies together in ways that fit into the broader semantic web ecosystem.

  8. A Categorization of Dynamic Analyzers

    NASA Technical Reports Server (NTRS)

    Lujan, Michelle R.

    1997-01-01

    Program analysis techniques and tools are essential to the development process because of the support they provide in detecting errors and deficiencies at different phases of development. The types of information rendered through analysis include the following: statistical measurements of code, type checks, dataflow analysis, consistency checks, test data, verification of code, and debugging information. Analyzers can be broken into two major categories: dynamic and static. Static analyzers examine programs with respect to syntax errors and structural properties. This includes gathering statistical information on program content, such as the number of lines of executable code, source lines, and cyclomatic complexity. In addition, static analyzers provide the ability to check for the consistency of programs with respect to variables. Dynamic analyzers, in contrast, are dependent on input and the execution of a program, providing the ability to find errors that cannot be detected through the use of static analysis alone. Dynamic analysis provides information on the behavior of a program rather than on the syntax. Both types of analysis detect errors in a program, but dynamic analyzers accomplish this through run-time behavior. This paper focuses on the following broad classification of dynamic analyzers: 1) Metrics; 2) Models; and 3) Monitors. Metrics are those analyzers that provide measurement. The next category, models, captures those analyzers that present the state of the program to the user at specified points in time. The last category, monitors, checks specified code based on some criteria. The paper discusses each classification and the techniques that are included under them. In addition, the role of each technique in the software life cycle is discussed. Familiarization with the tools that measure, model and monitor programs provides a framework for understanding the program's dynamic behavior from different perspectives through analysis of the input/output data.
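
    Since the paper's classification hinges on run-time observation, a tiny illustrative "monitor" in Python (not from the paper) makes the idea concrete: it uses sys.settrace to count how often each line of a function executes, i.e., a metric gathered dynamically rather than from the source text.

```python
# Minimal illustrative dynamic analyzer (a line-execution counter), not taken
# from the paper: it observes run-time behavior through sys.settrace.
import sys
from collections import Counter

def trace_lines(func, *args, **kwargs):
    counts = Counter()

    def tracer(frame, event, arg):
        if event == "line" and frame.f_code is func.__code__:
            counts[frame.f_lineno] += 1
        return tracer

    sys.settrace(tracer)
    try:
        result = func(*args, **kwargs)
    finally:
        sys.settrace(None)
    return result, counts

def gcd(a, b):
    while b:
        a, b = b, a % b
    return a

if __name__ == "__main__":
    value, line_counts = trace_lines(gcd, 48, 18)
    print(value)              # 6
    print(dict(line_counts))  # how often each line of gcd executed at run time
```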

  9. Linking CSF and cognition in Alzheimer's disease: Reanalysis of clinical data.

    PubMed

    Guhra, Michael; Thomas, Christine; Boedeker, Sebastian; Kreisel, Stefan; Driessen, Martin; Beblo, Thomas; Ohrmann, Patricia; Toepper, Max

    2016-01-01

    Memory and executive deficits are important cognitive markers of Alzheimer's disease (AD). Moreover, in the past decade, cerebrospinal fluid (CSF) biomarkers have been increasingly utilized in clinical practice. Both cognitive and CSF markers can be used to differentiate between AD patients and healthy seniors with high diagnostic accuracy. However, the extent to which performance on specific mnemonic or executive tasks enables reliable estimations of the concentrations of different CSF markers and their ratios remains unclear. To address the above issues, we examined the association between neuropsychological data and CSF biomarkers in 51 AD patients using hierarchical multiple regression analyses. In the first step of these analyses, age, education and sex were entered as predictors to control for possible confounding effects. In the second step, data from a neuropsychological test battery assessing episodic memory, semantic memory and executive functioning were included to determine whether these variables significantly increased (compared to step 1) the explained variance in Aβ42 concentration, p-tau concentration, t-tau concentration, Aβ42/t-tau ratio, and Aβ42/Aβ40 ratio. The different models explained 52% of the variance in Aβ42/t-tau ratio, 27% of the variance in Aβ42 concentration, and 28% of the variance in t-tau concentration. In particular, Aβ42/t-tau ratio was associated with verbal recognition and code shifting, with Aβ42 being related to verbal recognition and t-tau being related to code shifting. By contrast, the inclusion of neuropsychological data did not allow reliable estimations of Aβ42/Aβ40 ratio or p-tau concentration. Our results showed that strong associations exist between the cognitive key symptoms of AD and the concentrations and ratios of specific CSF markers. In addition, we revealed a specific combination of neuropsychological tests that may facilitate reliable estimations of CSF concentrations, thereby providing important diagnostic information for non-invasive early AD detection. Copyright © 2015 Elsevier Inc. All rights reserved.
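
    As a hedged sketch of the two-step (hierarchical) regression logic described above, the following Python example uses statsmodels on synthetic data; the variable names, sample values, and effect sizes are illustrative assumptions, not the study's data.

```python
# Illustrative two-step hierarchical regression on synthetic data (not the
# study's data): step 1 enters demographic covariates, step 2 adds the
# neuropsychological scores, and the increase in R^2 is the quantity of interest.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 51
age = rng.normal(75, 6, n)
education = rng.normal(12, 3, n)
sex = rng.integers(0, 2, n)
recognition = rng.normal(0, 1, n)          # hypothetical verbal recognition score
shifting = rng.normal(0, 1, n)             # hypothetical code-shifting score
ratio = 0.4 * recognition + 0.3 * shifting + rng.normal(0, 1, n)  # fake CSF ratio

step1 = sm.add_constant(np.column_stack([age, education, sex]))
step2 = sm.add_constant(np.column_stack([age, education, sex, recognition, shifting]))

m1 = sm.OLS(ratio, step1).fit()
m2 = sm.OLS(ratio, step2).fit()
print(f"R2 step 1: {m1.rsquared:.3f}  R2 step 2: {m2.rsquared:.3f}  "
      f"delta R2: {m2.rsquared - m1.rsquared:.3f}")
```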

  10. Design Features for Linguistically-Mediated Meaning Construction: The Relative Roles of the Linguistic and Conceptual Systems in Subserving the Ideational Function of Language.

    PubMed

    Evans, Vyvyan

    2016-01-01

    Recent research in language and cognitive science proposes that the linguistic system evolved to provide an "executive" control system on the evolutionarily more ancient conceptual system (e.g., Barsalou et al., 2008; Evans, 2009, 2015a,b; Bergen, 2012). In short, the claim is that embodied representations in the linguistic system interface with non-linguistic representations in the conceptual system, facilitating rich meanings, or simulations, enabling linguistically mediated communication. In this paper I build on these proposals by examining the nature of what I identify as design features for this control system. In particular, I address how the ideational function of language-our ability to deploy linguistic symbols to convey meanings of great complexity-is facilitated. The central proposal of this paper is as follows. The linguistic system of any given language user, of any given linguistic system-spoken or signed-facilitates access to knowledge representation-concepts-in the conceptual system, which subserves this ideational function. In the most general terms, the human meaning-making capacity is underpinned by two distinct, although tightly coupled representational systems: the conceptual system and the linguistic system. Each system contributes to meaning construction in qualitatively distinct ways. This leads to the first design feature: given that the two systems are representational-they are populated by semantic representations-the nature and function of the representations are qualitatively different. This proposed design feature I term the bifurcation in semantic representation. After all, it stands to reason that if a linguistic system has a different function, vis-à-vis the conceptual system, which is of far greater evolutionary antiquity, then the semantic representations will be complementary, and as such, qualitatively different, reflecting the functional distinctions of the two systems, in collectively giving rise to meaning. I consider the nature of these qualitatively distinct representations. And second, language itself is adapted to the conceptual system-the semantic potential-that it marshals in the meaning construction process. Hence, a linguistic system itself exhibits a bifurcation, in terms of the symbolic resources at its disposal. This design feature I dub the bifurcation in linguistic organization. As I shall argue, this relates to two distinct reference strategies available for symbolic encoding in language: what I dub words-to-world reference and words-to-words reference. In slightly different terms, this design feature of language amounts to a distinction between a lexical subsystem, and a grammatical subsystem.

  11. Quantitative and qualitative analysis of semantic verbal fluency in patients with temporal lobe epilepsy.

    PubMed

    Jaimes-Bautista, A G; Rodríguez-Camacho, M; Martínez-Juárez, I E; Rodríguez-Agudelo, Y

    2017-08-29

    Patients with temporal lobe epilepsy (TLE) perform poorly on semantic verbal fluency (SVF) tasks. Completing these tasks successfully involves multiple cognitive processes simultaneously. Therefore, quantitative analysis of SVF (number of correct words in one minute), conducted in most studies, has been found to be insufficient to identify cognitive dysfunction underlying SVF difficulties in TLE. To determine whether a sample of patients with TLE had SVF difficulties compared with a control group (CG), and to identify the cognitive components associated with SVF difficulties using quantitative and qualitative analysis. SVF was evaluated in 25 patients with TLE and 24 healthy controls; the semantic verbal fluency test included 5 semantic categories: animals, fruits, occupations, countries, and verbs. All 5 categories were analysed quantitatively (number of correct words per minute and interval of execution: 0-15, 16-30, 31-45, and 46-60 seconds); the categories animals and fruits were also analysed qualitatively (clusters, cluster size, switches, perseverations, and intrusions). Patients generated fewer words for all categories and intervals and fewer clusters and switches for animals and fruits than the CG (P<.01). Differences between groups were not significant in terms of cluster size and number of intrusions and perseverations (P>.05). Our results suggest an association between SVF difficulties in TLE and difficulty activating semantic networks, impaired strategic search, and poor cognitive flexibility. Attention, inhibition, and working memory are preserved in these patients. Copyright © 2017 Sociedad Española de Neurología. Publicado por Elsevier España, S.L.U. All rights reserved.
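
    To make the quantitative and qualitative scoring concrete, here is an illustrative Python sketch (not the study's scoring code) that counts correct words per 15-second interval and, for a toy subcategory map, clusters and switches; the subcategory assignments and the cluster definition are assumptions made for this example.

```python
# Illustrative SVF scoring sketch (not the study's code). Each response carries
# the second at which it was produced; the subcategory map defining clusters is
# a toy assumption for this example.
from itertools import groupby

SUBCATEGORY = {  # assumed semantic subcategories for the "animals" category
    "dog": "pets", "cat": "pets", "hamster": "pets",
    "lion": "wild", "tiger": "wild", "elephant": "wild",
    "hen": "farm", "cow": "farm",
}

def interval_counts(responses):
    """Correct words per 15 s interval (0-15, 16-30, 31-45, 46-60)."""
    counts = [0, 0, 0, 0]
    for _, t in responses:
        counts[min(int(t) // 15, 3)] += 1
    return counts

def clusters_and_switches(responses):
    subcats = [SUBCATEGORY.get(word, word) for word, _ in responses]
    runs = [len(list(g)) for _, g in groupby(subcats)]
    clusters = sum(1 for r in runs if r >= 2)   # here: a run of >= 2 related words
    switches = len(runs) - 1                    # transitions between runs
    return clusters, switches

if __name__ == "__main__":
    responses = [("dog", 2), ("cat", 5), ("lion", 14), ("tiger", 20),
                 ("cow", 38), ("hen", 44), ("elephant", 58)]
    print(interval_counts(responses))        # [3, 1, 2, 1]
    print(clusters_and_switches(responses))  # (3, 3)
```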

  12. The Pivotal Role of Semantic Memory in Remembering the Past and Imagining the Future

    PubMed Central

    Irish, Muireann; Piguet, Olivier

    2013-01-01

    Episodic memory refers to a complex and multifaceted process which enables the retrieval of richly detailed evocative memories from the past. In contrast, semantic memory is conceptualized as the retrieval of general conceptual knowledge divested of a specific spatiotemporal context. The neural substrates of the episodic and semantic memory systems have been dissociated in healthy individuals during functional imaging studies, and in clinical cohorts, leading to the prevailing view that episodic and semantic memory represent functionally distinct systems subtended by discrete neurobiological substrates. Importantly, however, converging evidence focusing on widespread neural networks now points to significant overlap between those regions essential for retrieval of autobiographical memories, episodic learning, and semantic processing. Here we review recent advances in episodic memory research focusing on neurodegenerative populations which has proved revelatory for our understanding of the complex interplay between episodic and semantic memory. Whereas episodic memory research has traditionally focused on retrieval of autobiographical events from the past, we also include evidence from the recent paradigm shift in which episodic memory is viewed as an adaptive and constructive process which facilitates the imagining of possible events in the future. We examine the available evidence which converges to highlight the pivotal role of semantic memory in providing schemas and meaning whether one is engaged in autobiographical retrieval for the past, or indeed, is endeavoring to construct a plausible scenario of an event in the future. It therefore seems plausible to contend that semantic processing may underlie most, if not all, forms of episodic memory, irrespective of temporal condition. PMID:23565081

  13. Semantic Enhancement for Enterprise Data Management

    NASA Astrophysics Data System (ADS)

    Ma, Li; Sun, Xingzhi; Cao, Feng; Wang, Chen; Wang, Xiaoyuan; Kanellos, Nick; Wolfson, Dan; Pan, Yue

    Taking customer data as an example, the paper presents an approach to enhance the management of enterprise data by using Semantic Web technologies. Customer data is the most important kind of core business entity a company uses repeatedly across many business processes and systems, and customer data management (CDM) is becoming critical for enterprises because it keeps a single, complete and accurate record of customers across the enterprise. Existing CDM systems focus on integrating customer data from all customer-facing channels and front and back office systems through multiple interfaces, as well as publishing customer data to different applications. To make effective use of the CDM system, this paper investigates semantic query and analysis over the integrated and centralized customer data, enabling automatic classification and relationship discovery. We have implemented these features over IBM Websphere Customer Center, and shown the prototype to our clients. We believe that our study and experiences are valuable for both the Semantic Web community and the data management community.

  14. BioHackathon series in 2011 and 2012: penetration of ontology and linked data in life science domains

    PubMed Central

    2014-01-01

    The application of semantic technologies to the integration of biological data and the interoperability of bioinformatics analysis and visualization tools has been the common theme of a series of annual BioHackathons hosted in Japan for the past five years. Here we provide a review of the activities and outcomes from the BioHackathons held in 2011 in Kyoto and 2012 in Toyama. In order to efficiently implement semantic technologies in the life sciences, participants formed various sub-groups and worked on the following topics: Resource Description Framework (RDF) models for specific domains, text mining of the literature, ontology development, essential metadata for biological databases, platforms to enable efficient Semantic Web technology development and interoperability, and the development of applications for Semantic Web data. In this review, we briefly introduce the themes covered by these sub-groups. The observations made, conclusions drawn, and software development projects that emerged from these activities are discussed. PMID:24495517

  15. Towards a semantic PACS: Using Semantic Web technology to represent imaging data.

    PubMed

    Van Soest, Johan; Lustberg, Tim; Grittner, Detlef; Marshall, M Scott; Persoon, Lucas; Nijsten, Bas; Feltens, Peter; Dekker, Andre

    2014-01-01

    The DICOM standard is ubiquitous within medicine. However, improved DICOM semantics would significantly enhance search operations. Furthermore, databases of current PACS systems are not flexible enough for the demands within image analysis research. In this paper, we investigated if we can use Semantic Web technology, to store and represent metadata of DICOM image files, as well as linking additional computational results to image metadata. Therefore, we developed a proof of concept containing two applications: one to store commonly used DICOM metadata in an RDF repository, and one to calculate imaging biomarkers based on DICOM images, and store the biomarker values in an RDF repository. This enabled us to search for all patients with a gross tumor volume calculated to be larger than 50 cc. We have shown that we can successfully store the DICOM metadata in an RDF repository and are refining our proof of concept with regards to volume naming, value representation, and the applications themselves.
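
    A hedged sketch of the kind of representation and query described above, using rdflib with an invented namespace: the predicate names and example triples are assumptions for illustration, not the authors' actual schema, but they show how a "gross tumor volume larger than 50 cc" question becomes a SPARQL filter.

```python
# Illustrative sketch (invented namespace and predicates, not the authors'
# schema): store per-patient imaging metadata and a computed biomarker as RDF
# triples, then select patients whose gross tumor volume exceeds 50 cc.
from rdflib import Graph, Namespace, Literal, RDF
from rdflib.namespace import XSD

EX = Namespace("http://example.org/pacs#")
g = Graph()

for pid, gtv_cc in [("patient1", 72.4), ("patient2", 31.0)]:
    subj = EX[pid]
    g.add((subj, RDF.type, EX.Patient))
    g.add((subj, EX.grossTumorVolumeCC, Literal(gtv_cc, datatype=XSD.double)))

query = """
PREFIX ex: <http://example.org/pacs#>
SELECT ?patient ?gtv WHERE {
  ?patient a ex:Patient ;
           ex:grossTumorVolumeCC ?gtv .
  FILTER(?gtv > 50)
}
"""
for row in g.query(query):
    print(row.patient, float(row.gtv))   # only patient1 is returned
```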

  16. Pragmatic and executive functions in traumatic brain injury and right brain damage: An exploratory comparative study

    PubMed Central

    Zimmermann, Nicolle; Gindri, Gigiane; de Oliveira, Camila Rosa; Fonseca, Rochele Paz

    2011-01-01

    Objective To describe the frequency of pragmatic and executive deficits in right brain damaged (RBD) and in traumatic brain injury (TBI) patients, and to verify possible dissociations between pragmatic and executive functions in these two groups. Methods The sample comprised 7 cases of TBI and 7 cases of RBD. All participants were assessed by means of tasks from the Montreal Communication Evaluation Battery and executive functions tests including the Trail Making Test, Hayling Test, Wisconsin Card Sorting Test, semantic and phonemic verbal fluency tasks, and working memory tasks from the Brazilian Brief Neuropsychological Assessment Battery NEUPSILIN. Z-scores were calculated and a descriptive analysis of the frequency of deficits (Z< -1.5) was carried out. Results RBD patients presented with deficits predominantly on conversational and narrative discursive tasks, while TBI patients showed a more widespread pattern of pragmatic deficits. Regarding executive functions, RBD deficits included predominantly working memory and verbal initiation impairment. On the other hand, TBI individuals again exhibited a general profile of executive dysfunction, affecting mainly working memory, initiation, inhibition, planning and switching. Pragmatic and executive deficits were generally associated upon comparison of RBD patients and TBI cases, except for two simple dissociations: two post-TBI cases showed executive deficits in the absence of pragmatic deficits. Discussion Pragmatic and executive deficits can be very frequent following TBI or vascular RBD. There seems to be an association between these abilities, indicating that although they can co-occur, a cause-consequence relationship cannot be the only hypothesis. PMID:29213762
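
    The deficit criterion used above (Z < -1.5) is a simple standardization against normative data; a minimal illustrative sketch with made-up normative means and standard deviations (not the study's norms) follows.

```python
# Minimal illustration of the Z < -1.5 deficit criterion, using made-up
# normative means and standard deviations (not the study's norms).
NORMS = {"semantic_fluency": (18.0, 4.0), "working_memory": (10.0, 2.5)}  # (mean, sd)

def is_deficit(test: str, raw_score: float, cutoff: float = -1.5) -> bool:
    mean, sd = NORMS[test]
    z = (raw_score - mean) / sd
    return z < cutoff

print(is_deficit("semantic_fluency", 11))   # z = -1.75 -> True
print(is_deficit("working_memory", 9))      # z = -0.40 -> False
```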

  17. The Development of Metaphor Comprehension and Its Relationship with Relational Verbal Reasoning and Executive Function.

    PubMed

    Carriedo, Nuria; Corral, Antonio; Montoro, Pedro R; Herrero, Laura; Ballestrino, Patricia; Sebastián, Iraia

    2016-01-01

    Our main objective was to analyse the different contributions of relational verbal reasoning (analogical and class inclusion) and executive functioning to metaphor comprehension across development. We postulated that both relational reasoning and executive functioning should predict individual and developmental differences. However, executive functioning would become increasingly involved when metaphor comprehension is highly demanding, either because of the metaphors' high difficulty (relatively novel metaphors in the absence of a context) or because of the individual's special processing difficulties, such as low levels of reading experience or low semantic knowledge. Three groups of participants, 11-year-olds, 15-year-olds and young adults, were assessed in different relational verbal reasoning tasks-analogical and class-inclusion-and in executive functioning tasks-updating information in working memory, inhibition, and shifting. The results revealed clear progress in metaphor comprehension between ages 11 and 15 and between ages 15 and 21. However, the importance of executive function in metaphor comprehension was evident by age 15 and was restricted to updating information in working memory and cognitive inhibition. Participants seemed to use two different strategies to interpret metaphors: relational verbal reasoning and executive functioning. This was clearly shown when comparing the performance of the "more efficient" participants in metaphor interpretation with that of the "less efficient" ones. Whereas in the first case none of the executive variables or those associated with relational verbal reasoning were significantly related to metaphor comprehension, in the latter case, both groups of variables had a clear predictor effect.

  18. The ARES High-level Intermediate Representation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moss, Nicholas David

    The LLVM intermediate representation (IR) lacks semantic constructs for depicting common high-performance operations such as parallel and concurrent execution, communication and synchronization. Currently, representing such semantics in LLVM requires either extending the intermediate form (a significant undertaking) or the use of ad hoc indirect means such as encoding them as intrinsics and/or the use of metadata constructs. In this paper we discuss a work in progress to explore the design and implementation of a new compilation stage and associated high-level intermediate form that is placed between the abstract syntax tree and its lowering to LLVM's IR. This high-level representation is a superset of LLVM IR and supports the direct representation of these common parallel computing constructs along with the infrastructure for supporting analysis and transformation passes on this representation.
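
    As a loose, hedged illustration of what first-class parallel constructs in a high-level intermediate form could look like, the Python sketch below defines a few invented IR node types and a naive lowering that erases the parallel annotation; none of the node names or the lowering reflect the actual ARES design.

```python
# Invented illustration (not the ARES IR): a tiny high-level IR with explicit
# parallel-for and barrier nodes, plus a naive lowering to pseudo-instructions
# that loses the parallel semantics, mirroring how such constructs must
# otherwise be encoded indirectly at a lower level.
from dataclasses import dataclass
from typing import List

@dataclass
class Stmt:
    pass

@dataclass
class Assign(Stmt):
    target: str
    expr: str

@dataclass
class ParallelFor(Stmt):          # first-class parallel construct
    index: str
    upper: str
    body: List[Stmt]

@dataclass
class Barrier(Stmt):              # first-class synchronization construct
    pass

def lower(stmt: Stmt) -> List[str]:
    """Naive lowering to flat pseudo-instructions, dropping parallel semantics."""
    if isinstance(stmt, Assign):
        return [f"store {stmt.expr}, {stmt.target}"]
    if isinstance(stmt, ParallelFor):
        inner = [ins for s in stmt.body for ins in lower(s)]
        return [f"loop {stmt.index} to {stmt.upper}", *inner, "endloop"]
    if isinstance(stmt, Barrier):
        return ["call @runtime_barrier()"]
    raise TypeError(stmt)

if __name__ == "__main__":
    prog = ParallelFor("i", "n", [Assign("a[i]", "b[i] + c[i]"), Barrier()])
    print("\n".join(lower(prog)))
```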

  19. A Semantic Basis for Proof Queries and Transformations

    NASA Technical Reports Server (NTRS)

    Aspinall, David; Denney, Ewen W.; Luth, Christoph

    2013-01-01

    We extend the query language PrQL, designed for inspecting machine representations of proofs, to also allow transformation of proofs. PrQL natively supports hiproofs which express proof structure using hierarchically nested labelled trees, which we claim is a natural way of taming the complexity of huge proofs. Query-driven transformations enable manipulation of this structure, in particular, to transform proofs produced by interactive theorem provers into forms that assist their understanding, or that could be consumed by other tools. In this paper we motivate and define basic transformation operations, using an abstract denotational semantics of hiproofs and queries. This extends our previous semantics for queries based on syntactic tree representations. We define update operations that add and remove sub-proofs, and manipulate the hierarchy to group and ungroup nodes. We show that
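
    To make the "hierarchically nested labelled trees" and the group/ungroup transformations concrete, a hedged Python sketch follows; the data structure and operations are illustrative stand-ins and do not reproduce PrQL's actual semantics.

```python
# Illustrative stand-in for hiproofs (not PrQL's semantics): proofs as labelled
# trees, with "group" wrapping a run of siblings under a new labelled node and
# "ungroup" splicing a node's children back into its parent.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Node:
    label: str
    children: List["Node"] = field(default_factory=list)

def group(parent: Node, start: int, end: int, label: str) -> None:
    """Wrap children[start:end] of parent under a new node with the given label."""
    parent.children[start:end] = [Node(label, parent.children[start:end])]

def ungroup(parent: Node, index: int) -> None:
    """Replace children[index] by its own children (drop one hierarchy level)."""
    parent.children[index:index + 1] = parent.children[index].children

def show(node: Node, depth: int = 0) -> None:
    print("  " * depth + node.label)
    for child in node.children:
        show(child, depth + 1)

if __name__ == "__main__":
    proof = Node("root", [Node("intro"), Node("rewrite"), Node("simp"), Node("qed")])
    group(proof, 1, 3, "lemma_step")   # nest "rewrite" and "simp" under one label
    show(proof)
    ungroup(proof, 1)                  # and flatten the hierarchy back
    show(proof)
```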

  20. SPECTRa-T: machine-based data extraction and semantic searching of chemistry e-theses.

    PubMed

    Downing, Jim; Harvey, Matt J; Morgan, Peter B; Murray-Rust, Peter; Rzepa, Henry S; Stewart, Diana C; Tonge, Alan P; Townsend, Joe A

    2010-02-22

    The SPECTRa-T project has developed text-mining tools to extract named chemical entities (NCEs), such as chemical names and terms, and chemical objects (COs), e.g., experimental spectral assignments and physical chemistry properties, from electronic theses (e-theses). Although NCEs were readily identified within the two major document formats studied, only the use of structured documents enabled identification of chemical objects and their association with the relevant chemical entity (e.g., systematic chemical name). A corpus of theses was analyzed and it is shown that a high degree of semantic information can be extracted from structured documents. This integrated information has been deposited in a persistent Resource Description Framework (RDF) triple-store that allows users to conduct semantic searches. The strength and weaknesses of several document formats are reviewed.

  1. Dataflow Computation for the J-Machine

    DTIC Science & Technology

    1990-06-01


  2. The Many Ways Data Must Flow.

    ERIC Educational Resources Information Center

    La Brecque, Mort

    1984-01-01

    To break the bottleneck inherent in today's linear computer architectures, parallel schemes (which allow computers to perform multiple tasks at one time) are being devised. Several of these schemes are described. Dataflow devices, parallel number-crunchers, programming languages, and a device based on a neurological model are among the areas…

  3. Turtle Graphics Implementation Using a Graphical Dataflow Programming Approach

    DTIC Science & Technology

    1992-09-01

    this research. The intent of this section is not to teach how to program in LOGO, with the use of Turtle Graphics, but simply to provide an... how to program in Prograph, but only to provide a basic understanding of the Prograph language and its programming environment. Several examples are

  4. Information and Networking Technologies in Russian Libraries. UDT Occasional Paper #1.

    ERIC Educational Resources Information Center

    International Federation of Library Associations and Institutions, Ottawa (Ontario). International Office for Universal Dataflow & Telecommunications.

    The Universal Dataflow and Telecommunications (UDT) Occasional Papers distribute information on the use of networking, information technology and telecommunications by and of interest to the international library community. This occasional paper is comprised of three papers related to technologies in Russian libraries: (1) "The First Russian…

  5. Dataflow Integration and Simulation Techniques for DSP System Design Tools

    DTIC Science & Technology

    2007-01-01


  6. EO Domain Specific Knowledge Enabled Services (KES-B)

    NASA Astrophysics Data System (ADS)

    Varas, J.; Busto, J.; Torguet, R.

    2004-09-01

    This paper recovers and describes a number of major statements with respect to the vision, mission and technological approaches of the Technological Research Project (TRP) "EO Domain Specific Knowledge Enabled Services" (project acronym KES-B), which is currently under development at the European Space Research Institute (ESRIN) under contract "16397/02/I-SB". Resulting from the on-going R&D activities, the KES-B project aims to demonstrate with a prototype system the feasibility of applying innovative knowledge-based technologies to provide services for easy, scheduled and controlled exploitation of EO resources (e.g., data, algorithms, procedures, storage, processors, ...), to automate the generation of products, and to support users in easily identifying and accessing the required information or products by using their own vocabulary, domain knowledge and preferences. The ultimate goals of KES-B are summarized in the provision of two main types of KES services: first, the Search service (also referred to as Product Exploitation or Information Retrieval), and second, the Production service (also referred to as Information Extraction), with the strategic advantage that both are enabled by knowledge consolidated (formalized) within the system. The KES-B technical solution is driven by a strong commitment to the adoption of industry (XML-based) language standards, with the aim of delivering an interoperable, scalable and flexible operational prototype. In that sense, the Search KES services build on the adoption of consolidated and/or emergent W3C Semantic Web standards; notably, the languages/models Dublin Core (DC), Uniform Resource Identifier (URI), Resource Description Framework (RDF) and Web Ontology Language (OWL), together with COTS such as Protege [1] and JENA [2], are being integrated into the system as building bricks for the construction of the KES-based Search services. The Production KES services, on the other hand, build on top of workflow management standards and tools: the Business Process Execution Language (BPEL), the Web Services Definition Language (WSDL), and the Collaxa [3] COTS tool for workflow management are being integrated for the construction of the KES-B Production Services. The KES-B platform (web portal and web server) architecture is built on the J2EE reference architecture. These languages represent the means for the codification of the different types of knowledge that are to be formalized in the system, and together they constitute its ontological architecture. This shall in fact enable interoperability with other KES-based systems that also commit to those standards. The motivation behind this vision points towards the construction of a Semantic Web based Grid supply-chain infrastructure for EO services, in line with the INSPIRE initiative suggestions.

  7. Using the Unified Modelling Language (UML) to guide the systemic description of biological processes and systems.

    PubMed

    Roux-Rouquié, Magali; Caritey, Nicolas; Gaubert, Laurent; Rosenthal-Sabroux, Camille

    2004-07-01

    One of the main issues in Systems Biology is to deal with semantic data integration. Previously, we examined the requirements for a reference conceptual model to guide semantic integration based on systemic principles. In the present paper, we examine the usefulness of the Unified Modelling Language (UML) to describe and specify biological systems and processes. This yields unambiguous representations of biological systems that are suitable for translation into mathematical and computational formalisms, enabling analysis, simulation and prediction of these systems' behaviours.

  8. Ontology for Transforming Geo-Spatial Data for Discovery and Integration of Scientific Data

    NASA Astrophysics Data System (ADS)

    Nguyen, L.; Chee, T.; Minnis, P.

    2013-12-01

    Discovery of and access to geo-spatial scientific data across heterogeneous repositories and multi-discipline datasets can present challenges for scientists. We propose to build a workflow for transforming geo-spatial datasets into a semantic environment by using relationships to describe each resource with the OWL Web Ontology Language, RDF, and a proposed geo-spatial vocabulary. We will present methods for transforming traditional scientific datasets, the use of a semantic repository, and querying using SPARQL to integrate and access datasets. This unique repository will enable discovery of scientific data by geospatial bounds or other criteria.

  9. C3: The Compositional Construction of Content: A New, More Effective and Efficient Way to Marshal Inferences from Background Knowledge that will Enable More Natural and Effective Communication with Automomous Systems

    DTIC Science & Technology

    2014-01-06

    products derived from this funding. This includes two proposed activities for Summer 2014: • Deep Semantic Annotation with Shallow Methods; James... process that we need to ensure that words are unambiguous before we read them (present in just the semantic field that is presently active). Publication...Technical Report). MIT Artificial Intelligence Laboratory. Allen, J., Manshadi, M., Dzikovska, M., & Swift, M. (2007). Deep linguistic processing for

  10. The agent-based spatial information semantic grid

    NASA Astrophysics Data System (ADS)

    Cui, Wei; Zhu, YaQiong; Zhou, Yong; Li, Deren

    2006-10-01

    Analyzing the characteristics of multi-agent systems and geographic ontology, the concept of the Agent-based Spatial Information Semantic Grid (ASISG) is defined and the architecture of the ASISG is advanced. ASISG is composed of multi-agents and geographic ontology. The multi-agent systems are composed of User Agents, a General Ontology Agent, Geo-Agents, Broker Agents, Resource Agents, Spatial Data Analysis Agents, Spatial Data Access Agents, a Task Execution Agent and a Monitor Agent. The architecture of ASISG has three layers: the fabric layer, the grid management layer and the application layer. The fabric layer, which is composed of the Data Access Agent, Resource Agent and Geo-Agent, encapsulates the data of spatial information systems so as to exhibit a conceptual interface for the grid management layer. The grid management layer, which is composed of the General Ontology Agent, Task Execution Agent, Monitor Agent and Data Analysis Agent, uses a hybrid method to manage all resources that are registered in a General Ontology Agent described by a General Ontology System. The hybrid method combines resource dissemination and resource discovery: resource dissemination pushes resources from Local Ontology Agents to the General Ontology Agent, and resource discovery pulls resources from the General Ontology Agent to Local Ontology Agents. A Local Ontology Agent is derived from a special domain and describes the semantic information of a local GIS. The Local Ontology Agents can be filtered to construct a virtual organization that provides a global scheme. The virtual organization lightens the burden on users because they need not search for information site by site manually. The application layer, which is composed of the User Agent, Geo-Agent and Task Execution Agent, provides a corresponding interface to a domain user. The functions that ASISG should provide are: 1) Integrating different spatial information systems on the semantic level: the grid management layer establishes a virtual environment that seamlessly integrates all GIS nodes. 2) When the resource management system searches data on different spatial information systems, it transfers the meaning of different Local Ontology Agents rather than accessing data directly, so the search and query capability can be said to operate on the semantic level. 3) The data access procedure is transparent to users, that is, they can access information from a remote site as if it were a local disk, because the General Ontology Agent can automatically link data through the Data Agents that link ontology concepts to GIS data. 4) The capability of processing massive spatial data: storing, accessing and managing massive spatial data from TB to PB; efficiently analyzing and processing spatial data to produce models, information and knowledge; and providing 3D and multimedia visualization services. 5) The capability of high-performance computing and processing of spatial information: solving spatial problems with high precision, high quality, and on a large scale; and processing spatial information in real time or on time, with high speed and high efficiency. 6) The capability of sharing spatial resources: the distributed heterogeneous spatial information resources are shared, integrated and inter-operated on the semantic level, so as to make the best use of spatial information resources such as computing resources, storage devices, spatial data (integrated from GIS, RS and GPS), spatial applications and services, and GIS platforms. 7) The capability of integrating legacy GIS systems: an ASISG can not only be used to construct new advanced spatial application systems, but can also integrate legacy GIS systems, so as to keep extensibility and inheritance and guarantee the investment of users. 8) The capability of collaboration: large-scale spatial information applications and services always involve different departments in different geographic places, so remote and uniform services are needed. 9) The capability of supporting integration of heterogeneous systems: large-scale spatial information systems are always composite applications, so ASISG should provide interoperation and consistency by adopting open and applied technology standards. 10) The capability of adapting to dynamic changes: business requirements, application patterns, management strategies, and IT products always change endlessly for any department, so ASISG should be self-adaptive. Two examples are provided in this paper; they show in detail how to design a semantic grid based on multi-agent systems and ontology. In conclusion, the semantic grid of spatial information systems can improve the integration and interoperability of the spatial information grid.
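
    The push/pull interplay between Local Ontology Agents and the General Ontology Agent described above (resource dissemination versus resource discovery) can be sketched as follows; the class and method names are invented for illustration and are not from the paper.

```python
# Invented sketch of the dissemination/discovery pattern described above:
# local agents push resource descriptions to a general registry agent, and
# other agents pull (discover) resources from it by ontology concept.
from collections import defaultdict

class GeneralOntologyAgent:
    def __init__(self):
        self._registry = defaultdict(list)      # concept -> resource descriptors

    def register(self, concept: str, resource: dict) -> None:   # dissemination (push)
        self._registry[concept].append(resource)

    def discover(self, concept: str) -> list:                   # discovery (pull)
        return list(self._registry[concept])

class LocalOntologyAgent:
    def __init__(self, site: str, general: GeneralOntologyAgent):
        self.site, self.general = site, general

    def disseminate(self, concept: str, endpoint: str) -> None:
        self.general.register(concept, {"site": self.site, "endpoint": endpoint})

if __name__ == "__main__":
    general = GeneralOntologyAgent()
    LocalOntologyAgent("gis-node-a", general).disseminate("LandCoverMap", "wms://a/landcover")
    LocalOntologyAgent("gis-node-b", general).disseminate("LandCoverMap", "wms://b/landcover")
    print(general.discover("LandCoverMap"))     # both nodes found by concept, not by site
```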

  11. Semantic Likelihood Models for Bayesian Inference in Human-Robot Interaction

    NASA Astrophysics Data System (ADS)

    Sweet, Nicholas

    Autonomous systems, particularly unmanned aerial systems (UAS), remain limited in autonomous capabilities largely due to a poor understanding of their environment. Current sensors simply do not match human perceptive capabilities, impeding progress towards full autonomy. Recent work has shown the value of humans as sources of information within a human-robot team; in target applications, communicating human-generated 'soft data' to autonomous systems enables higher levels of autonomy through large, efficient information gains. This requires development of a 'human sensor model' that allows soft data fusion through Bayesian inference to update the probabilistic belief representations maintained by autonomous systems. Current human sensor models that capture linguistic inputs as semantic information are limited in their ability to generalize likelihood functions for semantic statements: they may be learned from dense data; they do not exploit the contextual information embedded within groundings; and they often limit human input to restrictive and simplistic interfaces. This work provides mechanisms to synthesize human sensor models from constraints based on easily attainable a priori knowledge, develops compression techniques to capture information-dense semantics, and investigates the problem of capturing and fusing semantic information contained within unstructured natural language. A robotic experimental testbed is also developed to validate the above contributions.
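
    A hedged sketch of the core idea of fusing a semantic human report into a Bayesian belief: the grid, the statement "the target is near x = 6", and the Gaussian-shaped likelihood below are all invented for illustration and do not reproduce the thesis's models.

```python
# Invented illustration of soft-data fusion: a 1-D grid belief over a target's
# position is updated with the semantic statement "the target is near x = 6",
# modelled here by an assumed Gaussian-shaped likelihood.
import numpy as np

cells = np.arange(0.0, 10.0, 0.5)                 # discretized positions
belief = np.full(cells.size, 1.0 / cells.size)    # uniform prior

def semantic_likelihood(center: float, softness: float = 1.0) -> np.ndarray:
    """Likelihood of the human saying 'near center' given each candidate cell."""
    return np.exp(-0.5 * ((cells - center) / softness) ** 2)

posterior = belief * semantic_likelihood(6.0)     # Bayes rule, unnormalized
posterior /= posterior.sum()

print(cells[np.argmax(posterior)])                # the belief now peaks near 6.0
```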

  12. A development framework for semantically interoperable health information systems.

    PubMed

    Lopez, Diego M; Blobel, Bernd G M E

    2009-02-01

    Semantic interoperability is a basic challenge to be met for new generations of distributed, communicating and co-operating health information systems (HIS) enabling shared care and e-Health. Analysis, design, implementation and maintenance of such systems and intrinsic architectures have to follow a unified development methodology. The Generic Component Model (GCM) is used as a framework for modeling any system to evaluate and harmonize state of the art architecture development approaches and standards for health information systems as well as to derive a coherent architecture development framework for sustainable, semantically interoperable HIS and their components. The proposed methodology is based on the Rational Unified Process (RUP), taking advantage of its flexibility to be configured for integrating other architectural approaches such as Service-Oriented Architecture (SOA), Model-Driven Architecture (MDA), ISO 10746, and HL7 Development Framework (HDF). Existing architectural approaches have been analyzed, compared and finally harmonized towards an architecture development framework for advanced health information systems. Starting with the requirements for semantic interoperability derived from paradigm changes for health information systems, and supported by formal software process engineering methods, an appropriate development framework for semantically interoperable HIS has been provided. The usability of the framework has been exemplified in a public health scenario.

  13. A semantic web ontology for small molecules and their biological targets.

    PubMed

    Choi, Jooyoung; Davis, Melissa J; Newman, Andrew F; Ragan, Mark A

    2010-05-24

    A wide range of data on sequences, structures, pathways, and networks of genes and gene products is available for hypothesis testing and discovery in biological and biomedical research. However, data describing the physical, chemical, and biological properties of small molecules have not been well-integrated with these resources. Semantically rich representations of chemical data, combined with Semantic Web technologies, have the potential to enable the integration of small molecule and biomolecular data resources, expanding the scope and power of biomedical and pharmacological research. We employed the Semantic Web technologies Resource Description Framework (RDF) and Web Ontology Language (OWL) to generate a Small Molecule Ontology (SMO) that represents concepts and provides unique identifiers for biologically relevant properties of small molecules and their interactions with biomolecules, such as proteins. We instantiated SMO using data from three public data sources, i.e., DrugBank, PubChem and UniProt, which were converted to RDF triples. Evaluation of SMO by use of predetermined competency questions implemented as SPARQL queries demonstrated that data from chemical and biomolecular data sources were effectively represented and that useful knowledge can be extracted. These results illustrate the potential of Semantic Web technologies in chemical, biological, and pharmacological research and in drug discovery.

  14. Mathematics of Sensing, Exploitation, and Execution (MSEE) Hierarchical Representations for the Evaluation of Sensed Data

    DTIC Science & Technology

    2016-06-01

    theories of the mammalian visual system, and exploiting descriptive text that may accompany a still image for improved inference. The focus of the Brown team was on single images. Subject terms: computer vision, semantic description, street scenes, belief propagation, generative models, nonlinear filtering, sufficient statistics.

  15. PRELIM: Predictive Relevance Estimation from Linked Models

    DTIC Science & Technology

    2014-10-14

    Final Report (11-07-2014 to 14-10-2014), contract N00014-14-P-1185; H. Van Dyke Parunak, Ph.D., Soar Technology, Inc. PRELIM (Predictive Relevance Estimation from Linked Models) draws on semantic models... The central challenge in proactive decision support is to anticipate the decision and information needs of decision-makers, in the light of likely

  16. Generalized Abstract Symbolic Summaries

    NASA Technical Reports Server (NTRS)

    Person, Suzette; Dwyer, Matthew B.

    2009-01-01

    Current techniques for validating and verifying program changes often consider the entire program, even for small changes, leading to enormous V&V costs over a program's lifetime. This is due, in large part, to the use of syntactic program techniques which are necessarily imprecise. Building on recent advances in symbolic execution of heap-manipulating programs, in this paper, we develop techniques for performing abstract semantic differencing of program behaviors that offer the potential for improved precision.
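
    As a toy illustration of semantic (behavioral) differencing via symbolic reasoning, the sketch below uses the z3 SMT solver to ask whether two versions of a small function can ever disagree on any input; the example programs are invented and this is not the paper's technique in full.

```python
# Toy illustration of semantic differencing with an SMT solver (z3), not the
# paper's full technique: encode two program versions as symbolic expressions
# and ask whether any input makes their results differ.
from z3 import Int, If, Solver, sat

x = Int("x")
old_version = If(x >= 0, x, -x)        # abs(x) written with one branch condition
new_version = If(x > 0, x, -x)         # a "refactoring" that is still equivalent

s = Solver()
s.add(old_version != new_version)      # search for a distinguishing input
if s.check() == sat:
    print("behaviors differ, e.g. at x =", s.model()[x])
else:
    print("no input distinguishes the two versions")
```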

  17. Cerebellar tDCS Modulates Neural Circuits during Semantic Prediction: A Combined tDCS-fMRI Study.

    PubMed

    D'Mello, Anila M; Turkeltaub, Peter E; Stoodley, Catherine J

    2017-02-08

    It has been proposed that the cerebellum acquires internal models of mental processes that enable prediction, allowing for the optimization of behavior. In language, semantic prediction speeds speech production and comprehension. Right cerebellar lobules VI and VII (including Crus I/II) are engaged during a variety of language processes and are functionally connected with cerebral cortical language networks. Further, right posterolateral cerebellar neuromodulation modifies behavior during predictive language processing. These data are consistent with a role for the cerebellum in semantic processing and semantic prediction. We combined transcranial direct current stimulation (tDCS) and fMRI to assess the behavioral and neural consequences of cerebellar tDCS during a sentence completion task. Task-based and resting-state fMRI data were acquired in healthy human adults ( n = 32; μ = 23.1 years) both before and after 20 min of 1.5 mA anodal ( n = 18) or sham ( n = 14) tDCS applied to the right posterolateral cerebellum. In the sentence completion task, the first four words of the sentence modulated the predictability of the final target word. In some sentences, the preceding context strongly predicted the target word, whereas other sentences were nonpredictive. Completion of predictive sentences increased activation in right Crus I/II of the cerebellum. Relative to sham tDCS, anodal tDCS increased activation in right Crus I/II during semantic prediction and enhanced resting-state functional connectivity between hubs of the reading/language networks. These results are consistent with a role for the right posterolateral cerebellum beyond motor aspects of language, and suggest that cerebellar internal models of linguistic stimuli support semantic prediction. SIGNIFICANCE STATEMENT Cerebellar involvement in language tasks and language networks is now well established, yet the specific cerebellar contribution to language processing remains unclear. It is thought that the cerebellum acquires internal models of mental processes that enable prediction, allowing for the optimization of behavior. Here we combined neuroimaging and neuromodulation to provide evidence that the cerebellum is specifically involved in semantic prediction during sentence processing. We found that activation within right Crus I/II was enhanced when semantic predictions were made, and we show that modulation of this region with transcranial direct current stimulation alters both activation patterns and functional connectivity within whole-brain language networks. For the first time, these data show that cerebellar neuromodulation impacts activation patterns specifically during predictive language processing. Copyright © 2017 the authors 0270-6474/17/371604-10$15.00/0.

  18. Cloud-based bioinformatics workflow platform for large-scale next-generation sequencing analyses

    PubMed Central

    Liu, Bo; Madduri, Ravi K; Sotomayor, Borja; Chard, Kyle; Lacinski, Lukasz; Dave, Utpal J; Li, Jianqiang; Liu, Chunchen; Foster, Ian T

    2014-01-01

    Due to the upcoming data deluge of genome data, the need for storing and processing large-scale genome data, easy access to biomedical analyses tools, efficient data sharing and retrieval has presented significant challenges. The variability in data volume results in variable computing and storage requirements, therefore biomedical researchers are pursuing more reliable, dynamic and convenient methods for conducting sequencing analyses. This paper proposes a Cloud-based bioinformatics workflow platform for large-scale next-generation sequencing analyses, which enables reliable and highly scalable execution of sequencing analyses workflows in a fully automated manner. Our platform extends the existing Galaxy workflow system by adding data management capabilities for transferring large quantities of data efficiently and reliably (via Globus Transfer), domain-specific analyses tools preconfigured for immediate use by researchers (via user-specific tools integration), automatic deployment on Cloud for on-demand resource allocation and pay-as-you-go pricing (via Globus Provision), a Cloud provisioning tool for auto-scaling (via HTCondor scheduler), and the support for validating the correctness of workflows (via semantic verification tools). Two bioinformatics workflow use cases as well as performance evaluation are presented to validate the feasibility of the proposed approach. PMID:24462600

  19. Cloud-based bioinformatics workflow platform for large-scale next-generation sequencing analyses.

    PubMed

    Liu, Bo; Madduri, Ravi K; Sotomayor, Borja; Chard, Kyle; Lacinski, Lukasz; Dave, Utpal J; Li, Jianqiang; Liu, Chunchen; Foster, Ian T

    2014-06-01

    Due to the upcoming data deluge of genome data, the need for storing and processing large-scale genome data, easy access to biomedical analyses tools, efficient data sharing and retrieval has presented significant challenges. The variability in data volume results in variable computing and storage requirements, therefore biomedical researchers are pursuing more reliable, dynamic and convenient methods for conducting sequencing analyses. This paper proposes a Cloud-based bioinformatics workflow platform for large-scale next-generation sequencing analyses, which enables reliable and highly scalable execution of sequencing analyses workflows in a fully automated manner. Our platform extends the existing Galaxy workflow system by adding data management capabilities for transferring large quantities of data efficiently and reliably (via Globus Transfer), domain-specific analyses tools preconfigured for immediate use by researchers (via user-specific tools integration), automatic deployment on Cloud for on-demand resource allocation and pay-as-you-go pricing (via Globus Provision), a Cloud provisioning tool for auto-scaling (via HTCondor scheduler), and the support for validating the correctness of workflows (via semantic verification tools). Two bioinformatics workflow use cases as well as performance evaluation are presented to validate the feasibility of the proposed approach. Copyright © 2014 Elsevier Inc. All rights reserved.

  20. Executive dysfunction, obsessive-compulsive symptoms, and attention deficit and hyperactivity disorder in Systemic Lupus Erythematosus: Evidence for basal ganglia dysfunction?

    PubMed

    Maciel, Ricardo Oliveira Horta; Ferreira, Gilda Aparecida; Akemy, Bárbara; Cardoso, Francisco

    2016-01-15

    Chorea is well described in a group of patients with Systemic Lupus Erythematosus (SLE). There is less information, however, on other movement disorders as well as non-motor neuropsychiatric features such as obsessive-compulsive symptoms (OCS), executive dysfunction and attention deficit and hyperactivity disorder (ADHD) in subjects with SLE. Fifty-four subjects with SLE underwent a battery of neuropsychiatric tests that included the Mini Mental State Examination, the Montreal Cognitive Assessment, the Frontal Assessment Battery (FAB), the FAS verbal and the categorical (animals) semantic fluency tests, the Obsessive and Compulsive Inventory - Revised, the Yale-Brown Obsessive and Compulsive Scale and Beck's Anxiety and Depression Scales. ADHD was diagnosed according to DSM-IV criteria. SLE disease activity and cumulative damage were evaluated according to the modified SLE Disease Activity Index 2000 (mSLEDAI-2K) and the SLICC/ACR, respectively. Six (11.1%) and 33 (61.1%) patients had cognitive impairment according to the MMSE and MoCA, respectively. Eleven (20.4%) had abnormal FAB scores, and 5 (9.3%) had lower semantic fluency scores than expected. The overall frequency of cognitive dysfunction was 72.2% (39 patients) and of neuropsychiatric SLE was 77.8% (42 patients). Two patients (3.7%) had movement disorders. Fifteen (27.8%) had OCS and 17 (31.5%) met diagnostic criteria for ADHD. ADHD and OCS correlated with higher disease activity, p=0.003 and 0.006, respectively. Higher cumulative damage correlated with lower FAB scores (p=0.026). Executive dysfunction, ADHD, OCS, and movement disorders are common in SLE. Our finding suggests that there is frequent basal ganglia dysfunction in SLE. Copyright © 2015 Elsevier B.V. All rights reserved.

  1. 3D medical volume reconstruction using web services.

    PubMed

    Kooper, Rob; Shirk, Andrew; Lee, Sang-Chul; Lin, Amy; Folberg, Robert; Bajcsy, Peter

    2008-04-01

    We address the problem of 3D medical volume reconstruction using web services. The use of the proposed web services is motivated by the fact that 3D medical volume reconstruction requires significant computer resources and human expertise in both the medical and computer science areas. The web services are implemented as an additional layer to a dataflow framework called Data to Knowledge. In the collaboration between UIC and NCSA, pre-processed input images at NCSA are made accessible to medical collaborators for registration. Each time the UIC medical collaborators inspect images and select corresponding features for registration, the web service at NCSA is contacted and the registration query is executed using the Image to Knowledge library of registration methods. Co-registered frames are returned for verification by the medical collaborators in a new window. In this paper, we present the 3D volume reconstruction problem requirements and the architecture of the developed prototype system at http://isda.ncsa.uiuc.edu/MedVolume. We also explain the tradeoffs of our system design and provide experimental data to support our system implementation. The prototype system has been used for multiple 3D volume reconstructions of blood vessels and vasculogenic mimicry patterns in histological sections of uveal melanoma studied by fluorescent confocal laser scanning microscopy.
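
    The client half of this interaction can be pictured as a simple HTTP call that sends the user-selected corresponding features and retrieves the co-registered frame; the endpoint path and JSON fields below are illustrative assumptions rather than the actual NCSA service interface:

        # Sketch of a registration web-service call: post user-selected feature
        # pairs for two adjacent slices, receive the co-registered frame back.
        # The endpoint path and JSON field names are illustrative only.
        import requests

        SERVICE = "https://isda.ncsa.uiuc.edu/MedVolume/api/register"  # hypothetical path

        payload = {
            "fixed_slice": "case07/slice_012.tif",
            "moving_slice": "case07/slice_013.tif",
            # pairs of (x, y) features picked by the medical collaborator
            "fixed_points":  [[102, 311], [450, 298], [233, 87]],
            "moving_points": [[110, 305], [458, 290], [241, 80]],
            "transform": "affine",
        }

        resp = requests.post(SERVICE, json=payload, timeout=600)
        resp.raise_for_status()
        with open("slice_013_registered.tif", "wb") as f:
            f.write(resp.content)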

  2. OpenROCS: a software tool to control robotic observatories

    NASA Astrophysics Data System (ADS)

    Colomé, Josep; Sanz, Josep; Vilardell, Francesc; Ribas, Ignasi; Gil, Pere

    2012-09-01

    We present the Open Robotic Observatory Control System (OpenROCS), an open source software platform developed for the robotic control of telescopes. It acts as a software infrastructure that executes all the processes needed to respond to the system events that arise in the routine and non-routine operations associated with data-flow and housekeeping control. The OpenROCS software design and implementation provide high flexibility for adaptation to different observatory configurations and event-action specifications. It is based on an abstract model that is independent of the specific hardware or software and is highly configurable; interfaces to the system components are defined in a simple manner to achieve this goal. We give a detailed description of version 2.0 of this software, which is based on a modular architecture developed in PHP and XML configuration files and uses standard communication protocols to interface with applications for hardware monitoring and control, environment monitoring, task scheduling, image processing, and data quality control. We provide two examples of how it is used as the core element of the control system in two robotic observatories: the Joan Oró Telescope at the Montsec Astronomical Observatory (Catalonia, Spain) and the SuperWASP Qatar Telescope at the Roque de los Muchachos Observatory (Canary Islands, Spain).
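
    A toy sketch of the abstract event-action model in Python (OpenROCS itself is PHP with XML configuration); the event names, handlers, and mapping are invented for illustration:

        # Toy event -> action dispatcher in the spirit of OpenROCS's event-action
        # model. In the real system the mapping would come from an XML
        # configuration file; event names and handlers here are illustrative.
        from typing import Callable, Dict, List

        def close_dome(ctx):     print("closing dome:", ctx)
        def park_telescope(ctx): print("parking telescope:", ctx)
        def pause_schedule(ctx): print("pausing scheduler:", ctx)

        EVENT_ACTIONS: Dict[str, List[Callable]] = {
            "weather.rain":      [close_dome, park_telescope, pause_schedule],
            "weather.high_wind": [close_dome, pause_schedule],
            "camera.error":      [pause_schedule],
        }

        def handle_event(name: str, context: dict) -> None:
            """Run every action configured for an incoming event, in order."""
            for action in EVENT_ACTIONS.get(name, []):
                action(context)

        handle_event("weather.rain", {"rain_rate_mm_h": 4.2})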

  3. Semantic interoperability--HL7 Version 3 compared to advanced architecture standards.

    PubMed

    Blobel, B G M E; Engel, K; Pharow, P

    2006-01-01

    To meet the challenge of high-quality and efficient care, highly specialized and distributed healthcare establishments have to communicate and co-operate in a semantically interoperable way. Information and communication technology must be open, flexible, scalable, knowledge-based, and service-oriented, as well as secure and safe. To enable semantic interoperability, a unified process for defining and implementing the architecture (the structure and functions of the cooperating systems' components) as well as the approach to knowledge representation (the information used and its interpretation, algorithms, etc.) has to be defined in a harmonized way. Deploying the Generic Component Model, systems and their components, underlying concepts, and applied constraints must be formally modeled, strictly separating platform-independent from platform-specific models. As HL7 Version 3 claims to be the most successful standard for semantic interoperability, HL7 has been analyzed against the requirements for model-driven, service-oriented design of semantically interoperable information systems, thereby moving from a communication paradigm to an architecture paradigm. The approach is compared with advanced architectural approaches for information systems such as OMG's CORBA 3 or EHR systems such as GEHR/openEHR and the CEN EN 13606 Electronic Health Record Communication standard. HL7 Version 3 is maturing towards an architectural approach to semantic interoperability. Despite current differences, there is close collaboration between the teams involved, guaranteeing convergence between the competing approaches.

  4. The Biomedical Resource Ontology (BRO) to Enable Resource Discovery in Clinical and Translational Research

    PubMed Central

    Tenenbaum, Jessica D.; Whetzel, Patricia L.; Anderson, Kent; Borromeo, Charles D.; Dinov, Ivo D.; Gabriel, Davera; Kirschner, Beth; Mirel, Barbara; Morris, Tim; Noy, Natasha; Nyulas, Csongor; Rubenson, David; Saxman, Paul R.; Singh, Harpreet; Whelan, Nancy; Wright, Zach; Athey, Brian D.; Becich, Michael J.; Ginsburg, Geoffrey S.; Musen, Mark A.; Smith, Kevin A.; Tarantal, Alice F.; Rubin, Daniel L; Lyster, Peter

    2010-01-01

    The biomedical research community relies on a diverse set of resources, both within their own institutions and at other research centers. In addition, an increasing number of shared electronic resources have been developed. Without effective means to locate and query these resources, it is challenging, if not impossible, for investigators to be aware of the myriad resources available, or to effectively perform resource discovery when the need arises. In this paper, we describe the development and use of the Biomedical Resource Ontology (BRO) to enable semantic annotation and discovery of biomedical resources. We also describe the Resource Discovery System (RDS) which is a federated, inter-institutional pilot project that uses the BRO to facilitate resource discovery on the Internet. Through the RDS framework and its associated Biositemaps infrastructure, the BRO facilitates semantic search and discovery of biomedical resources, breaking down barriers and streamlining scientific research that will improve human health. PMID:20955817

  5. An implementation and evaluation of the MPI 3.0 one-sided communication interface

    DOE PAGES

    Dinan, James S.; Balaji, Pavan; Buntinas, Darius T.; ...

    2016-01-09

    The Message Passing Interface (MPI) 3.0 standard includes a significant revision to MPI's remote memory access (RMA) interface, which provides support for one-sided communication. MPI-3 RMA is expected to greatly enhance the usability and performance of MPI RMA. We present the first complete implementation of MPI-3 RMA and document implementation techniques and performance optimization opportunities enabled by the new interface. Our implementation targets messaging-based networks and is publicly available in the latest release of the MPICH MPI implementation. Using this implementation, we explore the performance impact of new MPI-3 functionality and semantics. Results indicate that the MPI-3 RMA interface provides significant advantages over the MPI-2 interface by enabling increased communication concurrency through relaxed semantics in the interface and additional routines that provide new window types, synchronization modes, and atomic operations.
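
    A short mpi4py sketch of the passive-target style enabled by MPI-3 RMA (Lock_all/Put/Flush/Unlock_all); the window contents and neighbour pattern are illustrative, and the snippet is not taken from the paper's implementation:

        # Sketch of MPI-3 passive-target RMA with mpi4py: each rank exposes a
        # window, writes to its neighbour with Put, and uses the relaxed
        # Lock_all/Flush/Unlock_all synchronization introduced in MPI-3.
        # Run with e.g.:  mpiexec -n 4 python rma_sketch.py
        from mpi4py import MPI
        import numpy as np

        comm = MPI.COMM_WORLD
        rank = comm.Get_rank()
        size = comm.Get_size()

        local = np.zeros(1, dtype="d")          # memory exposed to other ranks
        win = MPI.Win.Create(local, comm=comm)

        target = (rank + 1) % size              # each rank writes to its neighbour
        value = np.array([float(rank)], dtype="d")

        win.Lock_all()                          # passive-target epoch on all ranks
        win.Put([value, MPI.DOUBLE], target)
        win.Flush(target)                       # complete the Put at the target
        win.Unlock_all()

        comm.Barrier()                          # everyone has finished its epoch
        win.Lock(rank, MPI.LOCK_SHARED)         # safely read our own window memory
        received = local[0]
        win.Unlock(rank)

        print(f"rank {rank} received {received:.0f} from rank {(rank - 1) % size}")
        win.Free()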

  6. An implementation and evaluation of the MPI 3.0 one-sided communication interface

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dinan, James S.; Balaji, Pavan; Buntinas, Darius T.

    The Message Passing Interface (MPI) 3.0 standard includes a significant revision to MPI's remote memory access (RMA) interface, which provides support for one-sided communication. MPI-3 RMA is expected to greatly enhance the usability and performance of MPI RMA. We present the first complete implementation of MPI-3 RMA and document implementation techniques and performance optimization opportunities enabled by the new interface. Our implementation targets messaging-based networks and is publicly available in the latest release of the MPICH MPI implementation. Using this implementation, we explore the performance impact of new MPI-3 functionality and semantics. Results indicate that the MPI-3 RMA interface provides significant advantages over the MPI-2 interface by enabling increased communication concurrency through relaxed semantics in the interface and additional routines that provide new window types, synchronization modes, and atomic operations.

  7. E-Government Goes Semantic Web: How Administrations Can Transform Their Information Processes

    NASA Astrophysics Data System (ADS)

    Klischewski, Ralf; Ukena, Stefan

    E-government applications and services are built mainly on access to, retrieval of, integration of, and delivery of relevant information to citizens, businesses, and administrative users. In order to perform such information processing automatically through the Semantic Web, machine-readable enhancements of web resources are needed, based on the understanding of the content and context of the information in focus. While these enhancements are far from trivial to produce, administrations in their role of information and service providers so far find little guidance on how to migrate their web resources and enable a new quality of information processing; even research is still seeking best practices. Therefore, the underlying research question of this chapter is: what are the appropriate approaches which guide administrations in transforming their information processes toward the Semantic Web? In search for answers, this chapter analyzes the challenges and possible solutions from the perspective of administrations: (a) the reconstruction of the information processing in the e-government in terms of how semantic technologies must be employed to support information provision and consumption through the Semantic Web; (b) the required contribution to the transformation is compared to the capabilities and expectations of administrations; and (c) available experience with the steps of transformation are reviewed and discussed as to what extent they can be expected to successfully drive the e-government to the Semantic Web. This research builds on studying the case of Schleswig-Holstein, Germany, where semantic technologies have been used within the frame of the Access-eGov project in order to semantically enhance electronic service interfaces with the aim of providing a new way of accessing and combining e-government services.

  8. Semantic Verbal Fluency in Children with and without Autism Spectrum Disorder: Relationship with Chronological Age and IQ

    PubMed Central

    Pastor-Cerezuela, Gemma; Fernández-Andrés, Maria-Inmaculada; Feo-Álvarez, Mireia; González-Sala, Francisco

    2016-01-01

    We administered a semantic verbal fluency (SVF) task to two groups of children (age range from 5 to 8): 47 diagnosed with Autism Spectrum Disorder (ASD Group) and 53 with typical development (Comparison Group), matched on gender, chronological age, and non-verbal IQ. Four specific indexes were calculated from the SVF task, reflecting the different underlying cognitive strategies used: clustering (component of generativity and lexical-semantic access), and switching (executive component, cognitive flexibility). First, we compared the performance of the two groups on the different SVF task indicators, with the ASD group scoring lower than the Comparison Group, although the difference was greater on switching than on clustering. Second, we analyzed the relationships between the different SVF measures and chronological age, verbal IQ and non-verbal IQ. While in the Comparison Group chronological age was the main predictor of performance on the SVF task, in the ASD Group verbal IQ was the best predictor. In the children with ASD, therefore, greater linguistic competence would be associated with better performance on the SVF task, which should be taken into account in speech therapies designed to achieve improvements in linguistic generativity and cognitive flexibility. PMID:27379002

  9. Interoperable Solar Data and Metadata via LISIRD 3

    NASA Astrophysics Data System (ADS)

    Wilson, A.; Lindholm, D. M.; Pankratz, C. K.; Snow, M. A.; Woods, T. N.

    2015-12-01

    LISIRD 3 is a major upgrade of the LASP Interactive Solar Irradiance Data Center (LISIRD), which serves several dozen space-based solar irradiance and related data products to the public. Through interactive plots, LISIRD 3 provides data browsing supported by data subsetting and aggregation. By incorporating a semantically enabled metadata repository, LISIRD 3 presents users with current, vetted, consistent information about the datasets offered. Users can now also search for datasets based on metadata fields such as dataset type and/or spectral or temporal range. This semantic database enables metadata browsing, so users can discover the relationships between datasets, instruments, spacecraft, missions, and PIs. The database also enables the creation and publication of metadata records in a variety of formats, such as SPASE or ISO, making these datasets more discoverable, and opens the possibility of a public SPARQL endpoint, making the metadata browsable in an automated fashion. LISIRD 3's data access middleware, LaTiS, provides dynamic, on-demand reformatting of data and timestamps, subsetting and aggregation, and other server-side functionality via a RESTful, OPeNDAP-compliant API, enabling interoperability between LASP datasets and many common tools. LISIRD 3's templated front-end design, coupled with the uniform data interface offered by LaTiS, allows easy integration of new datasets. Consequently, the number and variety of datasets offered by LISIRD has grown to encompass several dozen, with many more to come. This poster will discuss the design and implementation of LISIRD 3, including the tools used, capabilities enabled, and issues encountered.
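
    A request against a LaTiS-style RESTful endpoint can be as small as the sketch below; the base URL, dataset name, and filter syntax are assumptions for illustration, not the documented LISIRD interface:

        # Sketch: requesting a time-subset of an irradiance dataset as CSV from a
        # LaTiS-style endpoint. Base URL, dataset name, and query syntax are
        # illustrative assumptions, not the documented API.
        import csv
        import io
        import requests

        BASE = "https://lasp.colorado.edu/lisird/latis/dap"      # assumed base path
        DATASET = "composite_lyman_alpha"                        # assumed dataset name

        url = f"{BASE}/{DATASET}.csv"
        params = {"time>": "2010-01-01", "time<": "2010-02-01"}  # server-side subsetting

        resp = requests.get(url, params=params, timeout=60)
        resp.raise_for_status()

        rows = list(csv.reader(io.StringIO(resp.text)))
        header, data = rows[0], rows[1:]
        print(header)
        print(f"{len(data)} samples returned")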

  10. Cheminformatics and the Semantic Web: adding value with linked data and enhanced provenance

    PubMed Central

    Frey, Jeremy G; Bird, Colin L

    2013-01-01

    Cheminformatics is evolving from being a field of study associated primarily with drug discovery into a discipline that embraces the distribution, management, access, and sharing of chemical data. The relationship with the related subject of bioinformatics is becoming stronger and better defined, owing to the influence of Semantic Web technologies, which enable researchers to integrate heterogeneous sources of chemical, biochemical, biological, and medical information. These developments depend on a range of factors: the principles of chemical identifiers and their role in relationships between chemical and biological entities; the importance of preserving provenance and properly curated metadata; and an understanding of the contribution that the Semantic Web can make at all stages of the research lifecycle. The movements toward open access, open source, and open collaboration all contribute to progress toward the goals of integration. PMID:24432050
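
    A small rdflib sketch of the kind of linking described here: a chemical entity identified by its InChIKey, related to a biological target, with minimal provenance attached; the namespaces and property names are simplified placeholders rather than any published vocabulary:

        # Sketch: linking a chemical entity to a biological target with simple
        # provenance, in the Linked Data style discussed above. Namespaces and
        # property names are simplified placeholders, not a published vocabulary.
        from rdflib import Graph, Literal, Namespace
        from rdflib.namespace import RDF, RDFS, DCTERMS

        EX = Namespace("http://example.org/chem/")

        g = Graph()
        g.bind("ex", EX)
        g.bind("dcterms", DCTERMS)

        aspirin = EX["BSYNRYMUTXBXSQ-UHFFFAOYSA-N"]   # InChIKey-based identifier
        cox1 = EX["target/PTGS1"]

        g.add((aspirin, RDF.type, EX.ChemicalEntity))
        g.add((aspirin, RDFS.label, Literal("acetylsalicylic acid")))
        g.add((aspirin, EX.inhibits, cox1))
        g.add((cox1, RDFS.label, Literal("prostaglandin G/H synthase 1")))

        # Minimal provenance: where the assertion came from and when.
        g.add((aspirin, DCTERMS.source, Literal("curated from laboratory notebook LN-0042")))
        g.add((aspirin, DCTERMS.created, Literal("2013-01-15")))

        print(g.serialize(format="turtle"))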

  11. Graph Mining Meets the Semantic Web

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, Sangkeun; Sukumar, Sreenivas R; Lim, Seung-Hwan

    The Resource Description Framework (RDF) and SPARQL Protocol and RDF Query Language (SPARQL) were introduced about a decade ago to enable flexible, schema-free data interchange on the Semantic Web. Today, data scientists use the framework as a scalable graph representation for integrating, querying, exploring, and analyzing data sets hosted at different sources. With increasing adoption, the need for graph mining capabilities for the Semantic Web has emerged. We address that need through the implementation of three popular iterative graph mining algorithms (triangle counting, connected component analysis, and PageRank). We implement these algorithms as SPARQL queries wrapped within Python scripts. We evaluate the performance of our implementation on six real-world data sets and show that graph mining algorithms (those with a linear-algebra formulation) can indeed be unleashed on data represented as RDF graphs using the SPARQL query interface.
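
    The SPARQL-wrapped-in-Python pattern can be sketched for the triangle count as follows; the endpoint URL and edge predicate are placeholders, and the query is not the authors' exact formulation:

        # Sketch: counting triangles in an RDF graph through a SPARQL endpoint,
        # in the spirit of the SPARQL-wrapped-in-Python approach described above.
        # Endpoint URL and edge predicate are placeholders.
        from SPARQLWrapper import SPARQLWrapper, JSON

        ENDPOINT = "http://localhost:8890/sparql"        # placeholder endpoint
        EDGE = "http://example.org/graph/linksTo"        # placeholder edge predicate

        QUERY = f"""
        SELECT (COUNT(*) AS ?triangles)
        WHERE {{
          ?a <{EDGE}> ?b .
          ?b <{EDGE}> ?c .
          ?c <{EDGE}> ?a .
          FILTER (STR(?a) < STR(?b) && STR(?b) < STR(?c))   # count each triangle once
        }}
        """

        sparql = SPARQLWrapper(ENDPOINT)
        sparql.setQuery(QUERY)
        sparql.setReturnFormat(JSON)
        result = sparql.query().convert()
        print(result["results"]["bindings"][0]["triangles"]["value"])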

  12. Spatiotemporal integration of molecular and anatomical data in virtual reality using semantic mapping.

    PubMed

    Soh, Jung; Turinsky, Andrei L; Trinh, Quang M; Chang, Jasmine; Sabhaney, Ajay; Dong, Xiaoli; Gordon, Paul Mk; Janzen, Ryan Pw; Hau, David; Xia, Jianguo; Wishart, David S; Sensen, Christoph W

    2009-01-01

    We have developed a computational framework for spatiotemporal integration of molecular and anatomical datasets in a virtual reality environment. Using two case studies involving gene expression data and pharmacokinetic data, respectively, we demonstrate how existing knowledge bases for molecular data can be semantically mapped onto a standardized anatomical context of the human body. Our data mapping methodology uses ontological representations of heterogeneous biomedical datasets and an ontology reasoner to create complex semantic descriptions of biomedical processes. This framework provides a means to systematically combine an increasing amount of biomedical imaging and numerical data into spatiotemporally coherent graphical representations. Our work enables medical researchers with different expertise to simulate complex phenomena visually and to develop insights through the use of shared data, thus paving the way for pathological inference, developmental pattern discovery, and biomedical hypothesis testing.

  13. Representing nested semantic information in a linear string of text using XML.

    PubMed

    Krauthammer, Michael; Johnson, Stephen B; Hripcsak, George; Campbell, David A; Friedman, Carol

    2002-01-01

    XML has been widely adopted as an important data interchange language. The structure of XML enables sharing of data elements with variable degrees of nesting as long as the elements are grouped in a strict tree-like fashion. This requirement potentially restricts the usefulness of XML for marking up written text, which often includes features that do not properly nest within other features. We encountered this problem while marking up medical text with structured semantic information from a Natural Language Processor. Traditional approaches to this problem separate the structured information from the actual text markup. This paper introduces an alternative solution, which tightly integrates the semantic structure with the text. The resulting XML markup preserves the linearity of the medical texts and can therefore be easily expanded with additional types of information.
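
    A compact illustration of such inline integration: semantic elements wrap the actual words, so stripping the tags recovers the original sentence while the tree remains queryable; the tag names are invented and do not reproduce the authors' markup scheme:

        # Sketch: semantic markup embedded directly in a clinical sentence, so that
        # stripping the tags reproduces the original text. Tag names are invented
        # for illustration and do not reproduce the authors' markup scheme.
        import xml.etree.ElementTree as ET

        marked_up = (
            "<report>The patient denies "
            "<finding negation='yes'><bodyloc>chest</bodyloc> pain</finding> "
            "but reports <finding><severity>mild</severity> dyspnea</finding>."
            "</report>"
        )

        root = ET.fromstring(marked_up)

        # Recover the linear text by concatenating all text nodes in document order.
        original_text = "".join(root.itertext())
        print(original_text)   # -> The patient denies chest pain but reports mild dyspnea.

        # The same tree can still be queried semantically.
        for finding in root.iter("finding"):
            negated = finding.get("negation") == "yes"
            print("finding:", "".join(finding.itertext()), "| negated:", negated)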

  14. Representing nested semantic information in a linear string of text using XML.

    PubMed Central

    Krauthammer, Michael; Johnson, Stephen B.; Hripcsak, George; Campbell, David A.; Friedman, Carol

    2002-01-01

    XML has been widely adopted as an important data interchange language. The structure of XML enables sharing of data elements with variable degrees of nesting as long as the elements are grouped in a strict tree-like fashion. This requirement potentially restricts the usefulness of XML for marking up written text, which often includes features that do not properly nest within other features. We encountered this problem while marking up medical text with structured semantic information from a Natural Language Processor. Traditional approaches to this problem separate the structured information from the actual text markup. This paper introduces an alternative solution, which tightly integrates the semantic structure with the text. The resulting XML markup preserves the linearity of the medical texts and can therefore be easily expanded with additional types of information. PMID:12463856

  15. A dictionary server for supplying context sensitive medical knowledge.

    PubMed

    Ruan, W; Bürkle, T; Dudeck, J

    2000-01-01

    The Giessen Data Dictionary Server (GDDS), developed at Giessen University Hospital, integrates clinical systems with on-line, context-sensitive medical knowledge to support medical decision making. By "context" we mean the clinical information being presented at the moment the information need occurs. The dictionary server makes use of a semantic network, supported by a medical data dictionary, to link terms from clinical applications to their proper information sources. It has been designed to analyze the network structure itself instead of requiring the layout of the semantic net to be known in advance. This enables us to map appropriate information sources to various clinical applications, such as nursing documentation, drug prescription, and cancer follow-up systems. This paper describes the function of the dictionary server and shows how the knowledge stored in the semantic network is used in the dictionary service.
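
    The server's behaviour can be pictured as a search over a term-to-source semantic network, as in the sketch below; the nodes, link types, and traversal are invented for illustration and are not the GDDS data structures:

        # Sketch: resolving a clinical term to information sources by walking a
        # small semantic network, analogous in spirit to the dictionary server's
        # behaviour. Nodes, link types, and traversal are invented for illustration.
        from collections import deque

        # term/concept -> list of (link_type, target) edges
        SEMANTIC_NET = {
            "amoxicillin":           [("is_a", "penicillin antibiotic"),
                                      ("documented_in", "drug-monograph-db")],
            "penicillin antibiotic": [("contraindication", "penicillin allergy"),
                                      ("documented_in", "antibiotic-guideline")],
            "penicillin allergy":    [("documented_in", "allergy-alert-service")],
        }

        def information_sources(term: str) -> list:
            """Collect every 'documented_in' target reachable from a clinical term."""
            sources, seen, queue = [], {term}, deque([term])
            while queue:
                node = queue.popleft()
                for link, target in SEMANTIC_NET.get(node, []):
                    if link == "documented_in":
                        sources.append(target)
                    elif target not in seen:
                        seen.add(target)
                        queue.append(target)
            return sources

        # A drug-prescription screen asking for context-sensitive knowledge:
        print(information_sources("amoxicillin"))
        # -> ['drug-monograph-db', 'antibiotic-guideline', 'allergy-alert-service']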

  16. Reduced prefrontal activation in pediatric patients with obsessive-compulsive disorder during verbal episodic memory encoding.

    PubMed

    Batistuzzo, Marcelo Camargo; Balardin, Joana Bisol; Martin, Maria da Graça Morais; Hoexter, Marcelo Queiroz; Bernardes, Elisa Teixeira; Borcato, Sonia; Souza, Marina de Marco E; Querido, Cicero Nardini; Morais, Rosa Magaly; de Alvarenga, Pedro Gomes; Lopes, Antonio Carlos; Shavitt, Roseli Gedanke; Savage, Cary R; Amaro, Edson; Miguel, Euripedes C; Polanczyk, Guilherme V; Miotto, Eliane C

    2015-10-01

    Patients with obsessive-compulsive disorder (OCD) often present with deficits in episodic memory, and there is evidence that these difficulties may be secondary to executive dysfunction, that is, impaired selection and/or application of memory-encoding strategies (mediation hypothesis). Semantic clustering is an effective strategy to enhance encoding of verbal episodic memory (VEM) when word lists are semantically related. Self-initiated mobilization of this strategy has been associated with increased activity in the prefrontal cortex, particularly the orbitofrontal cortex, a key region in the pathophysiology of OCD. We therefore studied children and adolescents with OCD during uncued semantic clustering strategy application in a VEM functional magnetic resonance imaging (fMRI)-encoding paradigm. A total of 25 pediatric patients with OCD (aged 8.1-17.5 years) and 25 healthy controls (HC, aged 8.1-16.9) matched for age, gender, handedness, and IQ were evaluated using a block design VEM paradigm that manipulated semantically related and unrelated words. The semantic clustering strategy score (SCS) predicted VEM performance in HC (p < .001, R² = 0.635), but not in patients (p = .099). Children with OCD also presented hypoactivation in the dorsomedial prefrontal cortex (cluster-corrected p < .001). Within-group analysis revealed a negative correlation between Yale-Brown Obsessive Compulsive Scale scores and activation of orbitofrontal cortex in the group with OCD. Finally, a positive correlation between age and SCS was found in HC (p = .001, r = 0.635), but not in patients with OCD (p = .936, r = 0.017). Children with OCD presented altered brain activation during the VEM paradigm and absence of expected correlation between SCS and age, and between SCS and total words recalled. These results suggest that different neural mechanisms underlie self-initiated semantic clustering in OCD. Copyright © 2015 American Academy of Child and Adolescent Psychiatry. Published by Elsevier Inc. All rights reserved.

  17. Making species checklists understandable to machines - a shift from relational databases to ontologies.

    PubMed

    Laurenne, Nina; Tuominen, Jouni; Saarenmaa, Hannu; Hyvönen, Eero

    2014-01-01

    The scientific names of plants and animals play a major role in the Life Sciences, as information is indexed, integrated, and searched using scientific names. The main problem with names is their ambiguous nature: more than one name may point to the same taxon, and multiple taxa may share the same name. In addition, scientific names change over time, which makes them open to various interpretations. Applying machine-understandable semantics to these names enables efficient processing of biological content in information systems. The first step is to use unique persistent identifiers instead of name strings when referring to taxa. The most commonly used identifiers are Life Science Identifiers (LSID), which are traditionally used in relational databases, and more recently HTTP URIs, which are applied on the Semantic Web by Linked Data applications. We introduce two models for expressing taxonomic information in the form of species checklists. First, we show how species checklists are presented in a relational database system using LSIDs. Then, in order to gain a more detailed representation of taxonomic information, we introduce the meta-ontology TaxMeOn to model the same content as Semantic Web ontologies, where taxa are identified using HTTP URIs. We also explore how changes in scientific names can be managed over time. The use of HTTP URIs is preferable for presenting the taxonomic information of species checklists. Unlike an LSID, an HTTP URI identifies a taxon and operates as a web address from which additional information about the taxon can be located. This enables the integration of biological data from different sources on the web using Linked Data principles and prevents the formation of information silos. The Linked Data approach allows a user to assemble information and evaluate the complexity of taxonomic data based on conflicting views of taxonomic classifications. Using HTTP URIs and Semantic Web technologies also facilitates the representation of the semantics of biological data and, in this way, the creation of more "intelligent" biological applications and services.
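
    A brief rdflib sketch of the HTTP-URI approach: a taxon name receives a dereferenceable identifier and a name change is recorded as data; the namespace, properties, and example history are invented and do not reproduce the TaxMeOn model:

        # Sketch: identifying scientific names with HTTP URIs and recording a name
        # change over time, in the Linked Data style discussed above. Namespace,
        # properties, and the example history are illustrative placeholders.
        from rdflib import Graph, Literal, Namespace
        from rdflib.namespace import RDF, RDFS

        TAXON = Namespace("http://example.org/taxon/")
        P = Namespace("http://example.org/schema/")

        g = Graph()
        g.bind("taxon", TAXON)
        g.bind("p", P)

        old_name = TAXON["aus-bus-1900"]
        new_name = TAXON["cus-bus-1950"]

        g.add((old_name, RDF.type, P.ScientificName))
        g.add((old_name, RDFS.label, Literal("Aus bus Smith, 1900")))
        g.add((new_name, RDF.type, P.ScientificName))
        g.add((new_name, RDFS.label, Literal("Cus bus (Smith, 1900)")))

        # The change itself is data: the newer name replaces the older one.
        g.add((new_name, P.replaces, old_name))
        g.add((new_name, P.changedIn, Literal("1950")))

        print(g.serialize(format="turtle"))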

  18. A Programmer’s Assistant for a Special-Purpose Dataflow Language.

    DTIC Science & Technology

    1985-12-01

    [OCR fragments from the scanned report: a Lisp routine, load-qda-kbs, that loads the QDA knowledge bases, and a citation to DeMarco, T., "Structured Analysis and System Specification," GUIDE 47 Proceedings, 1978, reprinted in Classics in Software Engineering, edited by Edward...]

  19. Executive High School Internships

    ERIC Educational Resources Information Center

    Hirsch, Sharlene Pearlman

    1974-01-01

    The Executive High School Internships Program enables juniors and seniors to take a one-semester sabbatical from their studies to serve as special assistants to executives in government, business, non-profit organizations, and civic organizations. They perform a variety of duties, earning full academic credit for their participation. (AG)

  20. A concept ideation framework for medical device design.

    PubMed

    Hagedorn, Thomas J; Grosse, Ian R; Krishnamurty, Sundar

    2015-06-01

    Medical device design is a challenging process, often requiring collaboration between medical and engineering domain experts. This collaboration can be best institutionalized through systematic knowledge transfer between the two domains coupled with effective knowledge management throughout the design innovation process. Toward this goal, we present the development of a semantic framework for medical device design that unifies a large medical ontology with detailed engineering functional models along with the repository of design innovation information contained in the US Patent Database. As part of our development, existing medical, engineering, and patent document ontologies were modified and interlinked to create a comprehensive medical device innovation and design tool with appropriate properties and semantic relations to facilitate knowledge capture, enrich existing knowledge, and enable effective knowledge reuse for different scenarios. The result is a Concept Ideation Framework for Medical Device Design (CIFMeDD). Key features of the resulting framework include function-based searching and automated inter-domain reasoning to uniquely enable identification of functionally similar procedures, tools, and inventions from multiple domains based on simple semantic searches. The significance and usefulness of the resulting framework for aiding in conceptual design and innovation in the medical realm are explored via two case studies examining medical device design problems. Copyright © 2015 Elsevier Inc. All rights reserved.
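
    Function-based lookup over such a linked graph can be sketched as a SPARQL query with rdflib; the classes, properties, and sample data below are hypothetical stand-ins, not the CIFMeDD schema:

        # Sketch: function-based search across linked device/patent descriptions,
        # phrased as a SPARQL query over a small in-memory RDF graph. Classes,
        # properties, and sample data are hypothetical stand-ins.
        from rdflib import Graph

        SAMPLE = """
        @prefix ex:   <http://example.org/cifmedd/> .
        @prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .

        ex:clamp42  a ex:MedicalDevice ;
                    rdfs:label "vascular clamp" ;
                    ex:performsFunction ex:OccludeVessel .

        ex:pat99    a ex:PatentedInvention ;
                    rdfs:label "balloon occlusion catheter" ;
                    ex:performsFunction ex:OccludeVessel .
        """

        QUERY = """
        PREFIX ex:   <http://example.org/cifmedd/>
        PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>

        SELECT ?item ?label
        WHERE {
          ?item ex:performsFunction ex:OccludeVessel ;
                rdfs:label ?label .
        }
        """

        g = Graph()
        g.parse(data=SAMPLE, format="turtle")
        for item, label in g.query(QUERY):
            print(label, "->", item)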
