Sample records for neural decoding based

  1. Decoding small surface codes with feedforward neural networks

    NASA Astrophysics Data System (ADS)

    Varsamopoulos, Savvas; Criger, Ben; Bertels, Koen

    2018-01-01

    Surface codes reach high error thresholds when decoded with known algorithms, but the decoding time will likely exceed the available time budget, especially for near-term implementations. To decrease the decoding time, we reduce the decoding problem to a classification problem that a feedforward neural network can solve. We investigate quantum error correction and fault tolerance at small code distances using neural network-based decoders, demonstrating that the neural network can generalize to inputs that were not provided during training and that it can reach similar or better decoding performance than previous algorithms. We conclude by discussing the time required by a feedforward neural network decoder in hardware.
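
    To make the decoding-as-classification framing concrete, the sketch below trains a small feedforward classifier to map syndromes to error patterns. It uses a toy 3-qubit bit-flip code and synthetic data rather than the surface codes studied in the paper; the network size and error rate are arbitrary choices.

    ```python
    # Hedged toy sketch: decoding framed as classification with a feedforward net.
    # A 3-qubit bit-flip code stands in for the small surface codes in the paper.
    import numpy as np
    from sklearn.neural_network import MLPClassifier

    rng = np.random.default_rng(0)
    H = np.array([[1, 1, 0],          # parity checks Z1Z2 and Z2Z3
                  [0, 1, 1]])

    def sample(n, p=0.1):
        errors = (rng.random((n, 3)) < p).astype(int)   # X errors on 3 qubits
        syndromes = errors @ H.T % 2                    # measured syndrome bits
        labels = errors @ np.array([1, 2, 4])           # error pattern as class id
        return syndromes, labels

    X_train, y_train = sample(20000)
    X_test, y_test = sample(5000)

    clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500, random_state=0)
    clf.fit(X_train, y_train)
    print("test accuracy:", clf.score(X_test, y_test))  # limited by syndrome ambiguity
    ```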

  2. Decoding of Human Movements Based on Deep Brain Local Field Potentials Using Ensemble Neural Networks

    PubMed Central

    2017-01-01

    Decoding neural activities related to voluntary and involuntary movements is fundamental to understanding human brain motor circuits and neuromotor disorders and can lead to the development of neuromotor prosthetic devices for neurorehabilitation. This study explores using recorded deep brain local field potentials (LFPs) for robust movement decoding in Parkinson's disease (PD) and dystonia patients. The LFP data from voluntary movement activities such as left and right hand index finger clicking were recorded from patients who underwent surgeries for implantation of deep brain stimulation electrodes. Movement-related LFP signal features were extracted by computing instantaneous power related to motor response in different neural frequency bands. An innovative neural network ensemble classifier has been proposed and developed for accurate prediction of finger movement and its forthcoming laterality. The ensemble classifier contains three base neural network classifiers, namely, feedforward, radial basis, and probabilistic neural networks. The majority voting rule is used to fuse the decisions of the three base classifiers to generate the final decision of the ensemble classifier. The overall decoding performance reaches a level of agreement (kappa value) of about 0.729 ± 0.16 for decoding movement from the resting state and about 0.671 ± 0.14 for decoding left and right visually cued movements. PMID:29201041
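
    A minimal sketch of the majority-voting fusion described above, on synthetic features. Scikit-learn has no radial basis or probabilistic neural network, so an RBF-kernel SVM and a Gaussian naive Bayes model stand in for two of the three base classifiers; all data and hyperparameters are made up for illustration.

    ```python
    # Illustration only: majority-vote fusion of three base classifiers.
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPClassifier
    from sklearn.svm import SVC
    from sklearn.naive_bayes import GaussianNB

    X, y = make_classification(n_samples=2000, n_features=24, n_informative=10,
                               random_state=0)          # stand-in for band-power features
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

    base = [MLPClassifier(hidden_layer_sizes=(32,), max_iter=800, random_state=0),
            SVC(kernel="rbf"),          # stand-in for the radial basis network
            GaussianNB()]               # stand-in for the probabilistic network
    votes = np.array([m.fit(X_tr, y_tr).predict(X_te) for m in base])

    # Majority rule: the class predicted by at least two of the three base models.
    fused = (votes.sum(axis=0) >= 2).astype(int)
    print("ensemble accuracy:", (fused == y_te).mean())
    ```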

  3. Neural decoding with kernel-based metric learning.

    PubMed

    Brockmeier, Austin J; Choi, John S; Kriminger, Evan G; Francis, Joseph T; Principe, Jose C

    2014-06-01

    In studies of the nervous system, the choice of metric for the neural responses is a pivotal assumption. For instance, a well-suited distance metric enables us to gauge the similarity of neural responses to various stimuli and assess the variability of responses to a repeated stimulus, which are exploratory steps in understanding how the stimuli are encoded neurally. Here we introduce an approach where the metric is tuned for a particular neural decoding task. Neural spike train metrics have been used to quantify the information content carried by the timing of action potentials. While a number of metrics for individual neurons exist, a method to optimally combine single-neuron metrics into multineuron, or population-based, metrics is lacking. We pose the problem of optimizing multineuron metrics and other metrics using centered alignment, a kernel-based dependence measure. The approach is demonstrated on invasively recorded neural data consisting of both spike trains and local field potentials. The experimental paradigm consists of decoding the location of tactile stimulation on the forepaws of anesthetized rats. We show that the optimized metrics highlight the distinguishing dimensions of the neural response, significantly increase the decoding accuracy, and improve nonlinear dimensionality reduction methods for exploratory neural analysis.
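
    The centered alignment measure the authors optimize can be computed in a few lines; the sketch below evaluates it between an RBF Gram matrix on made-up response features and an ideal label kernel. The kernel and data choices are illustrative, not those of the paper.

    ```python
    # Minimal numpy sketch of centered kernel alignment.
    import numpy as np

    def center(K):
        n = K.shape[0]
        H = np.eye(n) - np.ones((n, n)) / n
        return H @ K @ H

    def centered_alignment(K, L):
        Kc, Lc = center(K), center(L)
        return np.sum(Kc * Lc) / (np.linalg.norm(Kc) * np.linalg.norm(Lc))

    rng = np.random.default_rng(0)
    X = rng.normal(size=(50, 10))                  # e.g. neural response features
    y = rng.integers(0, 2, size=50)                # stimulus labels

    K = np.exp(-0.5 * np.sum((X[:, None] - X[None]) ** 2, axis=-1))  # RBF Gram matrix
    L = (y[:, None] == y[None]).astype(float)                        # ideal label kernel
    print("alignment with labels:", centered_alignment(K, L))
    ```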

  4. Decoding-Accuracy-Based Sequential Dimensionality Reduction of Spatio-Temporal Neural Activities

    NASA Astrophysics Data System (ADS)

    Funamizu, Akihiro; Kanzaki, Ryohei; Takahashi, Hirokazu

    Performance of a brain machine interface (BMI) critically depends on the selection of input data because the information embedded in neural activities is highly redundant. In addition, properly selected input data of reduced dimension improve the decoder's generalization ability and decrease its computational cost, both of which are significant advantages for clinical applications. In the present paper, we propose a sequential dimensionality reduction (SDR) algorithm that effectively extracts motor/sensory-related spatio-temporal neural activities. The algorithm gradually reduces the input dimension by dropping neural data spatio-temporally while degrading the decoding accuracy as little as possible. A support vector machine (SVM) was used as the decoder, and tone-induced neural activities in rat auditory cortices were decoded into the test tone frequencies. SDR reduced the input dimension to a quarter and significantly improved the accuracy of decoding novel data. Moreover, the spatio-temporal neural activity patterns selected by SDR yielded significantly higher accuracy than high-spike-rate patterns or the conventionally used spatial patterns. These results suggest that the proposed algorithm can improve the generalization ability and decrease the computational effort of decoding.
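
    The sketch below captures the greedy spirit of SDR on generic feature columns rather than spatio-temporal neural bins: at each step it drops the dimension whose removal hurts cross-validated SVM accuracy the least, stopping when accuracy would degrade. The tolerance and data are arbitrary.

    ```python
    # Simplified sketch of sequential (greedy backward) dimensionality reduction with an SVM decoder.
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.model_selection import cross_val_score
    from sklearn.svm import SVC

    X, y = make_classification(n_samples=300, n_features=20, n_informative=5,
                               random_state=0)
    keep = list(range(X.shape[1]))
    score = cross_val_score(SVC(), X, y, cv=5).mean()

    while len(keep) > 5:
        trials = [(cross_val_score(SVC(), X[:, [f for f in keep if f != d]],
                                   y, cv=5).mean(), d) for d in keep]
        best_score, drop = max(trials)        # dimension whose removal hurts least
        if best_score < score - 0.02:         # stop before accuracy is undermined
            break
        score, keep = best_score, [f for f in keep if f != drop]

    print("kept dimensions:", keep, "cv accuracy:", round(score, 3))
    ```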

  5. Kernel Temporal Differences for Neural Decoding

    PubMed Central

    Bae, Jihye; Sanchez Giraldo, Luis G.; Pohlmeyer, Eric A.; Francis, Joseph T.; Sanchez, Justin C.; Príncipe, José C.

    2015-01-01

    We study the feasibility and capability of the kernel temporal difference (KTD)(λ) algorithm for neural decoding. KTD(λ) is an online, kernel-based learning algorithm introduced to estimate value functions in reinforcement learning. It combines kernel-based representations with the temporal difference approach to learning. One of our key observations is that, by using strictly positive definite kernels, the algorithm's convergence can be guaranteed for policy evaluation. The algorithm's nonlinear function approximation capabilities are shown in both simulations of policy evaluation and neural decoding problems (policy improvement). KTD can handle high-dimensional neural states containing spatio-temporal information at reasonable computational complexity, allowing real-time applications. When the algorithm seeks a proper mapping between a monkey's neural states and desired positions of a computer cursor or a robot arm, in both open-loop and closed-loop experiments, it effectively learns the neural-state-to-action mapping. Finally, a visualization of the coadaptation process between the decoder and the subject shows the algorithm's capabilities in reinforcement learning brain machine interfaces. PMID:25866504
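
    As a rough illustration of the kernel temporal-difference idea, the sketch below runs a stripped-down kernel TD(0) value estimator on a one-dimensional toy task: each visited state adds a Gaussian kernel unit weighted by the TD error. The paper's KTD(λ), with eligibility traces and Q-learning for decoding, is more general than this.

    ```python
    # Stripped-down kernel TD(0) sketch on a toy chain task (illustrative only).
    import numpy as np

    rng = np.random.default_rng(0)
    gamma, eta, sigma = 0.9, 0.2, 0.5
    centers, weights = [], []                 # growing kernel dictionary

    def value(x):
        if not centers:
            return 0.0
        k = np.exp(-((x - np.array(centers)) ** 2) / (2 * sigma ** 2))
        return float(k @ np.array(weights))

    for episode in range(200):
        x = 0.0                               # toy 1-D "neural state"; reward at x >= 5
        while x < 5.0:
            x_next = x + rng.uniform(0.5, 1.5)
            r = 1.0 if x_next >= 5.0 else 0.0
            delta = r + gamma * value(x_next) * (x_next < 5.0) - value(x)
            centers.append(x)                 # add a kernel unit at the visited state
            weights.append(eta * delta)       # scaled by the TD error
            x = x_next

    print("V(0) =", round(value(0.0), 3), " V(4.5) =", round(value(4.5), 3))
    ```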

  6. Deep Learning Methods for Improved Decoding of Linear Codes

    NASA Astrophysics Data System (ADS)

    Nachmani, Eliya; Marciano, Elad; Lugosch, Loren; Gross, Warren J.; Burshtein, David; Be'ery, Yair

    2018-02-01

    The problem of low-complexity, close-to-optimal channel decoding of linear codes with short to moderate block length is considered. It is shown that deep learning methods can be used to improve a standard belief propagation decoder, despite the large example space. Similar improvements are obtained for the min-sum algorithm. It is also shown that tying the parameters of the decoders across iterations, so as to form a recurrent neural network architecture, can be implemented with comparable results; the advantage is that significantly fewer parameters are required. We also introduce a recurrent neural decoder architecture based on the method of successive relaxation. Improvements over standard belief propagation are also observed on sparser Tanner graph representations of the codes. Furthermore, we demonstrate that the neural belief propagation decoder can be used to improve the performance, or alternatively reduce the computational complexity, of a close-to-optimal decoder of short BCH codes.
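
    For reference, the sketch below implements the classical min-sum baseline that the paper's neural decoders reweight, on the (7,4) Hamming code over an AWGN channel. The code, noise level, and iteration count are arbitrary; none of the learned weights from the paper appear here.

    ```python
    # Hedged sketch: plain min-sum decoding of a short linear code (Hamming (7,4)).
    import numpy as np

    H = np.array([[1, 1, 0, 1, 1, 0, 0],
                  [1, 0, 1, 1, 0, 1, 0],
                  [0, 1, 1, 1, 0, 0, 1]])        # parity-check matrix

    def min_sum_decode(llr, H, n_iter=10):
        m, n = H.shape
        V = H * llr                              # variable-to-check messages
        for _ in range(n_iter):
            E = np.zeros_like(V, dtype=float)    # check-to-variable messages
            for c in range(m):
                idx = np.where(H[c])[0]
                for v in idx:
                    others = V[c, idx[idx != v]]
                    E[c, v] = np.prod(np.sign(others)) * np.min(np.abs(others))
            total = llr + E.sum(axis=0)
            V = H * (total - E)                  # exclude each check's own message
            if np.all((total < 0).astype(int) @ H.T % 2 == 0):
                break
        return (total < 0).astype(int)

    rng = np.random.default_rng(1)
    sigma = 0.8
    y = 1.0 + sigma * rng.normal(size=7)         # all-zero codeword sent as +1 symbols
    bits = min_sum_decode(2 * y / sigma ** 2, H)
    print("decoded:", bits, "valid codeword:", bool(np.all(bits @ H.T % 2 == 0)))
    ```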

  7. Decoding the non-stationary neuron spike trains by dual Monte Carlo point process estimation in motor Brain Machine Interfaces.

    PubMed

    Liao, Yuxi; Li, Hongbao; Zhang, Qiaosheng; Fan, Gong; Wang, Yiwen; Zheng, Xiaoxiang

    2014-01-01

    Decoding algorithms in motor Brain Machine Interfaces translate neural signals into movement parameters. They usually assume that the relationship between neural firing and movement is stationary, which is contradicted by recent studies observing time-varying neuron tuning. This non-stationarity arises from neural plasticity, motor learning, and related processes, and it degrades decoding performance when the model is fixed. To track the non-stationary neuron tuning during decoding, we propose a dual-model approach based on Monte Carlo point process filtering that also estimates the dynamic tuning parameters. When applied to both simulated neural signals and in vivo BMI data, the proposed adaptive method performs better than a decoder with static tuning parameters, suggesting a promising way to design long-term-performing models for Brain Machine Interface decoders.

  8. A Tensor-Product-Kernel Framework for Multiscale Neural Activity Decoding and Control

    PubMed Central

    Li, Lin; Brockmeier, Austin J.; Choi, John S.; Francis, Joseph T.; Sanchez, Justin C.; Príncipe, José C.

    2014-01-01

    Brain machine interfaces (BMIs) have attracted intense attention as a promising technology for directly interfacing computers or prostheses with the brain's motor and sensory areas, thereby bypassing the body. The availability of multiscale neural recordings, including spike trains and local field potentials (LFPs), brings potential opportunities to enhance computational modeling by enriching the characterization of the neural system state. However, heterogeneity in data type (spike timing versus continuous amplitude signals) and spatiotemporal scale complicates the integration of multiscale neural activity into a single model. In this paper, we propose a tensor-product-kernel-based framework to integrate the multiscale activity and exploit the complementary information available in multiscale neural activity. This provides a common mathematical framework for incorporating signals from different domains. The approach is applied to the problem of neural decoding and control. For neural decoding, the framework is able to identify the nonlinear functional relationship between the multiscale neural responses and the stimuli using general-purpose kernel adaptive filtering. In a sensory stimulation experiment, the tensor-product-kernel decoder outperforms decoders that use only a single neural data type. In addition, an adaptive inverse controller for delivering electrical microstimulation patterns that utilizes the tensor-product kernel achieves promising results in emulating the responses to natural stimulation. PMID:24829569
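
    The core construction is easy to state in code: a joint similarity over heterogeneous recordings is the elementwise (tensor) product of per-modality Gram matrices, which remains a valid kernel. The features and bandwidths below are made up; the paper uses spike-train and LFP kernels rather than two RBF kernels.

    ```python
    # Toy numpy sketch of a tensor-product kernel over two data modalities.
    import numpy as np

    def rbf_gram(X, sigma=1.0):
        d2 = np.sum((X[:, None] - X[None]) ** 2, axis=-1)
        return np.exp(-d2 / (2 * sigma ** 2))

    rng = np.random.default_rng(0)
    spike_feats = rng.poisson(3.0, size=(40, 20)).astype(float)   # binned spike counts
    lfp_feats = rng.normal(size=(40, 8))                          # LFP band powers

    K_joint = rbf_gram(spike_feats, sigma=5.0) * rbf_gram(lfp_feats, sigma=2.0)
    print("joint Gram matrix:", K_joint.shape,
          "positive semidefinite:", bool(np.all(np.linalg.eigvalsh(K_joint) > -1e-9)))
    ```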

  9. Predictive Ensemble Decoding of Acoustical Features Explains Context-Dependent Receptive Fields.

    PubMed

    Yildiz, Izzet B; Mesgarani, Nima; Deneve, Sophie

    2016-12-07

    A primary goal of auditory neuroscience is to identify the sound features extracted and represented by auditory neurons. Linear encoding models, which describe neural responses as a function of the stimulus, have been primarily used for this purpose. Here, we provide theoretical arguments and experimental evidence in support of an alternative approach, based on decoding the stimulus from the neural response. We used a Bayesian normative approach to predict the responses of neurons detecting relevant auditory features, despite ambiguities and noise. We compared the model predictions to recordings from the primary auditory cortex of ferrets and found that: (1) the decoding filters of auditory neurons resemble the filters learned from the statistics of speech sounds; (2) the decoding model captures the dynamics of responses better than a linear encoding model of similar complexity; and (3) the decoding model accounts for the accuracy with which the stimulus is represented in neural activity, whereas the linear encoding model performs very poorly. Most importantly, our model predicts that neuronal responses are fundamentally shaped by "explaining away," a divisive competition between alternative interpretations of the auditory scene. Neural responses in the auditory cortex are dynamic, nonlinear, and hard to predict. Traditionally, encoding models have been used to describe neural responses as a function of the stimulus. However, in addition to external stimulation, neural activity is strongly modulated by the responses of other neurons in the network. We hypothesized that auditory neurons aim to collectively decode their stimulus. In particular, a stimulus feature that is decoded (or explained away) by one neuron is not explained by another. We demonstrated that this novel Bayesian decoding model is better at capturing the dynamic responses of cortical neurons in ferrets. Whereas the linear encoding model poorly reflects selectivity of neurons, the decoding model can account for the strong nonlinearities observed in neural data. Copyright © 2016 Yildiz et al.

  10. Integrating robotic action with biologic perception: A brain-machine symbiosis theory

    NASA Astrophysics Data System (ADS)

    Mahmoudi, Babak

    In patients with motor disability, the natural cyclic flow of information between the brain and external environment is disrupted by their limb impairment. Brain-Machine Interfaces (BMIs) aim to provide new communication channels between the brain and environment by direct translation of the brain's internal states into actions. For enabling the user in a wide range of daily life activities, the challenge is designing neural decoders that autonomously adapt to different tasks, environments, and changes in the pattern of neural activity. In this dissertation, a novel decoding framework for BMIs is developed in which a computational agent autonomously learns how to translate neural states into action based on maximization of a measure of shared goal between the user and the agent. Since the agent and brain share the same goal, a symbiotic relationship between them will evolve; this decoding paradigm is therefore called a Brain-Machine Symbiosis (BMS) framework. A decoding agent was implemented within the BMS framework based on the Actor-Critic method of Reinforcement Learning. The role of the Actor as a neural decoder was to find a mapping between the neural representation of motor states in the primary motor cortex (MI) and robot actions in order to solve reaching tasks. The Actor learned the optimal control policy using an evaluative feedback that was estimated by the Critic directly from the user's neural activity in the Nucleus Accumbens (NAcc). Through a series of computational neuroscience studies in a cohort of rats, it was demonstrated that NAcc could provide a useful evaluative feedback by predicting the increase or decrease in the probability of earning reward based on the environmental conditions. Using a closed-loop BMI simulator, it was demonstrated that the Actor-Critic decoding architecture was able to adapt to different tasks as well as to changes in the pattern of neural activity. The custom design of a dual micro-wire array enabled simultaneous implantation of MI and NAcc for the development of a full closed-loop system. The Actor-Critic decoding architecture was able to solve the brain-controlled reaching task using a robotic arm by capturing the interdependency between the simultaneous action representation in MI and reward expectation in NAcc.

  11. Efficient Embedded Decoding of Neural Network Language Models in a Machine Translation System.

    PubMed

    Zamora-Martinez, Francisco; Castro-Bleda, Maria Jose

    2018-02-22

    Neural Network Language Models (NNLMs) are a successful approach to Natural Language Processing tasks, such as Machine Translation. In this work we introduce a Statistical Machine Translation (SMT) system which fully integrates NNLMs in the decoding stage, breaking with the traditional approach based on N-best list rescoring. The neural net models (both language models (LMs) and translation models) are fully coupled in the decoding stage, allowing them to more strongly influence translation quality. Computational issues were solved by a novel idea based on memorization and smoothing of the softmax constants to avoid their computation, which introduces a trade-off between LM quality and computational cost. These ideas were studied in a machine translation task with different combinations of neural networks used both as translation models and as target LMs, comparing phrase-based and n-gram-based systems, and showing that the integrated approach seems more promising for n-gram-based systems, even with NNLMs of less than full quality.
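
    One simple way to realize the memoization idea is sketched below: cache the softmax normalization constant per history, so that repeated n-gram contexts during decoding touch only a single output column. The tiny one-hidden-layer NNLM, vocabulary size, and cache policy here are invented for illustration and omit the paper's smoothing of the constants.

    ```python
    # Hedged sketch: memoizing softmax normalization constants in a toy NNLM.
    import numpy as np

    rng = np.random.default_rng(0)
    V, H, ORDER = 1000, 64, 3                     # vocabulary, hidden units, n-gram order
    E = rng.normal(size=(V, 16))                  # word embeddings
    W1 = rng.normal(size=((ORDER - 1) * 16, H))
    W2 = rng.normal(size=(H, V))

    _Z_cache = {}                                 # history -> log normalization constant

    def logprob(word, history):
        h = np.tanh(np.concatenate([E[w] for w in history]) @ W1)
        if history not in _Z_cache:               # one full pass over the vocabulary
            scores = h @ W2
            _Z_cache[history] = np.log(np.exp(scores - scores.max()).sum()) + scores.max()
        return float(h @ W2[:, word] - _Z_cache[history])  # later queries: one column

    print(logprob(5, (1, 2)), logprob(7, (1, 2)))  # second call reuses the cached constant
    print("cached histories:", len(_Z_cache))
    ```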

  12. A Symbiotic Brain-Machine Interface through Value-Based Decision Making

    PubMed Central

    Mahmoudi, Babak; Sanchez, Justin C.

    2011-01-01

    Background In the development of Brain Machine Interfaces (BMIs), there is a great need to enable users to interact with changing environments during the activities of daily life. It is expected that the number and scope of the learning tasks encountered during interaction with the environment as well as the pattern of brain activity will vary over time. These conditions, in addition to neural reorganization, pose a challenge to decoding neural commands for BMIs. We have developed a new BMI framework in which a computational agent symbiotically decoded users' intended actions by utilizing both motor commands and goal information directly from the brain through a continuous Perception-Action-Reward Cycle (PARC). Methodology The control architecture designed was based on Actor-Critic learning, which is a PARC-based reinforcement learning method. Our neurophysiology studies in rat models suggested that the Nucleus Accumbens (NAcc) contained a rich representation of goal information, in terms of predicting the probability of earning reward, and that this information could be translated into an evaluative feedback for adapting the decoder with high precision. Simulated neural control experiments showed that the system was able to maintain high performance in decoding neural motor commands during novel tasks or in the presence of reorganization in the neural input. We then implanted a dual micro-wire array in the primary motor cortex (M1) and the NAcc of the rat brain and implemented a full closed-loop system in which robot actions were decoded from the single unit activity in M1 based on an evaluative feedback that was estimated from NAcc. Conclusions Our results suggest that adapting the BMI decoder with an evaluative feedback that is directly extracted from the brain is a possible solution to the problem of operating BMIs in changing environments with dynamic neural signals. During closed-loop control, the agent was able to solve a reaching task by capturing the action and reward interdependency in the brain. PMID:21423797

  13. Model-based decoding, information estimation, and change-point detection techniques for multineuron spike trains.

    PubMed

    Pillow, Jonathan W; Ahmadian, Yashar; Paninski, Liam

    2011-01-01

    One of the central problems in systems neuroscience is to understand how neural spike trains convey sensory information. Decoding methods, which provide an explicit means for reading out the information contained in neural spike responses, offer a powerful set of tools for studying the neural coding problem. Here we develop several decoding methods based on point-process neural encoding models, or forward models that predict spike responses to stimuli. These models have concave log-likelihood functions, which allow efficient maximum-likelihood model fitting and stimulus decoding. We present several applications of the encoding model framework to the problem of decoding stimulus information from population spike responses: (1) a tractable algorithm for computing the maximum a posteriori (MAP) estimate of the stimulus, the most probable stimulus to have generated an observed single- or multiple-neuron spike train response, given some prior distribution over the stimulus; (2) a gaussian approximation to the posterior stimulus distribution that can be used to quantify the fidelity with which various stimulus features are encoded; (3) an efficient method for estimating the mutual information between the stimulus and the spike trains emitted by a neural population; and (4) a framework for the detection of change-point times (the time at which the stimulus undergoes a change in mean or variance) by marginalizing over the posterior stimulus distribution. We provide several examples illustrating the performance of these estimators with simulated and real neural data.
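
    Application (1) can be illustrated compactly: under a Poisson encoding model with an exponential nonlinearity and a Gaussian stimulus prior, the log-posterior is concave, so MAP decoding reduces to gradient ascent. The filters, stimulus dimensionality, and step size below are toy choices, not the paper's experimental setup.

    ```python
    # Toy sketch of MAP stimulus decoding under a Poisson (point-process) encoding model.
    import numpy as np

    rng = np.random.default_rng(0)
    d, n_neurons = 10, 40
    K = rng.normal(scale=0.3, size=(n_neurons, d))   # encoding filters
    x_true = rng.normal(size=d)                      # stimulus to recover
    y = rng.poisson(np.exp(K @ x_true))              # observed spike counts

    prior_prec = np.eye(d)                           # Gaussian prior with unit variance
    x = np.zeros(d)
    for _ in range(500):                             # gradient ascent on the log-posterior
        grad = K.T @ (y - np.exp(K @ x)) - prior_prec @ x
        x += 0.01 * grad

    print("correlation with true stimulus:", round(np.corrcoef(x, x_true)[0, 1], 3))
    ```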

  14. Decoding power-spectral profiles from FMRI brain activities during naturalistic auditory experience.

    PubMed

    Hu, Xintao; Guo, Lei; Han, Junwei; Liu, Tianming

    2017-02-01

    Recent studies have demonstrated a close relationship between computational acoustic features and neural brain activities, and have largely advanced our understanding of auditory information processing in the human brain. Along this line, we proposed a multidisciplinary study to examine whether power spectral density (PSD) profiles can be decoded from brain activities during naturalistic auditory experience. The study was performed on a high resolution functional magnetic resonance imaging (fMRI) dataset acquired when participants freely listened to the audio-description of the movie "Forrest Gump". Representative PSD profiles existing in the audio-movie were identified by clustering the audio samples according to their PSD descriptors. Support vector machine (SVM) classifiers were trained to differentiate the representative PSD profiles using corresponding fMRI brain activities. Based on PSD profile decoding, we explored how the neural decodability correlated with power intensity and frequency deviants. Our experimental results demonstrated that PSD profiles can be reliably decoded from brain activities. We also suggested a sigmoidal relationship between the neural decodability and power intensity deviants of PSD profiles. In addition, our study substantiates the feasibility and advantage of the naturalistic paradigm for studying neural encoding of complex auditory information.

  15. Multi-Connection Pattern Analysis: Decoding the representational content of neural communication.

    PubMed

    Li, Yuanning; Richardson, Robert Mark; Ghuman, Avniel Singh

    2017-11-15

    The lack of multivariate methods for decoding the representational content of interregional neural communication has made it difficult to know what information is represented in distributed brain circuit interactions. Here we present Multi-Connection Pattern Analysis (MCPA), which works by learning mappings between the activity patterns of the populations as a function of the information being processed. These maps are used to predict the activity of one neural population based on the activity of the other population. Successful MCPA-based decoding indicates the involvement of distributed computational processing and provides a framework for probing the representational structure of the interaction. Simulations demonstrate the efficacy of MCPA in realistic circumstances. In addition, we demonstrate that MCPA can be applied to different signal modalities to evaluate a variety of hypotheses associated with information coding in neural communications. We apply MCPA to fMRI and human intracranial electrophysiological data to provide a proof-of-concept of the utility of this method for decoding individual natural images and faces in functional connectivity data. We further use an MCPA-based representational similarity analysis to illustrate how MCPA may be used to test computational models of information transfer among regions of the visual processing stream. Thus, MCPA can be used to assess the information represented in the coupled activity of interacting neural circuits and probe the underlying principles of information transformation between regions. Copyright © 2017 Elsevier Inc. All rights reserved.

  16. Feature reconstruction of LFP signals based on PLSR in the neural information decoding study.

    PubMed

    Yonghui Dong; Zhigang Shang; Mengmeng Li; Xinyu Liu; Hong Wan

    2017-07-01

    To address the problems of low signal-to-noise ratio (SNR) and multicollinearity when local field potential (LFP) signals are used to decode animal motion intention, this paper proposes a feature-reconstruction method for LFP signals based on partial least squares regression (PLSR) for neural information decoding. First, feature information from the LFP coding band is extracted using the wavelet transform. A PLSR model is then constructed from the extracted LFP coding features. Given the multicollinearity among these features, several latent variables that contribute strongly to the steering behavior are obtained, and new LFP coding features are reconstructed from them. Finally, the K-Nearest Neighbor (KNN) method is used to classify the reconstructed features and verify decoding performance. The results show that the proposed method achieves higher accuracy than the three comparison methods and that its decoding performance is robust.
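
    A minimal version of this pipeline on synthetic, deliberately collinear features is sketched below: partial least squares extracts a few latent components, and the reconstructed (projected) features are classified with KNN. Wavelet feature extraction and the actual LFP data are omitted; component counts and data sizes are arbitrary.

    ```python
    # Minimal PLSR feature reconstruction + KNN classification sketch.
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.datasets import make_classification
    from sklearn.model_selection import train_test_split
    from sklearn.neighbors import KNeighborsClassifier

    X, y = make_classification(n_samples=400, n_features=30, n_informative=6,
                               n_redundant=20, random_state=0)   # collinear features
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

    pls = PLSRegression(n_components=4).fit(X_tr, y_tr)
    Z_tr, Z_te = pls.transform(X_tr), pls.transform(X_te)        # latent variables

    knn = KNeighborsClassifier(n_neighbors=5).fit(Z_tr, y_tr)
    print("decoding accuracy:", knn.score(Z_te, y_te))
    ```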

  17. Extracting an evaluative feedback from the brain for adaptation of motor neuroprosthetic decoders.

    PubMed

    Mahmoudi, Babak; Principe, Jose C; Sanchez, Justin C

    2010-01-01

    The design of Brain-Machine Interface (BMI) neural decoders that have robust performance in changing environments encountered in daily life activity is a challenging problem. One solution to this problem is the design of neural decoders that are able to assist and adapt to the user by participating in their perception-action-reward cycle (PARC). Using inspiration both from artificial intelligence and neurobiology reinforcement learning theories, we have designed a novel decoding architecture that enables a symbiotic relationship between the user and an Intelligent Assistant (IA). By tapping into the motor and reward centers in the brain, the IA adapts the process of decoding neural motor commands into prosthetic actions based on the user's goals. The focus of this paper is on extraction of goal information directly from the brain and making it accessible to the IA as an evaluative feedback for adaptation. We have recorded the neural activity of the Nucleus Accumbens in behaving rats during a reaching task. The peri-event time histograms demonstrate a rich representation of the reward prediction in this subcortical structure that can be modeled on a single trial basis as a scalar evaluative feedback with high precision.

  18. Estimation of Muscle Force Based on Neural Drive in a Hemispheric Stroke Survivor.

    PubMed

    Dai, Chenyun; Zheng, Yang; Hu, Xiaogang

    2018-01-01

    Robotic assistant-based therapy holds great promise to improve the functional recovery of stroke survivors. Numerous neural-machine interface techniques have been used to decode the intended movement to control robotic systems for rehabilitation therapies. In this case report, we tested the feasibility of estimating finger extensor muscle forces of a stroke survivor, based on the descending neural drive decoded from population motoneuron discharge timings. Motoneuron discharge events were obtained by decomposing high-density surface electromyogram (sEMG) signals of the finger extensor muscle. The neural drive was extracted from the normalized frequency of the composite discharge of the motoneuron pool. The neural-drive-based estimation was also compared with the classic myoelectric-based estimation. Our results showed that the neural-drive-based approach can better predict the force output, quantified by lower estimation errors and higher correlations with the muscle force, compared with the myoelectric-based estimation. Our findings suggest that the neural-drive-based approach can potentially be used as a more robust interface signal for robotic therapies during stroke rehabilitation.

  19. Decoding the dynamic representation of musical pitch from human brain activity.

    PubMed

    Sankaran, N; Thompson, W F; Carlile, S; Carlson, T A

    2018-01-16

    In music, the perception of pitch is governed largely by its tonal function given the preceding harmonic structure of the music. While behavioral research has advanced our understanding of the perceptual representation of musical pitch, relatively little is known about its representational structure in the brain. Using Magnetoencephalography (MEG), we recorded evoked neural responses to different tones presented within a tonal context. Multivariate Pattern Analysis (MVPA) was applied to "decode" the stimulus that listeners heard based on the underlying neural activity. We then characterized the structure of the brain's representation using decoding accuracy as a proxy for representational distance, and compared this structure to several well established perceptual and acoustic models. The observed neural representation was best accounted for by a model based on the Standard Tonal Hierarchy, whereby differences in the neural encoding of musical pitches correspond to their differences in perceived stability. By confirming that perceptual differences honor those in the underlying neuronal population coding, our results provide a crucial link in understanding the cognitive foundations of musical pitch across psychological and neural domains.

  20. Population coding and decoding in a neural field: a computational study.

    PubMed

    Wu, Si; Amari, Shun-Ichi; Nakahara, Hiroyuki

    2002-05-01

    This study uses a neural field model to investigate computational aspects of population coding and decoding when the stimulus is a single variable. A general prototype model for the encoding process is proposed, in which neural responses are correlated, with strength specified by a gaussian function of their difference in preferred stimuli. Based on the model, we study the effect of correlation on the Fisher information, compare the performances of three decoding methods that differ in the amount of encoding information being used, and investigate the implementation of the three methods by using a recurrent network. This study not only rediscovers the main results in the existing literature in a unified way, but also reveals important new features, especially when the neural correlation is strong. As the neural correlation of firing becomes larger, the Fisher information decreases drastically. We confirm that as the width of correlation increases, the Fisher information saturates and no longer increases in proportion to the number of neurons. However, we prove that as the width increases further, beyond √2 times the effective width of the tuning function, the Fisher information increases again, and it increases without limit in proportion to the number of neurons. Furthermore, we clarify the asymptotic efficiency of the maximum likelihood inference (MLI) type of decoding methods for correlated neural signals. We show that when the correlation covers a nonlocal range of the population (excepting the uniform correlation and the case of extremely small noise), the MLI type of method, whose decoding error follows a Cauchy-type distribution, is not asymptotically efficient. This implies that the variance is no longer adequate to measure decoding accuracy.
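
    The quantity at the center of this analysis is easy to evaluate numerically. For Gaussian noise with stimulus-independent covariance C and tuning curves f(s), the Fisher information is I(s) = f'(s)^T C^(-1) f'(s); the sketch below computes it for Gaussian tuning curves with correlations that fall off as a Gaussian function of preferred-stimulus distance. Parameter values are illustrative, and the sketch does not reproduce the paper's limiting arguments.

    ```python
    # Numerical illustration of Fisher information for a correlated population code.
    import numpy as np

    def fisher_info(n_neurons=100, corr_width=0.5, corr_strength=0.3,
                    tuning_width=1.0, s=0.0):
        prefs = np.linspace(-5, 5, n_neurons)                     # preferred stimuli
        f = np.exp(-(s - prefs) ** 2 / (2 * tuning_width ** 2))   # tuning curves at s
        df = f * (prefs - s) / tuning_width ** 2                  # derivatives w.r.t. s
        d = prefs[:, None] - prefs[None]
        C = corr_strength * np.exp(-d ** 2 / (2 * corr_width ** 2))
        np.fill_diagonal(C, 1.0)                                  # unit noise variance
        return df @ np.linalg.solve(C, df)                        # f'(s)^T C^{-1} f'(s)

    for width in (0.1, 1.0, 3.0):
        print("correlation width", width, "-> Fisher information",
              round(fisher_info(corr_width=width), 1))
    ```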

  1. Decoding fMRI Signatures of Real-world Autobiographical Memory Retrieval.

    PubMed

    Rissman, Jesse; Chow, Tiffany E; Reggente, Nicco; Wagner, Anthony D

    2016-04-01

    Extant neuroimaging data implicate frontoparietal and medial-temporal lobe regions in episodic retrieval, and the specific pattern of activity within and across these regions is diagnostic of an individual's subjective mnemonic experience. For example, in laboratory-based paradigms, memories for recently encoded faces can be accurately decoded from single-trial fMRI patterns [Uncapher, M. R., Boyd-Meredith, J. T., Chow, T. E., Rissman, J., & Wagner, A. D. Goal-directed modulation of neural memory patterns: Implications for fMRI-based memory detection. Journal of Neuroscience, 35, 8531-8545, 2015; Rissman, J., Greely, H. T., & Wagner, A. D. Detecting individual memories through the neural decoding of memory states and past experience. Proceedings of the National Academy of Sciences, U.S.A., 107, 9849-9854, 2010]. Here, we investigated the neural patterns underlying memory for real-world autobiographical events, probed at 1- to 3-week retention intervals as well as whether distinct patterns are associated with different subjective memory states. For 3 weeks, participants (n = 16) wore digital cameras that captured photographs of their daily activities. One week later, they were scanned while making memory judgments about sequences of photos depicting events from their own lives or events captured by the cameras of others. Whole-brain multivoxel pattern analysis achieved near-perfect accuracy at distinguishing correctly recognized events from correctly rejected novel events, and decoding performance did not significantly vary with retention interval. Multivoxel pattern classifiers also differentiated recollection from familiarity and reliably decoded the subjective strength of recollection, of familiarity, or of novelty. Classification-based brain maps revealed dissociable neural signatures of these mnemonic states, with activity patterns in hippocampus, medial PFC, and ventral parietal cortex being particularly diagnostic of recollection. Finally, a classifier trained on previously acquired laboratory-based memory data achieved reliable decoding of autobiographical memory states. We discuss the implications for neuroscientific accounts of episodic retrieval and comment on the potential forensic use of fMRI for probing experiential knowledge.

  2. Neural network decoder for quantum error correcting codes

    NASA Astrophysics Data System (ADS)

    Krastanov, Stefan; Jiang, Liang

    Artificial neural networks form a family of extremely powerful, albeit still poorly understood, tools used in anything from image and sound recognition through text generation to, in our case, decoding. We present a straightforward Recurrent Neural Network architecture capable of deducing the correcting procedure for a quantum error-correcting code from a set of repeated stabilizer measurements. We discuss the fault-tolerance of our scheme and the cost of training the neural network for a system of a realistic size. Such decoders are especially interesting when applied to codes, like the quantum LDPC codes, that lack known efficient decoding schemes.

  3. Bayesian decoding using unsorted spikes in the rat hippocampus

    PubMed Central

    Layton, Stuart P.; Chen, Zhe; Wilson, Matthew A.

    2013-01-01

    A fundamental task in neuroscience is to understand how neural ensembles represent information. Population decoding is a useful tool to extract information from neuronal populations based on the ensemble spiking activity. We propose a novel Bayesian decoding paradigm to decode unsorted spikes in the rat hippocampus. Our approach uses a direct mapping between spike waveform features and covariates of interest and avoids accumulation of spike sorting errors. Our decoding paradigm is nonparametric, encoding model-free for representing stimuli, and extracts information from all available spikes and their waveform features. We apply the proposed Bayesian decoding algorithm to a position reconstruction task for freely behaving rats based on tetrode recordings of rat hippocampal neuronal activity. Our detailed decoding analyses demonstrate that our approach is efficient and better utilizes the available information in the nonsortable hash than the standard sorting-based decoding algorithm. Our approach can be adapted to an online encoding/decoding framework for applications that require real-time decoding, such as brain-machine interfaces. PMID:24089403

  4. Nonlinear decoding of a complex movie from the mammalian retina

    PubMed Central

    Deny, Stéphane; Martius, Georg

    2018-01-01

    The retina is a paradigmatic system for studying sensory encoding: the transformation of light into spiking activity of ganglion cells. The inverse problem, where the stimulus is reconstructed from spikes, has received less attention, especially for complex stimuli that should be reconstructed “pixel-by-pixel”. We recorded around a hundred neurons from a dense patch in a rat retina and decoded movies of multiple small randomly-moving discs. We constructed nonlinear (kernelized and neural network) decoders that improved significantly over linear results. An important contribution to this was the ability of nonlinear decoders to reliably separate neural responses driven by locally fluctuating light signals from responses at locally constant light driven by spontaneous-like activity. This improvement crucially depended on the precise, non-Poisson temporal structure of individual spike trains, which originated in the spike-history dependence of neural responses. We propose a general principle by which downstream circuitry could discriminate between spontaneous and stimulus-driven activity based solely on higher-order statistical structure in the incoming spike trains. PMID:29746463

  5. Design and validation of a real-time spiking-neural-network decoder for brain-machine interfaces.

    PubMed

    Dethier, Julie; Nuyujukian, Paul; Ryu, Stephen I; Shenoy, Krishna V; Boahen, Kwabena

    2013-06-01

    Cortically-controlled motor prostheses aim to restore functions lost to neurological disease and injury. Several proof of concept demonstrations have shown encouraging results, but barriers to clinical translation still remain. In particular, intracortical prostheses must satisfy stringent power dissipation constraints so as not to damage cortex. One possible solution is to use ultra-low power neuromorphic chips to decode neural signals for these intracortical implants. The first step is to explore in simulation the feasibility of translating decoding algorithms for brain-machine interface (BMI) applications into spiking neural networks (SNNs). Here we demonstrate the validity of the approach by implementing an existing Kalman-filter-based decoder in a simulated SNN using the Neural Engineering Framework (NEF), a general method for mapping control algorithms onto SNNs. To measure this system's robustness and generalization, we tested it online in closed-loop BMI experiments with two rhesus monkeys. Across both monkeys, a Kalman filter implemented using a 2000-neuron SNN has comparable performance to that of a Kalman filter implemented using standard floating point techniques. These results demonstrate the tractability of SNN implementations of statistical signal processing algorithms on different monkeys and for several tasks, suggesting that a SNN decoder, implemented on a neuromorphic chip, may be a feasible computational platform for low-power fully-implanted prostheses. The validation of this closed-loop decoder system and the demonstration of its robustness and generalization hold promise for SNN implementations on an ultra-low power neuromorphic chip using the NEF.

  6. Neural Decoder for Topological Codes

    NASA Astrophysics Data System (ADS)

    Torlai, Giacomo; Melko, Roger G.

    2017-07-01

    We present an algorithm for error correction in topological codes that exploits modern machine learning techniques. Our decoder is constructed from a stochastic neural network called a Boltzmann machine, of the type extensively used in deep learning. We provide a general prescription for the training of the network and a decoding strategy that is applicable to a wide variety of stabilizer codes with very little specialization. We demonstrate the neural decoder numerically on the well-known two-dimensional toric code with phase-flip errors.

  7. Spiking Neural Network Decoder for Brain-Machine Interfaces.

    PubMed

    Dethier, Julie; Gilja, Vikash; Nuyujukian, Paul; Elassaad, Shauki A; Shenoy, Krishna V; Boahen, Kwabena

    2011-01-01

    We used a spiking neural network (SNN) to decode neural data recorded from a 96-electrode array in premotor/motor cortex while a rhesus monkey performed a point-to-point reaching arm movement task. We mapped a Kalman-filter neural prosthetic decode algorithm developed to predict the arm's velocity onto the SNN using the Neural Engineering Framework and simulated it using Nengo, a freely available software package. A 20,000-neuron network matched the standard decoder's prediction to within 0.03% (normalized by maximum arm velocity). A 1,600-neuron version of this network was within 0.27%, and ran in real time on a 3 GHz PC. These results demonstrate that an SNN can implement a statistical signal processing algorithm widely used as the decoder in high-performance neural prostheses (Kalman filter), and achieve similar results with just a few thousand neurons. Hardware SNN implementations (neuromorphic chips) may offer power savings essential for realizing fully implantable cortically controlled prostheses.

  8. Tracking neural coding of perceptual and semantic features of concrete nouns

    PubMed Central

    Sudre, Gustavo; Pomerleau, Dean; Palatucci, Mark; Wehbe, Leila; Fyshe, Alona; Salmelin, Riitta; Mitchell, Tom

    2015-01-01

    We present a methodological approach employing magnetoencephalography (MEG) and machine learning techniques to investigate the flow of perceptual and semantic information decodable from neural activity in the half second during which the brain comprehends the meaning of a concrete noun. Important information about the cortical location of neural activity related to the representation of nouns in the human brain has been revealed by past studies using fMRI. However, the temporal sequence of processing from sensory input to concept comprehension remains unclear, in part because of the poor time resolution provided by fMRI. In this study, subjects answered 20 questions (e.g. is it alive?) about the properties of 60 different nouns prompted by simultaneous presentation of a pictured item and its written name. Our results show that the neural activity observed with MEG encodes a variety of perceptual and semantic features of stimuli at different times relative to stimulus onset, and in different cortical locations. By decoding these features, our MEG-based classifier was able to reliably distinguish between two different concrete nouns that it had never seen before. The results demonstrate that there are clear differences between the time course of the magnitude of MEG activity and that of decodable semantic information. Perceptual features were decoded from MEG activity earlier in time than semantic features, and features related to animacy, size, and manipulability were decoded consistently across subjects. We also observed that regions commonly associated with semantic processing in the fMRI literature may not show high decoding results in MEG. We believe that this type of approach and the accompanying machine learning methods can form the basis for further modeling of the flow of neural information during language processing and a variety of other cognitive processes. PMID:22565201

  9. Toward Optimal Target Placement for Neural Prosthetic Devices

    PubMed Central

    Cunningham, John P.; Yu, Byron M.; Gilja, Vikash; Ryu, Stephen I.; Shenoy, Krishna V.

    2008-01-01

    Neural prosthetic systems have been designed to estimate continuous reach trajectories (motor prostheses) and to predict discrete reach targets (communication prostheses). In the latter case, reach targets are typically decoded from neural spiking activity during an instructed delay period before the reach begins. Such systems use targets placed in radially symmetric geometries independent of the tuning properties of the neurons available. Here we seek to automate the target placement process and increase decode accuracy in communication prostheses by selecting target locations based on the neural population at hand. Motor prostheses that incorporate intended target information could also benefit from this consideration. We present an optimal target placement algorithm that approximately maximizes decode accuracy with respect to target locations. In simulated neural spiking data fit from two monkeys, the optimal target placement algorithm yielded statistically significant improvements up to 8 and 9% for two and sixteen targets, respectively. For four and eight targets, gains were more modest, as the target layouts found by the algorithm closely resembled the canonical layouts. We trained a monkey in this paradigm and tested the algorithm with experimental neural data to confirm some of the results found in simulation. In all, the algorithm can serve not only to create new target layouts that outperform canonical layouts, but it can also confirm or help select among multiple canonical layouts. The optimal target placement algorithm developed here is the first algorithm of its kind, and it should both improve decode accuracy and help automate target placement for neural prostheses. PMID:18829845

  10. Efficient Decoding With Steady-State Kalman Filter in Neural Interface Systems

    PubMed Central

    Malik, Wasim Q.; Truccolo, Wilson; Brown, Emery N.; Hochberg, Leigh R.

    2011-01-01

    The Kalman filter is commonly used in neural interface systems to decode neural activity and estimate the desired movement kinematics. We analyze a low-complexity Kalman filter implementation in which the filter gain is approximated by its steady-state form, computed offline before real-time decoding commences. We evaluate its performance using human motor cortical spike train data obtained from an intracortical recording array as part of an ongoing pilot clinical trial. We demonstrate that the standard Kalman filter gain converges to within 95% of the steady-state filter gain in 1.5 ± 0.5 s (mean ± s.d.). The difference in the intended movement velocity decoded by the two filters vanishes within 5 s, with a correlation coefficient of 0.99 between the two decoded velocities over the session length. We also find that the steady-state Kalman filter reduces the computational load (algorithm execution time) for decoding the firing rates of 25 ± 3 single units by a factor of 7.0 ± 0.9. We expect that the gain in computational efficiency will be much higher in systems with larger neural ensembles. The steady-state filter can thus provide substantial runtime efficiency at little cost in terms of estimation accuracy. This far more efficient neural decoding approach will facilitate the practical implementation of future large-dimensional, multisignal neural interface systems. PMID:21078582
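
    The offline step described above amounts to solving a discrete algebraic Riccati equation for the predicted error covariance and forming a constant gain. The sketch below does this with made-up state and observation matrices standing in for the kinematic model and the units' tuning; it is not the clinical decoder itself.

    ```python
    # Hedged sketch: steady-state Kalman gain computed offline, then a cheap constant-gain update.
    import numpy as np
    from scipy.linalg import solve_discrete_are

    A = np.array([[1.0, 0.05], [0.0, 0.95]])            # toy kinematic state model
    C = np.random.default_rng(0).normal(size=(25, 2))   # 25 "units" with linear tuning
    Q = 0.01 * np.eye(2)                                # process noise covariance
    R = np.eye(25)                                      # observation (firing-rate) noise

    P = solve_discrete_are(A.T, C.T, Q, R)              # steady-state predicted covariance
    K = P @ C.T @ np.linalg.inv(C @ P @ C.T + R)        # constant Kalman gain

    def decode_step(x_prev, rates):
        x_pred = A @ x_prev                             # predict
        return x_pred + K @ (rates - C @ x_pred)        # correct with the fixed gain

    x = decode_step(np.zeros(2), rates=np.zeros(25))    # one (dummy) decoding step
    print("steady-state gain shape:", K.shape)
    ```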

  11. Recent advances in coding theory for near error-free communications

    NASA Technical Reports Server (NTRS)

    Cheung, K.-M.; Deutsch, L. J.; Dolinar, S. J.; Mceliece, R. J.; Pollara, F.; Shahshahani, M.; Swanson, L.

    1991-01-01

    Channel and source coding theories are discussed. The following subject areas are covered: large constraint length convolutional codes (the Galileo code); decoder design (the big Viterbi decoder); Voyager's and Galileo's data compression scheme; current research in data compression for images; neural networks for soft decoding; neural networks for source decoding; finite-state codes; and fractals for data compression.

  12. Manifold decoding for neural representations of face viewpoint and gaze direction using magnetoencephalographic data.

    PubMed

    Kuo, Po-Chih; Chen, Yong-Sheng; Chen, Li-Fen

    2018-05-01

    The main challenge in decoding neural representations lies in linking neural activity to representational content or abstract concepts. The transformation from a neural-based to a low-dimensional representation may hold the key to encoding perceptual processes in the human brain. In this study, we developed a novel model by which to represent two changeable features of faces: face viewpoint and gaze direction. These features are embedded in spatiotemporal brain activity derived from magnetoencephalographic data. Our decoding results demonstrate that face viewpoint and gaze direction can be represented by manifold structures constructed from brain responses in the bilateral occipital face area and right superior temporal sulcus, respectively. Our results also show that the superposition of brain activity in the manifold space reveals the viewpoints of faces as well as directions of gazes as perceived by the subject. The proposed manifold representation model provides a novel opportunity to gain further insight into the processing of information in the human brain. © 2018 Wiley Periodicals, Inc.

  13. Design and validation of a real-time spiking-neural-network decoder for brain-machine interfaces

    NASA Astrophysics Data System (ADS)

    Dethier, Julie; Nuyujukian, Paul; Ryu, Stephen I.; Shenoy, Krishna V.; Boahen, Kwabena

    2013-06-01

    Objective. Cortically-controlled motor prostheses aim to restore functions lost to neurological disease and injury. Several proof of concept demonstrations have shown encouraging results, but barriers to clinical translation still remain. In particular, intracortical prostheses must satisfy stringent power dissipation constraints so as not to damage cortex. Approach. One possible solution is to use ultra-low power neuromorphic chips to decode neural signals for these intracortical implants. The first step is to explore in simulation the feasibility of translating decoding algorithms for brain-machine interface (BMI) applications into spiking neural networks (SNNs). Main results. Here we demonstrate the validity of the approach by implementing an existing Kalman-filter-based decoder in a simulated SNN using the Neural Engineering Framework (NEF), a general method for mapping control algorithms onto SNNs. To measure this system’s robustness and generalization, we tested it online in closed-loop BMI experiments with two rhesus monkeys. Across both monkeys, a Kalman filter implemented using a 2000-neuron SNN has comparable performance to that of a Kalman filter implemented using standard floating point techniques. Significance. These results demonstrate the tractability of SNN implementations of statistical signal processing algorithms on different monkeys and for several tasks, suggesting that a SNN decoder, implemented on a neuromorphic chip, may be a feasible computational platform for low-power fully-implanted prostheses. The validation of this closed-loop decoder system and the demonstration of its robustness and generalization hold promise for SNN implementations on an ultra-low power neuromorphic chip using the NEF.

  14. Mapping visual stimuli to perceptual decisions via sparse decoding of mesoscopic neural activity.

    PubMed

    Sajda, Paul

    2010-01-01

    In this talk I will describe our work investigating sparse decoding of neural activity, given a realistic mapping of the visual scene to neuronal spike trains generated by a model of primary visual cortex (V1). We use a linear decoder which imposes sparsity via an L1 norm. The decoder can be viewed as a decoding neuron (linear summation followed by a sigmoidal nonlinearity) in which there are relatively few non-zero synaptic weights. We find: (1) the best decoding performance is for a representation that is sparse in both space and time, (2) decoding of a temporal code results in better performance than a rate code and is also a better fit to the psychophysical data, (3) the number of neurons required for decoding increases monotonically as signal-to-noise in the stimulus decreases, with as little as 1% of the neurons required for decoding at the highest signal-to-noise levels, and (4) sparse decoding results in a more accurate decoding of the stimulus and is a better fit to psychophysical performance than a distributed decoding, for example one imposed by an L2 norm. We conclude that sparse coding is well-justified from a decoding perspective in that it results in a minimum number of neurons and maximum accuracy when sparse representations can be decoded from the neural dynamics.
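
    Finding (4) can be mimicked with an off-the-shelf L1-penalized logistic readout: trained on simulated spike counts in which only a few "neurons" are informative, the decoder ends up with a correspondingly small number of nonzero weights. The counts, informative fraction, and regularization strength are invented for illustration.

    ```python
    # Toy sketch of a sparse (L1) linear decoder over simulated spike counts.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n_trials, n_neurons = 600, 200
    stim = rng.integers(0, 2, n_trials)                   # binary stimulus
    counts = rng.poisson(5.0, size=(n_trials, n_neurons)).astype(float)
    counts[:, :10] += 3.0 * stim[:, None]                 # only 10 informative neurons

    decoder = LogisticRegression(penalty="l1", solver="liblinear", C=0.05)
    decoder.fit(counts, stim)
    n_used = int(np.sum(decoder.coef_ != 0))
    print("accuracy", round(decoder.score(counts, stim), 2),
          "using", n_used, "of", n_neurons, "neurons")
    ```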

  15. Factor-Analysis Methods for Higher-Performance Neural Prostheses

    PubMed Central

    Santhanam, Gopal; Yu, Byron M.; Gilja, Vikash; Ryu, Stephen I.; Afshar, Afsheen; Sahani, Maneesh; Shenoy, Krishna V.

    2009-01-01

    Neural prostheses aim to provide treatment options for individuals with nervous-system disease or injury. It is necessary, however, to increase the performance of such systems before they can be clinically viable for patients with motor dysfunction. One performance limitation is the presence of correlated trial-to-trial variability that can cause neural responses to wax and wane in concert as the subject is, for example, more attentive or more fatigued. If a system does not properly account for this variability, it may mistakenly interpret such variability as an entirely different intention by the subject. We report here the design and characterization of factor-analysis (FA)–based decoding algorithms that can contend with this confound. We characterize the decoders (classifiers) on experimental data where monkeys performed both a real reach task and a prosthetic cursor task while we recorded from 96 electrodes implanted in dorsal premotor cortex. The decoder attempts to infer the underlying factors that comodulate the neurons' responses and can use this information to substantially lower error rates (one of eight reach endpoint predictions) by ≲75% (e.g., ∼20% total prediction error using traditional independent Poisson models reduced to ∼5%). We also examine additional key aspects of these new algorithms: the effect of neural integration window length on performance, an extension of the algorithms to use Poisson statistics, and the effect of training set size on the decoding accuracy of test data. We found that FA-based methods are most effective for integration windows >150 ms, although still advantageous at shorter timescales, that Gaussian-based algorithms performed better than the analogous Poisson-based algorithms and that the FA algorithm is robust even with a limited amount of training data. We propose that FA-based methods are effective in modeling correlated trial-to-trial neural variability and can be used to substantially increase overall prosthetic system performance. PMID:19297518
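
    A drastically simplified version of the idea is sketched below: fit a factor-analysis model so that shared trial-to-trial gain variability is absorbed by a few latent factors, then classify reach targets from the factor scores. The generative toy data, factor count, and use of a linear discriminant are stand-ins, not the paper's exact model.

    ```python
    # Simplified factor-analysis-based decoding sketch on synthetic spike counts.
    import numpy as np
    from sklearn.decomposition import FactorAnalysis
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    n_trials, n_units, n_targets = 800, 96, 8
    targets = rng.integers(0, n_targets, n_trials)
    tuning = rng.normal(size=(n_targets, n_units)) + 3.0
    gain = 1.0 + 0.5 * rng.normal(size=(n_trials, 1))     # shared attention/fatigue gain
    counts = rng.poisson(np.clip(gain * tuning[targets], 0.1, None))

    X_tr, X_te, y_tr, y_te = train_test_split(counts, targets, random_state=0)
    fa = FactorAnalysis(n_components=3).fit(X_tr)
    clf = LinearDiscriminantAnalysis().fit(fa.transform(X_tr), y_tr)
    print("decoding accuracy in factor space:",
          round(clf.score(fa.transform(X_te), y_te), 2))
    ```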

  16. Decoding Trajectories from Posterior Parietal Cortex Ensembles

    PubMed Central

    Mulliken, Grant H.; Musallam, Sam; Andersen, Richard A.

    2009-01-01

    High-level cognitive signals in the posterior parietal cortex (PPC) have previously been used to decode the intended endpoint of a reach, providing the first evidence that PPC can be used for direct control of a neural prosthesis (Musallam et al., 2004). Here we expand on this work by showing that PPC neural activity can be harnessed not only to estimate the endpoint but also to continuously control the trajectory of an end effector. Specifically, we trained two monkeys to use a joystick to guide a cursor on a computer screen to peripheral target locations while maintaining central ocular fixation. We found that we could accurately reconstruct the trajectory of the cursor using a relatively small ensemble of simultaneously recorded PPC neurons. Using a goal-based Kalman filter that incorporates target information into the state-space, we showed that the decoded estimate of cursor position could be significantly improved. Finally, we tested whether we could decode trajectories during closed-loop brain control sessions, in which the real-time position of the cursor was determined solely by a monkey’s neural activity in PPC. The monkey learned to perform brain control trajectories at an 80% success rate (for 8 targets) after just 4–5 sessions. This improvement in behavioral performance was accompanied by a corresponding enhancement in neural tuning properties (i.e., increased tuning depth and coverage of encoding parameter space) as well as an increase in off-line decoding performance of the PPC ensemble. PMID:19036985

  17. Neural control of computer cursor velocity by decoding motor cortical spiking activity in humans with tetraplegia

    PubMed Central

    Kim, Sung-Phil; Simeral, John D; Hochberg, Leigh R; Donoghue, John P; Black, Michael J

    2010-01-01

    Computer-mediated connections between human motor cortical neurons and assistive devices promise to improve or restore lost function in people with paralysis. Recently, a pilot clinical study of an intracortical neural interface system demonstrated that a tetraplegic human was able to obtain continuous two-dimensional control of a computer cursor using neural activity recorded from his motor cortex. This control, however, was not sufficiently accurate for reliable use in many common computer control tasks. Here, we studied several central design choices for such a system including the kinematic representation for cursor movement, the decoding method that translates neuronal ensemble spiking activity into a control signal and the cursor control task used during training for optimizing the parameters of the decoding method. In two tetraplegic participants, we found that controlling a cursor's velocity resulted in more accurate closed-loop control than controlling its position directly and that cursor velocity control was achieved more rapidly than position control. Control quality was further improved over conventional linear filters by using a probabilistic method, the Kalman filter, to decode human motor cortical activity. Performance assessment based on standard metrics used for the evaluation of a wide range of pointing devices demonstrated significantly improved cursor control with velocity rather than position decoding. PMID:19015583
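
    As an illustration of the kind of velocity decoding described above, the sketch below runs a standard Kalman filter with a linear neural observation model on synthetic data and integrates the decoded velocity into a cursor position. It is a generic textbook filter, not the study's fitted decoder; all matrices and noise levels are assumptions.

    ```python
    # Generic velocity Kalman filter on synthetic "firing rates" (illustrative only).
    import numpy as np

    rng = np.random.default_rng(1)
    n_neurons, n_steps, dt = 40, 500, 0.05

    A = np.eye(2) * 0.98                         # velocity state transition
    W = np.eye(2) * 0.02                         # process noise covariance
    H = rng.normal(0, 1, size=(n_neurons, 2))    # linear tuning from velocity to rates
    Q = np.eye(n_neurons) * 4.0                  # observation noise covariance

    # Simulate true velocities and noisy neural observations.
    v_true = np.zeros((n_steps, 2))
    for t in range(1, n_steps):
        v_true[t] = A @ v_true[t - 1] + rng.multivariate_normal([0, 0], W)
    z = v_true @ H.T + rng.normal(0, 2.0, size=(n_steps, n_neurons))

    # Kalman filter loop: predict, then update with each bin of neural activity.
    x, P = np.zeros(2), np.eye(2)
    v_hat = np.zeros_like(v_true)
    for t in range(n_steps):
        x, P = A @ x, A @ P @ A.T + W            # predict
        S = H @ P @ H.T + Q
        K = P @ H.T @ np.linalg.inv(S)           # Kalman gain
        x = x + K @ (z[t] - H @ x)               # update with observed rates
        P = (np.eye(2) - K @ H) @ P
        v_hat[t] = x

    pos_hat = np.cumsum(v_hat, axis=0) * dt      # integrate decoded velocity to cursor position
    print("decoded-vs-true velocity correlation (x):", np.corrcoef(v_hat[:, 0], v_true[:, 0])[0, 1])
    ```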

  18. Decoding 3-D Reach and Grasp Kinematics from High-Frequency Local Field Potentials in Primate Primary Motor Cortex

    PubMed Central

    Zhuang, Jun; Vargas-Irwin, Carlos; Donoghue, John P.

    2011-01-01

    Intracortical microelectrode array recordings generate a variety of neural signals with potential application as control signals in neural interface systems. Previous studies have focused on single and multiunit activity, as well as low frequency local field potentials (LFPs), but have not explored higher frequency (>200 Hz) LFPs. In addition, the potential to decode three dimensional (3-D) reach and grasp kinematics based on LFPs has not been demonstrated. Here, we use mutual information and decoding analyses to probe the information content about 3-D reaching and grasping of 7 different LFP frequency bands in the range of 0.3 Hz – 400 Hz. LFPs were recorded via 96-microelectrode arrays in primary motor cortex (M1) of two monkeys performing free reaching to grasp moving objects. Mutual information analyses revealed that higher frequency bands (e.g. 100 – 200 Hz and 200 – 400 Hz) carried the most information about the examined kinematics. Furthermore, Kalman filter decoding revealed that broadband high frequency LFPs, likely reflecting multiunit activity, provided the best decoding performance as well as substantial accuracy in reconstructing reach kinematics, grasp aperture and aperture velocity. These results indicate that LFPs, especially high frequency bands, could be useful signals for neural interfaces controlling 3-D reach and grasp kinematics. PMID:20403782
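
    A hedged sketch of the feature-extraction step such a decoder might build on: bandpass an LFP channel into a high-frequency band and take its smoothed power envelope. The band edges, filter order, and smoothing window below are illustrative choices, not the paper's pipeline.

    ```python
    # Sketch: band-power feature extraction from one LFP channel (synthetic signal).
    import numpy as np
    from scipy.signal import butter, filtfilt, hilbert

    fs = 1000.0                                   # sampling rate (Hz)
    t = np.arange(0, 10, 1 / fs)
    rng = np.random.default_rng(2)
    lfp = np.sin(2 * np.pi * 10 * t) + 0.3 * rng.normal(size=t.size)   # low-frequency rhythm + noise
    lfp += 0.5 * np.sin(2 * np.pi * 250 * t) * (t > 5)                 # high-frequency burst in 2nd half

    def band_power(x, lo, hi, fs, smooth_win=0.05):
        """Bandpass the signal and return its smoothed instantaneous power envelope."""
        b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band")
        env = np.abs(hilbert(filtfilt(b, a, x))) ** 2
        w = int(smooth_win * fs)
        return np.convolve(env, np.ones(w) / w, mode="same")

    hi_power = band_power(lfp, 200, 400, fs)      # "broadband high-frequency" LFP power
    print("mean 200-400 Hz power, first vs second half:",
          hi_power[: t.size // 2].mean(), hi_power[t.size // 2 :].mean())
    ```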

  19. Robust decoding of selective auditory attention from MEG in a competing-speaker environment via state-space modeling

    PubMed Central

    Akram, Sahar; Presacco, Alessandro; Simon, Jonathan Z.; Shamma, Shihab A.; Babadi, Behtash

    2015-01-01

    The underlying mechanism of how the human brain solves the cocktail party problem is largely unknown. Recent neuroimaging studies, however, suggest salient temporal correlations between the auditory neural response and the attended auditory object. Using magnetoencephalography (MEG) recordings of the neural responses of human subjects, we propose a decoding approach for tracking the attentional state while subjects are selectively listening to one of the two speech streams embedded in a competing-speaker environment. We develop a biophysically-inspired state-space model to account for the modulation of the neural response with respect to the attentional state of the listener. The constructed decoder is based on a maximum a posteriori (MAP) estimate of the state parameters via the Expectation Maximization (EM) algorithm. Using only the envelope of the two speech streams as covariates, the proposed decoder enables us to track the attentional state of the listener with a temporal resolution of the order of seconds, together with statistical confidence intervals. We evaluate the performance of the proposed model using numerical simulations and experimentally measured evoked MEG responses from the human brain. Our analysis reveals considerable performance gains provided by the state-space model in terms of temporal resolution, computational complexity and decoding accuracy. PMID:26436490

  20. State-Dependent Decoding Algorithms Improve the Performance of a Bidirectional BMI in Anesthetized Rats.

    PubMed

    De Feo, Vito; Boi, Fabio; Safaai, Houman; Onken, Arno; Panzeri, Stefano; Vato, Alessandro

    2017-01-01

    Brain-machine interfaces (BMIs) promise to improve the quality of life of patients suffering from sensory and motor disabilities by creating a direct communication channel between the brain and the external world. Yet, their performance is currently limited by the relatively small amount of information that can be decoded from neural activity recorded from the brain. We have recently proposed that such decoding performance may be improved when using state-dependent decoding algorithms that predict and discount the large component of the trial-to-trial variability of neural activity which is due to the dependence of neural responses on the network's current internal state. Here we tested this idea by using a bidirectional BMI to investigate the gain in performance arising from using a state-dependent decoding algorithm. This BMI, implemented in anesthetized rats, controlled the movement of a dynamical system using neural activity decoded from motor cortex and fed back to the brain the dynamical system's position by electrically microstimulating somatosensory cortex. We found that using state-dependent algorithms that tracked the dynamics of ongoing activity led to an increase in the amount of information extracted from neural activity by 22%, with a consequent increase in all of the indices measuring the BMI's performance in controlling the dynamical system. This suggests that state-dependent decoding algorithms may be used to enhance BMIs at moderate computational cost.

  1. Local-learning-based neuron selection for grasping gesture prediction in motor brain machine interfaces

    NASA Astrophysics Data System (ADS)

    Xu, Kai; Wang, Yiwen; Wang, Yueming; Wang, Fang; Hao, Yaoyao; Zhang, Shaomin; Zhang, Qiaosheng; Chen, Weidong; Zheng, Xiaoxiang

    2013-04-01

    Objective. High-dimensional neural recordings bring computational challenges to movement decoding in motor brain machine interfaces (mBMI), especially for portable applications. However, not all recorded neural activities relate to the execution of a certain movement task. This paper proposes a local-learning-based method to perform neuron selection for gesture prediction in a reaching and grasping task. Approach. Nonlinear neural activities are decomposed into a set of linear ones in a weighted feature space. A margin is defined to measure the distance between inter-class and intra-class neural patterns. The weights, reflecting the importance of neurons, are obtained by minimizing a margin-based exponential error function. To find the most dominant neurons in the task, 1-norm regularization is introduced to the objective function for sparse weights, where near-zero weights indicate irrelevant neurons. Main results. The signals of only 10 of 70 neurons, selected by the proposed method, achieved over 95% of the full recording's decoding accuracy for gesture prediction, regardless of which decoding method was used (support vector machine or K-nearest neighbor). The temporal activities of the selected neurons show visually distinguishable patterns associated with various hand states. Compared with other algorithms, the proposed method better eliminates irrelevant neurons with near-zero weights and provides the important-neuron subset with the statistically best decoding performance. The weights of important neurons usually converge within 10-20 iterations. In addition, we study the temporal and spatial variation of neuron importance over a period of one and a half months in the same task. A high decoding performance can be maintained by updating the neuron subset. Significance. The proposed algorithm effectively ascertains neuronal importance without assuming any coding model and provides high performance with different decoding models. It also identifies the important neurons more robustly when noisy signals are present. The low demand for computational resources, reflected by the fast convergence, indicates that the method is feasible for portable BMI systems. Identifying the important neurons also helps in visually inspecting the neural patterns associated with the movement task. The elimination of irrelevant neurons greatly reduces the computational burden of mBMI systems and maintains performance with better robustness.
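
    The local-learning margin objective itself is not reproduced here, but the sparsity idea can be illustrated with a rough analogue: an L1-penalized classifier drives the weights of irrelevant neurons toward zero, and the surviving neurons can be checked against the full population's decoding accuracy. Everything below (data, penalty strength, classifier choice) is an assumption for illustration.

    ```python
    # Rough analogue of sparse neuron selection (not the paper's local-learning method):
    # an L1-penalized classifier pushes weights of irrelevant neurons toward zero.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    rng = np.random.default_rng(3)
    n_trials, n_neurons, n_informative, n_classes = 600, 70, 10, 4

    y = rng.integers(0, n_classes, size=n_trials)
    rates = np.full((n_trials, n_neurons), 5.0)
    tuning = rng.uniform(2, 15, size=(n_classes, n_informative))
    rates[:, :n_informative] = tuning[y]          # only the first 10 neurons carry gesture information
    X = rng.poisson(rates).astype(float)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
    scaler = StandardScaler().fit(X_tr)
    Xs_tr, Xs_te = scaler.transform(X_tr), scaler.transform(X_te)

    l1 = LogisticRegression(penalty="l1", solver="saga", C=0.1, max_iter=5000).fit(Xs_tr, y_tr)
    importance = np.abs(l1.coef_).sum(axis=0)     # per-neuron weight magnitude (sparse)
    top10 = np.argsort(importance)[::-1][:10]

    full = SVC().fit(Xs_tr, y_tr).score(Xs_te, y_te)
    subset = SVC().fit(Xs_tr[:, top10], y_tr).score(Xs_te[:, top10], y_te)
    print("selected neurons:", np.sort(top10))
    print(f"SVM accuracy with all {n_neurons} neurons: {full:.3f}; with top-10 subset: {subset:.3f}")
    ```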

  2. Clusterless Decoding of Position From Multiunit Activity Using A Marked Point Process Filter

    PubMed Central

    Deng, Xinyi; Liu, Daniel F.; Kay, Kenneth; Frank, Loren M.; Eden, Uri T.

    2016-01-01

    Point process filters have been applied successfully to decode neural signals and track neural dynamics. Traditionally, these methods assume that multiunit spiking activity has already been correctly spike-sorted. As a result, these methods are not appropriate for situations where sorting cannot be performed with high precision such as real-time decoding for brain-computer interfaces. As the unsupervised spike-sorting problem remains unsolved, we took an alternative approach that takes advantage of recent insights about clusterless decoding. Here we present a new point process decoding algorithm that does not require multiunit signals to be sorted into individual units. We use the theory of marked point processes to construct a function that characterizes the relationship between a covariate of interest (in this case, the location of a rat on a track) and features of the spike waveforms. In our example, we use tetrode recordings, and the marks represent a four-dimensional vector of the maximum amplitudes of the spike waveform on each of the four electrodes. In general, the marks may represent any features of the spike waveform. We then use Bayes’ rule to estimate spatial location from hippocampal neural activity. We validate our approach with a simulation study and with experimental data recorded in the hippocampus of a rat moving through a linear environment. Our decoding algorithm accurately reconstructs the rat’s position from unsorted multiunit spiking activity. We then compare the quality of our decoding algorithm to that of a traditional spike-sorting and decoding algorithm. Our analyses show that the proposed decoding algorithm performs equivalently or better than algorithms based on sorted single-unit activity. These results provide a path toward accurate real-time decoding of spiking patterns that could be used to carry out content-specific manipulations of population activity in hippocampus or elsewhere in the brain. PMID:25973549
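
    A heavily simplified, discretized sketch of the clusterless idea (not the full marked point process filter: no kernel density estimates and no state-transition smoothing): bin position and spike-waveform marks, estimate mark-specific firing rates per position bin from encoding data, and decode position by Poisson likelihood over a short window. All quantities below are synthetic.

    ```python
    # Heavily simplified clusterless decoding sketch: discretized marks, Poisson
    # likelihood per position bin, uniform prior, no temporal smoothing (synthetic data).
    import numpy as np

    rng = np.random.default_rng(4)
    n_pos, n_mark, dt = 20, 8, 0.25        # position bins, mark-amplitude bins, encoding window (s)

    # Synthetic encoding session: repeated passes along the track with marked spikes.
    T = 4000
    pos = (np.arange(T) // 5) % n_pos      # repeated sweeps through all position bins
    true_rate = np.zeros((n_pos, n_mark))  # spikes/s landing in each (position, mark) cell
    for m in range(n_mark):                # each mark bin behaves like a place-tuned unit
        center = rng.integers(0, n_pos)
        true_rate[:, m] = 8 * np.exp(-0.5 * ((np.arange(n_pos) - center) / 2.0) ** 2) + 0.2
    spike_counts = rng.poisson(true_rate[pos] * dt)      # (T, n_mark) marked-spike counts

    # Encoding model: estimated mark-specific firing rate per position bin.
    occupancy = np.bincount(pos, minlength=n_pos) * dt
    rate_hat = np.array([spike_counts[pos == x].sum(axis=0) / occupancy[x] for x in range(n_pos)])
    rate_hat = np.clip(rate_hat, 1e-3, None)

    def decode(counts, win):
        """Maximum-likelihood position bin for marked-spike counts in a window of length win (s)."""
        loglik = (counts * np.log(rate_hat * win) - rate_hat * win).sum(axis=1)
        return int(np.argmax(loglik))

    test_pos, win = 7, 1.0
    test_counts = rng.poisson(true_rate[test_pos] * win)
    print("true position bin:", test_pos, " decoded (should be at or near it):", decode(test_counts, win))
    ```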

  3. Grasp movement decoding from premotor and parietal cortex.

    PubMed

    Townsend, Benjamin R; Subasi, Erk; Scherberger, Hansjörg

    2011-10-05

    Despite recent advances in harnessing cortical motor-related activity to control computer cursors and robotic devices, the ability to decode and execute different grasping patterns remains a major obstacle. Here we demonstrate a simple Bayesian decoder for real-time classification of grip type and wrist orientation in macaque monkeys that uses higher-order planning signals from anterior intraparietal cortex (AIP) and ventral premotor cortex (area F5). Real-time decoding was based on multiunit signals, which had similar tuning properties to cells in previous single-unit recording studies. Maximum decoding accuracy for two grasp types (power and precision grip) and five wrist orientations was 63% (chance level, 10%). Analysis of decoder performance showed that grip type decoding was highly accurate (90.6%), with most errors occurring during orientation classification. In a subsequent off-line analysis, we found small but significant performance improvements (mean, 6.25 percentage points) when using an optimized spike-sorting method (superparamagnetic clustering). Furthermore, we observed significant differences in the contributions of F5 and AIP for grasp decoding, with F5 being better suited for classification of the grip type and AIP contributing more toward decoding of object orientation. However, optimum decoding performance was maximal when using neural activity simultaneously from both areas. Overall, these results highlight quantitative differences in the functional representation of grasp movements in AIP and F5 and represent a first step toward using these signals for developing functional neural interfaces for hand grasping.
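
    As an illustration of this style of Bayesian classification, the sketch below decodes grip type from multiunit spike counts with a naive Poisson classifier on synthetic data; it is not the study's decoder, and the counts, rates, and class structure are assumptions.

    ```python
    # Minimal sketch of a Bayesian (naive Poisson) classifier for grip type from
    # multiunit spike counts; synthetic data, not the study's exact decoder.
    import numpy as np

    rng = np.random.default_rng(5)
    n_classes, n_units, n_train, n_test = 2, 30, 400, 100   # e.g. power vs precision grip

    class_rates = rng.uniform(2, 20, size=(n_classes, n_units))
    y_tr = rng.integers(0, n_classes, n_train)
    X_tr = rng.poisson(class_rates[y_tr])
    y_te = rng.integers(0, n_classes, n_test)
    X_te = rng.poisson(class_rates[y_te])

    # Training: estimate per-class mean counts (Poisson rates) and class priors.
    lam = np.array([X_tr[y_tr == c].mean(axis=0) for c in range(n_classes)]) + 1e-3
    log_prior = np.log(np.bincount(y_tr) / n_train)

    # Decoding: pick the class maximizing the Poisson log-posterior (independent units).
    def decode(x):
        loglik = (x * np.log(lam) - lam).sum(axis=1)        # constant log(x!) term omitted
        return np.argmax(loglik + log_prior)

    acc = np.mean([decode(x) == y for x, y in zip(X_te, y_te)])
    print("grip-type decoding accuracy:", acc)
    ```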

  4. Monte Carlo point process estimation of electromyographic envelopes from motor cortical spikes for brain-machine interfaces

    NASA Astrophysics Data System (ADS)

    Liao, Yuxi; She, Xiwei; Wang, Yiwen; Zhang, Shaomin; Zhang, Qiaosheng; Zheng, Xiaoxiang; Principe, Jose C.

    2015-12-01

    Objective. Representation of movement in the motor cortex (M1) has been widely studied in brain-machine interfaces (BMIs). The electromyogram (EMG) has greater bandwidth than the conventional kinematic variables (such as position, velocity), and is functionally related to the discharge of cortical neurons. As the stochastic information of EMG is derived from the explicit spike time structure, point process (PP) methods are well suited to decoding EMG directly from neural spike trains. Previous studies usually assume linear or exponential tuning curves between neural firing and EMG, which may not be true. Approach. In our analysis, we estimate the tuning curves in a data-driven way and find both the traditional functional-excitatory and functional-inhibitory neurons, which are widely found across a rat’s motor cortex. To accurately decode EMG envelopes from M1 neural spike trains, the Monte Carlo point process (MCPP) method is implemented based on such nonlinear tuning properties. Main results. Better reconstruction of EMG signals is shown at baseline and at extreme peaks, as our method can better preserve the nonlinearity of the neural tuning during decoding. The MCPP improves the prediction accuracy (normalized mean squared error) by 57% and 66% on average compared with the adaptive point process filter using linear and exponential tuning curves, respectively, for all 112 data segments across six rats. Compared with a Wiener filter using spike rates with an optimal window size of 50 ms, MCPP decoding of EMG from point processes improves the normalized mean square error (NMSE) by 59% on average. Significance. These results suggest that neural tuning changes constantly during task execution and that spike-timing methodologies and estimation of appropriate tuning curves are therefore needed for better EMG decoding in motor BMIs.

  5. A High-Performance Neural Prosthesis Incorporating Discrete State Selection With Hidden Markov Models.

    PubMed

    Kao, Jonathan C; Nuyujukian, Paul; Ryu, Stephen I; Shenoy, Krishna V

    2017-04-01

    Communication neural prostheses aim to restore efficient communication to people with motor neurological injury or disease by decoding neural activity into control signals. These control signals are both analog (e.g., the velocity of a computer mouse) and discrete (e.g., clicking an icon with a computer mouse) in nature. Effective, high-performing, and intuitive-to-use communication prostheses should be capable of decoding both analog and discrete state variables seamlessly. However, to date, the highest-performing autonomous communication prostheses rely on precise analog decoding and typically do not incorporate high-performance discrete decoding. In this report, we incorporated a hidden Markov model (HMM) into an intracortical communication prosthesis to enable accurate and fast discrete state decoding in parallel with analog decoding. In closed-loop experiments with nonhuman primates implanted with multielectrode arrays, we demonstrate that incorporating an HMM into a neural prosthesis can increase state-of-the-art achieved bitrate by 13.9% and 4.2% in two monkeys. We found that the transition model of the HMM is critical to achieving this performance increase. Further, we found that using an HMM resulted in the highest achieved peak performance we have ever observed for these monkeys, achieving peak bitrates of 6.5, 5.7, and 4.7 bps in Monkeys J, R, and L, respectively. Finally, we found that this neural prosthesis was robustly controllable for the duration of entire experimental sessions. These results demonstrate that high-performance discrete decoding can be beneficially combined with analog decoding to achieve new state-of-the-art levels of performance.
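
    A minimal sketch of the discrete-decoding component: forward filtering in a two-state HMM ("move" vs. "click") with Gaussian emissions on a one-dimensional neural feature, where a sticky transition matrix discourages spurious clicks. The states, emission model, and parameters are illustrative, not those of the reported prosthesis.

    ```python
    # Minimal HMM forward-filtering sketch for discrete state decoding ("move" vs "click")
    # from a synthetic neural feature; the transition model is what suppresses spurious clicks.
    import numpy as np

    rng = np.random.default_rng(6)
    A = np.array([[0.98, 0.02],            # P(next state | current state): strongly self-sticky
                  [0.10, 0.90]])
    mu, sigma = np.array([0.0, 3.0]), 1.0  # emission means for a 1-D neural feature

    # Simulate a state sequence and noisy observations.
    T = 300
    z = np.zeros(T, dtype=int)
    for t in range(1, T):
        z[t] = rng.choice(2, p=A[z[t - 1]])
    obs = rng.normal(mu[z], sigma)

    def gauss_lik(x):
        """Unnormalized Gaussian emission likelihood for both states."""
        return np.exp(-0.5 * ((x - mu) / sigma) ** 2)

    # Forward filtering: recursively compute P(state_t | obs_1..t).
    alpha = np.array([0.5, 0.5]) * gauss_lik(obs[0])
    alpha /= alpha.sum()
    decoded = [alpha.argmax()]
    for t in range(1, T):
        alpha = (alpha @ A) * gauss_lik(obs[t])
        alpha /= alpha.sum()
        decoded.append(alpha.argmax())

    print("frame-wise state accuracy:", np.mean(np.array(decoded) == z))
    ```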

  6. Modality-independent representations of small quantities based on brain activation patterns.

    PubMed

    Damarla, Saudamini Roy; Cherkassky, Vladimir L; Just, Marcel Adam

    2016-04-01

    Machine learning or MVPA (Multi Voxel Pattern Analysis) studies have shown that the neural representation of quantities of objects can be decoded from fMRI patterns, in cases where the quantities were visually displayed. Here we apply these techniques to investigate whether neural representations of quantities depicted in one modality (say, visual) can be decoded from brain activation patterns evoked by quantities depicted in the other modality (say, auditory). The main finding demonstrated, for the first time, that quantities of dots were decodable by a classifier that was trained on the neural patterns evoked by quantities of auditory tones, and vice-versa. The representations that were common across modalities were mainly right-lateralized in frontal and parietal regions. A second finding was that the neural patterns in parietal cortex that represent quantities were common across participants. These findings demonstrate a common neuronal foundation for the representation of quantities across sensory modalities and participants and provide insight into the role of parietal cortex in the representation of quantity information.

  7. Neuron selection based on deflection coefficient maximization for the neural decoding of dexterous finger movements.

    PubMed

    Kim, Yong-Hee; Thakor, Nitish V; Schieber, Marc H; Kim, Hyoung-Nam

    2015-05-01

    Future generations of brain-machine interfaces (BMIs) will require more dexterous motion control, such as hand and finger movements. Since a population of neurons in the primary motor cortex (M1) area is correlated with finger movements, neural activities recorded in the M1 area are used to reconstruct an intended finger movement. In a BMI system, decoding discrete finger movements from a large number of input neurons does not guarantee higher decoding accuracy, despite the increased computational burden. Hence, we hypothesize that selecting neurons important for coding dexterous flexion/extension of finger movements would improve the BMI performance. In this paper, two metrics are presented to quantitatively measure the importance of each neuron based on Bayes risk minimization and deflection coefficient maximization in a statistical decision problem. Since motor cortical neurons are active with movements of several different fingers, the proposed method is more suitable for discrete decoding of flexion-extension finger movements than the previous methods for decoding reaching movements. In particular, the proposed metrics yielded high decoding accuracies across all subjects, and also when six combined two-finger movements were included. While our data acquisition and analysis were performed off-line in post-processing, our results point to the significance of highly coding neurons in improving BMI performance.
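
    The exact deflection-coefficient definition used in the paper is not reproduced here; a common two-class form is d^2 = (mu1 - mu0)^2 / var0, and the sketch below ranks units by that statistic on synthetic spike counts.

    ```python
    # Sketch: rank units by a two-class deflection coefficient d^2 = (mu1 - mu0)^2 / var0,
    # a common form of the statistic; the paper's exact definition may differ.
    import numpy as np

    rng = np.random.default_rng(7)
    n_trials, n_units = 300, 50
    y = rng.integers(0, 2, n_trials)                 # e.g. flexion (1) vs extension (0)

    rates = np.full((n_trials, n_units), 8.0)
    rates[y == 1, :10] += 6.0                        # only the first 10 units are tuned
    X = rng.poisson(rates).astype(float)

    mu0, mu1 = X[y == 0].mean(axis=0), X[y == 1].mean(axis=0)
    var0 = X[y == 0].var(axis=0) + 1e-9
    deflection = (mu1 - mu0) ** 2 / var0             # per-unit deflection coefficient

    ranked = np.argsort(deflection)[::-1]
    print("top 10 units by deflection coefficient:", np.sort(ranked[:10]))
    ```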

  8. Neuron Selection Based on Deflection Coefficient Maximization for the Neural Decoding of Dexterous Finger Movements

    PubMed Central

    Kim, Yong-Hee; Thakor, Nitish V.; Schieber, Marc H.; Kim, Hyoung-Nam

    2015-01-01

    Future generations of brain-machine interfaces (BMIs) will require more dexterous motion control, such as hand and finger movements. Since a population of neurons in the primary motor cortex (M1) area is correlated with finger movements, neural activities recorded in the M1 area are used to reconstruct an intended finger movement. In a BMI system, decoding discrete finger movements from a large number of input neurons does not guarantee higher decoding accuracy, despite the increased computational burden. Hence, we hypothesize that selecting neurons important for coding dexterous flexion/extension of finger movements would improve the BMI performance. In this paper, two metrics are presented to quantitatively measure the importance of each neuron based on Bayes risk minimization and deflection coefficient maximization in a statistical decision problem. Since motor cortical neurons are active with movements of several different fingers, the proposed method is more suitable for discrete decoding of flexion-extension finger movements than the previous methods for decoding reaching movements. In particular, the proposed metrics yielded high decoding accuracies across all subjects, and also when six combined two-finger movements were included. While our data acquisition and analysis were performed off-line in post-processing, our results point to the significance of highly coding neurons in improving BMI performance. PMID:25347884

  9. Goal-Directed Modulation of Neural Memory Patterns: Implications for fMRI-Based Memory Detection.

    PubMed

    Uncapher, Melina R; Boyd-Meredith, J Tyler; Chow, Tiffany E; Rissman, Jesse; Wagner, Anthony D

    2015-06-03

    Remembering a past event elicits distributed neural patterns that can be distinguished from patterns elicited when encountering novel information. These differing patterns can be decoded with relatively high diagnostic accuracy for individual memories using multivoxel pattern analysis (MVPA) of fMRI data. Brain-based memory detection--if valid and reliable--would have clear utility beyond the domain of cognitive neuroscience, in the realm of law, marketing, and beyond. However, a significant boundary condition on memory decoding validity may be the deployment of "countermeasures": strategies used to mask memory signals. Here we tested the vulnerability of fMRI-based memory detection to countermeasures, using a paradigm that bears resemblance to eyewitness identification. Participants were scanned while performing two tasks on previously studied and novel faces: (1) a standard recognition memory task; and (2) a task wherein they attempted to conceal their true memory state. Univariate analyses revealed that participants were able to strategically modulate neural responses, averaged across trials, in regions implicated in memory retrieval, including the hippocampus and angular gyrus. Moreover, regions associated with goal-directed shifts of attention and thought substitution supported memory concealment, and those associated with memory generation supported novelty concealment. Critically, whereas MVPA enabled reliable classification of memory states when participants reported memory truthfully, the ability to decode memory on individual trials was compromised, even reversing, during attempts to conceal memory. Together, these findings demonstrate that strategic goal states can be deployed to mask memory-related neural patterns and foil memory decoding technology, placing a significant boundary condition on their real-world utility.

  10. A Bidirectional Brain-Machine Interface Featuring a Neuromorphic Hardware Decoder.

    PubMed

    Boi, Fabio; Moraitis, Timoleon; De Feo, Vito; Diotalevi, Francesco; Bartolozzi, Chiara; Indiveri, Giacomo; Vato, Alessandro

    2016-01-01

    Bidirectional brain-machine interfaces (BMIs) establish a two-way direct communication link between the brain and the external world. A decoder translates recorded neural activity into motor commands and an encoder delivers sensory information collected from the environment directly to the brain creating a closed-loop system. These two modules are typically integrated in bulky external devices. However, the clinical support of patients with severe motor and sensory deficits requires compact, low-power, and fully implantable systems that can decode neural signals to control external devices. As a first step toward this goal, we developed a modular bidirectional BMI setup that uses a compact neuromorphic processor as a decoder. On this chip we implemented a network of spiking neurons built using its ultra-low-power mixed-signal analog/digital circuits. On-chip on-line spike-timing-dependent plasticity synapse circuits enabled the network to learn to decode neural signals recorded from the brain into motor outputs controlling the movements of an external device. The modularity of the BMI allowed us to tune the individual components of the setup without modifying the whole system. In this paper, we present the features of this modular BMI and describe how we configured the network of spiking neuron circuits to implement the decoder and to coordinate it with the encoder in an experimental BMI paradigm that connects bidirectionally the brain of an anesthetized rat with an external object. We show that the chip learned the decoding task correctly, allowing the interfaced brain to control the object's trajectories robustly. Based on our demonstration, we propose that neuromorphic technology is mature enough for the development of BMI modules that are sufficiently low-power and compact, while being highly computationally powerful and adaptive.

  11. A Bidirectional Brain-Machine Interface Featuring a Neuromorphic Hardware Decoder

    PubMed Central

    Boi, Fabio; Moraitis, Timoleon; De Feo, Vito; Diotalevi, Francesco; Bartolozzi, Chiara; Indiveri, Giacomo; Vato, Alessandro

    2016-01-01

    Bidirectional brain-machine interfaces (BMIs) establish a two-way direct communication link between the brain and the external world. A decoder translates recorded neural activity into motor commands and an encoder delivers sensory information collected from the environment directly to the brain creating a closed-loop system. These two modules are typically integrated in bulky external devices. However, the clinical support of patients with severe motor and sensory deficits requires compact, low-power, and fully implantable systems that can decode neural signals to control external devices. As a first step toward this goal, we developed a modular bidirectional BMI setup that uses a compact neuromorphic processor as a decoder. On this chip we implemented a network of spiking neurons built using its ultra-low-power mixed-signal analog/digital circuits. On-chip on-line spike-timing-dependent plasticity synapse circuits enabled the network to learn to decode neural signals recorded from the brain into motor outputs controlling the movements of an external device. The modularity of the BMI allowed us to tune the individual components of the setup without modifying the whole system. In this paper, we present the features of this modular BMI and describe how we configured the network of spiking neuron circuits to implement the decoder and to coordinate it with the encoder in an experimental BMI paradigm that connects bidirectionally the brain of an anesthetized rat with an external object. We show that the chip learned the decoding task correctly, allowing the interfaced brain to control the object's trajectories robustly. Based on our demonstration, we propose that neuromorphic technology is mature enough for the development of BMI modules that are sufficiently low-power and compact, while being highly computationally powerful and adaptive. PMID:28018162

  12. Contribution of correlated noise and selective decoding to choice probability measurements in extrastriate visual cortex.

    PubMed

    Gu, Yong; Angelaki, Dora E; DeAngelis, Gregory C

    2014-07-01

    Trial by trial covariations between neural activity and perceptual decisions (quantified by choice Probability, CP) have been used to probe the contribution of sensory neurons to perceptual decisions. CPs are thought to be determined by both selective decoding of neural activity and by the structure of correlated noise among neurons, but the respective roles of these factors in creating CPs have been controversial. We used biologically-constrained simulations to explore this issue, taking advantage of a peculiar pattern of CPs exhibited by multisensory neurons in area MSTd that represent self-motion. Although models that relied on correlated noise or selective decoding could both account for the peculiar pattern of CPs, predictions of the selective decoding model were substantially more consistent with various features of the neural and behavioral data. While correlated noise is essential to observe CPs, our findings suggest that selective decoding of neuronal signals also plays important roles.

  13. Online decoding of object-based attention using real-time fMRI.

    PubMed

    Niazi, Adnan M; van den Broek, Philip L C; Klanke, Stefan; Barth, Markus; Poel, Mannes; Desain, Peter; van Gerven, Marcel A J

    2014-01-01

    Visual attention is used to selectively filter relevant information depending on current task demands and goals. Visual attention is called object-based attention when it is directed to coherent forms or objects in the visual field. This study used real-time functional magnetic resonance imaging for moment-to-moment decoding of attention to spatially overlapped objects belonging to two different object categories. First, a whole-brain classifier was trained on pictures of faces and places. Subjects then saw transparently overlapped pictures of a face and a place, and attended to only one of them while ignoring the other. The category of the attended object, face or place, was decoded on a scan-by-scan basis using the previously trained decoder. The decoder performed at 77.6% accuracy indicating that despite competing bottom-up sensory input, object-based visual attention biased neural patterns towards that of the attended object. Furthermore, a comparison between different classification approaches indicated that the representation of faces and places is distributed rather than focal. This implies that real-time decoding of object-based attention requires a multivariate decoding approach that can detect these distributed patterns of cortical activity.

  14. Electroencephalogram-based decoding cognitive states using convolutional neural network and likelihood ratio based score fusion.

    PubMed

    Zafar, Raheel; Dass, Sarat C; Malik, Aamir Saeed

    2017-01-01

    Decoding human brain activity from the electroencephalogram (EEG) is challenging owing to the low spatial resolution of EEG. However, EEG is an important technique, especially for brain-computer interface applications. In this study, a novel algorithm is proposed to decode brain activity associated with different types of images. In this hybrid algorithm, a convolutional neural network is modified for feature extraction, a t-test is used to select significant features, and likelihood ratio-based score fusion is used for the prediction of brain activity. The proposed algorithm takes multichannel EEG time series as input, an approach also known as multivariate pattern analysis. A comprehensive analysis was conducted using data from 30 participants, and the results from the proposed method were compared with currently recognized feature extraction and classification/prediction techniques. The wavelet transform-support vector machine method, currently the most popular feature extraction and prediction approach, showed an accuracy of 65.7%, whereas the proposed method predicted novel data with an improved accuracy of 79.9%. In conclusion, the proposed algorithm outperformed the current feature extraction and prediction method.
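
    The CNN feature extractor and t-test selection are not reproduced here, but the likelihood-ratio score-fusion step can be sketched on its own: fit class-conditional Gaussians to each classifier's score on training data and sum the resulting log-likelihood ratios. The two synthetic score streams below stand in for the real feature channels.

    ```python
    # Sketch of likelihood-ratio-based score fusion for two score streams (two-class case);
    # the paper's CNN features are replaced here by synthetic scores for brevity.
    import numpy as np
    from scipy.stats import norm

    rng = np.random.default_rng(8)
    n = 500
    y = rng.integers(0, 2, n)
    s1 = y + rng.normal(0, 1.0, n)        # two imperfect score streams
    s2 = y + rng.normal(0, 1.4, n)        # (e.g. from two feature sets / classifiers)

    train, test = np.arange(n) < 300, np.arange(n) >= 300

    def llr(score, tr_scores, tr_labels):
        """Log-likelihood ratio of class 1 vs class 0 under per-class Gaussian fits."""
        p1 = norm(tr_scores[tr_labels == 1].mean(), tr_scores[tr_labels == 1].std())
        p0 = norm(tr_scores[tr_labels == 0].mean(), tr_scores[tr_labels == 0].std())
        return p1.logpdf(score) - p0.logpdf(score)

    fused = llr(s1[test], s1[train], y[train]) + llr(s2[test], s2[train], y[train])
    pred = (fused > 0).astype(int)
    print("fused-score accuracy:", np.mean(pred == y[test]),
          "vs single score:", np.mean((s1[test] > 0.5).astype(int) == y[test]))
    ```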

  15. Decoding Lower Limb Muscle Activity and Kinematics from Cortical Neural Spike Trains during Monkey Performing Stand and Squat Movements

    PubMed Central

    Ma, Xuan; Ma, Chaolin; Huang, Jian; Zhang, Peng; Xu, Jiang; He, Jiping

    2017-01-01

    An extensive literature has described approaches for decoding upper limb kinematics or muscle activity using multichannel cortical spike recordings toward brain machine interface (BMI) applications. However, similar work on the lower limb remains relatively scarce. We previously reported a system for training monkeys to perform visually guided stand and squat tasks. The current study, as a follow-up extension, investigates whether lower limb kinematics and muscle activity, characterized by electromyography (EMG) signals, can be accurately decoded from neural spike trains in primary motor cortex (M1) while monkeys perform stand/squat movements. Two monkeys were used in this study. Subdermal intramuscular EMG electrodes were implanted in 8 right leg/thigh muscles. With ample data collected from neurons across a large brain area, we performed a spike triggered average (SpTA) analysis and obtained a series of density contours that revealed the spatial distributions of the different muscle-innervating neurons corresponding to each given muscle. Guided by these results, we identified the locations optimal for chronic electrode implantation and subsequently carried out chronic neural recordings. A recursive Bayesian estimation framework was proposed for decoding EMG signals together with kinematics from M1 spike trains. Two specific algorithms were implemented: a standard Kalman filter and an unscented Kalman filter. For the latter, an artificial neural network was incorporated to deal with the nonlinearity in neural tuning. High correlation coefficients and signal-to-noise ratios between the predicted and actual data were achieved for both EMG signals and kinematics in both monkeys. Higher decoding accuracy and a faster convergence rate could be achieved with the unscented Kalman filter. These results demonstrate that lower limb EMG signals and kinematics during monkey stand/squat movements can be accurately decoded from a group of M1 neurons with the proposed algorithms. Our findings provide new insights for extending current BMI design concepts and techniques from upper limbs to lower limb circumstances. Brain-controlled exoskeletons, prostheses, or neuromuscular electrical stimulators for the lower limbs are expected to be developed, enabling subjects to manipulate complex biomechatronic devices with the mind in a more harmonized manner. PMID:28223914

  16. Neural decoding of treadmill walking from noninvasive electroencephalographic signals

    PubMed Central

    Presacco, Alessandro; Goodman, Ronald; Forrester, Larry

    2011-01-01

    Chronic recordings from ensembles of cortical neurons in primary motor and somatosensory areas in rhesus macaques provide accurate information about bipedal locomotion (Fitzsimmons NA, Lebedev MA, Peikon ID, Nicolelis MA. Front Integr Neurosci 3: 3, 2009). Here we show that the linear and angular kinematics of the ankle, knee, and hip joints during both normal and precision (attentive) human treadmill walking can be inferred from noninvasive scalp electroencephalography (EEG) with decoding accuracies comparable to those from neural decoders based on multiple single-unit activities (SUAs) recorded in nonhuman primates. Six healthy adults were recorded. Participants were asked to walk on a treadmill at their self-selected comfortable speed while receiving visual feedback of their lower limbs (i.e., precision walking), to repeatedly avoid stepping on a strip drawn on the treadmill belt. Angular and linear kinematics of the left and right hip, knee, and ankle joints and EEG were recorded, and neural decoders were designed and optimized with cross-validation procedures. Of note, the optimal set of electrodes of these decoders were also used to accurately infer gait trajectories in a normal walking task that did not require subjects to control and monitor their foot placement. Our results indicate a high involvement of a fronto-posterior cortical network in the control of both precision and normal walking and suggest that EEG signals can be used to study in real time the cortical dynamics of walking and to develop brain-machine interfaces aimed at restoring human gait function. PMID:21768121
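
    The study's decoder details are not reproduced here; a common baseline for this kind of continuous kinematic decoding is a time-lagged linear (Wiener-style) regression from multichannel EEG samples to a joint-angle trace, sketched below on synthetic data with assumed channel counts and lags.

    ```python
    # Sketch of a time-lagged linear decoder (Wiener-filter style) from EEG-like
    # channels to a joint-angle trace; synthetic data, illustrative of the approach.
    import numpy as np
    from numpy.linalg import lstsq

    rng = np.random.default_rng(9)
    fs, T, n_ch, n_lags = 100, 3000, 16, 10                    # 100 Hz, 30 s, 16 channels, 10 lags

    gait = np.sin(2 * np.pi * 1.0 * np.arange(T) / fs)         # ~1 Hz "knee angle"
    mix = rng.normal(0, 1, size=n_ch)
    eeg = gait[:, None] * mix + rng.normal(0, 1.0, size=(T, n_ch))   # channels = signal + noise

    # Build the lagged design matrix: each row holds the last n_lags samples of every channel.
    rows = [eeg[t - n_lags:t].ravel() for t in range(n_lags, T)]
    X = np.asarray(rows)
    y = gait[n_lags:]

    split = X.shape[0] // 2
    w, *_ = lstsq(X[:split], y[:split], rcond=None)            # least-squares decoder weights
    y_hat = X[split:] @ w
    print("decoding r:", np.corrcoef(y_hat, y[split:])[0, 1])
    ```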

  17. Neural control of computer cursor velocity by decoding motor cortical spiking activity in humans with tetraplegia

    NASA Astrophysics Data System (ADS)

    Kim, Sung-Phil; Simeral, John D.; Hochberg, Leigh R.; Donoghue, John P.; Black, Michael J.

    2008-12-01

    Computer-mediated connections between human motor cortical neurons and assistive devices promise to improve or restore lost function in people with paralysis. Recently, a pilot clinical study of an intracortical neural interface system demonstrated that a tetraplegic human was able to obtain continuous two-dimensional control of a computer cursor using neural activity recorded from his motor cortex. This control, however, was not sufficiently accurate for reliable use in many common computer control tasks. Here, we studied several central design choices for such a system including the kinematic representation for cursor movement, the decoding method that translates neuronal ensemble spiking activity into a control signal and the cursor control task used during training for optimizing the parameters of the decoding method. In two tetraplegic participants, we found that controlling a cursor's velocity resulted in more accurate closed-loop control than controlling its position directly and that cursor velocity control was achieved more rapidly than position control. Control quality was further improved over conventional linear filters by using a probabilistic method, the Kalman filter, to decode human motor cortical activity. Performance assessment based on standard metrics used for the evaluation of a wide range of pointing devices demonstrated significantly improved cursor control with velocity rather than position decoding. Disclosure. JPD is the Chief Scientific Officer and a director of Cyberkinetics Neurotechnology Systems (CYKN); he holds stock and receives compensation. JDS has been a consultant for CYKN. LRH receives clinical trial support from CYKN.

  18. Neural control of cursor trajectory and click by a human with tetraplegia 1000 days after implant of an intracortical microelectrode array

    NASA Astrophysics Data System (ADS)

    Simeral, J. D.; Kim, S.-P.; Black, M. J.; Donoghue, J. P.; Hochberg, L. R.

    2011-04-01

    The ongoing pilot clinical trial of the BrainGate neural interface system aims in part to assess the feasibility of using neural activity obtained from a small-scale, chronically implanted, intracortical microelectrode array to provide control signals for a neural prosthesis system. Critical questions include how long implanted microelectrodes will record useful neural signals, how reliably those signals can be acquired and decoded, and how effectively they can be used to control various assistive technologies such as computers and robotic assistive devices, or to enable functional electrical stimulation of paralyzed muscles. Here we examined these questions by assessing neural cursor control and BrainGate system characteristics on five consecutive days 1000 days after implant of a 4 × 4 mm array of 100 microelectrodes in the motor cortex of a human with longstanding tetraplegia subsequent to a brainstem stroke. On each of five prospectively-selected days we performed time-amplitude sorting of neuronal spiking activity, trained a population-based Kalman velocity decoding filter combined with a linear discriminant click state classifier, and then assessed closed-loop point-and-click cursor control. The participant performed both an eight-target center-out task and a random target Fitts metric task which was adapted from a human-computer interaction ISO standard used to quantify performance of computer input devices. The neural interface system was further characterized by daily measurement of electrode impedances, unit waveforms and local field potentials. Across the five days, spiking signals were obtained from 41 of 96 electrodes and were successfully decoded to provide neural cursor point-and-click control with a mean task performance of 91.3% ± 0.1% (mean ± s.d.) correct target acquisition. Results across five consecutive days demonstrate that a neural interface system based on an intracortical microelectrode array can provide repeatable, accurate point-and-click control of a computer interface to an individual with tetraplegia 1000 days after implantation of this sensor.

  19. Neural control of cursor trajectory and click by a human with tetraplegia 1000 days after implant of an intracortical microelectrode array

    PubMed Central

    Simeral, J D; Kim, S-P; Black, M J; Donoghue, J P; Hochberg, L R

    2013-01-01

    The ongoing pilot clinical trial of the BrainGate neural interface system aims in part to assess the feasibility of using neural activity obtained from a small-scale, chronically implanted, intracortical microelectrode array to provide control signals for a neural prosthesis system. Critical questions include how long implanted microelectrodes will record useful neural signals, how reliably those signals can be acquired and decoded, and how effectively they can be used to control various assistive technologies such as computers and robotic assistive devices, or to enable functional electrical stimulation of paralyzed muscles. Here we examined these questions by assessing neural cursor control and BrainGate system characteristics on five consecutive days 1000 days after implant of a 4 × 4 mm array of 100 microelectrodes in the motor cortex of a human with longstanding tetraplegia subsequent to a brainstem stroke. On each of five prospectively-selected days we performed time-amplitude sorting of neuronal spiking activity, trained a population-based Kalman velocity decoding filter combined with a linear discriminant click state classifier, and then assessed closed-loop point-and-click cursor control. The participant performed both an eight-target center-out task and a random target Fitts metric task which was adapted from a human-computer interaction ISO standard used to quantify performance of computer input devices. The neural interface system was further characterized by daily measurement of electrode impedances, unit waveforms and local field potentials. Across the five days, spiking signals were obtained from 41 of 96 electrodes and were successfully decoded to provide neural cursor point-and-click control with a mean task performance of 91.3% ± 0.1% (mean ± s.d.) correct target acquisition. Results across five consecutive days demonstrate that a neural interface system based on an intracortical microelectrode array can provide repeatable, accurate point-and-click control of a computer interface to an individual with tetraplegia 1000 days after implantation of this sensor. PMID:21436513

  20. Parietal neural prosthetic control of a computer cursor in a graphical-user-interface task

    NASA Astrophysics Data System (ADS)

    Revechkis, Boris; Aflalo, Tyson NS; Kellis, Spencer; Pouratian, Nader; Andersen, Richard A.

    2014-12-01

    Objective. To date, the majority of Brain-Machine Interfaces have been used to perform simple tasks with sequences of individual targets in otherwise blank environments. In this study we developed a more practical and clinically relevant task that approximated modern computers and graphical user interfaces (GUIs). This task could be problematic given the known sensitivity of areas typically used for BMIs to visual stimuli, eye movements, decision-making, and attentional control. Consequently, we sought to assess the effect of a complex, GUI-like task on the quality of neural decoding. Approach. A male rhesus macaque monkey was implanted with two 96-channel electrode arrays in area 5d of the superior parietal lobule. The animal was trained to perform a GUI-like ‘Face in a Crowd’ task on a computer screen that required selecting one cued, icon-like, face image from a group of alternatives (the ‘Crowd’) using a neurally controlled cursor. We assessed whether the crowd affected decodes of intended cursor movements by comparing it to a ‘Crowd Off’ condition in which only the matching target appeared without alternatives. We also examined if training a neural decoder with the Crowd On rather than Off had any effect on subsequent decode quality. Main results. Despite the additional demands of working with the Crowd On, the animal was able to robustly perform the task under Brain Control. The presence of the crowd did not itself affect decode quality. Training the decoder with the Crowd On relative to Off had no negative influence on subsequent decoding performance. Additionally, the subject was able to gaze around freely without influencing cursor position. Significance. Our results demonstrate that area 5d recordings can be used for decoding in a complex, GUI-like task with free gaze. Thus, this area is a promising source of signals for neural prosthetics that utilize computing devices with GUI interfaces, e.g. personal computers, mobile devices, and tablet computers.

  1. Parietal neural prosthetic control of a computer cursor in a graphical-user-interface task.

    PubMed

    Revechkis, Boris; Aflalo, Tyson N S; Kellis, Spencer; Pouratian, Nader; Andersen, Richard A

    2014-12-01

    To date, the majority of Brain-Machine Interfaces have been used to perform simple tasks with sequences of individual targets in otherwise blank environments. In this study we developed a more practical and clinically relevant task that approximated modern computers and graphical user interfaces (GUIs). This task could be problematic given the known sensitivity of areas typically used for BMIs to visual stimuli, eye movements, decision-making, and attentional control. Consequently, we sought to assess the effect of a complex, GUI-like task on the quality of neural decoding. A male rhesus macaque monkey was implanted with two 96-channel electrode arrays in area 5d of the superior parietal lobule. The animal was trained to perform a GUI-like 'Face in a Crowd' task on a computer screen that required selecting one cued, icon-like, face image from a group of alternatives (the 'Crowd') using a neurally controlled cursor. We assessed whether the crowd affected decodes of intended cursor movements by comparing it to a 'Crowd Off' condition in which only the matching target appeared without alternatives. We also examined if training a neural decoder with the Crowd On rather than Off had any effect on subsequent decode quality. Despite the additional demands of working with the Crowd On, the animal was able to robustly perform the task under Brain Control. The presence of the crowd did not itself affect decode quality. Training the decoder with the Crowd On relative to Off had no negative influence on subsequent decoding performance. Additionally, the subject was able to gaze around freely without influencing cursor position. Our results demonstrate that area 5d recordings can be used for decoding in a complex, GUI-like task with free gaze. Thus, this area is a promising source of signals for neural prosthetics that utilize computing devices with GUI interfaces, e.g. personal computers, mobile devices, and tablet computers.

  2. Brain basis of cognitive resilience: Prefrontal cortex predicts better reading comprehension in relation to decoding

    PubMed Central

    Patael, Smadar Z.; Farris, Emily A.; Black, Jessica M.; Hancock, Roeland; Gabrieli, John D. E.; Cutting, Laurie E.; Hoeft, Fumiko

    2018-01-01

    Objective The ultimate goal of reading is to understand written text. To accomplish this, children must first master decoding, the ability to translate printed words into sounds. Although decoding and reading comprehension are highly interdependent, some children struggle to decode but comprehend well, whereas others with good decoding skills fail to comprehend. The neural basis underlying individual differences in this discrepancy between decoding and comprehension abilities is virtually unknown. Methods We investigated the neural basis underlying reading discrepancy, defined as the difference between reading comprehension and decoding skills, in a three-part study: 1) The neuroanatomical basis of reading discrepancy in a cross-sectional sample of school-age children with a wide range of reading abilities (Experiment-1; n = 55); 2) Whether a discrepancy-related neural signature is present in beginning readers and predictive of future discrepancy (Experiment-2; n = 43); and 3) Whether discrepancy-related regions are part of a domain-general or a language specialized network, utilizing the 1000 Functional Connectome data and large-scale reverse inference from Neurosynth.org (Experiment-3). Results Results converged onto the left dorsolateral prefrontal cortex (DLPFC), as related to having discrepantly higher reading comprehension relative to decoding ability. Increased gray matter volume (GMV) was associated with greater discrepancy (Experiment-1). Region-of-interest (ROI) analyses based on the left DLPFC cluster identified in Experiment-1 revealed that regional GMV within this ROI in beginning readers predicted discrepancy three years later (Experiment-2). This region was associated with the fronto-parietal network that is considered fundamental for working memory and cognitive control (Experiment-3). Interpretation Processes related to the prefrontal cortex might be linked to reading discrepancy. The findings may be important for understanding cognitive resilience, which we operationalize as those individuals with greater higher-order reading skills such as reading comprehension compared to lower-order reading skills such as decoding skills. Our study provides insights into reading development, existing theories of reading, and cognitive processes that are potentially significant to a wide range of reading disorders. PMID:29902208

  3. Decoding Spontaneous Emotional States in the Human Brain

    PubMed Central

    Kragel, Philip A.; Knodt, Annchen R.; Hariri, Ahmad R.; LaBar, Kevin S.

    2016-01-01

    Pattern classification of human brain activity provides unique insight into the neural underpinnings of diverse mental states. These multivariate tools have recently been used within the field of affective neuroscience to classify distributed patterns of brain activation evoked during emotion induction procedures. Here we assess whether neural models developed to discriminate among distinct emotion categories exhibit predictive validity in the absence of exteroceptive emotional stimulation. In two experiments, we show that spontaneous fluctuations in human resting-state brain activity can be decoded into categories of experience delineating unique emotional states that exhibit spatiotemporal coherence, covary with individual differences in mood and personality traits, and predict on-line, self-reported feelings. These findings validate objective, brain-based models of emotion and show how emotional states dynamically emerge from the activity of separable neural systems. PMID:27627738

  4. Adaptive Offset Correction for Intracortical Brain Computer Interfaces

    PubMed Central

    Homer, Mark L.; Perge, János A.; Black, Michael J.; Harrison, Matthew T.; Cash, Sydney S.; Hochberg, Leigh R.

    2014-01-01

    Intracortical brain computer interfaces (iBCIs) decode intended movement from neural activity for the control of external devices such as a robotic arm. Standard approaches include a calibration phase to estimate decoding parameters. During iBCI operation, the statistical properties of the neural activity can depart from those observed during calibration, sometimes hindering a user’s ability to control the iBCI. To address this problem, we adaptively correct the offset terms within a Kalman filter decoder via penalized maximum likelihood estimation. The approach can handle rapid shifts in neural signal behavior (on the order of seconds) and requires no knowledge of the intended movement. The algorithm, called MOCA, was tested using simulated neural activity and evaluated retrospectively using data collected from two people with tetraplegia operating an iBCI. In 19 clinical research test cases, where a nonadaptive Kalman filter yielded relatively high decoding errors, MOCA significantly reduced these errors (10.6 ±10.1%; p<0.05, pairwise t-test). MOCA did not significantly change the error in the remaining 23 cases where a nonadaptive Kalman filter already performed well. These results suggest that MOCA provides more robust decoding than the standard Kalman filter for iBCIs. PMID:24196868

  5. Adaptive offset correction for intracortical brain-computer interfaces.

    PubMed

    Homer, Mark L; Perge, Janos A; Black, Michael J; Harrison, Matthew T; Cash, Sydney S; Hochberg, Leigh R

    2014-03-01

    Intracortical brain-computer interfaces (iBCIs) decode intended movement from neural activity for the control of external devices such as a robotic arm. Standard approaches include a calibration phase to estimate decoding parameters. During iBCI operation, the statistical properties of the neural activity can depart from those observed during calibration, sometimes hindering a user's ability to control the iBCI. To address this problem, we adaptively correct the offset terms within a Kalman filter decoder via penalized maximum likelihood estimation. The approach can handle rapid shifts in neural signal behavior (on the order of seconds) and requires no knowledge of the intended movement. The algorithm, called multiple offset correction algorithm (MOCA), was tested using simulated neural activity and evaluated retrospectively using data collected from two people with tetraplegia operating an iBCI. In 19 clinical research test cases, where a nonadaptive Kalman filter yielded relatively high decoding errors, MOCA significantly reduced these errors (10.6 ± 10.1%; p < 0.05, pairwise t-test). MOCA did not significantly change the error in the remaining 23 cases where a nonadaptive Kalman filter already performed well. These results suggest that MOCA provides more robust decoding than the standard Kalman filter for iBCIs.
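
    MOCA's penalized maximum-likelihood estimator is not reproduced here, but the underlying idea of adaptively re-estimating the offset (baseline) term of a linear observation model can be sketched with a simple exponentially weighted update on synthetic data; the adaptation rate and the simulated baseline shift are assumptions.

    ```python
    # Sketch of adaptively tracking the offset term b of a linear observation model
    # z_t = H x_t + b + noise via an exponentially weighted residual average;
    # a simplification of the idea, not MOCA's penalized-ML estimator.
    import numpy as np

    rng = np.random.default_rng(10)
    n_neurons, T = 20, 1000
    H = rng.normal(0, 1, size=(n_neurons, 2))
    b_true = np.full(n_neurons, 5.0)

    x = np.column_stack([np.sin(np.arange(T) * 0.05), np.cos(np.arange(T) * 0.05)])
    z = x @ H.T + b_true + rng.normal(0, 1.0, size=(T, n_neurons))
    z[T // 2:] += 3.0                      # abrupt baseline shift halfway through the session

    b_hat = np.zeros(n_neurons)
    alpha = 0.02                           # adaptation rate
    err_fixed, err_adapt = [], []
    for t in range(T):
        # Assume the decoder's state estimate is good (use x[t]); focus on the offset term.
        resid = z[t] - x[t] @ H.T - b_hat
        b_hat = b_hat + alpha * resid      # exponentially weighted offset update
        err_adapt.append(np.abs(resid).mean())
        err_fixed.append(np.abs(z[t] - x[t] @ H.T - b_true).mean())   # never-updated offset

    print("mean abs residual after the shift, fixed offset:", np.mean(err_fixed[T // 2:]),
          " adaptive offset:", np.mean(err_adapt[T // 2:]))
    ```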

  6. Commonalities and differences in the neural representations of English, Portuguese, and Mandarin sentences: When knowledge of the brain-language mappings for two languages is better than one.

    PubMed

    Yang, Ying; Wang, Jing; Bailer, Cyntia; Cherkassky, Vladimir; Just, Marcel Adam

    2017-12-01

    This study extended cross-language semantic decoding (based on a concept's fMRI signature) to the decoding of sentences across three different languages (English, Portuguese and Mandarin). A classifier was trained on either the mapping between words and activation patterns in one language or the mappings in two languages (using an equivalent amount of training data), and then tested on its ability to decode the semantic content of a third language. The model trained on two languages was reliably more accurate than a classifier trained on one language for all three pairs of languages. This two-language advantage was selective to abstract concept domains such as social interactions and mental activity. Representational Similarity Analyses (RSA) of the inter-sentence neural similarities resulted in similar clustering of sentences in all the three languages, indicating a shared neural concept space among languages. These findings identify semantic domains that are common across these three languages versus those that are more language or culture-specific.

  7. Decoding magnetoencephalographic rhythmic activity using spectrospatial information.

    PubMed

    Kauppi, Jukka-Pekka; Parkkonen, Lauri; Hari, Riitta; Hyvärinen, Aapo

    2013-12-01

    We propose a new data-driven decoding method called Spectral Linear Discriminant Analysis (Spectral LDA) for the analysis of magnetoencephalography (MEG). The method allows investigation of changes in rhythmic neural activity as a result of different stimuli and tasks. The introduced classification model only assumes that each "brain state" can be characterized as a combination of neural sources, each of which shows rhythmic activity at one or several frequency bands. Furthermore, the model allows the oscillation frequencies to be different for each such state. We present decoding results from 9 subjects in a four-category classification problem defined by an experiment involving randomly alternating epochs of auditory, visual and tactile stimuli interspersed with rest periods. The performance of Spectral LDA was very competitive compared with four alternative classifiers based on different assumptions concerning the organization of rhythmic brain activity. In addition, the spectral and spatial patterns extracted automatically on the basis of trained classifiers showed that Spectral LDA offers a novel and interesting way of analyzing spectrospatial oscillatory neural activity across the brain. All the presented classification methods and visualization tools are freely available as a Matlab toolbox. © 2013.
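
    Spectral LDA learns spectral and spatial patterns jointly, and the toolbox mentioned above is the authoritative implementation. As a much-simplified illustration of the same decoding setting, the sketch below computes log band-power features from multichannel epochs and feeds them to ordinary LDA; the band edges, sampling rate, and random data are assumptions for the example.

        import numpy as np
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

        def band_power_features(epochs, fs, bands=((8, 12), (13, 30), (30, 45))):
            """epochs: (n_trials, n_channels, n_samples); returns log band power
            per channel and band, concatenated into one feature vector per trial."""
            spec = np.abs(np.fft.rfft(epochs, axis=-1)) ** 2
            freqs = np.fft.rfftfreq(epochs.shape[-1], d=1.0 / fs)
            feats = [np.log(spec[:, :, (freqs >= lo) & (freqs < hi)].mean(axis=-1))
                     for lo, hi in bands]
            return np.concatenate(feats, axis=1)

        rng = np.random.default_rng(0)
        X = band_power_features(rng.standard_normal((120, 20, 256)), fs=256)
        y = rng.integers(0, 4, size=120)              # four stimulus/rest classes
        clf = LinearDiscriminantAnalysis().fit(X[:80], y[:80])
        print("held-out accuracy:", clf.score(X[80:], y[80:]))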

  8. Contribution of correlated noise and selective decoding to choice probability measurements in extrastriate visual cortex

    PubMed Central

    Gu, Yong; Angelaki, Dora E; DeAngelis, Gregory C

    2014-01-01

    Trial-by-trial covariations between neural activity and perceptual decisions (quantified by choice probability, CP) have been used to probe the contribution of sensory neurons to perceptual decisions. CPs are thought to be determined by both selective decoding of neural activity and by the structure of correlated noise among neurons, but the respective roles of these factors in creating CPs have been controversial. We used biologically-constrained simulations to explore this issue, taking advantage of a peculiar pattern of CPs exhibited by multisensory neurons in area MSTd that represent self-motion. Although models that relied on correlated noise or selective decoding could both account for the peculiar pattern of CPs, predictions of the selective decoding model were substantially more consistent with various features of the neural and behavioral data. While correlated noise is essential to observe CPs, our findings suggest that selective decoding of neuronal signals also plays an important role. DOI: http://dx.doi.org/10.7554/eLife.02670.001 PMID:24986734
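
    Choice probability itself is conventionally computed as the area under the ROC curve separating the two choice-conditioned spike-count distributions. The sketch below is a generic implementation of that measure with a permutation test; it is not the biologically constrained simulation framework used in the study.

        import numpy as np

        def choice_probability(counts_choice1, counts_choice2, n_perm=2000, seed=0):
            """ROC area between choice-conditioned spike counts (0.5 = no signal),
            plus a two-sided permutation p-value."""
            rng = np.random.default_rng(seed)
            a, b = np.asarray(counts_choice1, float), np.asarray(counts_choice2, float)
            diff = a[:, None] - b[None, :]
            cp = np.mean(diff > 0) + 0.5 * np.mean(diff == 0)
            pooled, null = np.concatenate([a, b]), []
            for _ in range(n_perm):
                rng.shuffle(pooled)
                pa, pb = pooled[:len(a)], pooled[len(a):]
                d = pa[:, None] - pb[None, :]
                null.append(np.mean(d > 0) + 0.5 * np.mean(d == 0))
            p = np.mean(np.abs(np.array(null) - 0.5) >= abs(cp - 0.5))
            return cp, p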

  9. Neural speech recognition: continuous phoneme decoding using spatiotemporal representations of human cortical activity

    NASA Astrophysics Data System (ADS)

    Moses, David A.; Mesgarani, Nima; Leonard, Matthew K.; Chang, Edward F.

    2016-10-01

    Objective. The superior temporal gyrus (STG) and neighboring brain regions play a key role in human language processing. Previous studies have attempted to reconstruct speech information from brain activity in the STG, but few of them incorporate the probabilistic framework and engineering methodology used in modern speech recognition systems. In this work, we describe the initial efforts toward the design of a neural speech recognition (NSR) system that performs continuous phoneme recognition on English stimuli with arbitrary vocabulary sizes using the high gamma band power of local field potentials in the STG and neighboring cortical areas obtained via electrocorticography. Approach. The system implements a Viterbi decoder that incorporates phoneme likelihood estimates from a linear discriminant analysis model and transition probabilities from an n-gram phonemic language model. Grid searches were used in an attempt to determine optimal parameterizations of the feature vectors and Viterbi decoder. Main results. The performance of the system was significantly improved by using spatiotemporal representations of the neural activity (as opposed to purely spatial representations) and by including language modeling and Viterbi decoding in the NSR system. Significance. These results emphasize the importance of modeling the temporal dynamics of neural responses when analyzing their variations with respect to varying stimuli and demonstrate that speech recognition techniques can be successfully leveraged when decoding speech from neural signals. Guided by the results detailed in this work, further development of the NSR system could have applications in the fields of automatic speech recognition and neural prosthetics.
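
    The decoding backbone described above, frame-wise phone likelihoods combined with an n-gram language model through Viterbi search, reduces to a standard dynamic-programming recursion. The sketch below is a generic first-order (bigram) Viterbi decoder over per-frame log-likelihoods; the LDA likelihood model and the phone inventory are assumed to be supplied elsewhere.

        import numpy as np

        def viterbi(frame_loglik, log_trans, log_prior):
            """frame_loglik: (T, K) per-frame phone log-likelihoods,
            log_trans: (K, K) bigram transition log-probabilities,
            log_prior: (K,) initial log-probabilities.
            Returns the most likely phone label for each frame."""
            T, K = frame_loglik.shape
            delta = log_prior + frame_loglik[0]
            back = np.zeros((T, K), dtype=int)
            for t in range(1, T):
                scores = delta[:, None] + log_trans            # (previous, next)
                back[t] = np.argmax(scores, axis=0)
                delta = scores[back[t], np.arange(K)] + frame_loglik[t]
            path = np.zeros(T, dtype=int)
            path[-1] = int(np.argmax(delta))
            for t in range(T - 1, 0, -1):
                path[t - 1] = back[t, path[t]]
            return path

    In the setting above, frame_loglik would come from the LDA model evaluated on spatiotemporal high gamma features and log_trans from the phonemic n-gram language model.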

  10. Decoding Information for Grasping from the Macaque Dorsomedial Visual Stream.

    PubMed

    Filippini, Matteo; Breveglieri, Rossella; Akhras, M Ali; Bosco, Annalisa; Chinellato, Eris; Fattori, Patrizia

    2017-04-19

    Neurodecoders have been developed by researchers mostly to control neuroprosthetic devices, but also to shed new light on neural functions. In this study, we show that signals representing grip configurations can be reliably decoded from neural data acquired from area V6A of the monkey medial posterior parietal cortex. Two Macaca fascicularis monkeys were trained to perform an instructed-delay reach-to-grasp task in the dark and in the light toward objects of different shapes. Population neural activity was extracted at various time intervals on vision of the objects, the delay before movement, and grasp execution. This activity was used to train and validate a Bayes classifier used for decoding objects and grip types. Recognition rates were well over chance level for all the epochs analyzed in this study. Furthermore, we detected slightly different decoding accuracies, depending on the task's visual condition. Generalization analysis was performed by training and testing the system during different time intervals. This analysis demonstrated that a change of code occurred during the course of the task. Our classifier was able to discriminate grasp types fairly well in advance with respect to grasping onset. This feature might be important when the timing is critical to send signals to external devices before the movement start. Our results suggest that the neural signals from the dorsomedial visual pathway can be a good substrate to feed neural prostheses for prehensile actions. SIGNIFICANCE STATEMENT Recordings of neural activity from nonhuman primate frontal and parietal cortex have led to the development of methods of decoding movement information to restore coordinated arm actions in paralyzed human beings. Our results show that the signals measured from the monkey medial posterior parietal cortex are valid for correctly decoding information relevant for grasping. Together with previous studies on decoding reach trajectories from the medial posterior parietal cortex, this highlights the medial parietal cortex as a target site for transforming neural activity into control signals to command prostheses to allow human patients to dexterously perform grasping actions. Copyright © 2017 the authors 0270-6474/17/374311-12$15.00/0.
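
    The classification stage, a Bayes classifier over epoch-averaged population activity, can be approximated with a Gaussian naive Bayes model. The data below are synthetic placeholders (Poisson-like rates for five hypothetical grip types), not V6A recordings.

        import numpy as np
        from sklearn.naive_bayes import GaussianNB
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(1)
        n_trials, n_neurons, n_grips = 200, 60, 5
        labels = rng.integers(0, n_grips, n_trials)
        tuning = rng.gamma(2.0, 5.0, size=(n_grips, n_neurons))   # mean rate per grip
        rates = rng.poisson(tuning[labels]).astype(float)         # trial-by-neuron counts

        acc = cross_val_score(GaussianNB(), rates, labels, cv=5)
        print("decoding accuracy per fold:", np.round(acc, 2), "chance:", 1 / n_grips)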

  11. Hierarchical Neural Representation of Dreamed Objects Revealed by Brain Decoding with Deep Neural Network Features.

    PubMed

    Horikawa, Tomoyasu; Kamitani, Yukiyasu

    2017-01-01

    Dreaming is generally thought to be generated by spontaneous brain activity during sleep with patterns common to waking experience. This view is supported by a recent study demonstrating that dreamed objects can be predicted from brain activity during sleep using statistical decoders trained with stimulus-induced brain activity. However, it remains unclear whether and how visual image features associated with dreamed objects are represented in the brain. In this study, we used a deep neural network (DNN) model for object recognition as a proxy for hierarchical visual feature representation, and DNN features for dreamed objects were analyzed with brain decoding of fMRI data collected during dreaming. The decoders were first trained with stimulus-induced brain activity labeled with the feature values of the stimulus image from multiple DNN layers. The decoders were then used to decode DNN features from the dream fMRI data, and the decoded features were compared with the averaged features of each object category calculated from a large-scale image database. We found that the feature values decoded from the dream fMRI data positively correlated with those associated with dreamed object categories at mid- to high-level DNN layers. Using the decoded features, the dreamed object category could be identified at above-chance levels by matching them to the averaged features for candidate categories. The results suggest that dreaming recruits hierarchical visual feature representations associated with objects, which may support phenomenal aspects of dream experience.
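
    The two stages described above, decoding DNN-layer features from fMRI activity and then matching the decoded features to category-average features, can be sketched as follows. Ridge regression stands in for the study's feature decoders, and the variable names (train_fmri, category_mean_feats, and so on) are illustrative assumptions.

        import numpy as np
        from sklearn.linear_model import Ridge

        def decode_dnn_features(train_fmri, train_feats, test_fmri, alpha=100.0):
            """Fit one linear map from voxels to DNN units on stimulus-induced data,
            then predict feature vectors for new (e.g. dream) fMRI samples."""
            return Ridge(alpha=alpha).fit(train_fmri, train_feats).predict(test_fmri)

        def identify_category(decoded_feat, category_mean_feats):
            """Pick the candidate category whose average DNN features correlate
            best with the decoded feature vector."""
            corrs = [np.corrcoef(decoded_feat, m)[0, 1] for m in category_mean_feats]
            return int(np.argmax(corrs)), corrs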

  12. Modulation Depth Estimation and Variable Selection in State-Space Models for Neural Interfaces

    PubMed Central

    Hochberg, Leigh R.; Donoghue, John P.; Brown, Emery N.

    2015-01-01

    Rapid developments in neural interface technology are making it possible to record increasingly large signal sets of neural activity. Various factors such as asymmetrical information distribution and across-channel redundancy may, however, limit the benefit of high-dimensional signal sets, and the increased computational complexity may not yield corresponding improvement in system performance. High-dimensional system models may also lead to overfitting and lack of generalizability. To address these issues, we present a generalized modulation depth measure using the state-space framework that quantifies the tuning of a neural signal channel to relevant behavioral covariates. For a dynamical system, we develop computationally efficient procedures for estimating modulation depth from multivariate data. We show that this measure can be used to rank neural signals and select an optimal channel subset for inclusion in the neural decoding algorithm. We present a scheme for choosing the optimal subset based on model order selection criteria. We apply this method to neuronal ensemble spike-rate decoding in neural interfaces, using our framework to relate motor cortical activity with intended movement kinematics. With offline analysis of intracortical motor imagery data obtained from individuals with tetraplegia using the BrainGate neural interface, we demonstrate that our variable selection scheme is useful for identifying and ranking the most information-rich neural signals. We demonstrate that our approach offers several orders of magnitude lower complexity but virtually identical decoding performance compared to greedy search and other selection schemes. Our statistical analysis shows that the modulation depth of human motor cortical single-unit signals is well characterized by the generalized Pareto distribution. Our variable selection scheme has wide applicability in problems involving multisensor signal modeling and estimation in biomedical engineering systems. PMID:25265627
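
    The paper defines modulation depth within a state-space framework; as a crude stand-in, the sketch below ranks channels by the variance of their firing rates explained by a linear fit to behavioral covariates and keeps the top k. It illustrates only the ranking-and-selection step, not the authors' estimator or their model order selection criteria.

        import numpy as np

        def modulation_depth_r2(rates, kinematics):
            """rates: (T, n_channels), kinematics: (T, n_covariates).
            Returns, per channel, the fraction of rate variance explained by a
            linear fit to the behavioral covariates."""
            X = np.column_stack([kinematics, np.ones(len(kinematics))])
            beta, *_ = np.linalg.lstsq(X, rates, rcond=None)
            resid = rates - X @ beta
            return 1.0 - resid.var(axis=0) / rates.var(axis=0)

        def select_channels(rates, kinematics, k):
            depth = modulation_depth_r2(rates, kinematics)
            return np.argsort(depth)[::-1][:k], depth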

  13. Performance sustaining intracortical neural prostheses

    NASA Astrophysics Data System (ADS)

    Nuyujukian, Paul; Kao, Jonathan C.; Fan, Joline M.; Stavisky, Sergey D.; Ryu, Stephen I.; Shenoy, Krishna V.

    2014-12-01

    Objective. Neural prostheses, or brain-machine interfaces, aim to restore efficient communication and movement ability to those suffering from paralysis. A major challenge these systems face is robust performance, particularly with aging signal sources. The aim in this study was to develop a neural prosthesis that could sustain high performance in spite of signal instability while still minimizing retraining time. Approach. We trained two rhesus macaques implanted with intracortical microelectrode arrays 1-4 years prior to this study to acquire targets with a neurally-controlled cursor. We measured their performance via achieved bitrate (bits per second, bps). This task was repeated over contiguous days to evaluate the sustained performance across time. Main results. We found that in the monkey with a younger (i.e., two year old) implant and better signal quality, a fixed decoder could sustain performance for a month at a rate of 4 bps, the highest achieved communication rate reported to date. This fixed decoder was evaluated across 22 months and experienced a performance decline at a rate of 0.24 bps per year. In the monkey with the older (i.e., 3.5 year old) implant and poorer signal quality, a fixed decoder could not sustain performance for more than a few days. Nevertheless, performance in this monkey was maintained for two weeks without requiring additional online retraining time by utilizing prior days’ experimental data. Upon analysis of the changes in channel tuning, we found that this stability appeared partially attributable to the cancelling-out of neural tuning fluctuations when projected to two-dimensional cursor movements. Significance. The findings in this study (1) document the highest-performing communication neural prosthesis in monkeys, (2) confirm and extend prior reports of the stability of fixed decoders, and (3) demonstrate a protocol for system stability under conditions where fixed decoders would otherwise fail. These improvements to decoder stability are important for minimizing training time and should make neural prostheses more practical to use.

  14. Electrophysiological difference between mental state decoding and mental state reasoning.

    PubMed

    Cao, Bihua; Li, Yiyuan; Li, Fuhong; Li, Hong

    2012-06-29

    Previous studies have explored the neural mechanism of Theory of Mind (ToM), but the neural correlates of its two components, mental state decoding and mental state reasoning, remain unclear. In the present study, participants were presented with various photographs, showing an actor looking at 1 of 2 objects, either with a happy or an unhappy expression. They were asked to either decode the emotion of the actor (mental state decoding task), predict which object would be chosen by the actor (mental state reasoning task), or judge at which object the actor was gazing (physical task), while scalp potentials were recorded. Results showed that (1) the reasoning task elicited an earlier N2 peak than the decoding task did over the prefrontal scalp sites; and (2) during the late positive component (240-440 ms), the reasoning task elicited a more positive deflection than the other two tasks did at the prefrontal scalp sites. In addition, neither the decoding task nor the reasoning task showed a left/right hemisphere difference. These findings imply that mental state reasoning differs from mental state decoding early (210 ms) after stimulus onset, and that the prefrontal lobe is the neural basis of mental state reasoning. Copyright © 2012 Elsevier B.V. All rights reserved.

  15. Brain-to-text: decoding spoken phrases from phone representations in the brain.

    PubMed

    Herff, Christian; Heger, Dominic; de Pesters, Adriana; Telaar, Dominic; Brunner, Peter; Schalk, Gerwin; Schultz, Tanja

    2015-01-01

    It has long been speculated whether communication between humans and machines based on natural speech related cortical activity is possible. Over the past decade, studies have suggested that it is feasible to recognize isolated aspects of speech from neural signals, such as auditory features, phones or one of a few isolated words. However, until now it remained an unsolved challenge to decode continuously spoken speech from the neural substrate associated with speech and language processing. Here, we show for the first time that continuously spoken speech can be decoded into the expressed words from intracranial electrocorticographic (ECoG) recordings. Specifically, we implemented a system, which we call Brain-To-Text, that models single phones, employs techniques from automatic speech recognition (ASR), and thereby transforms brain activity while speaking into the corresponding textual representation. Our results demonstrate that our system can achieve word error rates as low as 25% and phone error rates below 50%. Additionally, our approach contributes to the current understanding of the neural basis of continuous speech production by identifying those cortical regions that hold substantial information about individual phones. In conclusion, the Brain-To-Text system described in this paper represents an important step toward human-machine communication based on imagined speech.

  16. Brain-to-text: decoding spoken phrases from phone representations in the brain

    PubMed Central

    Herff, Christian; Heger, Dominic; de Pesters, Adriana; Telaar, Dominic; Brunner, Peter; Schalk, Gerwin; Schultz, Tanja

    2015-01-01

    It has long been speculated whether communication between humans and machines based on natural speech related cortical activity is possible. Over the past decade, studies have suggested that it is feasible to recognize isolated aspects of speech from neural signals, such as auditory features, phones or one of a few isolated words. However, until now it remained an unsolved challenge to decode continuously spoken speech from the neural substrate associated with speech and language processing. Here, we show for the first time that continuously spoken speech can be decoded into the expressed words from intracranial electrocorticographic (ECoG) recordings. Specifically, we implemented a system, which we call Brain-To-Text, that models single phones, employs techniques from automatic speech recognition (ASR), and thereby transforms brain activity while speaking into the corresponding textual representation. Our results demonstrate that our system can achieve word error rates as low as 25% and phone error rates below 50%. Additionally, our approach contributes to the current understanding of the neural basis of continuous speech production by identifying those cortical regions that hold substantial information about individual phones. In conclusion, the Brain-To-Text system described in this paper represents an important step toward human-machine communication based on imagined speech. PMID:26124702

  17. Resting-state brain activity in the motor cortex reflects task-induced activity: A multi-voxel pattern analysis.

    PubMed

    Kusano, Toshiki; Kurashige, Hiroki; Nambu, Isao; Moriguchi, Yoshiya; Hanakawa, Takashi; Wada, Yasuhiro; Osu, Rieko

    2015-08-01

    It has been suggested that resting-state brain activity reflects task-induced brain activity patterns. In this study, we examined whether neural representations of specific movements can be observed in the resting-state brain activity patterns of motor areas. First, we defined two regions of interest (ROIs) to examine brain activity associated with two different behavioral tasks. Using multi-voxel pattern analysis with regularized logistic regression, we designed a decoder to detect voxel-level neural representations corresponding to the tasks in each ROI. Next, we applied the decoder to resting-state brain activity. We found that the decoder discriminated resting-state neural activity with accuracy comparable to that associated with task-induced neural activity. The distribution of learned weighted parameters for each ROI was similar for resting-state and task-induced activities. Large weighted parameters were mainly located on conjunctive areas. Moreover, the accuracy of detection was higher than that for a decoder whose weights were randomly shuffled, indicating that the resting-state brain activity includes multi-voxel patterns similar to the neural representation for the tasks. Therefore, these results suggest that the neural representation of resting-state brain activity is more finely organized and more complex than conventionally considered.
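
    The analysis pipeline above, training a regularized logistic-regression decoder on task data and then applying it to resting-state patterns, can be sketched with scikit-learn; the array shapes and random data are assumptions for the example.

        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(2)
        task_X = rng.standard_normal((160, 500))      # voxel patterns during two tasks
        task_y = rng.integers(0, 2, 160)              # task labels
        rest_X = rng.standard_normal((300, 500))      # resting-state patterns, same ROI

        decoder = LogisticRegression(penalty="l2", C=1.0, max_iter=1000).fit(task_X, task_y)
        rest_prob = decoder.predict_proba(rest_X)[:, 1]   # graded task-likeness of rest data
        print("fraction of rest samples classified as task 1:", (rest_prob > 0.5).mean())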

  18. Brain-state classification and a dual-state decoder dramatically improve the control of cursor movement through a brain-machine interface

    NASA Astrophysics Data System (ADS)

    Sachs, Nicholas A.; Ruiz-Torres, Ricardo; Perreault, Eric J.; Miller, Lee E.

    2016-02-01

    Objective. It is quite remarkable that brain machine interfaces (BMIs) can be used to control complex movements with fewer than 100 neurons. Success may be due in part to the limited range of dynamical conditions under which most BMIs are tested. Achieving high-quality control that spans these conditions with a single linear mapping will be more challenging. Even for simple reaching movements, existing BMIs must reduce the stochastic noise of neurons by averaging the control signals over time, instead of over the many neurons that normally control movement. This forces a compromise between a decoder with dynamics allowing rapid movement and one that allows postures to be maintained with little jitter. Our current work presents a method for addressing this compromise, which may also generalize to more highly varied dynamical situations, including movements with more greatly varying speed. Approach. We have developed a system that uses two independent Wiener filters as individual components in a single decoder, one optimized for movement, and the other for postural control. We computed an LDA classifier using the same neural inputs. The decoder combined the outputs of the two filters in proportion to the likelihood assigned by the classifier to each state. Main results. We have performed online experiments with two monkeys using this neural-classifier, dual-state decoder, comparing it to a standard, single-state decoder as well as to a dual-state decoder that switched states automatically based on the cursor’s proximity to a target. The performance of both monkeys using the classifier decoder was markedly better than that of the single-state decoder and comparable to the proximity decoder. Significance. We have demonstrated a novel strategy for dealing with the need to make rapid movements while also maintaining precise cursor control when approaching and stabilizing within targets. Further gains can undoubtedly be realized by optimizing the performance of the individual movement and posture decoders.

  19. Brain-state classification and a dual-state decoder dramatically improve the control of cursor movement through a brain-machine interface.

    PubMed

    Sachs, Nicholas A; Ruiz-Torres, Ricardo; Perreault, Eric J; Miller, Lee E

    2016-02-01

    It is quite remarkable that brain machine interfaces (BMIs) can be used to control complex movements with fewer than 100 neurons. Success may be due in part to the limited range of dynamical conditions under which most BMIs are tested. Achieving high-quality control that spans these conditions with a single linear mapping will be more challenging. Even for simple reaching movements, existing BMIs must reduce the stochastic noise of neurons by averaging the control signals over time, instead of over the many neurons that normally control movement. This forces a compromise between a decoder with dynamics allowing rapid movement and one that allows postures to be maintained with little jitter. Our current work presents a method for addressing this compromise, which may also generalize to more highly varied dynamical situations, including movements with more greatly varying speed. We have developed a system that uses two independent Wiener filters as individual components in a single decoder, one optimized for movement, and the other for postural control. We computed an LDA classifier using the same neural inputs. The decoder combined the outputs of the two filters in proportion to the likelihood assigned by the classifier to each state. We have performed online experiments with two monkeys using this neural-classifier, dual-state decoder, comparing it to a standard, single-state decoder as well as to a dual-state decoder that switched states automatically based on the cursor's proximity to a target. The performance of both monkeys using the classifier decoder was markedly better than that of the single-state decoder and comparable to the proximity decoder. We have demonstrated a novel strategy for dealing with the need to make rapid movements while also maintaining precise cursor control when approaching and stabilizing within targets. Further gains can undoubtedly be realized by optimizing the performance of the individual movement and posture decoders.
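
    The combination rule described above, two linear decoders blended in proportion to an LDA classifier's state probability, can be sketched as below. Plain least-squares regressions stand in for the Wiener filters (which would normally operate on several lagged bins of firing rates), and the labels 0/1 denote movement and posture.

        import numpy as np
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
        from sklearn.linear_model import LinearRegression

        class DualStateDecoder:
            """Movement and posture decoders blended by an LDA state classifier."""
            def fit(self, X_move, y_move, X_hold, y_hold):
                # X_*: (n_samples, n_features) neural inputs; y_*: (n_samples, 2) cursor velocity
                self.move = LinearRegression().fit(X_move, y_move)
                self.hold = LinearRegression().fit(X_hold, y_hold)
                X = np.vstack([X_move, X_hold])
                s = np.r_[np.zeros(len(X_move)), np.ones(len(X_hold))]
                self.state = LinearDiscriminantAnalysis().fit(X, s)
                return self

            def predict(self, X):
                p_hold = self.state.predict_proba(X)[:, 1][:, None]
                return (1 - p_hold) * self.move.predict(X) + p_hold * self.hold.predict(X)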

  20. Electroencephalogram-based decoding cognitive states using convolutional neural network and likelihood ratio based score fusion

    PubMed Central

    2017-01-01

    Electroencephalogram (EEG)-based decoding of human brain activity is challenging owing to the low spatial resolution of EEG. However, EEG is an important technique, especially for brain–computer interface applications. In this study, a novel algorithm is proposed to decode brain activity associated with different types of images. In this hybrid algorithm, a convolutional neural network is modified for feature extraction, a t-test is used to select significant features, and likelihood ratio-based score fusion is used to predict brain activity. The proposed algorithm takes input data from multichannel EEG time series, which is also known as multivariate pattern analysis. Comprehensive analysis was conducted using data from 30 participants. The results from the proposed method are compared with currently recognized feature extraction and classification/prediction techniques. The wavelet transform-support vector machine method, the most popular feature extraction and prediction method currently in use, showed an accuracy of 65.7%. The proposed method, in contrast, predicts novel data with an improved accuracy of 79.9%. In conclusion, the proposed algorithm outperformed the current feature extraction and prediction method. PMID:28558002
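
    Two of the pipeline stages above, t-test feature selection and likelihood ratio-based score fusion, can be sketched directly. The CNN feature-extraction stage is omitted and the features are assumed to be given, so this is an illustration of the selection and fusion steps only, not the authors' algorithm.

        import numpy as np
        from scipy import stats

        def ttest_select(X, y, n_keep=200):
            """Keep the n_keep features with the largest |t| between the two classes."""
            t, _ = stats.ttest_ind(X[y == 0], X[y == 1], axis=0)
            return np.argsort(np.abs(t))[::-1][:n_keep]

        def likelihood_ratio_score(X_train, y_train, X_new, idx):
            """Log-likelihood ratio under per-class Gaussian models of the selected
            features; positive values favour class 1."""
            f0, f1 = X_train[y_train == 0][:, idx], X_train[y_train == 1][:, idx]
            ll0 = stats.norm.logpdf(X_new[:, idx], f0.mean(0), f0.std(0) + 1e-6).sum(axis=1)
            ll1 = stats.norm.logpdf(X_new[:, idx], f1.mean(0), f1.std(0) + 1e-6).sum(axis=1)
            return ll1 - ll0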

  1. Motion Direction Biases and Decoding in Human Visual Cortex

    PubMed Central

    Wang, Helena X.; Merriam, Elisha P.; Freeman, Jeremy

    2014-01-01

    Functional magnetic resonance imaging (fMRI) studies have relied on multivariate analysis methods to decode visual motion direction from measurements of cortical activity. Above-chance decoding has been commonly used to infer the motion-selective response properties of the underlying neural populations. Moreover, patterns of reliable response biases across voxels that underlie decoding have been interpreted to reflect maps of functional architecture. Using fMRI, we identified a direction-selective response bias in human visual cortex that: (1) predicted motion-decoding accuracy; (2) depended on the shape of the stimulus aperture rather than the absolute direction of motion, such that response amplitudes gradually decreased with distance from the stimulus aperture edge corresponding to motion origin; and (3) was present in V1, V2, V3, but not evident in MT+, explaining the higher motion-decoding accuracies reported previously in early visual cortex. These results demonstrate that fMRI-based motion decoding has little or no dependence on the underlying functional organization of motion selectivity. PMID:25209297

  2. A Novel Nonparametric Approach for Neural Encoding and Decoding Models of Multimodal Receptive Fields.

    PubMed

    Agarwal, Rahul; Chen, Zhe; Kloosterman, Fabian; Wilson, Matthew A; Sarma, Sridevi V

    2016-07-01

    Pyramidal neurons recorded from the rat hippocampus and entorhinal cortex, such as place and grid cells, have diverse receptive fields, which are either unimodal or multimodal. Spiking activity from these cells encodes information about the spatial position of a freely foraging rat. At fine timescales, a neuron's spike activity also depends significantly on its own spike history. However, due to limitations of current parametric modeling approaches, it remains a challenge to estimate complex, multimodal neuronal receptive fields while incorporating spike history dependence. Furthermore, efforts to decode the rat's trajectory in one- or two-dimensional space from hippocampal ensemble spiking activity have mainly focused on spike history-independent neuronal encoding models. In this letter, we address these two important issues by extending a recently introduced nonparametric neural encoding framework that allows modeling both complex spatial receptive fields and spike history dependencies. Using this extended nonparametric approach, we develop novel algorithms for decoding a rat's trajectory based on recordings of hippocampal place cells and entorhinal grid cells. Results show that both encoding and decoding models derived from our new method performed significantly better than state-of-the-art encoding and decoding models on 6 minutes of test data. In addition, our model's performance remains invariant to the apparent modality of the neuron's receptive field.
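
    For comparison with the nonparametric, history-dependent models above, the classic memoryless Bayesian decoder for place-cell ensembles assumes independent Poisson spike counts with position-dependent mean rates; a minimal version is sketched below, with the tuning curves and bin width assumed to be estimated elsewhere.

        import numpy as np

        def poisson_map_decode(spike_counts, tuning, dt, positions):
            """spike_counts: (T, n_cells) counts in bins of width dt,
            tuning: (n_position_bins, n_cells) expected firing rates,
            positions: (n_position_bins, d) bin centres.
            Returns the MAP position estimate for each time bin (flat prior)."""
            log_rate = np.log(tuning * dt + 1e-12)
            loglik = spike_counts @ log_rate.T - (tuning * dt).sum(axis=1)
            return positions[np.argmax(loglik, axis=1)]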

  3. Unsupervised Decoding of Long-Term, Naturalistic Human Neural Recordings with Automated Video and Audio Annotations

    PubMed Central

    Wang, Nancy X. R.; Olson, Jared D.; Ojemann, Jeffrey G.; Rao, Rajesh P. N.; Brunton, Bingni W.

    2016-01-01

    Fully automated decoding of human activities and intentions from direct neural recordings is a tantalizing challenge in brain-computer interfacing. Implementing Brain Computer Interfaces (BCIs) outside carefully controlled experiments in laboratory settings requires adaptive and scalable strategies with minimal supervision. Here we describe an unsupervised approach to decoding neural states from naturalistic human brain recordings. We analyzed continuous, long-term electrocorticography (ECoG) data recorded over many days from the brain of subjects in a hospital room, with simultaneous audio and video recordings. We discovered coherent clusters in high-dimensional ECoG recordings using hierarchical clustering and automatically annotated them using speech and movement labels extracted from audio and video. To our knowledge, this represents the first time techniques from computer vision and speech processing have been used for natural ECoG decoding. Interpretable behaviors were decoded from ECoG data, including moving, speaking and resting; the results were assessed by comparison with manual annotation. Discovered clusters were projected back onto the brain revealing features consistent with known functional areas, opening the door to automated functional brain mapping in natural settings. PMID:27148018
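
    The unsupervised step above, discovering coherent clusters in high-dimensional ECoG feature vectors before annotating them with audio and video labels, maps onto standard agglomerative clustering. The sketch below uses SciPy on synthetic spectral-power features; it is a generic illustration, not the authors' pipeline.

        import numpy as np
        from scipy.cluster.hierarchy import linkage, fcluster

        rng = np.random.default_rng(3)
        features = rng.standard_normal((500, 64))        # one feature vector per ECoG window

        Z = linkage(features, method="ward")             # hierarchical (agglomerative) tree
        labels = fcluster(Z, t=5, criterion="maxclust")  # cut the tree into 5 clusters

        # Clusters would then be annotated post hoc by aligning them with speech and
        # movement labels extracted from the simultaneous audio and video recordings.
        for c in np.unique(labels):
            print(f"cluster {c}: {np.sum(labels == c)} windows")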

  4. Decoding of visual activity patterns from fMRI responses using multivariate pattern analyses and convolutional neural network.

    PubMed

    Zafar, Raheel; Kamel, Nidal; Naufal, Mohamad; Malik, Aamir Saeed; Dass, Sarat C; Ahmad, Rana Fayyaz; Abdullah, Jafri M; Reza, Faruque

    2017-01-01

    Decoding human brain activity has long been a primary goal in neuroscience, especially with functional magnetic resonance imaging (fMRI) data. In recent years, the convolutional neural network (CNN) has become a popular method for feature extraction due to its higher accuracy; however, it requires substantial computation and training data. In this study, an algorithm is developed using multivariate pattern analysis (MVPA) and a modified CNN to decode brain behavior for different images with a limited data set. Selecting significant features is an important part of fMRI data analysis, since it reduces the computational burden and improves prediction performance; significant features are selected using a t-test. MVPA uses machine learning algorithms to classify different brain states and helps in prediction during the task. A general linear model (GLM) is used to find the unknown parameters of every individual voxel, and classification is performed with a multi-class support vector machine (SVM). The proposed MVPA-CNN algorithm is compared with a region of interest (ROI)-based method and MVPA-based estimated values. The proposed method showed better overall accuracy (68.6%) compared to ROI (61.88%) and estimated values (64.17%).

  5. A Brain-Machine Interface Operating with a Real-Time Spiking Neural Network Control Algorithm.

    PubMed

    Dethier, Julie; Nuyujukian, Paul; Eliasmith, Chris; Stewart, Terry; Elassaad, Shauki A; Shenoy, Krishna V; Boahen, Kwabena

    2011-01-01

    Motor prostheses aim to restore function to disabled patients. Despite compelling proof of concept systems, barriers to clinical translation remain. One challenge is to develop a low-power, fully-implantable system that dissipates only minimal power so as not to damage tissue. To this end, we implemented a Kalman-filter based decoder via a spiking neural network (SNN) and tested it in brain-machine interface (BMI) experiments with a rhesus monkey. The Kalman filter was trained to predict the arm's velocity and mapped on to the SNN using the Neural Engineering Framework (NEF). A 2,000-neuron embedded Matlab SNN implementation runs in real-time and its closed-loop performance is quite comparable to that of the standard Kalman filter. The success of this closed-loop decoder holds promise for hardware SNN implementations of statistical signal processing algorithms on neuromorphic chips, which may offer power savings necessary to overcome a major obstacle to the successful clinical translation of neural motor prostheses.

  6. Squeeze-SegNet: a new fast deep convolutional neural network for semantic segmentation

    NASA Astrophysics Data System (ADS)

    Nanfack, Geraldin; Elhassouny, Azeddine; Oulad Haj Thami, Rachid

    2018-04-01

    Recent research on deep convolutional neural networks has focused on improving accuracy, yielding significant advances. Although these networks were initially limited to classification tasks, they have become very useful in higher-level tasks such as object detection and pixel-wise semantic segmentation. Deep-learning approaches to semantic segmentation have thus advanced the state of the art in accuracy, but the resulting architectures are difficult to deploy in embedded systems such as those used for autonomous driving. We present a new deep fully convolutional neural network for pixel-wise semantic segmentation, which we call Squeeze-SegNet. The architecture follows an encoder-decoder style: a SqueezeNet-like encoder is paired with a decoder built from our proposed squeeze-decoder module and an upsampling layer that reuses downsampling indices as in SegNet, followed by a deconvolution layer that produces the final multi-channel feature map. On datasets such as CamVid and Cityscapes, our network achieves SegNet-level accuracy with about 10 times fewer parameters than SegNet.

  7. Identification of cytokine-specific sensory neural signals by decoding murine vagus nerve activity.

    PubMed

    Zanos, Theodoros P; Silverman, Harold A; Levy, Todd; Tsaava, Tea; Battinelli, Emily; Lorraine, Peter W; Ashe, Jeffrey M; Chavan, Sangeeta S; Tracey, Kevin J; Bouton, Chad E

    2018-05-22

    The nervous system maintains physiological homeostasis through reflex pathways that modulate organ function. This process begins when changes in the internal milieu (e.g., blood pressure, temperature, or pH) activate visceral sensory neurons that transmit action potentials along the vagus nerve to the brainstem. IL-1β and TNF, inflammatory cytokines produced by immune cells during infection and injury, and other inflammatory mediators have been implicated in activating sensory action potentials in the vagus nerve. However, it remains unclear whether neural responses encode cytokine-specific information. Here we develop methods to isolate and decode specific neural signals to discriminate between two different cytokines. Nerve impulses recorded from the vagus nerve of mice exposed to IL-1β and TNF were sorted into groups based on their shape and amplitude, and their respective firing rates were computed. This revealed sensory neural groups responding specifically to TNF and IL-1β in a dose-dependent manner. These cytokine-mediated responses were subsequently decoded using a Naive Bayes algorithm that discriminated between no exposure and exposures to IL-1β and TNF (mean successful identification rate 82.9 ± 17.8%, chance level 33%). Recordings obtained in IL-1 receptor-KO mice were devoid of IL-1β-related signals but retained their responses to TNF. Genetic ablation of TRPV1 neurons attenuated the vagus neural signals mediated by IL-1β, and distal lidocaine nerve block attenuated all vagus neural signals recorded. The results obtained in this study using the methodological framework suggest that cytokine-specific information is present in sensory neural signals within the vagus nerve. Copyright © 2018 the Author(s). Published by PNAS.

  8. Generic decoding of seen and imagined objects using hierarchical visual features.

    PubMed

    Horikawa, Tomoyasu; Kamitani, Yukiyasu

    2017-05-22

    Object recognition is a key function in both human and machine vision. While brain decoding of seen and imagined objects has been achieved, the prediction is limited to training examples. We present a decoding approach for arbitrary objects using the machine vision principle that an object category is represented by a set of features rendered invariant through hierarchical processing. We show that visual features, including those derived from a deep convolutional neural network, can be predicted from fMRI patterns, and that greater accuracy is achieved for low-/high-level features with lower-/higher-level visual areas, respectively. Predicted features are used to identify seen/imagined object categories (extending beyond decoder training) from a set of computed features for numerous object images. Furthermore, decoding of imagined objects reveals progressive recruitment of higher-to-lower visual representations. Our results demonstrate a homology between human and machine vision and its utility for brain-based information retrieval.

  9. Neural population encoding and decoding of sound source location across sound level in the rabbit inferior colliculus

    PubMed Central

    Delgutte, Bertrand

    2015-01-01

    At lower levels of sensory processing, the representation of a stimulus feature in the response of a neural population can vary in complex ways across different stimulus intensities, potentially changing the amount of feature-relevant information in the response. How higher-level neural circuits could implement feature decoding computations that compensate for these intensity-dependent variations remains unclear. Here we focused on neurons in the inferior colliculus (IC) of unanesthetized rabbits, whose firing rates are sensitive to both the azimuthal position of a sound source and its sound level. We found that the azimuth tuning curves of an IC neuron at different sound levels tend to be linear transformations of each other. These transformations could either increase or decrease the mutual information between source azimuth and spike count with increasing level for individual neurons, yet population azimuthal information remained constant across the absolute sound levels tested (35, 50, and 65 dB SPL), as inferred from the performance of a maximum-likelihood neural population decoder. We harnessed evidence of level-dependent linear transformations to reduce the number of free parameters in the creation of an accurate cross-level population decoder of azimuth. Interestingly, this decoder predicts monotonic azimuth tuning curves, broadly sensitive to contralateral azimuths, in neurons at higher levels in the auditory pathway. PMID:26490292
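
    The maximum-likelihood population decoder referred to above can be written compactly under an independent-Poisson assumption; the level-dependent linear transformations of the tuning curves that the authors exploit for the cross-level decoder are not modeled in this sketch.

        import numpy as np

        def ml_decode_azimuth(spike_counts, tuning_curves, azimuths):
            """spike_counts: (n_neurons,) observed counts,
            tuning_curves: (n_azimuths, n_neurons) expected counts per candidate azimuth,
            azimuths: (n_azimuths,) candidate source angles.
            Returns the azimuth maximizing the independent-Poisson log-likelihood."""
            loglik = (spike_counts * np.log(tuning_curves + 1e-12) - tuning_curves).sum(axis=1)
            return azimuths[np.argmax(loglik)]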

  10. Encoder-Decoder Optimization for Brain-Computer Interfaces

    PubMed Central

    Merel, Josh; Pianto, Donald M.; Cunningham, John P.; Paninski, Liam

    2015-01-01

    Neuroprosthetic brain-computer interfaces are systems that decode neural activity into useful control signals for effectors, such as a cursor on a computer screen. It has long been recognized that both the user and decoding system can adapt to increase the accuracy of the end effector. Co-adaptation is the process whereby a user learns to control the system in conjunction with the decoder adapting to learn the user's neural patterns. We provide a mathematical framework for co-adaptation and relate co-adaptation to the joint optimization of the user's control scheme ("encoding model") and the decoding algorithm's parameters. When the assumptions of that framework are respected, co-adaptation cannot yield better performance than that obtainable by an optimal initial choice of fixed decoder, coupled with optimal user learning. For a specific case, we provide numerical methods to obtain such an optimized decoder. We demonstrate our approach in a model brain-computer interface system using an online prosthesis simulator, a simple human-in-the-loop psychophysics setup which provides a non-invasive simulation of the BCI setting. These experiments support two claims: that users can learn encoders matched to fixed, optimal decoders and that, once learned, our approach yields expected performance advantages. PMID:26029919

  11. Encoder-decoder optimization for brain-computer interfaces.

    PubMed

    Merel, Josh; Pianto, Donald M; Cunningham, John P; Paninski, Liam

    2015-06-01

    Neuroprosthetic brain-computer interfaces are systems that decode neural activity into useful control signals for effectors, such as a cursor on a computer screen. It has long been recognized that both the user and decoding system can adapt to increase the accuracy of the end effector. Co-adaptation is the process whereby a user learns to control the system in conjunction with the decoder adapting to learn the user's neural patterns. We provide a mathematical framework for co-adaptation and relate co-adaptation to the joint optimization of the user's control scheme ("encoding model") and the decoding algorithm's parameters. When the assumptions of that framework are respected, co-adaptation cannot yield better performance than that obtainable by an optimal initial choice of fixed decoder, coupled with optimal user learning. For a specific case, we provide numerical methods to obtain such an optimized decoder. We demonstrate our approach in a model brain-computer interface system using an online prosthesis simulator, a simple human-in-the-loop psychophysics setup which provides a non-invasive simulation of the BCI setting. These experiments support two claims: that users can learn encoders matched to fixed, optimal decoders and that, once learned, our approach yields expected performance advantages.

  12. Signal acquisition and analysis for cortical control of neuroprosthetics.

    PubMed

    Tillery, Stephen I Helms; Taylor, Dawn M

    2004-12-01

    Work in cortically controlled neuroprosthetic systems has concentrated on decoding natural behaviors from neural activity, with the idea that if the behavior could be fully decoded it could be duplicated using an artificial system. Initial estimates from this approach suggested that a high-fidelity signal comprised of many hundreds of neurons would be required to control a neuroprosthetic system successfully. However, recent studies are showing hints that these systems can be controlled effectively using only a few tens of neurons. Attempting to decode the pre-existing relationship between neural activity and natural behavior is not nearly as important as choosing a decoding scheme that can be more readily deployed and trained to generate the desired actions of the artificial system. These artificial systems need not resemble or behave similarly to any natural biological system. Effective matching of discrete and continuous neural command signals to appropriately configured device functions will enable effective control of both natural and abstract artificial systems using compatible thought processes.

  13. State-space decoding of primary afferent neuron firing rates

    NASA Astrophysics Data System (ADS)

    Wagenaar, J. B.; Ventura, V.; Weber, D. J.

    2011-02-01

    Kinematic state feedback is important for neuroprostheses to generate stable and adaptive movements of an extremity. State information, represented in the firing rates of populations of primary afferent (PA) neurons, can be recorded at the level of the dorsal root ganglia (DRG). Previous work in cats showed the feasibility of using DRG recordings to predict the kinematic state of the hind limb using reverse regression. Although accurate decoding results were attained, reverse regression does not make efficient use of the information embedded in the firing rates of the neural population. In this paper, we present decoding results based on state-space modeling, and show that it is a more principled and more efficient method for decoding the firing rates in an ensemble of PA neurons. In particular, we show that we can extract confounded information from neurons that respond to multiple kinematic parameters, and that including velocity components in the firing rate models significantly increases the accuracy of the decoded trajectory. We show that, on average, state-space decoding is twice as efficient as reverse regression for decoding joint and endpoint kinematics.
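
    Training such a state-space decoder amounts to fitting a linear-Gaussian model that links kinematic state transitions to observed firing rates; a least-squares sketch is given below. The run-time filtering that uses A, W, C, Q is the standard Kalman predict/update recursion, and including velocity components in the state vector is what the abstract reports as beneficial.

        import numpy as np

        def fit_state_space(kin, rates):
            """Least-squares fit of a linear-Gaussian state-space decoder:
            kin[t+1] = A kin[t] + w,   rates[t] = C kin[t] + q.
            kin: (T, d) kinematic states (e.g. joint angles and velocities),
            rates: (T, n_neurons) binned firing rates."""
            X0, X1 = kin[:-1], kin[1:]
            A = np.linalg.lstsq(X0, X1, rcond=None)[0].T
            W = np.cov((X1 - X0 @ A.T).T)
            C = np.linalg.lstsq(kin, rates, rcond=None)[0].T
            Q = np.cov((rates - kin @ C.T).T)
            return A, W, C, Q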

  14. A stimulus-dependent spike threshold is an optimal neural coder

    PubMed Central

    Jones, Douglas L.; Johnson, Erik C.; Ratnam, Rama

    2015-01-01

    A neural code based on sequences of spikes can consume a significant portion of the brain's energy budget. Thus, energy considerations would dictate that spiking activity be kept as low as possible. However, a high spike-rate improves the coding and representation of signals in spike trains, particularly in sensory systems. These are competing demands, and selective pressure has presumably worked to optimize coding by apportioning a minimum number of spikes so as to maximize coding fidelity. The mechanisms by which a neuron generates spikes while maintaining a fidelity criterion are not known. Here, we show that a signal-dependent neural threshold, similar to a dynamic or adapting threshold, optimizes the trade-off between spike generation (encoding) and fidelity (decoding). The threshold mimics a post-synaptic membrane (a low-pass filter) and serves as an internal decoder. Further, it sets the average firing rate (the energy constraint). The decoding process provides an internal copy of the coding error to the spike-generator which emits a spike when the error equals or exceeds a spike threshold. When optimized, the trade-off leads to a deterministic spike firing-rule that generates optimally timed spikes so as to maximize fidelity. The optimal coder is derived in closed-form in the limit of high spike-rates, when the signal can be approximated as a piece-wise constant signal. The predicted spike-times are close to those obtained experimentally in the primary electrosensory afferent neurons of weakly electric fish (Apteronotus leptorhynchus) and pyramidal neurons from the somatosensory cortex of the rat. We suggest that KCNQ/Kv7 channels (underlying the M-current) are good candidates for the decoder. They are widely coupled to metabolic processes and do not inactivate. We conclude that the neural threshold is optimized to generate an energy-efficient and high-fidelity neural code. PMID:26082710
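
    The encode/decode loop described above, an internal low-pass reconstruction of the spike train and a spike emitted whenever the coding error reaches threshold, can be simulated in a few lines. The time constant, threshold, and per-spike quantum below are arbitrary illustrative values, not fits to the electrosensory or cortical data.

        import numpy as np

        def adaptive_threshold_encoder(signal, dt=1e-3, tau=0.02, theta=0.05):
            """Spike whenever the coding error (input minus the internal low-pass
            reconstruction of the spike train) reaches the threshold theta."""
            recon, spikes, recon_trace = 0.0, np.zeros_like(signal), np.zeros_like(signal)
            for t, s in enumerate(signal):
                recon -= dt / tau * recon              # leaky decay of the internal decoder
                if s - recon >= theta:                 # coding error hits threshold
                    spikes[t] = 1.0
                    recon += theta                     # each spike adds a fixed quantum
                recon_trace[t] = recon
            return spikes, recon_trace

        t = np.arange(0, 1, 1e-3)
        x = 0.5 + 0.3 * np.sin(2 * np.pi * 2 * t)      # slowly varying positive input
        spikes, recon = adaptive_threshold_encoder(x)
        print("spikes emitted:", int(spikes.sum()),
              "| mean abs coding error:", float(np.abs(x - recon).mean()))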

  15. Intra-day signal instabilities affect decoding performance in an intracortical neural interface system.

    PubMed

    Perge, János A; Homer, Mark L; Malik, Wasim Q; Cash, Sydney; Eskandar, Emad; Friehs, Gerhard; Donoghue, John P; Hochberg, Leigh R

    2013-06-01

    Motor neural interface systems (NIS) aim to convert neural signals into motor prosthetic or assistive device control, allowing people with paralysis to regain movement or control over their immediate environment. Effector or prosthetic control can degrade if the relationship between recorded neural signals and intended motor behavior changes. Therefore, characterizing both biological and technological sources of signal variability is important for a reliable NIS. To address the frequency and causes of neural signal variability in a spike-based NIS, we analyzed within-day fluctuations in spiking activity and action potential amplitude recorded with silicon microelectrode arrays implanted in the motor cortex of three people with tetraplegia (BrainGate pilot clinical trial, IDE). 84% of the recorded units showed a statistically significant change in apparent firing rate (3.8 ± 8.71 Hz or 49% of the mean rate) across several-minute epochs of tasks performed on a single session, and 74% of the units showed a significant change in spike amplitude (3.7 ± 6.5 µV or 5.5% of mean spike amplitude). 40% of the recording sessions showed a significant correlation in the occurrence of amplitude changes across electrodes, suggesting array micro-movement. Despite the relatively frequent amplitude changes, only 15% of the observed within-day rate changes originated from recording artifacts such as spike amplitude change or electrical noise, while 85% of the rate changes most likely emerged from physiological mechanisms. Computer simulations confirmed that systematic rate changes of individual neurons could produce a directional 'bias' in the decoded neural cursor movements. Instability in apparent neuronal spike rates indeed yielded a directional bias in 56% of all performance assessments in participant cursor control (n = 2 participants, 108 and 20 assessments over two years), resulting in suboptimal performance in these sessions. We anticipate that signal acquisition and decoding methods that can adapt to the reported instabilities will further improve the performance of intracortically-based NISs.

  16. Deep learning with convolutional neural networks for EEG decoding and visualization

    PubMed Central

    Schirrmeister, Robin Tibor; Springenberg, Jost Tobias; Fiederer, Lukas Dominique Josef; Glasstetter, Martin; Eggensperger, Katharina; Tangermann, Michael; Hutter, Frank; Burgard, Wolfram; Ball, Tonio

    2017-01-01

    Abstract Deep learning with convolutional neural networks (deep ConvNets) has revolutionized computer vision through end‐to‐end learning, that is, learning from the raw data. There is increasing interest in using deep ConvNets for end‐to‐end EEG analysis, but a better understanding of how to design and train ConvNets for end‐to‐end EEG decoding and how to visualize the informative EEG features the ConvNets learn is still needed. Here, we studied deep ConvNets with a range of different architectures, designed for decoding imagined or executed tasks from raw EEG. Our results show that recent advances from the machine learning field, including batch normalization and exponential linear units, together with a cropped training strategy, boosted the deep ConvNets decoding performance, reaching at least as good performance as the widely used filter bank common spatial patterns (FBCSP) algorithm (mean decoding accuracies 82.1% FBCSP, 84.0% deep ConvNets). While FBCSP is designed to use spectral power modulations, the features used by ConvNets are not fixed a priori. Our novel methods for visualizing the learned features demonstrated that ConvNets indeed learned to use spectral power modulations in the alpha, beta, and high gamma frequencies, and proved useful for spatially mapping the learned features by revealing the topography of the causal contributions of features in different frequency bands to the decoding decision. Our study thus shows how to design and train ConvNets to decode task‐related information from the raw EEG without handcrafted features and highlights the potential of deep ConvNets combined with advanced visualization techniques for EEG‐based brain mapping. Hum Brain Mapp 38:5391–5420, 2017. © 2017 Wiley Periodicals, Inc. PMID:28782865

  17. Deep learning with convolutional neural networks for EEG decoding and visualization.

    PubMed

    Schirrmeister, Robin Tibor; Springenberg, Jost Tobias; Fiederer, Lukas Dominique Josef; Glasstetter, Martin; Eggensperger, Katharina; Tangermann, Michael; Hutter, Frank; Burgard, Wolfram; Ball, Tonio

    2017-11-01

    Deep learning with convolutional neural networks (deep ConvNets) has revolutionized computer vision through end-to-end learning, that is, learning from the raw data. There is increasing interest in using deep ConvNets for end-to-end EEG analysis, but a better understanding of how to design and train ConvNets for end-to-end EEG decoding and how to visualize the informative EEG features the ConvNets learn is still needed. Here, we studied deep ConvNets with a range of different architectures, designed for decoding imagined or executed tasks from raw EEG. Our results show that recent advances from the machine learning field, including batch normalization and exponential linear units, together with a cropped training strategy, boosted the deep ConvNets decoding performance, reaching at least as good performance as the widely used filter bank common spatial patterns (FBCSP) algorithm (mean decoding accuracies 82.1% FBCSP, 84.0% deep ConvNets). While FBCSP is designed to use spectral power modulations, the features used by ConvNets are not fixed a priori. Our novel methods for visualizing the learned features demonstrated that ConvNets indeed learned to use spectral power modulations in the alpha, beta, and high gamma frequencies, and proved useful for spatially mapping the learned features by revealing the topography of the causal contributions of features in different frequency bands to the decoding decision. Our study thus shows how to design and train ConvNets to decode task-related information from the raw EEG without handcrafted features and highlights the potential of deep ConvNets combined with advanced visualization techniques for EEG-based brain mapping. Hum Brain Mapp 38:5391-5420, 2017. © 2017 Wiley Periodicals, Inc. © 2017 The Authors Human Brain Mapping Published by Wiley Periodicals, Inc.

  18. Neural Decoding Reveals Impaired Face Configural Processing in the Right Fusiform Face Area of Individuals with Developmental Prosopagnosia

    PubMed Central

    Zhang, Jiedong; Liu, Jia

    2015-01-01

    Most human daily social interactions rely on the ability to successfully recognize faces. Yet ∼2% of the human population suffers from face blindness without any acquired brain damage [this is also known as developmental prosopagnosia (DP) or congenital prosopagnosia]. Despite the presence of severe behavioral face recognition deficits, surprisingly, a majority of DP individuals exhibit normal face selectivity in the right fusiform face area (FFA), a key brain region involved in face configural processing. This finding, together with evidence showing impairments downstream from the right FFA in DP individuals, has led some to argue that perhaps the right FFA is largely intact in DP individuals. Using fMRI multivoxel pattern analysis, here we report the discovery of a neural impairment in the right FFA of DP individuals that may play a critical role in mediating their face-processing deficits. In seven individuals with DP, we discovered that, despite the right FFA's preference for faces and its ability to decode the different face parts, it exhibited impaired face configural decoding and did not contain distinct neural response patterns for the intact and the scrambled face configurations. This abnormality was not present throughout the ventral visual cortex, as normal neural decoding was found in an adjacent object-processing region. To our knowledge, this is the first direct neural evidence showing impaired face configural processing in the right FFA in individuals with DP. The discovery of this neural impairment provides a new clue to our understanding of the neural basis of DP. PMID:25632131

  19. Continuous movement decoding using a target-dependent model with EMG inputs.

    PubMed

    Sachs, Nicholas A; Corbett, Elaine A; Miller, Lee E; Perreault, Eric J

    2011-01-01

    Trajectory-based models that incorporate target position information have been shown to accurately decode reaching movements from bio-control signals, such as muscle (EMG) and cortical activity (neural spikes). One major hurdle in implementing such models for neuroprosthetic control is that they are inherently designed to decode single reaches from a position of origin to a specific target. Gaze direction can be used to identify appropriate targets, however information regarding movement intent is needed to determine when a reach is meant to begin and when it has been completed. We used linear discriminant analysis to classify limb states into movement classes based on recorded EMG from a sparse set of shoulder muscles. We then used the detected state transitions to update target information in a mixture of Kalman filters that incorporated target position explicitly in the state, and used EMG activity to decode arm movements. Updating the target position initiated movement along new trajectories, allowing a sequence of appropriately timed single reaches to be decoded in series and enabling highly accurate continuous control.

  20. To sort or not to sort: the impact of spike-sorting on neural decoding performance.

    PubMed

    Todorova, Sonia; Sadtler, Patrick; Batista, Aaron; Chase, Steven; Ventura, Valérie

    2014-10-01

    Brain-computer interfaces (BCIs) are a promising technology for restoring motor ability to paralyzed patients. Spiking-based BCIs have successfully been used in clinical trials to control multi-degree-of-freedom robotic devices. Current implementations of these devices require a lengthy spike-sorting step, which is an obstacle to moving this technology from the lab to the clinic. A viable alternative is to avoid spike-sorting, treating all threshold crossings of the voltage waveform on an electrode as coming from one putative neuron. It is not known, however, how much decoding information might be lost by ignoring spike identity. We present a full analysis of the effects of spike-sorting schemes on decoding performance. Specifically, we compare how well two common decoders, the optimal linear estimator and the Kalman filter, reconstruct the arm movements of non-human primates performing reaching tasks, when receiving input from various sorting schemes. The schemes we tested included: using threshold crossings without spike-sorting; expert-sorting, discarding the noise; expert-sorting, including the noise as if it were another neuron; and automatic spike-sorting using waveform features. We also decoded from a joint statistical model for the waveforms and tuning curves, which does not involve an explicit spike-sorting step. Discarding the threshold crossings that cannot be assigned to neurons degrades decoding: no spikes should be discarded. Decoding based on spike-sorted units outperforms decoding based on electrode voltage crossings: spike-sorting is useful. The four waveform-based spike-sorting methods tested here yield similar decoding efficiencies: a fast and simple method is competitive. Decoding using the joint waveform and tuning model shows promise but is not consistently superior. Our results indicate that simple automated spike-sorting performs as well as the more computationally or manually intensive methods used here. Even basic spike-sorting adds value to the low-threshold waveform-crossing methods often employed in BCI decoding.

  1. To sort or not to sort: the impact of spike-sorting on neural decoding performance

    NASA Astrophysics Data System (ADS)

    Todorova, Sonia; Sadtler, Patrick; Batista, Aaron; Chase, Steven; Ventura, Valérie

    2014-10-01

    Objective. Brain-computer interfaces (BCIs) are a promising technology for restoring motor ability to paralyzed patients. Spiking-based BCIs have successfully been used in clinical trials to control multi-degree-of-freedom robotic devices. Current implementations of these devices require a lengthy spike-sorting step, which is an obstacle to moving this technology from the lab to the clinic. A viable alternative is to avoid spike-sorting, treating all threshold crossings of the voltage waveform on an electrode as coming from one putative neuron. It is not known, however, how much decoding information might be lost by ignoring spike identity. Approach. We present a full analysis of the effects of spike-sorting schemes on decoding performance. Specifically, we compare how well two common decoders, the optimal linear estimator and the Kalman filter, reconstruct the arm movements of non-human primates performing reaching tasks, when receiving input from various sorting schemes. The schemes we tested included: using threshold crossings without spike-sorting; expert-sorting, discarding the noise; expert-sorting, including the noise as if it were another neuron; and automatic spike-sorting using waveform features. We also decoded from a joint statistical model for the waveforms and tuning curves, which does not involve an explicit spike-sorting step. Main results. Discarding the threshold crossings that cannot be assigned to neurons degrades decoding: no spikes should be discarded. Decoding based on spike-sorted units outperforms decoding based on electrode voltage crossings: spike-sorting is useful. The four waveform-based spike-sorting methods tested here yield similar decoding efficiencies: a fast and simple method is competitive. Decoding using the joint waveform and tuning model shows promise but is not consistently superior. Significance. Our results indicate that simple automated spike-sorting performs as well as the more computationally or manually intensive methods used here. Even basic spike-sorting adds value to the low-threshold waveform-crossing methods often employed in BCI decoding.
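
    The following sketch illustrates the kind of comparison described above, decoding movement from sorted units versus pooled per-electrode threshold crossings with a ridge-regularized linear decoder, on synthetic stand-in data. The tuning model, noise levels, and unit counts are assumptions; the paper's comparison uses recorded non-human-primate data and also includes a Kalman filter.

```python
# Sketch comparing a linear decoder trained on sorted units versus on pooled
# per-electrode threshold crossings. Data are synthetic stand-ins; the real
# comparison in the paper uses recorded non-human-primate reaching data.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n_trials, n_electrodes, units_per_electrode = 400, 24, 2

velocity = rng.normal(size=(n_trials, 2))                 # 2-D hand velocity
tuning = rng.normal(size=(n_electrodes * units_per_electrode, 2))
sorted_rates = np.maximum(
    velocity @ tuning.T
    + rng.normal(scale=1.0, size=(n_trials, n_electrodes * units_per_electrode)),
    0)

# "unsorted" electrodes: pool the units recorded on the same electrode
unsorted_rates = sorted_rates.reshape(
    n_trials, n_electrodes, units_per_electrode).sum(axis=2)

for name, X in [("sorted units", sorted_rates),
                ("threshold crossings", unsorted_rates)]:
    r2 = cross_val_score(Ridge(alpha=1.0), X, velocity, cv=5).mean()
    print(f"{name:20s} mean cross-validated R^2 = {r2:.3f}")
```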

  2. Key considerations in designing a speech brain-computer interface.

    PubMed

    Bocquelet, Florent; Hueber, Thomas; Girin, Laurent; Chabardès, Stéphan; Yvert, Blaise

    2016-11-01

    Restoring communication in cases of aphasia is a key challenge for neurotechnologies. To this end, brain-computer strategies can be envisioned to allow artificial speech synthesis from the continuous decoding of neural signals underlying speech imagination. Such speech brain-computer interfaces do not exist yet, and their design involves three key choices: the choice of appropriate brain regions to record neural activity from, the choice of an appropriate recording technique, and the choice of a neural decoding scheme in association with an appropriate speech synthesis method. These key considerations are discussed here in light of (1) the current understanding of the functional neuroanatomy of cortical areas underlying overt and covert speech production, (2) the available literature making use of a variety of brain recording techniques to better characterize and address the challenge of decoding cortical speech signals, and (3) the different speech synthesis approaches that can be considered depending on the level of speech representation (phonetic, acoustic or articulatory) envisioned to be decoded at the core of a speech BCI paradigm. Copyright © 2017 The Author(s). Published by Elsevier Ltd. All rights reserved.

  3. Using Neural Pattern Classifiers to Quantify the Modularity of Conflict–Control Mechanisms in the Human Brain

    PubMed Central

    Jiang, Jiefeng; Egner, Tobias

    2014-01-01

    Resolving conflicting sensory and motor representations is a core function of cognitive control, but it remains uncertain to what degree control over different sources of conflict is implemented by shared (domain general) or distinct (domain specific) neural resources. Behavioral data suggest conflict–control to be domain specific, but results from neuroimaging studies have been ambivalent. Here, we employed multivoxel pattern analyses that can decode a brain region's informational content, allowing us to distinguish incidental activation overlap from actual shared information processing. We trained independent sets of “searchlight” classifiers on functional magnetic resonance imaging data to decode control processes associated with stimulus-conflict (Stroop task) and ideomotor-conflict (Simon task). Quantifying the proportion of domain-specific searchlights (capable of decoding only one type of conflict) and domain-general searchlights (capable of decoding both conflict types) in each subject, we found both domain-specific and domain-general searchlights, though the former were more common. When mapping anatomical loci of these searchlights across subjects, neural substrates of stimulus- and ideomotor-specific conflict–control were found to be anatomically consistent across subjects, whereas the substrates of domain-general conflict–control were not. Overall, these findings suggest a hybrid neural architecture of conflict–control that entails both modular (domain specific) and global (domain general) components. PMID:23402762

  4. Feedback control policies employed by people using intracortical brain-computer interfaces.

    PubMed

    Willett, Francis R; Pandarinath, Chethan; Jarosiewicz, Beata; Murphy, Brian A; Memberg, William D; Blabe, Christine H; Saab, Jad; Walter, Benjamin L; Sweet, Jennifer A; Miller, Jonathan P; Henderson, Jaimie M; Shenoy, Krishna V; Simeral, John D; Hochberg, Leigh R; Kirsch, Robert F; Ajiboye, A Bolu

    2017-02-01

    When using an intracortical BCI (iBCI), users modulate their neural population activity to move an effector towards a target, stop accurately, and correct for movement errors. We call the rules that govern this modulation a 'feedback control policy'. A better understanding of these policies may inform the design of higher-performing neural decoders. We studied how three participants in the BrainGate2 pilot clinical trial used an iBCI to control a cursor in a 2D target acquisition task. Participants used a velocity decoder with exponential smoothing dynamics. Through offline analyses, we characterized the users' feedback control policies by modeling their neural activity as a function of cursor state and target position. We also tested whether users could adapt their policy to different decoder dynamics by varying the gain (speed scaling) and temporal smoothing parameters of the iBCI. We demonstrate that control policy assumptions made in previous studies do not fully describe the policies of our participants. To account for these discrepancies, we propose a new model that captures (1) how the user's neural population activity gradually declines as the cursor approaches the target from afar, then decreases more sharply as the cursor comes into contact with the target, (2) how the user makes constant feedback corrections even when the cursor is on top of the target, and (3) how the user actively accounts for the cursor's current velocity to avoid overshooting the target. Further, we show that users can adapt their control policy to decoder dynamics by attenuating neural modulation when the cursor gain is high and by damping the cursor velocity more strongly when the smoothing dynamics are high. Our control policy model may help to build better decoders, understand how neural activity varies during active iBCI control, and produce better simulations of closed-loop iBCI movements.

  5. Feedback control policies employed by people using intracortical brain-computer interfaces

    NASA Astrophysics Data System (ADS)

    Willett, Francis R.; Pandarinath, Chethan; Jarosiewicz, Beata; Murphy, Brian A.; Memberg, William D.; Blabe, Christine H.; Saab, Jad; Walter, Benjamin L.; Sweet, Jennifer A.; Miller, Jonathan P.; Henderson, Jaimie M.; Shenoy, Krishna V.; Simeral, John D.; Hochberg, Leigh R.; Kirsch, Robert F.; Bolu Ajiboye, A.

    2017-02-01

    Objective. When using an intracortical BCI (iBCI), users modulate their neural population activity to move an effector towards a target, stop accurately, and correct for movement errors. We call the rules that govern this modulation a ‘feedback control policy’. A better understanding of these policies may inform the design of higher-performing neural decoders. Approach. We studied how three participants in the BrainGate2 pilot clinical trial used an iBCI to control a cursor in a 2D target acquisition task. Participants used a velocity decoder with exponential smoothing dynamics. Through offline analyses, we characterized the users’ feedback control policies by modeling their neural activity as a function of cursor state and target position. We also tested whether users could adapt their policy to different decoder dynamics by varying the gain (speed scaling) and temporal smoothing parameters of the iBCI. Main results. We demonstrate that control policy assumptions made in previous studies do not fully describe the policies of our participants. To account for these discrepancies, we propose a new model that captures (1) how the user’s neural population activity gradually declines as the cursor approaches the target from afar, then decreases more sharply as the cursor comes into contact with the target, (2) how the user makes constant feedback corrections even when the cursor is on top of the target, and (3) how the user actively accounts for the cursor’s current velocity to avoid overshooting the target. Further, we show that users can adapt their control policy to decoder dynamics by attenuating neural modulation when the cursor gain is high and by damping the cursor velocity more strongly when the smoothing dynamics are high. Significance. Our control policy model may help to build better decoders, understand how neural activity varies during active iBCI control, and produce better simulations of closed-loop iBCI movements.
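
    A minimal sketch of the decoder dynamics described above, in which cursor velocity is a gain-scaled, exponentially smoothed version of the instantaneous decoded command, is shown below. The specific update rule and parameter values are illustrative assumptions.

```python
# Sketch of a velocity decoder with exponential smoothing dynamics: the cursor
# velocity is a smoothed, gain-scaled version of the instantaneous decoded
# command. Parameter values are illustrative assumptions.
import numpy as np

def run_cursor(decoded_command, gain=1.0, smoothing=0.9, dt=0.02):
    """decoded_command: (T, 2) array of instantaneous decoded velocities."""
    velocity = np.zeros(2)
    position = np.zeros(2)
    positions = []
    for u in decoded_command:
        # exponential smoothing: heavier smoothing -> more damped cursor
        velocity = smoothing * velocity + (1.0 - smoothing) * gain * u
        position = position + dt * velocity
        positions.append(position.copy())
    return np.array(positions)

rng = np.random.default_rng(0)
command = np.tile([1.0, 0.0], (200, 1)) + rng.normal(scale=0.3, size=(200, 2))
fast = run_cursor(command, gain=2.0, smoothing=0.8)
damped = run_cursor(command, gain=2.0, smoothing=0.98)
print(f"distance travelled, light smoothing: {fast[-1, 0]:.2f}")
print(f"distance travelled, heavy smoothing: {damped[-1, 0]:.2f}")
```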

  6. A simple approach to ignoring irrelevant variables by population decoding based on multisensory neurons

    PubMed Central

    Kim, HyungGoo R.; Pitkow, Xaq; Angelaki, Dora E.

    2016-01-01

    Sensory input reflects events that occur in the environment, but multiple events may be confounded in sensory signals. For example, under many natural viewing conditions, retinal image motion reflects some combination of self-motion and movement of objects in the world. To estimate one stimulus event and ignore others, the brain can perform marginalization operations, but the neural bases of these operations are poorly understood. Using computational modeling, we examine how multisensory signals may be processed to estimate the direction of self-motion (i.e., heading) and to marginalize out effects of object motion. Multisensory neurons represent heading based on both visual and vestibular inputs and come in two basic types: “congruent” and “opposite” cells. Congruent cells have matched heading tuning for visual and vestibular cues and have been linked to perceptual benefits of cue integration during heading discrimination. Opposite cells have mismatched visual and vestibular heading preferences and are ill-suited for cue integration. We show that decoding a mixed population of congruent and opposite cells substantially reduces errors in heading estimation caused by object motion. In addition, we present a general formulation of an optimal linear decoding scheme that approximates marginalization and can be implemented biologically by simple reinforcement learning mechanisms. We also show that neural response correlations induced by task-irrelevant variables may greatly exceed intrinsic noise correlations. Overall, our findings suggest a general computational strategy by which neurons with mismatched tuning for two different sensory cues may be decoded to perform marginalization operations that dissociate possible causes of sensory inputs. PMID:27334948

  7. Neural Control of a Tracking Task via Attention-Gated Reinforcement Learning for Brain-Machine Interfaces.

    PubMed

    Wang, Yiwen; Wang, Fang; Xu, Kai; Zhang, Qiaosheng; Zhang, Shaomin; Zheng, Xiaoxiang

    2015-05-01

    Reinforcement learning (RL)-based brain-machine interfaces (BMIs) enable the user to learn from the environment through interactions to complete the task without desired signals, which is promising for clinical applications. Previous studies exploited Q-learning techniques to classify neural states into simple directional actions, with the initial timing of each trial provided. However, the movements in BMI applications can be quite complicated, and the action timing explicitly reveals the intention of when to move. The rich actions and the corresponding neural states form a large state-action space, imposing generalization difficulty on Q-learning. In this paper, we propose to adopt attention-gated reinforcement learning (AGREL) as a new learning scheme for BMIs to adaptively decode high-dimensional neural activities into seven distinct movements (directional moves, holding, and resting), owing to its efficient weight updating. We apply AGREL on neural data recorded from M1 of a monkey to directly predict a seven-action set in a time sequence to reconstruct the trajectory of a center-out task. Compared to Q-learning techniques, AGREL could improve the target acquisition rate to 90.16% on average with faster convergence and greater stability in following neural activity over multiple days, indicating the potential to achieve better online decoding performance for more complicated BMI tasks.

  8. Massively parallel neural circuits for stereoscopic color vision: encoding, decoding and identification.

    PubMed

    Lazar, Aurel A; Slutskiy, Yevgeniy B; Zhou, Yiyin

    2015-03-01

    Past work demonstrated how monochromatic visual stimuli could be faithfully encoded and decoded under Nyquist-type rate conditions. Color visual stimuli were then traditionally encoded and decoded in multiple separate monochromatic channels. The brain, however, appears to mix information about color channels at the earliest stages of the visual system, including the retina itself. If information about color is mixed and encoded by a common pool of neurons, how can colors be demixed and perceived? We present Color Video Time Encoding Machines (Color Video TEMs) for encoding color visual stimuli that take into account a variety of color representations within a single neural circuit. We then derive a Color Video Time Decoding Machine (Color Video TDM) algorithm for color demixing and reconstruction of color visual scenes from spikes produced by a population of visual neurons. In addition, we formulate Color Video Channel Identification Machines (Color Video CIMs) for functionally identifying color visual processing performed by a spiking neural circuit. Furthermore, we derive a duality between TDMs and CIMs that unifies the two and leads to a general theory of neural information representation for stereoscopic color vision. We provide examples demonstrating that a massively parallel color visual neural circuit can be first identified with arbitrary precision and its spike trains can be subsequently used to reconstruct the encoded stimuli. We argue that evaluation of the functional identification methodology can be effectively and intuitively performed in the stimulus space. In this space, a signal reconstructed from spike trains generated by the identified neural circuit can be compared to the original stimulus. Copyright © 2014 Elsevier Ltd. All rights reserved.

  9. Modulation of neural activity by reward in medial intraparietal cortex is sensitive to temporal sequence of reward

    PubMed Central

    Rajalingham, Rishi; Stacey, Richard Greg; Tsoulfas, Georgios

    2014-01-01

    To restore movements to paralyzed patients, neural prosthetic systems must accurately decode patients' intentions from neural signals. Despite significant advancements, current systems are unable to restore complex movements. Decoding reward-related signals from the medial intraparietal area (MIP) could enhance prosthetic performance. However, the dynamics of reward sensitivity in MIP is not known. Furthermore, reward-related modulation in premotor areas has been attributed to behavioral confounds. Here we investigated the stability of reward encoding in MIP by assessing the effect of reward history on reward sensitivity. We recorded from neurons in MIP while monkeys performed a delayed-reach task under two reward schedules. In the variable schedule, an equal number of small- and large-rewards trials were randomly interleaved. In the constant schedule, one reward size was delivered for a block of trials. The memory period firing rate of most neurons in response to identical rewards varied according to schedule. Using systems identification tools, we attributed the schedule sensitivity to the dependence of neural activity on the history of reward. We did not find schedule-dependent behavioral changes, suggesting that reward modulates neural activity in MIP. Neural discrimination between rewards was less in the variable than in the constant schedule, degrading our ability to decode reach target and reward simultaneously. The effect of schedule was mitigated by adding Haar wavelet coefficients to the decoding model. This raises the possibility of multiple encoding schemes at different timescales and reinforces the potential utility of reward information for prosthetic performance. PMID:25008408

  10. Modulation of neural activity by reward in medial intraparietal cortex is sensitive to temporal sequence of reward.

    PubMed

    Rajalingham, Rishi; Stacey, Richard Greg; Tsoulfas, Georgios; Musallam, Sam

    2014-10-01

    To restore movements to paralyzed patients, neural prosthetic systems must accurately decode patients' intentions from neural signals. Despite significant advancements, current systems are unable to restore complex movements. Decoding reward-related signals from the medial intraparietal area (MIP) could enhance prosthetic performance. However, the dynamics of reward sensitivity in MIP is not known. Furthermore, reward-related modulation in premotor areas has been attributed to behavioral confounds. Here we investigated the stability of reward encoding in MIP by assessing the effect of reward history on reward sensitivity. We recorded from neurons in MIP while monkeys performed a delayed-reach task under two reward schedules. In the variable schedule, an equal number of small- and large-rewards trials were randomly interleaved. In the constant schedule, one reward size was delivered for a block of trials. The memory period firing rate of most neurons in response to identical rewards varied according to schedule. Using systems identification tools, we attributed the schedule sensitivity to the dependence of neural activity on the history of reward. We did not find schedule-dependent behavioral changes, suggesting that reward modulates neural activity in MIP. Neural discrimination between rewards was less in the variable than in the constant schedule, degrading our ability to decode reach target and reward simultaneously. The effect of schedule was mitigated by adding Haar wavelet coefficients to the decoding model. This raises the possibility of multiple encoding schemes at different timescales and reinforces the potential utility of reward information for prosthetic performance. Copyright © 2014 the American Physiological Society.

  11. Robust pattern decoding in shape-coded structured light

    NASA Astrophysics Data System (ADS)

    Tang, Suming; Zhang, Xu; Song, Zhan; Song, Lifang; Zeng, Hai

    2017-09-01

    Decoding is a challenging and complex problem in a coded structured light system. In this paper, a robust pattern decoding method is proposed for shape-coded structured light, in which the pattern is designed as a grid with embedded geometric shapes. In our decoding method, advances are made in three steps. First, a multi-template feature detection algorithm is introduced to detect the feature points at the intersections of orthogonal grid lines. Second, pattern element identification is modelled as a supervised classification problem, and a deep neural network is applied for the accurate classification of pattern elements. Beforehand, a training dataset is established that contains a large number of pattern elements with various degrees of blurring and distortion. Third, an error correction mechanism based on epipolar, coplanarity, and topological constraints is presented to reduce the false matches. In the experiments, several complex objects including a human hand are chosen to test the accuracy and robustness of the proposed method. The experimental results show that our decoding method not only achieves high decoding accuracy but is also strongly robust to surface color and complex textures.

  12. Reading visually embodied meaning from the brain: Visually grounded computational models decode visual-object mental imagery induced by written text.

    PubMed

    Anderson, Andrew James; Bruni, Elia; Lopopolo, Alessandro; Poesio, Massimo; Baroni, Marco

    2015-10-15

    Embodiment theory predicts that mental imagery of object words recruits neural circuits involved in object perception. The degree of visual imagery present in routine thought and how it is encoded in the brain is largely unknown. We test whether fMRI activity patterns elicited by participants reading objects' names include embodied visual-object representations, and whether we can decode the representations using novel computational image-based semantic models. We first apply the image models in conjunction with text-based semantic models to test predictions of visual-specificity of semantic representations in different brain regions. Representational similarity analysis confirms that fMRI structure within ventral-temporal and lateral-occipital regions correlates most strongly with the image models and conversely text models correlate better with posterior-parietal/lateral-temporal/inferior-frontal regions. We use an unsupervised decoding algorithm that exploits commonalities in representational similarity structure found within both image model and brain data sets to classify embodied visual representations with high accuracy (8/10) and then extend it to exploit model combinations to robustly decode different brain regions in parallel. By capturing latent visual-semantic structure our models provide a route into analyzing neural representations derived from past perceptual experience rather than stimulus-driven brain activity. Our results also verify the benefit of combining multimodal data to model human-like semantic representations. Copyright © 2015 Elsevier Inc. All rights reserved.
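
    The following is a minimal sketch of the representational similarity logic described above: build a representational dissimilarity matrix (RDM) from brain activity patterns and another from model features, then correlate the two. The data are random stand-ins, and the distance and correlation choices are assumptions rather than the authors' exact analysis.

```python
# Sketch of representational similarity analysis (RSA): correlate the pairwise
# dissimilarity structure of a brain region's activity patterns with that of a
# computational (e.g. image-based) model. Data here are random stand-ins.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n_concepts, n_voxels, n_model_dims = 30, 200, 50

brain_patterns = rng.normal(size=(n_concepts, n_voxels))      # fMRI patterns per word
model_features = rng.normal(size=(n_concepts, n_model_dims))  # image-model vectors

# representational dissimilarity matrices as condensed vectors of pairwise
# correlation distances (1 - Pearson r)
brain_rdm = pdist(brain_patterns, metric="correlation")
model_rdm = pdist(model_features, metric="correlation")

rho, p = spearmanr(brain_rdm, model_rdm)
print(f"brain-model RDM correlation: rho = {rho:.3f}, p = {p:.3f}")
```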

  13. Distinct neural patterns enable grasp types decoding in monkey dorsal premotor cortex.

    PubMed

    Hao, Yaoyao; Zhang, Qiaosheng; Controzzi, Marco; Cipriani, Christian; Li, Yue; Li, Juncheng; Zhang, Shaomin; Wang, Yiwen; Chen, Weidong; Chiara Carrozza, Maria; Zheng, Xiaoxiang

    2014-12-01

    Recent studies have shown that dorsal premotor cortex (PMd), a cortical area in the dorsomedial grasp pathway, is involved in grasp movements. However, the neural ensemble firing property of PMd during grasp movements and the extent to which it can be used for grasp decoding are still unclear. To address these issues, we used multielectrode arrays to record both spike and local field potential (LFP) signals in PMd in macaque monkeys performing reaching and grasping of one of four differently shaped objects. Single and population neuronal activity showed distinct patterns during execution of different grip types. Cluster analysis of neural ensemble signals indicated that the grasp-related patterns emerged soon (200-300 ms) after the go cue signal, and faded away during the hold period. The timing and duration of the patterns varied depending on the behavior of the individual monkey. Applying a support vector machine model to the stable activity patterns revealed classification accuracies of 94% and 89% for the two monkeys, indicating a robust, decodable grasp pattern encoded in the PMd. Grasp decoding using LFPs, especially the high-frequency bands, also produced high decoding accuracies. This study is the first to specify the neuronal population encoding of grasp during the time course of grasp. We demonstrate high grasp decoding performance in PMd. These findings, combined with previous evidence of reach-related modulation, suggest that PMd may play an important role in generation and maintenance of grasp action and may be a suitable locus for brain-machine interface applications.
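
    A minimal sketch of the classification analysis described above, decoding grip type from trial-averaged firing rates with a linear support vector machine and cross-validation, is shown below on synthetic stand-in data; unit counts, trial counts, and class structure are illustrative assumptions.

```python
# Sketch of grasp-type classification from firing-rate features with a support
# vector machine, in the spirit of the analysis above. Synthetic stand-in data;
# feature dimensions and class structure are illustrative assumptions.
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials_per_grip, n_grips, n_units = 60, 4, 96

labels = np.repeat(np.arange(n_grips), n_trials_per_grip)
# each grip type has its own mean firing-rate pattern plus trial-to-trial noise
grip_means = rng.normal(scale=2.0, size=(n_grips, n_units))
rates = grip_means[labels] + rng.normal(size=(labels.size, n_units))

clf = make_pipeline(StandardScaler(), SVC(kernel="linear", C=1.0))
accuracy = cross_val_score(clf, rates, labels, cv=10).mean()
print(f"cross-validated grasp decoding accuracy: {accuracy:.2%}")
```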

  14. Global cortical activity predicts shape of hand during grasping

    PubMed Central

    Agashe, Harshavardhan A.; Paek, Andrew Y.; Zhang, Yuhang; Contreras-Vidal, José L.

    2015-01-01

    Recent studies show that the amplitude of cortical field potentials is modulated in the time domain by grasping kinematics. However, it is unknown whether these low-frequency modulations persist and contain enough information to decode grasp kinematics in macro-scale activity measured at the scalp via electroencephalography (EEG). Further, it is unclear whether joint angle velocities or movement synergies are the optimal kinematic spaces to decode. In this offline decoding study, we infer hand joint angular velocities as well as synergistic trajectories from human EEG as subjects perform natural reach-to-grasp movements. Decoding accuracy, measured as the correlation coefficient (r) between the predicted and actual movement kinematics, was r = 0.49 ± 0.02 across 15 hand joints. Across the first three kinematic synergies, decoding accuracies were r = 0.59 ± 0.04, 0.47 ± 0.06, and 0.32 ± 0.05. The spatial-temporal pattern of EEG channel recruitment showed early involvement of contralateral frontal-central scalp areas followed by later activation of central electrodes over primary sensorimotor cortical areas. Information content in EEG about the grasp type peaked at 250 ms after movement onset. The high decoding accuracies in this study are significant not only as evidence for time-domain modulation in macro-scale brain activity, but for the field of brain-machine interfaces as well. Our decoding strategy, which harnesses the neural “symphony” as opposed to local members of the neural ensemble (as in intracranial approaches), may provide a means of extracting information about motor intent for grasping without the need for penetrating electrodes and suggests that it may soon be possible to develop non-invasive neural interfaces for the control of prosthetic limbs. PMID:25914616
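
    The sketch below illustrates the general decoding recipe implied above: regress continuous kinematics onto lagged neural amplitude features with a linear model and score the result with the correlation coefficient r. The synthetic data, lag structure, and regularization are assumptions, not the authors' exact decoder.

```python
# Sketch of decoding continuous kinematics from lagged EEG amplitude features
# with a linear model, scored by the correlation between predicted and actual
# trajectories. Synthetic data; lag structure and sizes are assumptions.
import numpy as np
from scipy.stats import pearsonr
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
n_samples, n_channels, n_lags = 3000, 32, 10

eeg = rng.normal(size=(n_samples, n_channels))
true_weights = rng.normal(size=n_channels)
# synthetic "kinematics" as a smoothed linear mixture of the EEG channels
kinematics = np.convolve(eeg @ true_weights, np.ones(n_lags) / n_lags, mode="same")

# build a lagged feature matrix from current and previous EEG samples
lagged = np.hstack([np.roll(eeg, lag, axis=0) for lag in range(n_lags)])
train, test = slice(0, 2000), slice(2000, n_samples)

model = Ridge(alpha=10.0).fit(lagged[train], kinematics[train])
predicted = model.predict(lagged[test])
r, _ = pearsonr(predicted, kinematics[test])
print(f"decoding accuracy (Pearson r) on held-out data: {r:.2f}")
```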

  15. Obtaining Arbitrary Prescribed Mean Field Dynamics for Recurrently Coupled Networks of Type-I Spiking Neurons with Analytically Determined Weights

    PubMed Central

    Nicola, Wilten; Tripp, Bryan; Scott, Matthew

    2016-01-01

    A fundamental question in computational neuroscience is how to connect a network of spiking neurons to produce desired macroscopic or mean field dynamics. One possible approach is through the Neural Engineering Framework (NEF). The NEF approach requires quantities called decoders which are solved through an optimization problem requiring large matrix inversion. Here, we show how a decoder can be obtained analytically for type I and certain type II firing rates as a function of the heterogeneity of its associated neuron. These decoders generate approximants for functions that converge to the desired function in mean-squared error like 1/N, where N is the number of neurons in the network. We refer to these decoders as scale-invariant decoders due to their structure. These decoders generate weights for a network of neurons through the NEF formula for weights. These weights force the spiking network to have arbitrary and prescribed mean field dynamics. The weights generated with scale-invariant decoders all lie on low dimensional hypersurfaces asymptotically. We demonstrate the applicability of these scale-invariant decoders and weight surfaces by constructing networks of spiking theta neurons that replicate the dynamics of various well known dynamical systems such as the neural integrator, Van der Pol system and the Lorenz system. As these decoders are analytically determined and non-unique, the weights are also analytically determined and non-unique. We discuss the implications for measured weights of neuronal networks. PMID:26973503

  16. Obtaining Arbitrary Prescribed Mean Field Dynamics for Recurrently Coupled Networks of Type-I Spiking Neurons with Analytically Determined Weights.

    PubMed

    Nicola, Wilten; Tripp, Bryan; Scott, Matthew

    2016-01-01

    A fundamental question in computational neuroscience is how to connect a network of spiking neurons to produce desired macroscopic or mean field dynamics. One possible approach is through the Neural Engineering Framework (NEF). The NEF approach requires quantities called decoders which are solved through an optimization problem requiring large matrix inversion. Here, we show how a decoder can be obtained analytically for type I and certain type II firing rates as a function of the heterogeneity of its associated neuron. These decoders generate approximants for functions that converge to the desired function in mean-squared error like 1/N, where N is the number of neurons in the network. We refer to these decoders as scale-invariant decoders due to their structure. These decoders generate weights for a network of neurons through the NEF formula for weights. These weights force the spiking network to have arbitrary and prescribed mean field dynamics. The weights generated with scale-invariant decoders all lie on low dimensional hypersurfaces asymptotically. We demonstrate the applicability of these scale-invariant decoders and weight surfaces by constructing networks of spiking theta neurons that replicate the dynamics of various well known dynamical systems such as the neural integrator, Van der Pol system and the Lorenz system. As these decoders are analytically determined and non-unique, the weights are also analytically determined and non-unique. We discuss the implications for measured weights of neuronal networks.
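
    For context, the standard NEF decoder computation that the scale-invariant decoders replace can be sketched as a regularized least-squares problem, as below. The tuning-curve model, regularization constant, and target function here are illustrative assumptions.

```python
# Sketch of the standard (non-analytical) NEF decoder computation that the
# scale-invariant decoders replace: solve a regularized least-squares problem
# for decoding weights d such that sum_i d_i * a_i(x) approximates f(x).
# Tuning-curve parameters and the target function are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n_neurons, n_points = 100, 200
x = np.linspace(-1.0, 1.0, n_points)

# heterogeneous rectified-linear tuning curves a_i(x) (stand-in for type-I rates)
gains = rng.uniform(0.5, 2.0, size=n_neurons)
encoders = rng.choice([-1.0, 1.0], size=n_neurons)
biases = rng.uniform(-1.0, 1.0, size=n_neurons)
A = np.maximum(gains[:, None] * (encoders[:, None] * x[None, :]) + biases[:, None], 0)

f_target = x ** 2                   # function to decode from the population

# regularized least squares: d = (A A^T + lam I)^{-1} A f
lam = 0.1 * n_points
d = np.linalg.solve(A @ A.T + lam * np.eye(n_neurons), A @ f_target)

approximation = d @ A
rmse = np.sqrt(np.mean((approximation - f_target) ** 2))
print(f"decoded f(x) = x^2 with RMSE {rmse:.3f} using {n_neurons} neurons")
```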

  17. Efficiency turns the table on neural encoding, decoding and noise.

    PubMed

    Deneve, Sophie; Chalk, Matthew

    2016-04-01

    Sensory neurons are usually described with an encoding model, for example, a function that predicts their response from the sensory stimulus using a receptive field (RF) or a tuning curve. However, central to theories of sensory processing is the notion of 'efficient coding'. We argue here that efficient coding implies a completely different neural coding strategy. Instead of a fixed encoding model, neural populations would be described by a fixed decoding model (i.e. a model reconstructing the stimulus from the neural responses). Because the population solves a global optimization problem, individual neurons are variable, but not noisy, and have no truly invariant tuning curve or receptive field. We review recent experimental evidence and implications for neural noise correlations, robustness and adaptation. Copyright © 2016. Published by Elsevier Ltd.

  18. Decoding of finger trajectory from ECoG using deep learning.

    PubMed

    Xie, Ziqian; Schwartz, Odelia; Prasad, Abhishek

    2018-06-01

    The conventional decoding pipeline for brain-machine interfaces (BMIs) consists of a chain of stages: feature extraction, time-frequency analysis, and statistical learning models. Each of these stages uses a different algorithm trained in a sequential manner, which makes it difficult to make the whole system adaptive. The goal was to create an adaptive online system with a single objective function and a single learning algorithm so that the whole system can be trained in parallel to increase the decoding performance. Here, we used deep neural networks consisting of convolutional neural networks (CNN) and a special kind of recurrent neural network (RNN) called long short-term memory (LSTM) to address these needs. We used electrocorticography (ECoG) data collected by Kubanek et al. The task consisted of individual finger flexions upon a visual cue. Our model combined a hierarchical feature extractor CNN and an RNN that was able to process sequential data and recognize temporal dynamics in the neural data. The CNN was used as the feature extractor, and the LSTM was used as the regression algorithm to capture the temporal dynamics of the signal. We predicted the finger trajectory using ECoG signals and compared results for the least angle regression (LARS), CNN-LSTM, random forest, LSTM model (LSTM_HC, for using hard-coded features) and a decoding pipeline consisting of band-pass filtering, energy extraction, feature selection and linear regression. The results showed that the deep learning models performed better than the commonly used linear model. The deep learning models not only gave smoother and more realistic trajectories but also learned the transitions between movement and rest states. This study demonstrated a decoding network for BMI that involved a convolutional and recurrent neural network model. It integrated the feature extraction pipeline into the convolution and pooling layers and used an LSTM layer to capture the state transitions. The discussed network eliminated the need to separately train the model at each step in the decoding pipeline. The whole system can be jointly optimized using stochastic gradient descent and is capable of online learning.

  19. Decoding of finger trajectory from ECoG using deep learning

    NASA Astrophysics Data System (ADS)

    Xie, Ziqian; Schwartz, Odelia; Prasad, Abhishek

    2018-06-01

    Objective. The conventional decoding pipeline for brain-machine interfaces (BMIs) consists of a chain of stages: feature extraction, time-frequency analysis, and statistical learning models. Each of these stages uses a different algorithm trained in a sequential manner, which makes it difficult to make the whole system adaptive. The goal was to create an adaptive online system with a single objective function and a single learning algorithm so that the whole system can be trained in parallel to increase the decoding performance. Here, we used deep neural networks consisting of convolutional neural networks (CNN) and a special kind of recurrent neural network (RNN) called long short-term memory (LSTM) to address these needs. Approach. We used electrocorticography (ECoG) data collected by Kubanek et al. The task consisted of individual finger flexions upon a visual cue. Our model combined a hierarchical feature extractor CNN and an RNN that was able to process sequential data and recognize temporal dynamics in the neural data. The CNN was used as the feature extractor, and the LSTM was used as the regression algorithm to capture the temporal dynamics of the signal. Main results. We predicted the finger trajectory using ECoG signals and compared results for the least angle regression (LARS), CNN-LSTM, random forest, LSTM model (LSTM_HC, for using hard-coded features) and a decoding pipeline consisting of band-pass filtering, energy extraction, feature selection and linear regression. The results showed that the deep learning models performed better than the commonly used linear model. The deep learning models not only gave smoother and more realistic trajectories but also learned the transitions between movement and rest states. Significance. This study demonstrated a decoding network for BMI that involved a convolutional and recurrent neural network model. It integrated the feature extraction pipeline into the convolution and pooling layers and used an LSTM layer to capture the state transitions. The discussed network eliminated the need to separately train the model at each step in the decoding pipeline. The whole system can be jointly optimized using stochastic gradient descent and is capable of online learning.
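
    A hedged sketch of the CNN-plus-LSTM arrangement described above is given below in PyTorch: a 1-D convolutional front end extracts and downsamples features from multichannel ECoG, an LSTM captures temporal dynamics, and a linear readout produces the finger trajectory. Channel counts, strides, and hidden sizes are illustrative assumptions, not the paper's architecture.

```python
# Sketch of a CNN + LSTM trajectory decoder in the spirit described above:
# a 1-D convolution extracts features from multichannel ECoG, an LSTM models
# temporal dynamics, and a linear layer outputs the finger trajectory.
# Layer sizes are illustrative assumptions, not the paper's architecture.
import torch
import torch.nn as nn

class ConvLSTMDecoder(nn.Module):
    def __init__(self, n_channels=48, hidden=64):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(n_channels, 32, kernel_size=9, stride=4, padding=4),
            nn.ReLU(),
            nn.Conv1d(32, 32, kernel_size=9, stride=4, padding=4),
            nn.ReLU(),
        )
        self.lstm = nn.LSTM(input_size=32, hidden_size=hidden, batch_first=True)
        self.readout = nn.Linear(hidden, 1)    # one finger's flexion trajectory

    def forward(self, x):                      # x: (batch, channels, time)
        h = self.conv(x)                       # (batch, 32, time/16)
        h = h.transpose(1, 2)                  # (batch, time/16, 32) for the LSTM
        out, _ = self.lstm(h)
        return self.readout(out).squeeze(-1)   # (batch, time/16)

model = ConvLSTMDecoder()
ecog = torch.randn(4, 48, 1600)                # 4 segments of 48-channel ECoG
trajectory = model(ecog)
print(trajectory.shape)                        # torch.Size([4, 100])
```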

  20. A four-dimensional virtual hand brain-machine interface using active dimension selection.

    PubMed

    Rouse, Adam G

    2016-06-01

    Brain-machine interfaces (BMIs) traditionally rely on a fixed, linear transformation from neural signals to an output state-space. In this study, the assumption that a BMI must control a fixed, orthogonal basis set was challenged and a novel active dimension selection (ADS) decoder was explored. ADS utilizes a two-stage decoder, using neural signals to both (i) select an active dimension being controlled and (ii) control the velocity along the selected dimension. ADS decoding was tested in a monkey using 16 single units from premotor and primary motor cortex to successfully control a virtual hand avatar to move to eight different postures. Following training with the ADS decoder to control 2, 3, and then 4 dimensions, each emulating a grasp shape of the hand, performance reached 93% correct with a bit rate of 2.4 bits/s for eight targets. Selection of eight targets using ADS control was more efficient, as measured by bit rate, than either full four-dimensional control or computer-assisted one-dimensional control. ADS decoding allows a user to quickly and efficiently select different hand postures. This novel decoding scheme represents a potential method to reduce the complexity of high-dimensional BMI control of the hand.
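
    The two-stage logic of active dimension selection can be sketched as follows: one readout of the neural features selects the active dimension, and a second readout drives one-dimensional velocity along that dimension only. The random decoder weights and Poisson firing rates below are placeholders; in practice both stages would be fit to data.

```python
# Sketch of a two-stage "active dimension selection" decoder: neural features
# first select which dimension is under control, then drive 1-D velocity along
# that dimension only. The feature model and dimensions are assumptions.
import numpy as np

rng = np.random.default_rng(0)
n_units, n_dims = 16, 4

# assumed decoder parameters (in practice both stages would be fit to data)
selection_weights = rng.normal(size=(n_dims, n_units))   # stage 1: dimension selector
velocity_weights = rng.normal(size=n_units)              # stage 2: 1-D velocity readout

def ads_step(firing_rates, posture, step=0.05):
    """Update an n_dims-dimensional hand posture from one bin of firing rates."""
    active_dim = int(np.argmax(selection_weights @ firing_rates))  # stage 1
    velocity = float(velocity_weights @ firing_rates)              # stage 2
    posture = posture.copy()
    posture[active_dim] += step * velocity     # move only along the active dimension
    return active_dim, posture

posture = np.zeros(n_dims)
for _ in range(5):
    rates = rng.poisson(lam=5.0, size=n_units).astype(float)
    active_dim, posture = ads_step(rates, posture)
    print(f"active dimension {active_dim}, posture {np.round(posture, 2)}")
```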

  1. Intra-day signal instabilities affect decoding performance in an intracortical neural interface system

    PubMed Central

    Perge, János A.; Homer, Mark L.; Malik, Wasim Q.; Cash, Sydney; Eskandar, Emad; Friehs, Gerhard; Donoghue, John P.; Hochberg, Leigh R.

    2013-01-01

    Objective. Motor Neural Interface Systems (NIS) aim to convert neural signals into motor prosthetic or assistive device control, allowing people with paralysis to regain movement or control over their immediate environment. Effector or prosthetic control can degrade if the relationship between recorded neural signals and intended motor behavior changes. Therefore, characterizing both biological and technological sources of signal variability is important for a reliable NIS. Approach. To address the frequency and causes of neural signal variability in a spike-based NIS, we analyzed within-day fluctuations in spiking activity and action potential amplitude recorded with silicon microelectrode arrays implanted in the motor cortex of three people with tetraplegia (BrainGate pilot clinical trial, IDE). Main results. Eighty-four percent of the recorded units showed a statistically significant change in apparent firing rate (3.8 ± 8.71 Hz or 49% of the mean rate) across several-minute epochs of tasks performed on a single session, and seventy-four percent of the units showed a significant change in spike amplitude (3.7 ± 6.5 μV or 5.5% of mean spike amplitude). Forty percent of the recording sessions showed a significant correlation in the occurrence of amplitude changes across electrodes, suggesting array micro-movement. Despite the relatively frequent amplitude changes, only 15% of the observed within-day rate changes originated from recording artifacts such as spike amplitude change or electrical noise, while 85% of the rate changes most likely emerged from physiological mechanisms. Computer simulations confirmed that systematic rate changes of individual neurons could produce a directional “bias” in the decoded neural cursor movements. Instability in apparent neuronal spike rates indeed yielded a directional bias in fifty-six percent of all performance assessments in participant cursor control (n=2 participants, 108 and 20 assessments over two years), resulting in suboptimal performance in these sessions. Significance. We anticipate that signal acquisition and decoding methods that can adapt to the reported instabilities will further improve the performance of intracortically-based NISs. PMID:23574741
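
    The kind of simulation mentioned above, showing that a systematic firing-rate change in a single unit biases a fixed linear velocity decoder along that unit's preferred direction, can be sketched as below. The cosine-tuning model, readout, and drift magnitude are illustrative assumptions.

```python
# Sketch of how a systematic firing-rate change in a single unit biases a fixed
# linear velocity decoder, as in the simulations described above. Tuning and
# drift magnitudes are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n_units = 40

preferred_dirs = rng.uniform(0, 2 * np.pi, size=n_units)
tuning = np.stack([np.cos(preferred_dirs), np.sin(preferred_dirs)], axis=1)  # (units, 2)

def decode(intended_vel, rate_offset):
    """Simulate rates for an intended velocity and decode with a fixed readout."""
    rates = (10.0 + 5.0 * (tuning @ intended_vel) + rate_offset
             + rng.normal(scale=1.0, size=n_units))
    return tuning.T @ (rates - 10.0) / (5.0 * n_units / 2.0)  # least-squares-like readout

# the user intends to hold still; one unit's baseline rate drifts upward by 8 Hz
drift = np.zeros(n_units)
drift[0] = 8.0
decoded = np.mean([decode(np.zeros(2), drift) for _ in range(1000)], axis=0)
print(f"mean decoded velocity with one drifting unit: {np.round(decoded, 3)}")
print(f"drifting unit's preferred direction:          {np.round(tuning[0], 3)}")
```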

  2. Using neural pattern classifiers to quantify the modularity of conflict-control mechanisms in the human brain.

    PubMed

    Jiang, Jiefeng; Egner, Tobias

    2014-07-01

    Resolving conflicting sensory and motor representations is a core function of cognitive control, but it remains uncertain to what degree control over different sources of conflict is implemented by shared (domain general) or distinct (domain specific) neural resources. Behavioral data suggest conflict-control to be domain specific, but results from neuroimaging studies have been ambivalent. Here, we employed multivoxel pattern analyses that can decode a brain region's informational content, allowing us to distinguish incidental activation overlap from actual shared information processing. We trained independent sets of "searchlight" classifiers on functional magnetic resonance imaging data to decode control processes associated with stimulus-conflict (Stroop task) and ideomotor-conflict (Simon task). Quantifying the proportion of domain-specific searchlights (capable of decoding only one type of conflict) and domain-general searchlights (capable of decoding both conflict types) in each subject, we found both domain-specific and domain-general searchlights, though the former were more common. When mapping anatomical loci of these searchlights across subjects, neural substrates of stimulus- and ideomotor-specific conflict-control were found to be anatomically consistent across subjects, whereas the substrates of domain-general conflict-control were not. Overall, these findings suggest a hybrid neural architecture of conflict-control that entails both modular (domain specific) and global (domain general) components. © The Author 2013. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  3. Distinct neural patterns enable grasp types decoding in monkey dorsal premotor cortex

    NASA Astrophysics Data System (ADS)

    Hao, Yaoyao; Zhang, Qiaosheng; Controzzi, Marco; Cipriani, Christian; Li, Yue; Li, Juncheng; Zhang, Shaomin; Wang, Yiwen; Chen, Weidong; Chiara Carrozza, Maria; Zheng, Xiaoxiang

    2014-12-01

    Objective. Recent studies have shown that dorsal premotor cortex (PMd), a cortical area in the dorsomedial grasp pathway, is involved in grasp movements. However, the neural ensemble firing property of PMd during grasp movements and the extent to which it can be used for grasp decoding are still unclear. Approach. To address these issues, we used multielectrode arrays to record both spike and local field potential (LFP) signals in PMd in macaque monkeys performing reaching and grasping of one of four differently shaped objects. Main results. Single and population neuronal activity showed distinct patterns during execution of different grip types. Cluster analysis of neural ensemble signals indicated that the grasp-related patterns emerged soon (200-300 ms) after the go cue signal, and faded away during the hold period. The timing and duration of the patterns varied depending on the behavior of the individual monkey. Applying a support vector machine model to the stable activity patterns revealed classification accuracies of 94% and 89% for the two monkeys, indicating a robust, decodable grasp pattern encoded in the PMd. Grasp decoding using LFPs, especially the high-frequency bands, also produced high decoding accuracies. Significance. This study is the first to specify the neuronal population encoding of grasp during the time course of grasp. We demonstrate high grasp decoding performance in PMd. These findings, combined with previous evidence of reach-related modulation, suggest that PMd may play an important role in generation and maintenance of grasp action and may be a suitable locus for brain-machine interface applications.

  4. Neural Decoding and "Inner" Psychophysics: A Distance-to-Bound Approach for Linking Mind, Brain, and Behavior.

    PubMed

    Ritchie, J Brendan; Carlson, Thomas A

    2016-01-01

    A fundamental challenge for cognitive neuroscience is characterizing how the primitives of psychological theory are neurally implemented. Attempts to meet this challenge are a manifestation of what Fechner called "inner" psychophysics: the theory of the precise mapping between mental quantities and the brain. In his own time, inner psychophysics remained an unrealized ambition for Fechner. We suggest that, today, multivariate pattern analysis (MVPA), or neural "decoding," methods provide a promising starting point for developing an inner psychophysics. A cornerstone of these methods is the simple linear classifier applied to neural activity in high-dimensional activation spaces. We describe an approach to inner psychophysics based on the shared architecture of linear classifiers and observers under decision boundary models such as signal detection theory. Under this approach, distance from a decision boundary through activation space, as estimated by linear classifiers, can be used to predict reaction time in accordance with signal detection theory, and distance-to-bound models of reaction time. Our "neural distance-to-bound" approach is potentially quite general, and simple to implement. Furthermore, our recent work on visual object recognition suggests it is empirically viable. We believe the approach constitutes an important step along the path to an inner psychophysics that links mind, brain, and behavior.
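
    A minimal sketch of the neural distance-to-bound idea is given below: train a linear classifier on activation patterns and relate each trial's distance from the decision boundary to reaction time. Both the patterns and the reaction-time model are synthetic stand-ins under assumed parameters.

```python
# Sketch of the "neural distance-to-bound" idea: train a linear classifier on
# activation patterns, then use each trial's distance from the decision
# boundary as a predictor of reaction time. Data and the RT model are synthetic
# stand-ins, illustrative assumptions only.
import numpy as np
from scipy.stats import pearsonr
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
n_trials, n_features = 400, 100

labels = rng.integers(0, 2, size=n_trials)
patterns = rng.normal(size=(n_trials, n_features)) + 0.5 * labels[:, None]

clf = LinearSVC(C=1.0, dual=False).fit(patterns, labels)
distance = np.abs(clf.decision_function(patterns))   # distance from the boundary

# assume RT shrinks with distance from the boundary, plus trial-to-trial noise
reaction_time = 600.0 - 40.0 * distance + rng.normal(scale=30.0, size=n_trials)

r, p = pearsonr(distance, reaction_time)
print(f"correlation between decision-boundary distance and RT: r = {r:.2f}")
```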

  5. Neuroimaging of decoding and language comprehension in young very low birth weight (VLBW) adolescents: Indications for compensatory mechanisms.

    PubMed

    van Ettinger-Veenstra, Helene; Widén, Carin; Engström, Maria; Karlsson, Thomas; Leijon, Ingemar; Nelson, Nina

    2017-01-01

    In preterm children with very low birth weight (VLBW ≤ 1500 g), reading problems are often observed. Reading comprehension is dependent on word decoding and language comprehension. We investigated neural activation related to components of reading comprehension, within brain regions important for reading, in young VLBW adolescents in direct comparison to normal birth weight (NBW) term-born peers, using functional magnetic resonance imaging (fMRI). We hypothesized that decoding mechanisms would be affected by VLBW and expected to see increased neural activity in the VLBW group, possibly modulated by task performance and cognitive ability. The study investigated 13 young adolescents (ages 12 to 14 years) born preterm with VLBW (11 included in fMRI) and 13 NBW controls (ages 12-14 years) for performance on the Block Design and Vocabulary subtests of the Wechsler Intelligence Scale for Children, and for semantic, orthographic, and phonological processing during an fMRI paradigm. The VLBW group showed increased phonological activation in left inferior frontal gyrus, decreased orthographic activation in right supramarginal gyrus, and decreased semantic activation in left inferior frontal gyrus. Block Design was related to altered right-hemispheric activation, and VLBW showed lower WISC Block Design scores. Left angular gyrus showed activation increase specific for VLBW with high accuracy on the semantic test. Young VLBW adolescents showed no differences in accuracy or reaction time on our fMRI language tasks, but they did exhibit altered neural activation during these tasks. This altered activation for VLBW was observed as increased activation during phonological decoding, and as mainly decreased activation during orthographic and semantic processing. Correlations of neural activation with accuracy on the semantic fMRI task and with decreased WISC Block Design performance were specific for the VLBW group. Together, results suggest compensatory mechanisms by recruiting additional brain regions upon altered neural development of decoding for VLBW.

  6. A Gaussian mixture model based adaptive classifier for fNIRS brain-computer interfaces and its testing via simulation

    NASA Astrophysics Data System (ADS)

    Li, Zheng; Jiang, Yi-han; Duan, Lian; Zhu, Chao-zhe

    2017-08-01

    Objective. Functional near infra-red spectroscopy (fNIRS) is a promising brain imaging technology for brain-computer interfaces (BCI). Future clinical uses of fNIRS will likely require operation over long time spans, during which neural activation patterns may change. However, current decoders for fNIRS signals are not designed to handle changing activation patterns. The objective of this study is to test via simulations a new adaptive decoder for fNIRS signals, the Gaussian mixture model adaptive classifier (GMMAC). Approach. GMMAC can simultaneously classify and track activation pattern changes without the need for ground-truth labels. This adaptive classifier uses computationally efficient variational Bayesian inference to label new data points and update mixture model parameters, using the previous model parameters as priors. We test GMMAC in simulations in which neural activation patterns change over time and compare to static decoders and unsupervised adaptive linear discriminant analysis classifiers. Main results. Our simulation experiments show GMMAC can accurately decode under time-varying activation patterns: shifts of activation region, expansions of activation region, and combined contractions and shifts of activation region. Furthermore, the experiments show the proposed method can track the changing shape of the activation region. Compared to prior work, GMMAC performed significantly better than the other unsupervised adaptive classifiers on a difficult activation pattern change simulation: 99% versus  <54% in two-choice classification accuracy. Significance. We believe GMMAC will be useful for clinical fNIRS-based brain-computer interfaces, including neurofeedback training systems, where operation over long time spans is required.
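
    The adaptive idea behind GMMAC can be sketched with scikit-learn's GaussianMixture used as a stand-in for the variational Bayesian model in the paper: fit a two-component mixture on calibration data, then keep refitting it (warm-started) on new unlabeled blocks so the component means track a drifting activation pattern. The drift size, feature dimensions, and the component-to-class mapping are illustrative assumptions.

```python
# Sketch of an adaptive mixture-model classifier in the spirit of GMMAC, using
# scikit-learn's GaussianMixture as a stand-in for the paper's variational
# Bayesian model. Drift size and feature dimensions are illustrative assumptions.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
n_per_class, n_features = 100, 8

def make_block(shift):
    """Two-class fNIRS-like feature block whose class means drift by `shift`."""
    a = rng.normal(loc=0.0 + shift, scale=1.0, size=(n_per_class, n_features))
    b = rng.normal(loc=2.0 + shift, scale=1.0, size=(n_per_class, n_features))
    return np.vstack([a, b]), np.repeat([0, 1], n_per_class)

X0, y0 = make_block(shift=0.0)
gmm = GaussianMixture(n_components=2, warm_start=True,
                      means_init=np.vstack([X0[y0 == 0].mean(0),
                                            X0[y0 == 1].mean(0)]))
gmm.fit(X0)                           # calibration on labeled data

for block, shift in enumerate([0.5, 1.0, 1.5]):
    X, y = make_block(shift)          # unlabeled at decoding time; y used only to score
    predicted = gmm.predict(X)        # component index doubles as the class label
    print(f"block {block}: accuracy {np.mean(predicted == y):.2%}")
    gmm.fit(X)                        # warm-started refit tracks the drift
```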

  7. Real-Time Neural Signals Decoding onto Off-the-Shelf DSP Processors for Neuroprosthetic Applications.

    PubMed

    Pani, Danilo; Barabino, Gianluca; Citi, Luca; Meloni, Paolo; Raspopovic, Stanisa; Micera, Silvestro; Raffo, Luigi

    2016-09-01

    The control of upper limb neuroprostheses through the peripheral nervous system (PNS) can allow the restoration of motor functions in amputees. At present, the important aspect of the real-time implementation of neural decoding algorithms on embedded systems has often been overlooked, notwithstanding the impact that limited hardware resources have on the efficiency/effectiveness of any given algorithm. The present study addresses the optimization of a template-matching-based algorithm for PNS signal decoding, a milestone toward its full real-time implementation on a floating-point digital signal processor (DSP). The proposed optimized real-time algorithm achieves up to 96% correct classification on real PNS signals acquired through LIFE electrodes on animals, and can correctly sort spikes of a synthetic cortical dataset with sufficiently uncorrelated spike morphologies (93% average correct classification), comparable to the results obtained with a top spike sorter (94% on average on the same dataset). The power consumption enables more than 24 h of processing at maximum load, and a latency model has been derived to enable a fair performance assessment. The final embodiment demonstrates real-time performance on a low-power, off-the-shelf DSP, opening the way to experiments exploiting the efferent signals to control a motor neuroprosthesis.
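
    The core template-matching operation of such a decoder can be sketched as assigning each detected waveform to the nearest template, as below; the synthetic templates, noise level, and Euclidean distance criterion are illustrative assumptions rather than the optimized DSP implementation.

```python
# Sketch of template-matching classification of spike snippets, the core
# operation of the decoder described above: each detected waveform is assigned
# to the template with the smallest Euclidean distance. Templates and waveforms
# are synthetic stand-ins.
import numpy as np

rng = np.random.default_rng(0)
snippet_len = 40
t = np.arange(snippet_len)

# three synthetic spike templates with different amplitudes and widths
templates = np.stack([amp * np.exp(-0.5 * ((t - 12.0) / width) ** 2)
                      for amp, width in [(1.0, 2.0), (0.7, 4.0), (1.3, 3.0)]])

def classify(snippet, templates):
    """Return the index of the closest template by Euclidean distance."""
    distances = np.linalg.norm(templates - snippet, axis=1)
    return int(np.argmin(distances)), distances

# simulate noisy occurrences of template 1 and check they are recovered
correct = 0
for _ in range(200):
    snippet = templates[1] + rng.normal(scale=0.1, size=snippet_len)
    label, _ = classify(snippet, templates)
    correct += (label == 1)
print(f"correctly classified {correct}/200 noisy spikes from unit 1")
```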

  8. Restoration of fMRI Decodability Does Not Imply Latent Working Memory States

    PubMed Central

    Schneegans, Sebastian; Bays, Paul M.

    2018-01-01

    Recent imaging studies have challenged the prevailing view that working memory is mediated by sustained neural activity. Using machine learning methods to reconstruct memory content, these studies found that previously diminished representations can be restored by retrospective cueing or other forms of stimulation. These findings have been interpreted as evidence for an activity-silent working memory state that can be reactivated dependent on task demands. Here, we test the validity of this conclusion by formulating a neural process model of working memory based on sustained activity and using this model to emulate a spatial recall task with retrocueing. The simulation reproduces both behavioral and fMRI results previously taken as evidence for latent states, in particular the restoration of spatial reconstruction quality following an informative cue. Our results demonstrate that recovery of the decodability of an imaging signal does not provide compelling evidence for an activity-silent working memory state. PMID:28820674

  9. Focal versus distributed temporal cortex activity for speech sound category assignment

    PubMed Central

    Bouton, Sophie; Chambon, Valérian; Tyrand, Rémi; Seeck, Margitta; Karkar, Sami; van de Ville, Dimitri; Giraud, Anne-Lise

    2018-01-01

    Percepts and words can be decoded from distributed neural activity measures. However, the existence of widespread representations might conflict with the more classical notions of hierarchical processing and efficient coding, which are especially relevant in speech processing. Using fMRI and magnetoencephalography during syllable identification, we show that sensory and decisional activity colocalize to a restricted part of the posterior superior temporal gyrus (pSTG). Next, using intracortical recordings, we demonstrate that early and focal neural activity in this region distinguishes correct from incorrect decisions and can be machine-decoded to classify syllables. Crucially, significant machine decoding was possible from neuronal activity sampled across different regions of the temporal and frontal lobes, despite weak or absent sensory or decision-related responses. These findings show that speech-sound categorization relies on an efficient readout of focal pSTG neural activity, while more distributed activity patterns, although classifiable by machine learning, instead reflect collateral processes of sensory perception and decision. PMID:29363598

  10. Error-based analysis of optimal tuning functions explains phenomena observed in sensory neurons.

    PubMed

    Yaeli, Steve; Meir, Ron

    2010-01-01

    Biological systems display impressive capabilities in effectively responding to environmental signals in real time. There is increasing evidence that organisms may indeed be employing near optimal Bayesian calculations in their decision-making. An intriguing question relates to the properties of optimal encoding methods, namely determining the properties of neural populations in sensory layers that optimize performance, subject to physiological constraints. Within an ecological theory of neural encoding/decoding, we show that optimal Bayesian performance requires neural adaptation which reflects environmental changes. Specifically, we predict that neuronal tuning functions possess an optimal width, which increases with prior uncertainty and environmental noise, and decreases with the decoding time window. Furthermore, even for static stimuli, we demonstrate that dynamic sensory tuning functions, acting at relatively short time scales, lead to improved performance. Interestingly, the narrowing of tuning functions as a function of time was recently observed in several biological systems. Such results set the stage for a functional theory which may explain the high reliability of sensory systems, and the utility of neuronal adaptation occurring at multiple time scales.
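
    As a rough illustration of the trade-off described in this abstract, the toy simulation below (purely illustrative: Gaussian tuning curves, Poisson spiking, and a maximum-likelihood decoder are assumptions, not the authors' model) measures decoding error as a function of tuning width for a fixed decoding window. In this toy setting, shortening the window shifts the error-minimizing width upward, in the direction the abstract predicts.

        import numpy as np

        rng = np.random.default_rng(1)
        centers = np.linspace(-1, 1, 20)        # preferred stimuli of 20 model neurons
        stim_grid = np.linspace(-1, 1, 201)     # candidate stimulus values for the decoder
        r_max, window = 50.0, 0.1               # peak firing rate (Hz) and decoding window (s)

        def rmse_for_width(sigma, n_trials=2000):
            errs = []
            for _ in range(n_trials):
                s = rng.uniform(-1, 1)
                rates = r_max * np.exp(-0.5 * ((centers - s) / sigma) ** 2)
                counts = rng.poisson(rates * window)
                # Poisson log-likelihood of the observed counts for every candidate stimulus
                lam = r_max * window * np.exp(-0.5 * ((centers[None, :] - stim_grid[:, None]) / sigma) ** 2)
                loglik = (counts * np.log(lam + 1e-12) - lam).sum(axis=1)
                s_hat = stim_grid[np.argmax(loglik)]    # maximum-likelihood estimate
                errs.append((s_hat - s) ** 2)
            return np.sqrt(np.mean(errs))

        for sigma in (0.05, 0.1, 0.2, 0.4, 0.8):
            print(f"tuning width {sigma:.2f} -> decoding RMSE {rmse_for_width(sigma):.3f}")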

  11. Cross-cultural reading the mind in the eyes: an fMRI investigation.

    PubMed

    Adams, Reginald B; Rule, Nicholas O; Franklin, Robert G; Wang, Elsie; Stevenson, Michael T; Yoshikawa, Sakiko; Nomura, Mitsue; Sato, Wataru; Kveraga, Kestutis; Ambady, Nalini

    2010-01-01

    The ability to infer others' thoughts, intentions, and feelings is regarded as uniquely human. Over the last few decades, this remarkable ability has captivated the attention of philosophers, primatologists, clinical and developmental psychologists, anthropologists, social psychologists, and cognitive neuroscientists. Most would agree that the capacity to reason about others' mental states is innately prepared, essential for successful human social interaction. Whether this ability is culturally tuned, however, remains entirely uncharted on both the behavioral and neural levels. Here we provide the first behavioral and neural evidence for an intracultural advantage (better performance for same- vs. other-culture) in mental state decoding in a sample of native Japanese and white American participants. We examined the neural correlates of this intracultural advantage using fMRI, revealing greater bilateral posterior superior temporal sulci recruitment during same- versus other-culture mental state decoding in both cultural groups. These findings offer preliminary support for cultural consistency in the neurological architecture subserving high-level mental state reasoning, as well as its differential recruitment based on cultural group membership.

  12. Model-based choices involve prospective neural activity

    PubMed Central

    Doll, Bradley B.; Duncan, Katherine D.; Simon, Dylan A.; Shohamy, Daphna; Daw, Nathaniel D.

    2015-01-01

    Decisions may arise via “model-free” repetition of previously reinforced actions, or by “model-based” evaluation, which is widely thought to follow from prospective anticipation of action consequences using a learned map or model. While choices and neural correlates of decision variables sometimes reflect knowledge of their consequences, it remains unclear whether this actually arises from prospective evaluation. Using functional MRI and a sequential reward-learning task in which paths contained decodable object categories, we found that humans’ model-based choices were associated with neural signatures of future paths observed at decision time, suggesting a prospective mechanism for choice. Prospection also covaried with the degree of model-based influences on neural correlates of decision variables, and was inversely related to prediction error signals thought to underlie model-free learning. These results dissociate separate mechanisms underlying model-based and model-free evaluation and support the hypothesis that model-based influences on choices and neural decision variables result from prospection. PMID:25799041

  13. A four-dimensional virtual hand brain-machine interface using active dimension selection

    NASA Astrophysics Data System (ADS)

    Rouse, Adam G.

    2016-06-01

    Objective. Brain-machine interfaces (BMI) traditionally rely on a fixed, linear transformation from neural signals to an output state-space. In this study, the assumption that a BMI must control a fixed, orthogonal basis set was challenged and a novel active dimension selection (ADS) decoder was explored. Approach. ADS utilizes a two stage decoder by using neural signals to both (i) select an active dimension being controlled and (ii) control the velocity along the selected dimension. ADS decoding was tested in a monkey using 16 single units from premotor and primary motor cortex to successfully control a virtual hand avatar to move to eight different postures. Main results. Following training with the ADS decoder to control 2, 3, and then 4 dimensions, each emulating a grasp shape of the hand, performance reached 93% correct with a bit rate of 2.4 bits/s for eight targets. Selection of eight targets using ADS control was more efficient, as measured by bit rate, than either full four-dimensional control or computer assisted one-dimensional control. Significance. ADS decoding allows a user to quickly and efficiently select different hand postures. This novel decoding scheme represents a potential method to reduce the complexity of high-dimension BMI control of the hand.
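
    For context, the reported information rate is consistent with the standard Wolpaw bit-rate formula (an assumption here; the study may compute bit rate differently): with N = 8 targets and P = 0.93 accuracy, each selection carries roughly 2.4 bits, so about one selection per second yields the reported 2.4 bits/s.

        import math

        def wolpaw_bits_per_selection(n_targets, accuracy):
            """Wolpaw information transfer rate per selection, in bits."""
            p, n = accuracy, n_targets
            return math.log2(n) + p * math.log2(p) + (1 - p) * math.log2((1 - p) / (n - 1))

        print(wolpaw_bits_per_selection(8, 0.93))   # about 2.44 bits per selection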

  14. A four-dimensional virtual hand brain-machine interface using active dimension selection

    PubMed Central

    Rouse, Adam G.

    2018-01-01

    Objective: Brain-machine interfaces (BMI) traditionally rely on a fixed, linear transformation from neural signals to an output state-space. In this study, the assumption that a BMI must control a fixed, orthogonal basis set was challenged and a novel active dimension selection (ADS) decoder was explored. Approach: ADS utilizes a two stage decoder by using neural signals to both i) select an active dimension being controlled and ii) control the velocity along the selected dimension. ADS decoding was tested in a monkey using 16 single units from premotor and primary motor cortex to successfully control a virtual hand avatar to move to eight different postures. Main Results: Following training with the ADS decoder to control 2, 3, and then 4 dimensions, each emulating a grasp shape of the hand, performance reached 93% correct with a bit rate of 2.4 bits/s for eight targets. Selection of eight targets using ADS control was more efficient, as measured by bit rate, than either full four-dimensional control or computer assisted one-dimensional control. Significance: ADS decoding allows a user to quickly and efficiently select different hand postures. This novel decoding scheme represents a potential method to reduce the complexity of high-dimension BMI control of the hand. PMID:27171896

  15. Decoding with limited neural data: a mixture of time-warped trajectory models for directional reaches.

    PubMed

    Corbett, Elaine A; Perreault, Eric J; Körding, Konrad P

    2012-06-01

    Neuroprosthetic devices promise to allow paralyzed patients to perform the necessary functions of everyday life. However, to allow patients to use such tools, it is necessary to decode their intent from neural signals such as electromyograms (EMGs). Because these signals are noisy, state-of-the-art decoders integrate information over time. One systematic way of doing this is by taking into account the natural evolution of the state of the body, using a so-called trajectory model. Here we use two insights about movements to enhance our trajectory model: (1) at any given time, there is a small set of likely movement targets, potentially identified by gaze; (2) reaches are produced at varying speeds. We decoded natural reaching movements using EMGs of muscles that might be available from an individual with spinal cord injury. Target estimates found from tracking eye movements were incorporated into the trajectory model, while a mixture model accounted for the inherent uncertainty in these estimates. Warping the trajectory model in time using a continuous estimate of the reach speed enabled accurate decoding of faster reaches. We found that the choice of richer trajectory models, such as those incorporating target or speed, improves decoding, particularly when only a small number of EMGs is available.

  16. Neuro-fuzzy decoding of sensory information from ensembles of simultaneously recorded dorsal root ganglion neurons for functional electrical stimulation applications

    NASA Astrophysics Data System (ADS)

    Rigosa, J.; Weber, D. J.; Prochazka, A.; Stein, R. B.; Micera, S.

    2011-08-01

    Functional electrical stimulation (FES) is used to improve motor function after injury to the central nervous system. Some FES systems use artificial sensors to switch between finite control states. To optimize FES control of the complex behavior of the musculo-skeletal system in activities of daily life, it is highly desirable to implement feedback control. In theory, sensory neural signals could provide the required control signals. Recent studies have demonstrated the feasibility of deriving limb-state estimates from the firing rates of primary afferent neurons recorded in dorsal root ganglia (DRG). These studies used multiple linear regression (MLR) methods to generate estimates of limb position and velocity based on a weighted sum of firing rates in an ensemble of simultaneously recorded DRG neurons. The aim of this study was to test whether the use of a neuro-fuzzy (NF) algorithm (the generalized dynamic fuzzy neural networks (GD-FNN)) could improve the performance, robustness and ability to generalize from training to test sets compared to the MLR technique. NF and MLR decoding methods were applied to ensemble DRG recordings obtained during passive and active limb movements in anesthetized and freely moving cats. The GD-FNN model provided more accurate estimates of limb state and generalized better to novel movement patterns. Future efforts will focus on implementing these neural recording and decoding methods in real time to provide closed-loop control of FES using the information extracted from sensory neurons.
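
    The MLR baseline referred to here amounts to ordinary least squares from binned firing rates to limb state. The sketch below is a generic illustration on synthetic data (array names, sizes, and the two-dimensional output are made up; it is not the GD-FNN model).

        import numpy as np

        rng = np.random.default_rng(2)
        n_bins, n_neurons = 500, 30
        rates = rng.poisson(5.0, size=(n_bins, n_neurons)).astype(float)      # binned DRG firing rates
        true_w = rng.standard_normal((n_neurons, 2))
        limb_state = rates @ true_w + 0.5 * rng.standard_normal((n_bins, 2))  # e.g. limb position and velocity

        # Multiple linear regression: a weighted sum of firing rates plus an offset term
        X = np.hstack([rates, np.ones((n_bins, 1))])
        W, *_ = np.linalg.lstsq(X, limb_state, rcond=None)
        estimate = X @ W

        r = [np.corrcoef(estimate[:, j], limb_state[:, j])[0, 1] for j in range(2)]
        print("correlation per decoded variable:", np.round(r, 3))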

  17. Different spatio-temporal electroencephalography features drive the successful decoding of binaural and monaural cues for sound localization.

    PubMed

    Bednar, Adam; Boland, Francis M; Lalor, Edmund C

    2017-03-01

    The human ability to localize sound is essential for monitoring our environment and helps us to analyse complex auditory scenes. Although the acoustic cues mediating sound localization have been established, it remains unknown how these cues are represented in human cortex. In particular, it is still a point of contention whether binaural and monaural cues are processed by the same or distinct cortical networks. In this study, participants listened to a sequence of auditory stimuli from different spatial locations while we recorded their neural activity using electroencephalography (EEG). The stimuli were presented over a loudspeaker array, which allowed us to deliver realistic, free-field stimuli in both the horizontal and vertical planes. Using a multivariate classification approach, we showed that it is possible to decode sound source location from scalp-recorded EEG. Robust and consistent decoding was shown for stimuli that provide binaural cues (i.e. Left vs. Right stimuli). Decoding location when only monaural cues were available (i.e. Front vs. Rear and elevational stimuli) was successful for a subset of subjects and showed less consistency. Notably, the spatio-temporal pattern of EEG features that facilitated decoding differed based on the availability of binaural and monaural cues. In particular, we identified neural processing of binaural cues at around 120 ms post-stimulus and found that monaural cues are processed later between 150 and 200 ms. Furthermore, different spatial activation patterns emerged for binaural and monaural cue processing. These spatio-temporal dissimilarities suggest the involvement of separate cortical mechanisms in monaural and binaural acoustic cue processing. © 2017 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.

  18. Linear methods for reducing EMG contamination in peripheral nerve motor decodes.

    PubMed

    Kagan, Zachary B; Wendelken, Suzanne; Page, David M; Davis, Tyler; Hutchinson, Douglas T; Clark, Gregory A; Warren, David J

    2016-08-01

    Signals recorded from the peripheral nervous system (PNS) with high-channel-count penetrating microelectrode arrays, such as the Utah Slanted Electrode Array (USEA), often have electromyographic (EMG) signals contaminating the neural signal. This common-mode signal source may prevent single neural units from successfully being detected, thus hindering motor decode algorithms. Reducing this EMG contamination may lead to more accurate motor decode performance. A virtual reference (VR), created by a weighted linear combination of signals from a subset of all available channels, can be used to reduce this EMG contamination. Four methods of determining individual channel weights and six different methods of selecting subsets of channels were investigated (24 different VR types in total). The methods of determining individual channel weights were equal weighting, regression-based weighting, and two different proximity-based weightings. The subsets of channels were selected by a radius-based criterion, such that a channel was included if it was within a particular radius of inclusion from the target channel. These six radii of inclusion were 1.5, 2.9, 3.2, 5, 8.4, and 12.8 electrode-distances; the 12.8 electrode radius includes all USEA electrodes. We found that application of a VR improves the detectability of neural events by increasing the SNR, but we found no statistically meaningful difference amongst the VR types we examined. The computational complexity of implementation varies with respect to the method of determining channel weights and the number of channels in a subset, but does not correlate with VR performance. Hence, we examined the computational costs of calculating and applying the VR and, based on these criteria, we recommend an equal weighting method of assigning weights with a 3.2 electrode-distance radius of inclusion. Further, we found empirically that application of the recommended VR requires less than 1 ms for 33.3 ms of data from one USEA.
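
    The recommended configuration can be written down directly: for each target electrode, average the raw signals from all electrodes within a 3.2 electrode-distance radius with equal weights, and subtract that average from the target channel. The sketch below is an illustrative reconstruction of the idea, assuming a 10 x 10 USEA grid layout and synthetic signal arrays.

        import numpy as np

        def equal_weight_virtual_reference(signals, positions, target, radius=3.2):
            """Subtract an equal-weighted average of nearby channels from the target channel.

            signals   : (n_channels, n_samples) raw voltage traces
            positions : (n_channels, 2) electrode coordinates in electrode-distance units
            target    : index of the channel to clean
            """
            d = np.linalg.norm(positions - positions[target], axis=1)
            neighbors = np.where((d <= radius) & (np.arange(len(d)) != target))[0]
            vr = signals[neighbors].mean(axis=0)        # the virtual reference
            return signals[target] - vr

        # Toy example: a 10 x 10 grid with a shared EMG-like contaminant on every channel
        rng = np.random.default_rng(3)
        positions = np.array([(i, j) for i in range(10) for j in range(10)], dtype=float)
        common_emg = rng.standard_normal(3000)
        signals = 2.0 * common_emg + 0.3 * rng.standard_normal((100, 3000))
        cleaned = equal_weight_virtual_reference(signals, positions, target=55)
        print(round(float(np.std(signals[55])), 2), "->", round(float(np.std(cleaned)), 2))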

  19. Offline decoding of end-point forces using neural ensembles: application to a brain-machine interface.

    PubMed

    Gupta, Rahul; Ashe, James

    2009-06-01

    Brain-machine interfaces (BMIs) hold a lot of promise for restoring some level of motor function to patients with neuronal disease or injury. Current BMI approaches fall into two broad categories--those that decode discrete properties of limb movement (such as movement direction and movement intent) and those that decode continuous variables (such as position and velocity). However, to enable the prosthetic devices to be useful for common everyday tasks, precise control of the forces applied by the end-point of the prosthesis (e.g., the hand) is also essential. Here, we used linear regression and Kalman filter methods to show that neural activity recorded from the motor cortex of the monkey during movements in a force field can be used to decode the end-point forces applied by the subject successfully and with high fidelity. Furthermore, the models exhibit some generalization to novel task conditions. We also demonstrate how the simultaneous prediction of kinematics and kinetics can be easily achieved using the same framework, without any degradation in decoding quality. Our results represent a useful extension of the current BMI technology, making dynamic control of a prosthetic device a distinct possibility in the near future.
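
    A Kalman-filter decoder of the kind described here treats end-point force as a latent state with linear dynamics and the recorded firing rates as noisy linear observations of that state. The sketch below runs a textbook Kalman filter on synthetic data; the matrices and noise levels are placeholders, not parameters from the study.

        import numpy as np

        rng = np.random.default_rng(4)
        T, n_units = 400, 25
        A = 0.98 * np.eye(2)                         # force dynamics (x and y components)
        W = 0.05 * np.eye(2)                         # process noise covariance
        H = rng.standard_normal((n_units, 2))        # linear tuning of each unit to force
        Q = np.eye(n_units)                          # observation (firing-rate) noise covariance

        # Simulate a force trajectory and the neural observations it generates
        x = np.zeros((T, 2))
        for t in range(1, T):
            x[t] = A @ x[t - 1] + rng.multivariate_normal(np.zeros(2), W)
        z = x @ H.T + rng.multivariate_normal(np.zeros(n_units), Q, size=T)

        # Standard Kalman filter recursion: predict, then update with each neural sample
        x_hat, P, decoded = np.zeros(2), np.eye(2), []
        for t in range(T):
            x_pred = A @ x_hat
            P_pred = A @ P @ A.T + W
            S = H @ P_pred @ H.T + Q                 # innovation covariance
            K = P_pred @ H.T @ np.linalg.inv(S)      # Kalman gain
            x_hat = x_pred + K @ (z[t] - H @ x_pred)
            P = (np.eye(2) - K @ H) @ P_pred
            decoded.append(x_hat.copy())
        decoded = np.array(decoded)
        print("correlation, decoded vs. true x-force:", np.corrcoef(decoded[:, 0], x[:, 0])[0, 1])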

  20. Within-Category Decoding of Information in Different Attentional States in Short-Term Memory.

    PubMed

    LaRocque, Joshua J; Riggall, Adam C; Emrich, Stephen M; Postle, Bradley R

    2017-10-01

    A long-standing assumption of cognitive neuroscience has been that working memory (WM) is accomplished by sustained, elevated neural activity. More recently, theories of WM have expanded this view by describing different attentional states in WM with differing activation levels. Several studies have used multivariate pattern analysis (MVPA) of functional magnetic resonance imaging (fMRI) and electroencephalography (EEG) data to study neural activity corresponding to these WM states. Intriguingly, no evidence was found for active neural representations for information held in WM outside the focus of attention ("unattended memory items," UMIs), suggesting that only attended memory items (AMIs) are accompanied by an active trace. However, these results depended on category-level decoding, which lacks sensitivity to neural representations of individual items. Therefore, we employed a WM task in which subjects remembered the directions of motion of two dot arrays, with a retrocue indicating which was relevant for an imminent memory probe (the AMI). This design allowed MVPA decoding of delay-period fMRI signal at the stimulus-item level, affording a more sensitive test of the neural representation of UMIs. Whereas evidence for the AMI was reliably high, evidence for the UMI dropped to baseline, consistent with the notion that different WM attentional states may have qualitatively different mechanisms of retention. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  1. Interfacing to the brain’s motor decisions

    PubMed Central

    2017-01-01

    It has been long known that neural activity, recorded with electrophysiological methods, contains rich information about a subject’s motor intentions, sensory experiences, allocation of attention, action planning, and even abstract thoughts. All these functions have been the subject of neurophysiological investigations, with the goal of understanding how neuronal activity represents behavioral parameters, sensory inputs, and cognitive functions. The field of brain-machine interfaces (BMIs) strives for a somewhat different goal: it endeavors to extract information from neural modulations to create a communication link between the brain and external devices. Although many remarkable successes have been already achieved in the BMI field, questions remain regarding the possibility of decoding high-order neural representations, such as decision making. Could BMIs be employed to decode the neural representations of decisions underlying goal-directed actions? In this review we lay out a framework that describes the computations underlying goal-directed actions as a multistep process performed by multiple cortical and subcortical areas. We then discuss how BMIs could connect to different decision-making steps and decode the neural processing ongoing before movements are initiated. Such decision-making BMIs could operate as a system with prediction that offers many advantages, such as shorter reaction time, better error processing, and improved unsupervised learning. To present the current state of the art, we review several recent BMIs incorporating decision-making components. PMID:28003406

  2. Decoding word and category-specific spatiotemporal representations from MEG and EEG

    PubMed Central

    Chan, Alexander M.; Halgren, Eric; Marinkovic, Ksenija; Cash, Sydney S.

    2010-01-01

    The organization and localization of lexico-semantic information in the brain have been debated for many years. Specifically, lesion and imaging studies have attempted to map the brain areas representing living versus non-living objects; however, results remain variable. This may be due, in part, to the fact that the univariate statistical mapping analyses used to detect these brain areas are typically insensitive to subtle, but widespread, effects. Decoding techniques, on the other hand, allow for a powerful multivariate analysis of multichannel neural data. In this study, we utilize machine-learning algorithms to first demonstrate that semantic category, as well as individual words, can be decoded from EEG and MEG recordings of subjects performing a language task. Mean accuracies of 76% (chance = 50%) and 83% (chance = 20%) were obtained for the decoding of living vs. non-living category and individual words, respectively. Furthermore, we utilize this decoding analysis to demonstrate that the representations of words and semantic category are highly distributed both spatially and temporally. In particular, bilateral anterior temporal, bilateral inferior frontal, and left inferior temporal-occipital sensors are most important for discrimination. Successful intersubject and intermodality decoding shows that semantic representations between stimulus modalities and individuals are reasonably consistent. These results suggest that both word and category-specific information are present in extracranially recorded neural activity and that these representations may be more distributed, both spatially and temporally, than previous studies suggest. PMID:21040796

  3. Neural Encoding and Decoding with Deep Learning for Dynamic Natural Vision.

    PubMed

    Wen, Haiguang; Shi, Junxing; Zhang, Yizhen; Lu, Kun-Han; Cao, Jiayue; Liu, Zhongming

    2017-10-20

    A convolutional neural network (CNN) driven by image recognition has been shown to explain cortical responses to static pictures in ventral-stream areas. Here, we further showed that such a CNN could reliably predict and decode functional magnetic resonance imaging data from humans watching natural movies, despite its lack of any mechanism to account for temporal dynamics or feedback processing. Using separate data, encoding and decoding models were developed and evaluated for describing the bi-directional relationships between the CNN and the brain. Through the encoding models, the CNN-predicted areas covered not only the ventral stream, but also the dorsal stream, albeit to a lesser degree; single-voxel responses were visualized as the specific pixel patterns that drove them, revealing the distinct representation of individual cortical locations; cortical activation was synthesized from natural images at high throughput to map category representation, contrast, and selectivity. Through the decoding models, fMRI signals were directly decoded to estimate the feature representations in both visual and semantic spaces, for direct visual reconstruction and semantic categorization, respectively. These results corroborate, generalize, and extend previous findings, and highlight the value of using deep learning, as an all-in-one model of the visual cortex, to understand and decode natural vision. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  4. Decoding visual object categories from temporal correlations of ECoG signals.

    PubMed

    Majima, Kei; Matsuo, Takeshi; Kawasaki, Keisuke; Kawai, Kensuke; Saito, Nobuhito; Hasegawa, Isao; Kamitani, Yukiyasu

    2014-04-15

    How visual object categories are represented in the brain is one of the key questions in neuroscience. Studies on low-level visual features have shown that relative timings or phases of neural activity between multiple brain locations encode information. However, whether such temporal patterns of neural activity are used in the representation of visual objects is unknown. Here, we examined whether and how visual object categories could be predicted (or decoded) from temporal patterns of electrocorticographic (ECoG) signals from the temporal cortex in five patients with epilepsy. We used temporal correlations between electrodes as input features, and compared the decoding performance with features defined by spectral power and phase from individual electrodes. While decoding accuracy using power or phase alone was significantly better than chance, correlations alone or combined with power outperformed the other features. Decoding performance with correlations was degraded by shuffling the order of trials of the same category in each electrode, indicating that the relative time series between electrodes in each trial is critical. Analysis using a sliding time window revealed that decoding performance with correlations began to rise earlier than that with power. This earlier increase in performance was replicated by a model using phase differences to encode categories. These results suggest that activity patterns arising from interactions between multiple neuronal units carry additional information on visual object categories. Copyright © 2013 Elsevier Inc. All rights reserved.
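
    The correlation features used here are simply the pairwise correlations between electrodes within a trial, flattened into a feature vector. The sketch below is a generic illustration on synthetic two-category data with a standard linear classifier (scikit-learn's LogisticRegression); it is not the authors' pipeline.

        import numpy as np
        from sklearn.linear_model import LogisticRegression

        def correlation_features(trial):
            """Upper-triangular entries of the channel-by-channel correlation matrix of one trial."""
            c = np.corrcoef(trial)                    # trial: (n_channels, n_samples)
            iu = np.triu_indices_from(c, k=1)
            return c[iu]

        # Synthetic trials: the two categories differ only in inter-electrode correlation structure
        rng = np.random.default_rng(5)
        n_trials, n_ch, n_samp = 200, 8, 300
        X, y = [], []
        for i in range(n_trials):
            label = i % 2
            shared = rng.standard_normal(n_samp)
            mix = 0.8 if label else 0.2               # how strongly channels share a common signal
            trial = mix * shared + np.sqrt(1 - mix ** 2) * rng.standard_normal((n_ch, n_samp))
            X.append(correlation_features(trial))
            y.append(label)
        X, y = np.array(X), np.array(y)

        clf = LogisticRegression(max_iter=1000).fit(X[:150], y[:150])
        print("held-out accuracy:", clf.score(X[150:], y[150:]))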

  5. Linear feature projection-based real-time decoding of limb state from dorsal root ganglion recordings.

    PubMed

    Han, Sungmin; Chu, Jun-Uk; Park, Jong Woong; Youn, Inchan

    2018-05-15

    Proprioceptive afferent activities recorded by a multichannel microelectrode have been used to decode limb movements to provide sensory feedback signals for closed-loop control in a functional electrical stimulation (FES) system. However, analyzing the high dimensionality of neural activity is one of the major challenges in real-time applications. This paper proposes a linear feature projection method for the real-time decoding of ankle and knee joint angles. Single-unit activity was extracted as a feature vector from proprioceptive afferent signals that were recorded from the L7 dorsal root ganglion during passive movements of ankle and knee joints. The dimensionality of this feature vector was then reduced using a linear feature projection composed of projection pursuit and negentropy maximization (PP/NEM). Finally, a time-delayed Kalman filter was used to estimate the ankle and knee joint angles. The PP/NEM approach had a better decoding performance than did other feature projection methods, and all processes were completed within the real-time constraints. These results suggested that the proposed method could be a useful decoding method to provide real-time feedback signals in closed-loop FES systems.

  6. Decoding Task and Stimulus Representations in Face-responsive Cortex

    PubMed Central

    Kliemann, Dorit; Jacoby, Nir; Anzellotti, Stefano; Saxe, Rebecca R.

    2017-01-01

    Faces provide rich social information about others’ stable traits (e.g., age) and fleeting states of mind (e.g., emotional expression). While some of these facial aspects may be processed automatically, observers can also deliberately attend to some features while ignoring others. It remains unclear how internal goals (e.g., task context) influence the representational geometry of variable and stable facial aspects in face-responsive cortex. We investigated neural response patterns related to decoding (i) the intention to attend to a facial aspect before its perception, (ii) the attended aspect of a face, and (iii) stimulus properties. We measured neural responses while subjects watched videos of dynamic positive and negative expressions, and judged the age or the expression’s valence. Split-half multivoxel pattern analyses (MVPA) showed that (i) the intention to attend to a specific aspect of a face can be decoded from left fronto-lateral, but not face-responsive, regions; (ii) during face perception, the attended aspect (age vs. emotion) could be robustly decoded from almost all face-responsive regions; and (iii) a stimulus property (valence) was represented in right posterior superior temporal sulcus and medial prefrontal cortices. The effect of deliberately shifting the focus of attention on these representations suggests a powerful influence of top-down signals on the cortical representation of social information, varying across cortical regions, likely reflecting neural flexibility to optimally integrate internal goals and dynamic perceptual input. PMID:27978778

  7. Emergence of a Stable Cortical Map for Neuroprosthetic Control

    PubMed Central

    Ganguly, Karunesh; Carmena, Jose M.

    2009-01-01

    Cortical control of neuroprosthetic devices is known to require neuronal adaptations. It remains unclear whether a stable cortical representation for prosthetic function can be stored and recalled in a manner that mimics our natural recall of motor skills. Especially in light of the mixed evidence for a stationary neuron-behavior relationship in cortical motor areas, understanding this relationship during long-term neuroprosthetic control can elucidate principles of neural plasticity as well as improve prosthetic function. Here, we paired stable recordings from ensembles of primary motor cortex neurons in macaque monkeys with a constant decoder that transforms neural activity to prosthetic movements. Proficient control was closely linked to the emergence of a surprisingly stable pattern of ensemble activity, indicating that the motor cortex can consolidate a neural representation for prosthetic control in the presence of a constant decoder. The importance of such a cortical map was evident in that small perturbations to either the size of the neural ensemble or to the decoder could reversibly disrupt function. Moreover, once a cortical map became consolidated, a second map could be learned and stored. Thus, long-term use of a neuroprosthetic device is associated with the formation of a cortical map for prosthetic function that is stable across time, readily recalled, resistant to interference, and resembles a putative memory engram. PMID:19621062

  8. Using an Artificial Neural Bypass to Restore Cortical Control of Rhythmic Movements in a Human with Quadriplegia

    NASA Astrophysics Data System (ADS)

    Sharma, Gaurav; Friedenberg, David A.; Annetta, Nicholas; Glenn, Bradley; Bockbrader, Marcie; Majstorovic, Connor; Domas, Stephanie; Mysiw, W. Jerry; Rezai, Ali; Bouton, Chad

    2016-09-01

    Neuroprosthetic technology has been used to restore cortical control of discrete (non-rhythmic) hand movements in a paralyzed person. However, cortical control of rhythmic movements, which originate in the brain but are coordinated by Central Pattern Generator (CPG) neural networks in the spinal cord, has not been demonstrated previously. Here we demonstrate an artificial neural bypass technology that decodes cortical activity and emulates spinal cord CPG function, allowing volitional rhythmic hand movement. The technology uses a combination of signals recorded from the brain, machine-learning algorithms to decode the signals, a numerical model of a CPG network, and a neuromuscular electrical stimulation system to evoke rhythmic movements. Using the neural bypass, a quadriplegic participant was able to initiate, sustain, and switch between rhythmic and discrete finger movements, using his thoughts alone. These results have implications for advancing neuroprosthetic technology to restore complex movements in people living with paralysis.

  9. Decoding the content of recollection within the core recollection network and beyond.

    PubMed

    Thakral, Preston P; Wang, Tracy H; Rugg, Michael D

    2017-06-01

    Recollection - retrieval of qualitative information about a past event - is associated with enhanced neural activity in a consistent set of neural regions (the 'core recollection network') seemingly regardless of the nature of the recollected content. Here, we employed multi-voxel pattern analysis (MVPA) to assess whether retrieval-related functional magnetic resonance imaging (fMRI) activity in core recollection regions - including the hippocampus, angular gyrus, medial prefrontal cortex, retrosplenial/posterior cingulate cortex, and middle temporal gyrus - contain information about studied content and thus demonstrate retrieval-related 'reinstatement' effects. During study, participants viewed objects and concrete words that were subjected to different encoding tasks. Test items included studied words, the names of studied objects, or unstudied words. Participants judged whether the items were recollected, familiar, or new by making 'remember', 'know', and 'new' responses, respectively. The study history of remembered test items could be reliably decoded using MVPA in most regions, as well as from the dorsolateral prefrontal cortex, a region where univariate recollection effects could not be detected. The findings add to evidence that members of the core recollection network, as well as at least one neural region where mean signal is insensitive to recollection success, carry information about recollected content. Importantly, the study history of recognized items endorsed with a 'know' response could be decoded with equal accuracy. The results thus demonstrate a striking dissociation between mean signal and multi-voxel indices of recollection. Moreover, they converge with prior findings in suggesting that, as it is operationalized by classification-based MVPA, reinstatement is not uniquely a signature of recollection. Copyright © 2016 Elsevier Ltd. All rights reserved.

  10. Revealing hidden states in visual working memory using electroencephalography

    PubMed Central

    Wolff, Michael J.; Ding, Jacqueline; Myers, Nicholas E.; Stokes, Mark G.

    2015-01-01

    It is often assumed that information in visual working memory (vWM) is maintained via persistent activity. However, recent evidence indicates that information in vWM could be maintained in an effectively “activity-silent” neural state. Silent vWM is consistent with recent cognitive and neural models, but poses an important experimental problem: how can we study these silent states using conventional measures of brain activity? We propose a novel approach that is analogous to echolocation: using a high-contrast visual stimulus, it may be possible to drive brain activity during vWM maintenance and measure the vWM-dependent impulse response. We recorded electroencephalography (EEG) while participants performed a vWM task in which a randomly oriented grating was remembered. Crucially, a high-contrast, task-irrelevant stimulus was shown in the maintenance period in half of the trials. The electrophysiological response from posterior channels was used to decode the orientations of the gratings. While orientations could be decoded during and shortly after stimulus presentation, decoding accuracy dropped back close to baseline in the delay. However, the visual evoked response from the task-irrelevant stimulus resulted in a clear re-emergence in decodability. This result provides important proof-of-concept for a promising and relatively simple approach to decode “activity-silent” vWM content using non-invasive EEG. PMID:26388748

  11. "Decoding versus comprehension": Brain responses underlying reading comprehension in children with autism.

    PubMed

    Bednarz, Haley M; Maximo, Jose O; Murdaugh, Donna L; O'Kelley, Sarah; Kana, Rajesh K

    2017-06-01

    Despite intact decoding ability, deficits in reading comprehension are relatively common in children with autism spectrum disorders (ASD). However, few neuroimaging studies have tested the neural bases of this specific profile of reading deficit in ASD. This fMRI study examined activation and synchronization of the brain's reading network in children with ASD with specific reading comprehension deficits during a word similarities task. Thirteen typically developing children and 18 children with ASD performed the task in the MRI scanner. No statistically significant group differences in functional activation were observed; however, children with ASD showed decreased functional connectivity between the left inferior frontal gyrus (LIFG) and the left inferior occipital gyrus (LIOG). In addition, reading comprehension ability significantly positively predicted functional connectivity between the LIFG and left thalamus (LTHAL) among all subjects. The results of this study provide evidence for altered recruitment of reading-related neural resources in ASD children and suggest specific weaknesses in top-down modulation of semantic processing. Copyright © 2017 Elsevier Inc. All rights reserved.

  12. A thesaurus for a neural population code

    PubMed Central

    Ganmor, Elad; Segev, Ronen; Schneidman, Elad

    2015-01-01

    Information is carried in the brain by the joint spiking patterns of large groups of noisy, unreliable neurons. This noise limits the capacity of the neural code and determines how information can be transmitted and read-out. To accurately decode, the brain must overcome this noise and identify which patterns are semantically similar. We use models of network encoding noise to learn a thesaurus for populations of neurons in the vertebrate retina responding to artificial and natural videos, measuring the similarity between population responses to visual stimuli based on the information they carry. This thesaurus reveals that the code is organized in clusters of synonymous activity patterns that are similar in meaning but may differ considerably in their structure. This organization is highly reminiscent of the design of engineered codes. We suggest that the brain may use this structure and show how it allows accurate decoding of novel stimuli from novel spiking patterns. DOI: http://dx.doi.org/10.7554/eLife.06134.001 PMID:26347983

  13. Classical and neural methods of image sequence interpolation

    NASA Astrophysics Data System (ADS)

    Skoneczny, Slawomir; Szostakowski, Jaroslaw

    2001-08-01

    The image interpolation problem is encountered in many areas. Examples include interpolation in the coding/decoding process for transmission purposes, reconstruction of a full frame from two interlaced sub-frames in conventional TV or HDTV, and reconstruction of missing frames in old, damaged cinematic sequences. This paper presents an overview of interframe interpolation methods. Both direct and motion-compensated interpolation techniques are illustrated with examples. The methods may be either classical or based on neural networks, depending on the demands of the specific interpolation problem.
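
    The simplest "direct" interframe interpolation mentioned here is a pixelwise blend of the two neighbouring frames; motion-compensated schemes instead shift pixels along estimated motion vectors before blending. A toy sketch of the direct case (synthetic frames, illustrative only):

        import numpy as np

        def interpolate_frame(frame_a, frame_b, alpha=0.5):
            """Direct temporal interpolation: pixelwise blend of two neighbouring frames.

            alpha is the temporal position of the new frame between frame_a (0.0) and frame_b (1.0).
            """
            return (1.0 - alpha) * frame_a + alpha * frame_b

        # Toy usage with two synthetic grayscale frames
        rng = np.random.default_rng(6)
        frame_a = rng.random((64, 64))
        frame_b = rng.random((64, 64))
        mid = interpolate_frame(frame_a, frame_b)     # the frame halfway between the two
        print(mid.shape, float(mid.min()), float(mid.max()))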

  14. Noise-robust speech recognition through auditory feature detection and spike sequence decoding.

    PubMed

    Schafer, Phillip B; Jin, Dezhe Z

    2014-03-01

    Speech recognition in noisy conditions is a major challenge for computer systems, but the human brain performs it routinely and accurately. Automatic speech recognition (ASR) systems that are inspired by neuroscience can potentially bridge the performance gap between humans and machines. We present a system for noise-robust isolated word recognition that works by decoding sequences of spikes from a population of simulated auditory feature-detecting neurons. Each neuron is trained to respond selectively to a brief spectrotemporal pattern, or feature, drawn from the simulated auditory nerve response to speech. The neural population conveys the time-dependent structure of a sound by its sequence of spikes. We compare two methods for decoding the spike sequences--one using a hidden Markov model-based recognizer, the other using a novel template-based recognition scheme. In the latter case, words are recognized by comparing their spike sequences to template sequences obtained from clean training data, using a similarity measure based on the length of the longest common sub-sequence. Using isolated spoken digits from the AURORA-2 database, we show that our combined system outperforms a state-of-the-art robust speech recognizer at low signal-to-noise ratios. Both the spike-based encoding scheme and the template-based decoding offer gains in noise robustness over traditional speech recognition methods. Our system highlights potential advantages of spike-based acoustic coding and provides a biologically motivated framework for robust ASR development.
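
    The template-based recognizer's similarity measure is based on the length of the longest common subsequence (LCS) of two spike sequences. A standard dynamic-programming sketch is shown below; the letter alphabet and the normalization by the longer sequence are illustrative assumptions, not necessarily the paper's exact choices.

        def lcs_length(seq_a, seq_b):
            """Length of the longest common subsequence of two sequences of spike labels."""
            m, n = len(seq_a), len(seq_b)
            dp = [[0] * (n + 1) for _ in range(m + 1)]
            for i in range(1, m + 1):
                for j in range(1, n + 1):
                    if seq_a[i - 1] == seq_b[j - 1]:
                        dp[i][j] = dp[i - 1][j - 1] + 1
                    else:
                        dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
            return dp[m][n]

        def similarity(seq_a, seq_b):
            """LCS length normalized by the longer sequence, so identical sequences score 1.0."""
            return lcs_length(seq_a, seq_b) / max(len(seq_a), len(seq_b), 1)

        # Each letter stands for a spike from one feature-detecting neuron
        print(similarity("ABCDBE", "ABDBE"))   # 5 / 6, roughly 0.83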

  15. Neural signal processing and closed-loop control algorithm design for an implanted neural recording and stimulation system.

    PubMed

    Hamilton, Lei; McConley, Marc; Angermueller, Kai; Goldberg, David; Corba, Massimiliano; Kim, Louis; Moran, James; Parks, Philip D; Sang Chin; Widge, Alik S; Dougherty, Darin D; Eskandar, Emad N

    2015-08-01

    A fully autonomous intracranial device is built to continually record neural activities in different parts of the brain, process these sampled signals, decode features that correlate with behaviors and neuropsychiatric states, and use these features to deliver brain stimulation in a closed-loop fashion. In this paper, we describe the sampling and stimulation aspects of such a device. We first describe the signal processing algorithms of two unsupervised spike sorting methods. Next, we describe the LFP time-frequency analysis and the derivation of features from the two spike sorting methods. The first spike sorting approach is a novel dictionary learning algorithm constructed in a Compressed Sensing (CS) framework, with a joint prediction scheme to determine the class of neural spikes; the second approach is a modified OSort algorithm implemented in a distributed system optimized for power efficiency. Furthermore, sorted spikes and time-frequency analysis of LFP signals can be used to generate derived features (including cross-frequency coupling and spike-field coupling). We then show how these derived features can be used in the design and development of novel decode and closed-loop control algorithms that are optimized to apply deep brain stimulation based on a patient's neuropsychiatric state. For the control algorithm, we define the state vector as representative of a patient's impulsivity, avoidance, inhibition, etc. Controller parameters are optimized to apply stimulation based on the state vector's current state as well as its historical values. The overall algorithm and software design for our implantable neural recording and stimulation system uses an innovative, adaptable, and reprogrammable architecture that enables advancement of the state of the art in closed-loop neural control. It also meets the challenges of system power constraints and of concurrent development with ongoing scientific research aimed at defining brain network connectivity and neural network dynamics that vary at the individual patient level and over time.

  16. nSTAT: Open-Source Neural Spike Train Analysis Toolbox for Matlab

    PubMed Central

    Cajigas, I.; Malik, W.Q.; Brown, E.N.

    2012-01-01

    Over the last decade there has been a tremendous advance in the analytical tools available to neuroscientists to understand and model neural function. In particular, the point-process generalized linear model (PP-GLM) framework has been applied successfully to problems ranging from neuro-endocrine physiology to neural decoding. However, the lack of freely distributed software implementations of published PP-GLM algorithms, together with the problem-specific modifications required for their use, limits wide application of these techniques. In an effort to make existing PP-GLM methods more accessible to the neuroscience community, we have developed nSTAT – an open source neural spike train analysis toolbox for Matlab®. By adopting an Object-Oriented Programming (OOP) approach, nSTAT allows users to easily manipulate data by performing operations on objects that have an intuitive connection to the experiment (spike trains, covariates, etc.), rather than by dealing with data in vector/matrix form. The algorithms implemented within nSTAT address a number of common problems, including computation of peri-stimulus time histograms, quantification of the temporal response properties of neurons, and characterization of neural plasticity within and across trials. nSTAT provides a starting point for exploratory data analysis, allows for simple and systematic building and testing of point process models, and supports decoding of stimulus variables based on point process models of neural function. By providing an open-source toolbox, we hope to establish a platform that can be easily used, modified, and extended by the scientific community to address limitations of current techniques and to extend available techniques to more complex problems. PMID:22981419
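
    nSTAT itself is a Matlab toolbox, but the core PP-GLM idea it implements is easy to sketch in Python: binned spike counts are regressed on covariates with a Poisson GLM and a log link. The example below uses statsmodels on synthetic data; the covariate, rates, and parameter values are all made up for illustration.

        import numpy as np
        import statsmodels.api as sm

        # Synthetic experiment: spiking driven by one stimulus covariate plus a baseline rate
        rng = np.random.default_rng(7)
        n_bins = 2000
        stimulus = rng.standard_normal(n_bins)
        true_rate = np.exp(-2.0 + 0.8 * stimulus)     # conditional intensity per time bin
        spike_counts = rng.poisson(true_rate)

        # Point-process (Poisson) GLM with a log link: the log-rate is linear in the covariates
        X = sm.add_constant(stimulus)
        fit = sm.GLM(spike_counts, X, family=sm.families.Poisson()).fit()
        print(fit.params)                             # should recover roughly [-2.0, 0.8]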

  17. How Does a Child Solve 7 + 8? Decoding Brain Activity Patterns Associated with Counting and Retrieval Strategies

    ERIC Educational Resources Information Center

    Cho, Soohyun; Ryali, Srikanth; Geary, David C.; Menon, Vinod

    2011-01-01

    Cognitive development and learning are characterized by diminished reliance on effortful procedures and increased use of memory-based problem solving. Here we identify the neural correlates of this strategy shift in 7-9-year-old children at an important developmental period for arithmetic skill acquisition. Univariate and multivariate approaches…

  18. Parametric models to relate spike train and LFP dynamics with neural information processing.

    PubMed

    Banerjee, Arpan; Dean, Heather L; Pesaran, Bijan

    2012-01-01

    Spike trains and local field potentials (LFPs) resulting from extracellular current flows provide a substrate for neural information processing. Understanding the neural code from simultaneous spike-field recordings and subsequent decoding of information processing events will have widespread applications. One way to demonstrate an understanding of the neural code, with particular advantages for the development of applications, is to formulate a parametric statistical model of neural activity and its covariates. Here, we propose a set of parametric spike-field models (unified models) that can be used with existing decoding algorithms to reveal the timing of task or stimulus specific processing. Our proposed unified modeling framework captures the effects of two important features of information processing: time-varying stimulus-driven inputs and ongoing background activity that occurs even in the absence of environmental inputs. We have applied this framework for decoding neural latencies in simulated and experimentally recorded spike-field sessions obtained from the lateral intraparietal area (LIP) of awake, behaving monkeys performing cued look-and-reach movements to spatial targets. Using both simulated and experimental data, we find that estimates of trial-by-trial parameters are not significantly affected by the presence of ongoing background activity. However, including background activity in the unified model improves goodness of fit for predicting individual spiking events. Uncovering the relationship between the model parameters and the timing of movements offers new ways to test hypotheses about the relationship between neural activity and behavior. We obtained significant spike-field onset time correlations from single trials using a previously published data set where significantly strong correlation was only obtained through trial averaging. We also found that unified models extracted a stronger relationship between neural response latency and trial-by-trial behavioral performance than existing models of neural information processing. Our results highlight the utility of the unified modeling framework for characterizing spike-LFP recordings obtained during behavioral performance.

  19. Metastable neural dynamics mediates expectation

    NASA Astrophysics Data System (ADS)

    Mazzucato, Luca; La Camera, Giancarlo; Fontanini, Alfredo

    Sensory stimuli are processed faster when their presentation is expected compared to when they come as a surprise. We previously showed that, in multiple single-unit recordings from alert rat gustatory cortex, taste stimuli can be decoded faster from neural activity if preceded by a stimulus-predicting cue. However, the specific computational process mediating this anticipatory neural activity is unknown. Here, we propose a biologically plausible model based on a recurrent network of spiking neurons with clustered architecture. In the absence of stimulation, the model neural activity unfolds through sequences of metastable states, each state being a population vector of firing rates. We modeled taste stimuli and cue (the same for all stimuli) as two inputs targeting subsets of excitatory neurons. As observed in experiment, stimuli evoked specific state sequences, characterized in terms of 'coding states', i.e., states occurring significantly more often for a particular stimulus. When stimulus presentation is preceded by a cue, coding states show a faster and more reliable onset, and expected stimuli can be decoded more quickly than unexpected ones. This anticipatory effect is unrelated to changes of firing rates in stimulus-selective neurons and is absent in homogeneous balanced networks, suggesting that a clustered organization is necessary to mediate the expectation of relevant events. Our results demonstrate a novel mechanism for speeding up sensory coding in cortical circuits. NIDCD K25-DC013557 (LM); NIDCD R01-DC010389 (AF); NSF IIS-1161852 (GL).

  20. θ-Band and β-Band Neural Activity Reflects Independent Syllable Tracking and Comprehension of Time-Compressed Speech.

    PubMed

    Pefkou, Maria; Arnal, Luc H; Fontolan, Lorenzo; Giraud, Anne-Lise

    2017-08-16

    Recent psychophysics data suggest that speech perception is not limited by the capacity of the auditory system to encode fast acoustic variations through neural γ activity, but rather by the time given to the brain to decode them. Whether the decoding process is bounded by the capacity of θ rhythm to follow syllabic rhythms in speech, or constrained by a more endogenous top-down mechanism, e.g., involving β activity, is unknown. We addressed the dynamics of auditory decoding in speech comprehension by challenging syllable tracking and speech decoding using comprehensible and incomprehensible time-compressed auditory sentences. We recorded EEGs in human participants and found that neural activity in both θ and γ ranges was sensitive to syllabic rate. Phase patterns of slow neural activity consistently followed the syllabic rate (4-14 Hz), even when this rate went beyond the classical θ range (4-8 Hz). The power of θ activity increased linearly with syllabic rate but showed no sensitivity to comprehension. Conversely, the power of β (14-21 Hz) activity was insensitive to the syllabic rate, yet reflected comprehension on a single-trial basis. We found different long-range dynamics for θ and β activity, with β activity building up in time while more contextual information becomes available. This is consistent with the roles of θ and β activity in stimulus-driven versus endogenous mechanisms. These data show that speech comprehension is constrained by concurrent stimulus-driven θ and low-γ activity, and by endogenous β activity, but not primarily by the capacity of θ activity to track the syllabic rhythm. SIGNIFICANCE STATEMENT Speech comprehension partly depends on the ability of the auditory cortex to track syllable boundaries with θ-range neural oscillations. The reason comprehension drops when speech is accelerated could hence be because θ oscillations can no longer follow the syllabic rate. Here, we presented subjects with comprehensible and incomprehensible accelerated speech, and show that neural phase patterns in the θ band consistently reflect the syllabic rate, even when speech becomes too fast to be intelligible. The drop in comprehension, however, is signaled by a significant decrease in the power of low-β oscillations (14-21 Hz). These data suggest that speech comprehension is not limited by the capacity of θ oscillations to adapt to syllabic rate, but by an endogenous decoding process. Copyright © 2017 the authors 0270-6474/17/377930-09$15.00/0.

  1. Distributed patterns of activity in sensory cortex reflect the precision of multiple items maintained in visual short-term memory.

    PubMed

    Emrich, Stephen M; Riggall, Adam C; Larocque, Joshua J; Postle, Bradley R

    2013-04-10

    Traditionally, load sensitivity of sustained, elevated activity has been taken as an index of storage for a limited number of items in visual short-term memory (VSTM). Recently, studies have demonstrated that the contents of a single item held in VSTM can be decoded from early visual cortex, despite the fact that these areas do not exhibit elevated, sustained activity. It is unknown, however, whether the patterns of neural activity decoded from sensory cortex change as a function of load, as one would expect from a region storing multiple representations. Here, we use multivoxel pattern analysis to examine the neural representations of VSTM in humans across multiple memory loads. In an important extension of previous findings, our results demonstrate that the contents of VSTM can be decoded from areas that exhibit a transient response to visual stimuli, but not from regions that exhibit elevated, sustained load-sensitive delay-period activity. Moreover, the neural information present in these transiently activated areas decreases significantly with increasing load, indicating load sensitivity of the patterns of activity that support VSTM maintenance. Importantly, the decrease in classification performance as a function of load is correlated with within-subject changes in mnemonic resolution. These findings indicate that distributed patterns of neural activity in putatively sensory visual cortex support the representation and precision of information in VSTM.

  2. Non-causal spike filtering improves decoding of movement intention for intracortical BCIs

    PubMed Central

    Masse, Nicolas Y.; Jarosiewicz, Beata; Simeral, John D.; Bacher, Daniel; Stavisky, Sergey D.; Cash, Sydney S.; Oakley, Erin M.; Berhanu, Etsub; Eskandar, Emad; Friehs, Gerhard; Hochberg, Leigh R.; Donoghue, John P.

    2014-01-01

    Background: Multiple types of neural signals are available for controlling assistive devices through brain-computer interfaces (BCIs). Intracortically-recorded spiking neural signals are attractive for BCIs because they can in principle provide greater fidelity of encoded information compared to electrocorticographic (ECoG) signals and electroencephalograms (EEGs). Recent reports show that the information content of these spiking neural signals can be reliably extracted simply by causally band-pass filtering the recorded extracellular voltage signals and then applying a spike detection threshold, without relying on “sorting” action potentials. New method: We show that replacing the causal filter with an equivalent non-causal filter increases the information content extracted from the extracellular spiking signal and improves decoding of intended movement direction. This method can be used for real-time BCI applications by using a 4 ms lag between recording and filtering neural signals. Results: Across 18 sessions from two people with tetraplegia enrolled in the BrainGate2 pilot clinical trial, we found that threshold crossing events extracted using this non-causal filtering method were significantly more informative of each participant’s intended cursor kinematics compared to threshold crossing events derived from causally filtered signals. This new method decreased the mean angular error between the intended and decoded cursor direction by 9.7° for participant S3, who was implanted 5.4 years prior to this study, and by 3.5° for participant T2, who was implanted 3 months prior to this study. Conclusions: Non-causally filtering neural signals prior to extracting threshold crossing events may be a simple yet effective way to condition intracortically recorded neural activity for direct control of external devices through BCIs. PMID:25128256
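
    The difference between the two filtering schemes can be reproduced with standard tools: a causal filter (scipy's lfilter) delays and distorts spike waveforms, whereas a zero-phase, non-causal filter (filtfilt) does not, which makes threshold crossings easier to detect. The snippet below is a generic illustration on synthetic data; the sampling rate, band edges, and thresholding rule are assumptions, not the study's exact settings.

        import numpy as np
        from scipy.signal import butter, lfilter, filtfilt

        fs = 30000                                          # sampling rate in Hz
        b, a = butter(4, [250, 7500], btype="bandpass", fs=fs)

        # Synthetic extracellular trace: noise plus a few brief spike-like deflections
        rng = np.random.default_rng(8)
        x = rng.standard_normal(fs)                         # one second of data
        for t in (5000, 12000, 20000):
            x[t:t + 30] -= 6.0 * np.hanning(30)             # negative-going spikes

        causal = lfilter(b, a, x)                           # introduces phase delay
        noncausal = filtfilt(b, a, x)                       # zero-phase (non-causal) filtering

        def threshold_crossings(sig, k=4.5):
            """Indices where the signal first drops below -k times a robust noise estimate."""
            noise = np.median(np.abs(sig)) / 0.6745
            below = sig < -k * noise
            return np.flatnonzero(below[1:] & ~below[:-1]) + 1

        print("causal crossings:    ", threshold_crossings(causal))
        print("non-causal crossings:", threshold_crossings(noncausal))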

  3. Interpretability of Multivariate Brain Maps in Linear Brain Decoding: Definition, and Heuristic Quantification in Multivariate Analysis of MEG Time-Locked Effects.

    PubMed

    Kia, Seyed Mostafa; Vega Pons, Sandro; Weisz, Nathan; Passerini, Andrea

    2016-01-01

    Brain decoding is a popular multivariate approach for hypothesis testing in neuroimaging. Linear classifiers are widely employed in the brain decoding paradigm to discriminate among experimental conditions. Then, the derived linear weights are visualized in the form of multivariate brain maps to further study spatio-temporal patterns of underlying neural activities. It is well known that the brain maps derived from weights of linear classifiers are hard to interpret because of high correlations between predictors, low signal to noise ratios, and the high dimensionality of neuroimaging data. Therefore, improving the interpretability of brain decoding approaches is of primary interest in many neuroimaging studies. Despite extensive studies of this type, at present, there is no formal definition for interpretability of multivariate brain maps. As a consequence, there is no quantitative measure for evaluating the interpretability of different brain decoding methods. In this paper, first, we present a theoretical definition of interpretability in brain decoding; we show that the interpretability of multivariate brain maps can be decomposed into their reproducibility and representativeness. Second, as an application of the proposed definition, we exemplify a heuristic for approximating the interpretability in multivariate analysis of evoked magnetoencephalography (MEG) responses. Third, we propose to combine the approximated interpretability and the generalization performance of the brain decoding into a new multi-objective criterion for model selection. Our results, for the simulated and real MEG data, show that optimizing the hyper-parameters of the regularized linear classifier based on the proposed criterion results in more informative multivariate brain maps. More importantly, the presented definition provides the theoretical background for quantitative evaluation of interpretability, and hence, facilitates the development of more effective brain decoding algorithms in the future.

  4. Interpretability of Multivariate Brain Maps in Linear Brain Decoding: Definition, and Heuristic Quantification in Multivariate Analysis of MEG Time-Locked Effects

    PubMed Central

    Kia, Seyed Mostafa; Vega Pons, Sandro; Weisz, Nathan; Passerini, Andrea

    2017-01-01

    Brain decoding is a popular multivariate approach for hypothesis testing in neuroimaging. Linear classifiers are widely employed in the brain decoding paradigm to discriminate among experimental conditions. Then, the derived linear weights are visualized in the form of multivariate brain maps to further study spatio-temporal patterns of underlying neural activities. It is well known that the brain maps derived from weights of linear classifiers are hard to interpret because of high correlations between predictors, low signal to noise ratios, and the high dimensionality of neuroimaging data. Therefore, improving the interpretability of brain decoding approaches is of primary interest in many neuroimaging studies. Despite extensive studies of this type, at present, there is no formal definition for interpretability of multivariate brain maps. As a consequence, there is no quantitative measure for evaluating the interpretability of different brain decoding methods. In this paper, first, we present a theoretical definition of interpretability in brain decoding; we show that the interpretability of multivariate brain maps can be decomposed into their reproducibility and representativeness. Second, as an application of the proposed definition, we exemplify a heuristic for approximating the interpretability in multivariate analysis of evoked magnetoencephalography (MEG) responses. Third, we propose to combine the approximated interpretability and the generalization performance of the brain decoding into a new multi-objective criterion for model selection. Our results, for the simulated and real MEG data, show that optimizing the hyper-parameters of the regularized linear classifier based on the proposed criterion results in more informative multivariate brain maps. More importantly, the presented definition provides the theoretical background for quantitative evaluation of interpretability, and hence, facilitates the development of more effective brain decoding algorithms in the future. PMID:28167896

  5. Encoding and Decoding Models in Cognitive Electrophysiology

    PubMed Central

    Holdgraf, Christopher R.; Rieger, Jochem W.; Micheli, Cristiano; Martin, Stephanie; Knight, Robert T.; Theunissen, Frederic E.

    2017-01-01

    Cognitive neuroscience has seen rapid growth in the size and complexity of data recorded from the human brain as well as in the computational tools available to analyze this data. This data explosion has resulted in an increased use of multivariate, model-based methods for asking neuroscience questions, allowing scientists to investigate multiple hypotheses with a single dataset, to use complex, time-varying stimuli, and to study the human brain under more naturalistic conditions. These tools come in the form of “Encoding” models, in which stimulus features are used to model brain activity, and “Decoding” models, in which neural features are used to generate a stimulus output. Here we review the current state of encoding and decoding models in cognitive electrophysiology and provide a practical guide toward conducting experiments and analyses in this emerging field. Our examples focus on using linear models in the study of human language and audition. We show how to calculate auditory receptive fields from natural sounds as well as how to decode neural recordings to predict speech. The paper aims to be a useful tutorial to these approaches, and a practical introduction to using machine learning and applied statistics to build models of neural activity. The data analytic approaches we discuss may also be applied to other sensory modalities, motor systems, and cognitive systems, and we cover some examples in these areas. In addition, a collection of Jupyter notebooks is publicly available as a complement to the material covered in this paper, providing code examples and tutorials for predictive modeling in python. The aim is to provide a practical understanding of predictive modeling of human brain data and to propose best practices in conducting these analyses. PMID:29018336
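
    As a schematic complement to this tutorial-style record (whose companion notebooks are in Python), the sketch below shows a minimal linear encoding model under stated assumptions: a synthetic spectrogram is expanded into time-lagged spectro-temporal features, a ridge regression predicts a simulated neural response, and the fitted weights are reshaped into a receptive-field estimate. The lag count, regularization strength, and data are illustrative, not taken from the paper.

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(1)
n_t, n_freq, n_lags = 2000, 16, 10
spec = rng.normal(size=(n_t, n_freq))            # synthetic "spectrogram"

# Build a lagged (spectro-temporal) design matrix: one row per time point.
X = np.zeros((n_t, n_freq * n_lags))
for lag in range(n_lags):
    X[lag:, lag * n_freq:(lag + 1) * n_freq] = spec[:n_t - lag]

# Simulate a response from a sparse "true" receptive field plus noise.
strf_true = rng.normal(size=(n_lags, n_freq)) * (rng.random((n_lags, n_freq)) > 0.8)
y = X @ strf_true.ravel() + rng.normal(scale=2.0, size=n_t)

model = Ridge(alpha=10.0).fit(X[:1500], y[:1500])          # encoding model
strf_est = model.coef_.reshape(n_lags, n_freq)             # receptive-field estimate
print("held-out R^2:", round(model.score(X[1500:], y[1500:]), 3))
```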

  6. Using an Artificial Neural Bypass to Restore Cortical Control of Rhythmic Movements in a Human with Quadriplegia

    PubMed Central

    Sharma, Gaurav; Friedenberg, David A.; Annetta, Nicholas; Glenn, Bradley; Bockbrader, Marcie; Majstorovic, Connor; Domas, Stephanie; Mysiw, W. Jerry; Rezai, Ali; Bouton, Chad

    2016-01-01

    Neuroprosthetic technology has been used to restore cortical control of discrete (non-rhythmic) hand movements in a paralyzed person. However, cortical control of rhythmic movements, which originate in the brain but are coordinated by Central Pattern Generator (CPG) neural networks in the spinal cord, has not been demonstrated previously. Here we demonstrate an artificial neural bypass technology that decodes cortical activity and emulates spinal cord CPG function, allowing volitional rhythmic hand movement. The technology uses a combination of signals recorded from the brain, machine-learning algorithms to decode the signals, a numerical model of a CPG network, and a neuromuscular electrical stimulation system to evoke rhythmic movements. Using the neural bypass, a quadriplegic participant was able to initiate, sustain, and switch between rhythmic and discrete finger movements, using his thoughts alone. These results have implications in advancing neuroprosthetic technology to restore complex movements in people living with paralysis. PMID:27658585

  7. Comparison of Classifiers for Decoding Sensory and Cognitive Information from Prefrontal Neuronal Populations

    PubMed Central

    Astrand, Elaine; Enel, Pierre; Ibos, Guilhem; Dominey, Peter Ford; Baraduc, Pierre; Ben Hamed, Suliann

    2014-01-01

    Decoding neuronal information is important in neuroscience, both as a basic means to understand how neuronal activity is related to cerebral function and as a processing stage in driving neuroprosthetic effectors. Here, we compare the readout performance of six commonly used classifiers at decoding two different variables encoded by the spiking activity of the non-human primate frontal eye fields (FEF): the spatial position of a visual cue, and the instructed orientation of the animal's attention. While the first variable is exogenously driven by the environment, the second variable corresponds to the interpretation of the instruction conveyed by the cue; it is endogenously driven and corresponds to the output of internal cognitive operations performed on the visual attributes of the cue. These two variables were decoded using either a regularized optimal linear estimator in its explicit formulation, an optimal linear artificial neural network estimator, a non-linear artificial neural network estimator, a non-linear naïve Bayesian estimator, a non-linear Reservoir recurrent network classifier, or a non-linear Support Vector Machine classifier. Our results suggest that endogenous information such as the orientation of attention can be decoded from the FEF with the same accuracy as exogenous visual information. Not all classifiers behaved equally in the face of population size and heterogeneity, the number of available training and testing trials, the subject's behavior, and the temporal structure of the variable of interest. In most situations, the regularized optimal linear estimator and the non-linear Support Vector Machine classifiers outperformed the other tested decoders. PMID:24466019
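
    A comparison of this kind reduces, in outline, to cross-validating several decoders on the same trial-by-neuron firing-rate matrix. The sketch below is illustrative only: synthetic data stand in for FEF recordings, and a reduced model set (regularized linear, SVM, naive Bayes, and a small feed-forward network) stands in for the six decoders tested in the paper.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import RidgeClassifier
from sklearn.svm import SVC
from sklearn.naive_bayes import GaussianNB
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(2)
n_trials, n_neurons, n_classes = 300, 60, 8        # e.g. 8 cue positions
y = rng.integers(0, n_classes, n_trials)
tuning = rng.normal(size=(n_classes, n_neurons))   # class-specific mean rates
X = tuning[y] + rng.normal(scale=1.5, size=(n_trials, n_neurons))

decoders = {
    "regularized linear": RidgeClassifier(alpha=1.0),
    "SVM (RBF)":          SVC(kernel="rbf", C=1.0),
    "naive Bayes":        GaussianNB(),
    "MLP":                MLPClassifier(hidden_layer_sizes=(50,), max_iter=2000),
}
for name, clf in decoders.items():
    acc = cross_val_score(make_pipeline(StandardScaler(), clf), X, y, cv=5).mean()
    print(f"{name:20s} accuracy = {acc:.2f}")
```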

  8. A Bidirectional Brain-Machine Interface Algorithm That Approximates Arbitrary Force-Fields

    PubMed Central

    Semprini, Marianna; Mussa-Ivaldi, Ferdinando A.; Panzeri, Stefano

    2014-01-01

    We examine bidirectional brain-machine interfaces that control external devices in a closed loop by decoding motor cortical activity to command the device and by encoding the state of the device by delivering electrical stimuli to sensory areas. Although it is possible to design this artificial sensory-motor interaction while maintaining two independent channels of communication, here we propose a rule that closes the loop between flows of sensory and motor information in a way that approximates a desired dynamical policy expressed as a field of forces acting upon the controlled external device. We previously developed a first implementation of this approach based on linear decoding of neural activity recorded from the motor cortex into a set of forces (a force field) applied to a point mass, and on encoding of position of the point mass into patterns of electrical stimuli delivered to somatosensory areas. However, this previous algorithm had the limitation that it only worked in situations when the position-to-force map to be implemented is invertible. Here we overcome this limitation by developing a new non-linear form of the bidirectional interface that can approximate a virtually unlimited family of continuous fields. The new algorithm bases both the encoding of position information and the decoding of motor cortical activity on an explicit map between spike trains and the state space of the device computed with Multi-Dimensional-Scaling. We present a detailed computational analysis of the performance of the interface and a validation of its robustness by using synthetic neural responses in a simulated sensory-motor loop. PMID:24626393
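
    The core computational ingredient named here, an explicit map between spike trains and the device state space computed with multidimensional scaling, can be sketched generically as follows. This is an assumption-laden illustration, not the authors' algorithm: trials of binned spike counts are compared with a plain Euclidean distance (a dedicated spike-train metric could be substituted), and metric MDS embeds them in a two-dimensional space standing in for the device state space.

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform
from sklearn.manifold import MDS

rng = np.random.default_rng(3)
n_trials, n_units, n_bins = 120, 30, 20
rates = rng.poisson(lam=3.0, size=(n_trials, n_units, n_bins))   # binned spike counts

# Pairwise dissimilarity between trials (Euclidean on flattened count vectors;
# a spike-train metric could be used here instead).
D = squareform(pdist(rates.reshape(n_trials, -1), metric="euclidean"))

# Embed trials into a 2-D "state space" for the controlled device.
embedding = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
state = embedding.fit_transform(D)
print("embedded state-space coordinates:", state.shape)          # (120, 2)
```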

  9. Ratiometric Decoding of Pheromones for a Biomimetic Infochemical Communication System.

    PubMed

    Wei, Guangfen; Thomas, Sanju; Cole, Marina; Rácz, Zoltán; Gardner, Julian W

    2017-10-30

    Biosynthetic infochemical communication is an emerging scientific field employing molecular compounds for information transmission, labelling, and biochemical interfacing, with potential applications in diverse areas ranging from pest management to group coordination of swarming robots. Our communication system comprises a chemoemitter module that encodes information by producing volatile pheromone components and a chemoreceiver module that decodes the transmitted ratiometric information via polymer-coated piezoelectric Surface Acoustic Wave Resonator (SAWR) sensors. The inspiration for such a system is drawn from pheromone-based communication between insects. Ten features are extracted from the SAWR sensor response and analysed using multi-variate classification techniques, i.e., Linear Discriminant Analysis (LDA), Probabilistic Neural Network (PNN), and Multilayer Perceptron Neural Network (MLPNN) methods, and an optimal feature subset is identified. A combination of steady state and transient features of the sensor signals showed superior performance with LDA and MLPNN. Although MLPNN gave excellent results, reaching a 100% recognition rate at 400 s, over all time stations PNN gave the best performance based on an expanded data-set with adjacent neighbours. In this case, 100% of the pheromone mixtures were successfully identified just 200 s after they were first injected into the wind tunnel. We believe that this approach can be used for future chemical communication employing simple mixtures of airborne molecules.
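
    As a schematic of the classification step, the sketch below trains a linear discriminant classifier on a feature matrix and reports cross-validated recognition of mixture classes. The ten features here are random placeholders for the steady-state and transient SAWR features; the class counts and data are assumptions.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(4)
n_ratios, n_reps, n_features = 5, 40, 10            # 5 mixture ratios, 10 sensor features
y = np.repeat(np.arange(n_ratios), n_reps)
centers = rng.normal(size=(n_ratios, n_features))   # ratio-specific feature means
X = centers[y] + rng.normal(scale=0.5, size=(len(y), n_features))

lda = LinearDiscriminantAnalysis()
acc = cross_val_score(lda, X, y, cv=5).mean()
print(f"cross-validated recognition rate: {acc:.2f}")
```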

  10. Ratiometric Decoding of Pheromones for a Biomimetic Infochemical Communication System

    PubMed Central

    Wei, Guangfen; Thomas, Sanju; Cole, Marina; Rácz, Zoltán

    2017-01-01

    Biosynthetic infochemical communication is an emerging scientific field employing molecular compounds for information transmission, labelling, and biochemical interfacing, with potential applications in diverse areas ranging from pest management to group coordination of swarming robots. Our communication system comprises a chemoemitter module that encodes information by producing volatile pheromone components and a chemoreceiver module that decodes the transmitted ratiometric information via polymer-coated piezoelectric Surface Acoustic Wave Resonator (SAWR) sensors. The inspiration for such a system is drawn from pheromone-based communication between insects. Ten features are extracted from the SAWR sensor response and analysed using multi-variate classification techniques, i.e., Linear Discriminant Analysis (LDA), Probabilistic Neural Network (PNN), and Multilayer Perceptron Neural Network (MLPNN) methods, and an optimal feature subset is identified. A combination of steady state and transient features of the sensor signals showed superior performance with LDA and MLPNN. Although MLPNN gave excellent results, reaching a 100% recognition rate at 400 s, over all time stations PNN gave the best performance based on an expanded data-set with adjacent neighbours. In this case, 100% of the pheromone mixtures were successfully identified just 200 s after they were first injected into the wind tunnel. We believe that this approach can be used for future chemical communication employing simple mixtures of airborne molecules. PMID:29084158

  11. Whispering - The hidden side of auditory communication.

    PubMed

    Frühholz, Sascha; Trost, Wiebke; Grandjean, Didier

    2016-11-15

    Whispering is a unique expression mode that is specific to auditory communication. Individuals switch their vocalization mode to whispering especially when affected by inner emotions in certain social contexts, such as in intimate relationships or intimidating social interactions. Although this context-dependent whispering is adaptive, whispered voices are acoustically far less rich than phonated voices and thus impose higher hearing and neural auditory decoding demands on listeners for recognizing their socio-affective value. The neural dynamics underlying this recognition, especially from whispered voices, are largely unknown. Here we show that whispered voices in humans are considerably impoverished as quantified by an entropy measure of spectral acoustic information, and this missing information needs large-scale neural compensation in terms of auditory and cognitive processing. Notably, recognizing the socio-affective information from voices was slightly more difficult from whispered voices, probably owing to the missing tonal information. While phonated voices elicited extended activity in auditory regions for decoding of relevant tonal and time information and the valence of voices, whispered voices elicited activity in a complex auditory-frontal brain network. Our data suggest that a large-scale multidirectional brain network compensates for the impoverished sound quality of socially meaningful environmental signals to support their accurate recognition and valence attribution. Copyright © 2016 Elsevier Inc. All rights reserved.

  12. Feature Selection Methods for Robust Decoding of Finger Movements in a Non-human Primate

    PubMed Central

    Padmanaban, Subash; Baker, Justin; Greger, Bradley

    2018-01-01

    Objective: The performance of machine learning algorithms used for neural decoding of dexterous tasks may be impeded due to problems arising when dealing with high-dimensional data. The objective of feature selection algorithms is to choose a near-optimal subset of features from the original feature space to improve the performance of the decoding algorithm. The aim of our study was to compare the effects of four feature selection techniques, Wilcoxon signed-rank test, Relative Importance, Principal Component Analysis (PCA), and Mutual Information Maximization on SVM classification performance for a dexterous decoding task. Approach: A nonhuman primate (NHP) was trained to perform small coordinated movements, similar to typing. An array of microelectrodes was implanted in the hand area of the motor cortex of the NHP and used to record action potentials (AP) during finger movements. A Support Vector Machine (SVM) was used to classify which finger movement the NHP was making based upon AP firing rates. We used the SVM classification to examine the functional parameters of (i) robustness to simulated failure and (ii) longevity of classification. We also compared the effect of using isolated-neuron and multi-unit firing rates as the feature vector supplied to the SVM. Main results: The average decoding accuracy for multi-unit features and single-unit features using Mutual Information Maximization (MIM) across 47 sessions was 96.74 ± 3.5% and 97.65 ± 3.36%, respectively. The reduction in decoding accuracy between using 100% of the features and 10% of features based on MIM was 45.56% (from 93.7 to 51.09%) and 4.75% (from 95.32 to 90.79%) for multi-unit and single-unit features, respectively. MIM had the best performance compared to the other feature selection methods. Significance: These results suggest improved decoding performance can be achieved by using optimally selected features. The results based on clinically relevant performance metrics also suggest that the decoding algorithm can be made robust by using optimal features and feature selection algorithms. We believe that even a few percent increase in performance is important and improves the decoding accuracy of the machine learning algorithm, potentially increasing the ease of use of a brain-machine interface. PMID:29467602
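
    The feature-ranking step can be sketched in a few lines: rank units by mutual information with the movement label, keep the top fraction, and cross-validate an SVM on the reduced feature set. The sketch below uses synthetic firing rates and arbitrary unit and class counts; it illustrates the general recipe, not the study's data or exact pipeline.

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(5)
n_trials, n_units, n_fingers = 400, 96, 4
y = rng.integers(0, n_fingers, n_trials)
X = rng.poisson(5.0, size=(n_trials, n_units)).astype(float)
informative = rng.normal(scale=3.0, size=(n_fingers, 12))
X[:, :12] += informative[y]                       # first 12 units carry finger info

for frac in (1.0, 0.1):                           # using 100% vs 10% of features
    k = max(1, int(frac * n_units))
    clf = make_pipeline(StandardScaler(),
                        SelectKBest(mutual_info_classif, k=k),
                        SVC(kernel="linear"))
    acc = cross_val_score(clf, X, y, cv=5).mean()
    print(f"top {k:3d} MI-ranked features: accuracy = {acc:.2f}")
```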

  13. Temporal Context in Speech Processing and Attentional Stream Selection: A Behavioral and Neural perspective

    PubMed Central

    Zion Golumbic, Elana M.; Poeppel, David; Schroeder, Charles E.

    2012-01-01

    The human capacity for processing speech is remarkable, especially given that information in speech unfolds over multiple time scales concurrently. Similarly notable is our ability to filter out extraneous sounds and focus our attention on one conversation, epitomized by the ‘Cocktail Party’ effect. Yet, the neural mechanisms underlying on-line speech decoding and attentional stream selection are not well understood. We review findings from behavioral and neurophysiological investigations that underscore the importance of the temporal structure of speech for achieving these perceptual feats. We discuss the hypothesis that entrainment of ambient neuronal oscillations to speech’s temporal structure, across multiple time-scales, serves to facilitate its decoding and underlies the selection of an attended speech stream over other competing input. In this regard, speech decoding and attentional stream selection are examples of ‘active sensing’, emphasizing an interaction between proactive and predictive top-down modulation of neuronal dynamics and bottom-up sensory input. PMID:22285024

  14. Towards the control of individual fingers of a prosthetic hand using surface EMG signals.

    PubMed

    Tenore, Francesco; Ramos, Ander; Fahmy, Amir; Acharya, Soumyadipta; Etienne-Cummings, Ralph; Thakor, Nitish V

    2007-01-01

    The fast pace of development of upper-limb prostheses requires a paradigm shift in EMG-based controls. Traditional control schemes are only capable of providing 2 degrees of freedom, which is insufficient for dexterous control of individual fingers. We present a framework where myoelectric signals from natural hand and finger movements can be decoded with high accuracy. Thirty-two surface-EMG electrodes were placed on the forearm of an able-bodied subject while the subject performed individual finger movements. Using time-domain features as inputs to a neural network classifier, we show that 12 individuated flexion and extension movements of the fingers can be decoded with an accuracy higher than 98%. To our knowledge, this is the first instance in which such movements have been successfully decoded using surface-EMG. These preliminary findings provide a framework that will allow the results to be extended to non-invasive control of the next generation of upper-limb prostheses for amputees.
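
    A minimal version of such a pipeline extracts classic time-domain features per channel and feeds them to a small feed-forward classifier. The sketch below is illustrative: the window length, feature set, channel and class counts, and synthetic EMG are assumptions, not the paper's recordings or exact classifier.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

def td_features(window):
    """Classic EMG time-domain features for one channel window."""
    mav = np.mean(np.abs(window))                          # mean absolute value
    wl = np.sum(np.abs(np.diff(window)))                   # waveform length
    zc = np.sum(np.diff(np.sign(window)) != 0)             # zero crossings
    ssc = np.sum(np.diff(np.sign(np.diff(window))) != 0)   # slope sign changes
    return [mav, wl, zc, ssc]

rng = np.random.default_rng(6)
n_trials, n_channels, n_samples, n_moves = 240, 32, 256, 12
y = rng.integers(0, n_moves, n_trials)
gains = 1.0 + 0.5 * rng.random((n_moves, n_channels))      # movement-specific channel gains
emg = rng.normal(size=(n_trials, n_channels, n_samples)) * gains[y][:, :, None]

# One feature vector per trial: 4 time-domain features for each of 32 channels.
X = np.array([[f for ch in trial for f in td_features(ch)] for trial in emg])

clf = make_pipeline(StandardScaler(),
                    MLPClassifier(hidden_layer_sizes=(64,), max_iter=2000, random_state=0))
print("cross-validated accuracy:", cross_val_score(clf, X, y, cv=5).mean().round(2))
```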

  15. Neural signatures of attention: insights from decoding population activity patterns.

    PubMed

    Sapountzis, Panagiotis; Gregoriou, Georgia G

    2018-01-01

    Understanding brain function and the computations that individual neurons and neuronal ensembles carry out during cognitive functions is one of the biggest challenges in neuroscientific research. To this end, invasive electrophysiological studies have provided important insights by recording the activity of single neurons in behaving animals. To average out noise, responses are typically averaged across repetitions and across neurons that are usually recorded on different days. However, the brain makes decisions on short time scales based on limited exposure to sensory stimulation by interpreting responses of populations of neurons on a moment to moment basis. Recent studies have employed machine-learning algorithms in attention and other cognitive tasks to decode the information content of distributed activity patterns across neuronal ensembles on a single trial basis. Here, we review results from studies that have used pattern-classification decoding approaches to explore the population representation of cognitive functions. These studies have offered significant insights into population coding mechanisms. Moreover, we discuss how such advances can aid the development of cognitive brain-computer interfaces.

  16. Decoding the Nature of Emotion in the Brain.

    PubMed

    Kragel, Philip A; LaBar, Kevin S

    2016-06-01

    A central, unresolved problem in affective neuroscience is understanding how emotions are represented in nervous system activity. After prior localization approaches largely failed, researchers began applying multivariate statistical tools to reconceptualize how emotion constructs might be embedded in large-scale brain networks. Findings from pattern analyses of neuroimaging data show that affective dimensions and emotion categories are uniquely represented in the activity of distributed neural systems that span cortical and subcortical regions. Results from multiple-category decoding studies are incompatible with theories postulating that specific emotions emerge from the neural coding of valence and arousal. This 'new look' into emotion representation promises to improve and reformulate neurobiological models of affect. Copyright © 2016 Elsevier Ltd. All rights reserved.

  17. Decoding the Nature of Emotion in the Brain

    PubMed Central

    Kragel, Philip A.; LaBar, Kevin S.

    2016-01-01

    A central, unresolved problem in affective neuroscience is understanding how emotions are represented in nervous system activity. After prior localization approaches largely failed, researchers began applying multivariate statistical tools to reconceptualize how emotion constructs might be embedded in large-scale brain networks. Findings from pattern analyses of neuroimaging data show that affective dimensions and emotion categories are uniquely represented in the activity of distributed neural systems that span cortical and subcortical regions. Results from multiple-category decoding studies are incompatible with theories postulating that specific emotions emerge from the neural coding of valence and arousal. This ‘new look’ into emotion representation promises to improve and reformulate neurobiological models of affect. PMID:27133227

  18. Decoding the time-course of object recognition in the human brain: From visual features to categorical decisions.

    PubMed

    Contini, Erika W; Wardle, Susan G; Carlson, Thomas A

    2017-10-01

    Visual object recognition is a complex, dynamic process. Multivariate pattern analysis methods, such as decoding, have begun to reveal how the brain processes complex visual information. Recently, temporal decoding methods for EEG and MEG have offered the potential to evaluate the temporal dynamics of object recognition. Here we review the contribution of M/EEG time-series decoding methods to understanding visual object recognition in the human brain. Consistent with the current understanding of the visual processing hierarchy, low-level visual features dominate decodable object representations early in the time-course, with more abstract representations related to object category emerging later. A key finding is that the time-course of object processing is highly dynamic and rapidly evolving, with limited temporal generalisation of decodable information. Several studies have examined the emergence of object category structure, and we consider to what degree category decoding can be explained by sensitivity to low-level visual features. Finally, we evaluate recent work attempting to link human behaviour to the neural time-course of object processing. Copyright © 2017 Elsevier Ltd. All rights reserved.

  19. Statistical coding and decoding of heartbeat intervals.

    PubMed

    Lucena, Fausto; Barros, Allan Kardec; Príncipe, José C; Ohnishi, Noboru

    2011-01-01

    The heart integrates neuroregulatory messages into specific bands of frequency, such that the overall amplitude spectrum of the cardiac output reflects the variations of the autonomic nervous system. This modulatory mechanism seems to be well adjusted to the unpredictability of the cardiac demand, maintaining a proper cardiac regulation. A longstanding theory holds that biological organisms facing an ever-changing environment are likely to evolve adaptive mechanisms to extract essential features in order to adjust their behavior. The key question, however, has been to understand how the neural circuitry self-organizes these feature detectors to select behaviorally relevant information. Previous studies in computational perception suggest that a neural population enhances information that is important for survival by minimizing the statistical redundancy of the stimuli. Herein we investigate whether the cardiac system makes use of a redundancy reduction strategy to regulate the cardiac rhythm. Based on a network of neural filters optimized to code heartbeat intervals, we learn a population code that maximizes the information across the neural ensemble. The emerging population code displays filter tuning properties whose characteristics explain diverse aspects of the autonomic cardiac regulation, such as the compromise between fast and slow cardiac responses. We show that the filters yield responses that are quantitatively similar to observed heart rate responses during direct sympathetic or parasympathetic nerve stimulation. Our findings suggest that the heart decodes autonomic stimuli according to information theory principles analogous to how perceptual cues are encoded by sensory systems.

  20. Cough event classification by pretrained deep neural network.

    PubMed

    Liu, Jia-Ming; You, Mingyu; Wang, Zheng; Li, Guo-Zheng; Xu, Xianghuai; Qiu, Zhongmin

    2015-01-01

    Cough is an essential symptom in respiratory diseases. In the measurement of cough severity, an accurate and objective cough monitor is expected by the respiratory disease community. This paper aims to introduce a better-performing algorithm, the pretrained deep neural network (DNN), to the cough classification problem, which is a key step in the cough monitor. The deep neural network models are built in two steps, pretraining and fine-tuning, followed by a Hidden Markov Model (HMM) decoder to capture temporal information in the audio signals. By unsupervised pretraining of a deep belief network, a good initialization for a deep neural network is learned. The fine-tuning step then uses back-propagation to tune the neural network so that it can predict the observation probability associated with each HMM state, where the HMM states are originally obtained by forced alignment with a Gaussian Mixture Model Hidden Markov Model (GMM-HMM) on the training samples. Three cough HMMs and one noncough HMM are employed to model coughs and noncoughs, respectively. The final decision is based on the Viterbi decoding algorithm, which generates the most likely HMM sequence for each sample. A sample is labeled as cough if a cough HMM is found in the sequence. The experiments were conducted on a dataset collected from 22 patients with respiratory diseases. Patient dependent (PD) and patient independent (PI) experimental settings were used to evaluate the models. Five criteria (sensitivity, specificity, F1, macro average, and micro average) are reported to depict different aspects of the models. On the overall evaluation criteria, the DNN-based methods are superior to the traditional GMM-HMM based method on F1 and micro average, with maximal 14% and 11% error reduction in PD and 7% and 10% in PI, while keeping similar performance on macro average. They also surpass the GMM-HMM model on specificity with maximal 14% error reduction on both PD and PI. In this paper, we applied a pretrained deep neural network to the cough classification problem. Our results showed that, compared with the conventional GMM-HMM framework, the HMM-DNN achieved better overall performance on the cough classification task.
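
    The final decision step, finding the most likely HMM state sequence from frame-wise scores, is standard Viterbi decoding. The sketch below is a generic implementation with toy emission and transition scores; it is not the paper's trained DNN or HMM topology (in a hybrid system the per-frame log scores would come from DNN posteriors scaled by state priors).

```python
import numpy as np

def viterbi(log_emissions, log_trans, log_start):
    """Most likely state path given per-frame log scores.

    log_emissions: (T, S) frame-wise log scores per state (e.g. from a DNN).
    log_trans:     (S, S) log transition probabilities.
    log_start:     (S,)   log initial state probabilities.
    """
    T, S = log_emissions.shape
    delta = np.full((T, S), -np.inf)
    back = np.zeros((T, S), dtype=int)
    delta[0] = log_start + log_emissions[0]
    for t in range(1, T):
        scores = delta[t - 1][:, None] + log_trans       # (prev state, current state)
        back[t] = np.argmax(scores, axis=0)
        delta[t] = scores[back[t], np.arange(S)] + log_emissions[t]
    path = np.zeros(T, dtype=int)
    path[-1] = int(np.argmax(delta[-1]))
    for t in range(T - 2, -1, -1):                       # backtrack the best path
        path[t] = back[t + 1, path[t + 1]]
    return path

# Toy example: 2 states ("cough", "non-cough") over 6 frames.
rng = np.random.default_rng(7)
em = np.log(rng.dirichlet([1, 1], size=6))               # fake per-frame posteriors
tr = np.log(np.array([[0.9, 0.1], [0.2, 0.8]]))
st = np.log(np.array([0.5, 0.5]))
print("decoded state sequence:", viterbi(em, tr, st))
```

    A sample would then be labeled as cough if any cough-state index appears in the decoded path, following the rule described in the abstract.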

  1. Identifying musical pieces from fMRI data using encoding and decoding models.

    PubMed

    Hoefle, Sebastian; Engel, Annerose; Basilio, Rodrigo; Alluri, Vinoo; Toiviainen, Petri; Cagy, Maurício; Moll, Jorge

    2018-02-02

    Encoding models can reveal and decode neural representations in the visual and semantic domains. However, a thorough understanding of how distributed information in auditory cortices and temporal evolution of music contribute to model performance is still lacking in the musical domain. We measured fMRI responses during naturalistic music listening and constructed a two-stage approach that first mapped musical features in auditory cortices and then decoded novel musical pieces. We then probed the influence of stimulus duration (number of time points) and spatial extent (number of voxels) on decoding accuracy. Our approach revealed a linear increase in accuracy with duration and a point of optimal model performance for the spatial extent. We further showed that Shannon entropy is a driving factor, boosting accuracy up to 95% for music with the highest information content. These findings provide key insights for future decoding and reconstruction algorithms and open new avenues for possible clinical applications.
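
    The two-stage logic, fit voxel-wise encoding models on musical features and then identify a held-out piece by matching predicted to observed responses, can be sketched as follows. Everything here is synthetic and assumed (feature and voxel counts, ridge regularization, and a correlation-based matching rule); it illustrates the general approach rather than the paper's models.

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(8)
n_train_t, n_test_t, n_feat, n_vox, n_pieces = 600, 50, 20, 200, 6

F_train = rng.normal(size=(n_train_t, n_feat))             # musical features (training)
W = rng.normal(size=(n_feat, n_vox))                        # "true" feature-to-voxel map
Y_train = F_train @ W + rng.normal(scale=2.0, size=(n_train_t, n_vox))

# Stage 1: map musical features to auditory-cortex voxels.
enc = Ridge(alpha=10.0).fit(F_train, Y_train)

# Stage 2: identify which candidate piece produced an observed response
# by correlating predicted with measured voxel time courses.
pieces = [rng.normal(size=(n_test_t, n_feat)) for _ in range(n_pieces)]
true_idx = 2
Y_obs = pieces[true_idx] @ W + rng.normal(scale=2.0, size=(n_test_t, n_vox))

def match_score(F_candidate):
    Y_pred = enc.predict(F_candidate)
    return np.corrcoef(Y_pred.ravel(), Y_obs.ravel())[0, 1]

scores = [match_score(F) for F in pieces]
print("identified piece:", int(np.argmax(scores)), "(true:", true_idx, ")")
```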

  2. Decoder calibration with ultra small current sample set for intracortical brain-machine interface

    NASA Astrophysics Data System (ADS)

    Zhang, Peng; Ma, Xuan; Chen, Luyao; Zhou, Jin; Wang, Changyong; Li, Wei; He, Jiping

    2018-04-01

    Objective. Intracortical brain-machine interfaces (iBMIs) aim to restore efficient communication and movement ability for paralyzed patients. However, frequent recalibration is required for consistency and reliability, and every recalibration requires a relatively large sample of current data. The aim of this study is to develop an effective decoder calibration method that can achieve good performance while minimizing recalibration time. Approach. Two rhesus macaques implanted with intracortical microelectrode arrays were trained separately on movement and sensory paradigms. Neural signals were recorded to decode reaching positions or grasping postures. A novel principal component analysis-based domain adaptation (PDA) method was proposed to recalibrate the decoder with only an ultra-small current sample set by taking advantage of large historical datasets, and the decoding performance was compared with three other calibration methods for evaluation. Main results. The PDA method closed the gap between historical and current data effectively, and made it possible to exploit large historical datasets for decoder recalibration when decoding current data. Using only an ultra-small current sample set (five trials of each category), the decoder calibrated with the PDA method achieved much better and more robust performance in all sessions than the other three calibration methods in both monkeys. Significance. (1) This study brings transfer learning theory into iBMI decoder calibration for the first time. (2) Unlike most transfer learning studies, the target data in this study were an ultra-small sample set and were transferred to the source data. (3) By taking advantage of historical data, the PDA method was demonstrated to be effective in reducing recalibration time for both the movement paradigm and the sensory paradigm, indicating viable generalization. By reducing the demand for large amounts of current training data, this new method may facilitate the application of intracortical brain-machine interfaces in clinical practice.
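
    The abstract does not spell out the PDA algorithm, so the sketch below only illustrates one generic way a PCA-based adaptation could combine a large historical set with an ultra-small current set: estimate a shared low-dimensional subspace from the pooled data, re-centre the historical trials on the current-session mean in that subspace, and train the decoder on the adapted historical trials together with the few current trials. This is a hypothetical stand-in, not the authors' method.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(9)
n_hist, n_cur, n_units, n_classes = 500, 10, 96, 2         # ultra-small current set

y_hist = rng.integers(0, n_classes, n_hist)
y_cur = np.tile(np.arange(n_classes), n_cur // n_classes)
means = rng.normal(size=(n_classes, n_units))
X_hist = means[y_hist] + rng.normal(scale=1.0, size=(n_hist, n_units))
drift = rng.normal(scale=0.8, size=n_units)                 # day-to-day nonstationarity
X_cur = means[y_cur] + drift + rng.normal(scale=1.0, size=(len(y_cur), n_units))

# Shared low-dimensional subspace from the pooled data.
pca = PCA(n_components=10).fit(np.vstack([X_hist, X_cur]))
Z_hist, Z_cur = pca.transform(X_hist), pca.transform(X_cur)

# Crude adaptation: shift historical data onto the current-session mean.
Z_hist_adapted = Z_hist - Z_hist.mean(0) + Z_cur.mean(0)

decoder = LinearDiscriminantAnalysis().fit(
    np.vstack([Z_hist_adapted, Z_cur]), np.concatenate([y_hist, y_cur]))
print("current-session training accuracy:", decoder.score(Z_cur, y_cur).round(2))
```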

  3. Reprint of “Non-causal spike filtering improves decoding of movement intention for intracortical BCIs”☆

    PubMed Central

    Masse, Nicolas Y.; Jarosiewicz, Beata; Simeral, John D.; Bacher, Daniel; Stavisky, Sergey D.; Cash, Sydney S.; Oakley, Erin M.; Berhanu, Etsub; Eskandar, Emad; Friehs, Gerhard; Hochberg, Leigh R.; Donoghue, John P.

    2015-01-01

    Background Multiple types of neural signals are available for controlling assistive devices through brain–computer interfaces (BCIs). Intracortically recorded spiking neural signals are attractive for BCIs because they can in principle provide greater fidelity of encoded information compared to electrocorticographic (ECoG) signals and electroencephalograms (EEGs). Recent reports show that the information content of these spiking neural signals can be reliably extracted simply by causally band-pass filtering the recorded extracellular voltage signals and then applying a spike detection threshold, without relying on “sorting” action potentials. New method We show that replacing the causal filter with an equivalent non-causal filter increases the information content extracted from the extracellular spiking signal and improves decoding of intended movement direction. This method can be used for real-time BCI applications by using a 4 ms lag between recording and filtering neural signals. Results Across 18 sessions from two people with tetraplegia enrolled in the BrainGate2 pilot clinical trial, we found that threshold crossing events extracted using this non-causal filtering method were significantly more informative of each participant’s intended cursor kinematics compared to threshold crossing events derived from causally filtered signals. This new method decreased the mean angular error between the intended and decoded cursor direction by 9.7° for participant S3, who was implanted 5.4 years prior to this study, and by 3.5° for participant T2, who was implanted 3 months prior to this study. PMID:25681017

  4. Enabling Low-Power, Multi-Modal Neural Interfaces Through a Common, Low-Bandwidth Feature Space.

    PubMed

    Irwin, Zachary T; Thompson, David E; Schroeder, Karen E; Tat, Derek M; Hassani, Ali; Bullard, Autumn J; Woo, Shoshana L; Urbanchek, Melanie G; Sachs, Adam J; Cederna, Paul S; Stacey, William C; Patil, Parag G; Chestek, Cynthia A

    2016-05-01

    Brain-Machine Interfaces (BMIs) have shown great potential for generating prosthetic control signals. Translating BMIs into the clinic requires fully implantable, wireless systems; however, current solutions have high power requirements which limit their usability. Lowering this power consumption typically limits the system to a single neural modality, or signal type, and thus to a relatively small clinical market. Here, we address both of these issues by investigating the use of signal power in a single narrow frequency band as a decoding feature for extracting information from electrocorticographic (ECoG), electromyographic (EMG), and intracortical neural data. We have designed and tested the Multi-modal Implantable Neural Interface (MINI), a wireless recording system which extracts and transmits signal power in a single, configurable frequency band. In prerecorded datasets, we used the MINI to explore low frequency signal features and any resulting tradeoff between power savings and decoding performance losses. When processing intracortical data, the MINI achieved a power consumption 89.7% less than a more typical system designed to extract action potential waveforms. When processing ECoG and EMG data, the MINI achieved similar power reductions of 62.7% and 78.8%. At the same time, using the single signal feature extracted by the MINI, we were able to decode all three modalities with less than a 9% drop in accuracy relative to using high-bandwidth, modality-specific signal features. We believe this system architecture can be used to produce a viable, cost-effective, clinical BMI.
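
    The decoding feature described here, signal power in a single configurable narrow band, has a simple software analogue (the MINI computes it on the implant). The sketch below band-pass filters a synthetic signal and averages squared amplitude in short windows; the band edges, window length, and sampling rate are assumptions.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def narrowband_power(x, fs, f_lo, f_hi, win_s=0.05):
    """Mean squared amplitude of x in [f_lo, f_hi], per non-overlapping window."""
    b, a = butter(4, [f_lo, f_hi], btype="bandpass", fs=fs)
    xf = filtfilt(b, a, x)
    n = int(win_s * fs)
    n_win = len(xf) // n
    return (xf[:n_win * n].reshape(n_win, n) ** 2).mean(axis=1)

fs = 2000
t = np.arange(0, 2.0, 1 / fs)
rng = np.random.default_rng(10)
sig = np.sin(2 * np.pi * 90 * t) * (t > 1.0) + rng.normal(scale=0.5, size=t.size)

feature = narrowband_power(sig, fs, 75, 105)      # e.g. a 75-105 Hz band
half = len(feature) // 2
print("windows:", len(feature),
      "| power rises after 1 s:", feature[half:].mean() > feature[:half].mean())
```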

  5. Adaptive Spike Threshold Enables Robust and Temporally Precise Neuronal Encoding

    PubMed Central

    Resnik, Andrey; Celikel, Tansu; Englitz, Bernhard

    2016-01-01

    Neural processing rests on the intracellular transformation of information as synaptic inputs are translated into action potentials. This transformation is governed by the spike threshold, which depends on the history of the membrane potential on many temporal scales. While the adaptation of the threshold after spiking activity has been addressed before both theoretically and experimentally, it has only recently been demonstrated that the subthreshold membrane state also influences the effective spike threshold. The consequences for neural computation are not well understood yet. We address this question here using neural simulations and whole cell intracellular recordings in combination with information theoretic analysis. We show that an adaptive spike threshold leads to better stimulus discrimination for tight input correlations than would be achieved otherwise, independent from whether the stimulus is encoded in the rate or pattern of action potentials. The time scales of input selectivity are jointly governed by membrane and threshold dynamics. Encoding information using adaptive thresholds further ensures robust information transmission across cortical states i.e. decoding from different states is less state dependent in the adaptive threshold case, if the decoding is performed in reference to the timing of the population response. Results from in vitro neural recordings were consistent with simulations from adaptive threshold neurons. In summary, the adaptive spike threshold reduces information loss during intracellular information transfer, improves stimulus discriminability and ensures robust decoding across membrane states in a regime of highly correlated inputs, similar to those seen in sensory nuclei during the encoding of sensory information. PMID:27304526
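
    A toy integrate-and-fire neuron whose threshold tracks the subthreshold membrane potential illustrates the mechanism under discussion. All parameters below are arbitrary and purely illustrative; this is not the authors' simulation model.

```python
import numpy as np

rng = np.random.default_rng(11)
dt, T = 1e-4, 1.0                        # 0.1 ms steps, 1 s of simulation
steps = int(T / dt)

tau_m, tau_th = 0.02, 0.05               # membrane and threshold time constants (s)
v_rest, th0, coupling = -70.0, -50.0, 0.3

v = np.full(steps, v_rest)
theta = np.full(steps, th0)
spikes = []
I = 25.0 + 8.0 * rng.normal(size=steps)  # noisy input drive (mV-equivalent)

for t in range(1, steps):
    # Leaky integration of the membrane potential.
    v[t] = v[t-1] + dt / tau_m * (v_rest - v[t-1] + I[t])
    # Threshold relaxes toward a value coupled to the subthreshold potential.
    target = th0 + coupling * (v[t-1] - v_rest)
    theta[t] = theta[t-1] + dt / tau_th * (target - theta[t-1])
    if v[t] >= theta[t]:
        spikes.append(t * dt)
        v[t] = v_rest                    # reset after a spike
        theta[t] += 5.0                  # spike-triggered threshold jump

print(f"{len(spikes)} spikes in {T} s; mean threshold {theta.mean():.1f} mV")
```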

  6. Adaptive Spike Threshold Enables Robust and Temporally Precise Neuronal Encoding.

    PubMed

    Huang, Chao; Resnik, Andrey; Celikel, Tansu; Englitz, Bernhard

    2016-06-01

    Neural processing rests on the intracellular transformation of information as synaptic inputs are translated into action potentials. This transformation is governed by the spike threshold, which depends on the history of the membrane potential on many temporal scales. While the adaptation of the threshold after spiking activity has been addressed before both theoretically and experimentally, it has only recently been demonstrated that the subthreshold membrane state also influences the effective spike threshold. The consequences for neural computation are not well understood yet. We address this question here using neural simulations and whole cell intracellular recordings in combination with information theoretic analysis. We show that an adaptive spike threshold leads to better stimulus discrimination for tight input correlations than would be achieved otherwise, independent from whether the stimulus is encoded in the rate or pattern of action potentials. The time scales of input selectivity are jointly governed by membrane and threshold dynamics. Encoding information using adaptive thresholds further ensures robust information transmission across cortical states i.e. decoding from different states is less state dependent in the adaptive threshold case, if the decoding is performed in reference to the timing of the population response. Results from in vitro neural recordings were consistent with simulations from adaptive threshold neurons. In summary, the adaptive spike threshold reduces information loss during intracellular information transfer, improves stimulus discriminability and ensures robust decoding across membrane states in a regime of highly correlated inputs, similar to those seen in sensory nuclei during the encoding of sensory information.

  7. Spatial co-adaptation of cortical control columns in a micro-ECoG brain-computer interface

    NASA Astrophysics Data System (ADS)

    Rouse, A. G.; Williams, J. J.; Wheeler, J. J.; Moran, D. W.

    2016-10-01

    Objective. Electrocorticography (ECoG) has been used for a range of applications including electrophysiological mapping, epilepsy monitoring, and more recently as a recording modality for brain-computer interfaces (BCIs). Studies that examine ECoG electrodes designed and implanted chronically solely for BCI applications remain limited. The present study explored how two key factors influence chronic, closed-loop ECoG BCI: (i) the effect of inter-electrode distance on BCI performance and (ii) the differences in neural adaptation and performance when fixed versus adaptive BCI decoding weights are used. Approach. The amplitudes of epidural micro-ECoG signals between 75 and 105 Hz with 300 μm diameter electrodes were used for one-dimensional and two-dimensional BCI tasks. The effect of inter-electrode distance on BCI control was tested between 3 and 15 mm. Additionally, the performance and cortical modulation differences between constant, fixed decoding using a small subset of channels versus adaptive decoding weights using the entire array were explored. Main results. Successful BCI control was possible with two electrodes separated by 9 and 15 mm. Performance decreased and the signals became more correlated when the electrodes were only 3 mm apart. BCI performance in a 2D BCI task improved significantly when using adaptive decoding weights (80%-90%) compared to using constant, fixed weights (50%-60%). Additionally, modulation increased for channels previously unavailable for BCI control under the fixed decoding scheme upon switching to the adaptive, all-channel scheme. Significance. Our results clearly show that neural activity under a BCI recording electrode (which we define as a ‘cortical control column’) readily adapts to generate an appropriate control signal. These results show that the practical minimal spatial resolution of these control columns with micro-ECoG BCI is likely on the order of 3 mm. Additionally, they show that the combination and interaction between neural adaptation and machine learning are critical to optimizing ECoG BCI performance.

  8. Detecting individual memories through the neural decoding of memory states and past experience.

    PubMed

    Rissman, Jesse; Greely, Henry T; Wagner, Anthony D

    2010-05-25

    A wealth of neuroscientific evidence indicates that our brains respond differently to previously encountered than to novel stimuli. There has been an upswell of interest in the prospect that functional MRI (fMRI), when coupled with multivariate data analysis techniques, might allow the presence or absence of individual memories to be detected from brain activity patterns. This could have profound implications for forensic investigations and legal proceedings, and thus the merits and limitations of such an approach are in critical need of empirical evaluation. We conducted two experiments to investigate whether neural signatures of recognition memory can be reliably decoded from fMRI data. In Exp. 1, participants were scanned while making explicit recognition judgments for studied and novel faces. Multivoxel pattern analysis (MVPA) revealed a robust ability to classify whether a given face was subjectively experienced as old or new, as well as whether recognition was accompanied by recollection, strong familiarity, or weak familiarity. Moreover, a participant's subjective mnemonic experiences could be reliably decoded even when the classifier was trained on the brain data from other individuals. In contrast, the ability to classify a face's objective old/new status, when holding subjective status constant, was severely limited. This important boundary condition was further evidenced in Exp. 2, which demonstrated that mnemonic decoding is poor when memory is indirectly (implicitly) probed. Thus, although subjective memory states can be decoded quite accurately under controlled experimental conditions, fMRI has uncertain utility for objectively detecting an individual's past experiences.

  9. Neuroscience-inspired computational systems for speech recognition under noisy conditions

    NASA Astrophysics Data System (ADS)

    Schafer, Phillip B.

    Humans routinely recognize speech in challenging acoustic environments with background music, engine sounds, competing talkers, and other acoustic noise. However, today's automatic speech recognition (ASR) systems perform poorly in such environments. In this dissertation, I present novel methods for ASR designed to approach human-level performance by emulating the brain's processing of sounds. I exploit recent advances in auditory neuroscience to compute neuron-based representations of speech, and design novel methods for decoding these representations to produce word transcriptions. I begin by considering speech representations modeled on the spectrotemporal receptive fields of auditory neurons. These representations can be tuned to optimize a variety of objective functions, which characterize the response properties of a neural population. I propose an objective function that explicitly optimizes the noise invariance of the neural responses, and find that it gives improved performance on an ASR task in noise compared to other objectives. The method as a whole, however, fails to significantly close the performance gap with humans. I next consider speech representations that make use of spiking model neurons. The neurons in this method are feature detectors that selectively respond to spectrotemporal patterns within short time windows in speech. I consider a number of methods for training the response properties of the neurons. In particular, I present a method using linear support vector machines (SVMs) and show that this method produces spikes that are robust to additive noise. I compute the spectrotemporal receptive fields of the neurons for comparison with previous physiological results. To decode the spike-based speech representations, I propose two methods designed to work on isolated word recordings. The first method uses a classical ASR technique based on the hidden Markov model. The second method is a novel template-based recognition scheme that takes advantage of the neural representation's invariance in noise. The scheme centers on a speech similarity measure based on the longest common subsequence between spike sequences. The combined encoding and decoding scheme outperforms a benchmark system in extremely noisy acoustic conditions. Finally, I consider methods for decoding spike representations of continuous speech. To help guide the alignment of templates to words, I design a syllable detection scheme that robustly marks the locations of syllabic nuclei. The scheme combines SVM-based training with a peak selection algorithm designed to improve noise tolerance. By incorporating syllable information into the ASR system, I obtain strong recognition results in noisy conditions, although the performance in noiseless conditions is below the state of the art. The work presented here constitutes a novel approach to the problem of ASR that can be applied in the many challenging acoustic environments in which we use computer technologies today. The proposed spike-based processing methods can potentially be exploited in efficient hardware implementations and could significantly reduce the computational costs of ASR. The work also provides a framework for understanding the advantages of spike-based acoustic coding in the human brain.
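
    The template-matching similarity described in this abstract, the longest common subsequence between spike sequences, has a standard dynamic-programming form. The sketch below treats spike sequences as ordered lists of feature-detector identities and scores a toy utterance against word templates; the symbol coding and normalization are assumptions.

```python
import numpy as np

def lcs_length(a, b):
    """Length of the longest common subsequence of two symbol sequences."""
    dp = np.zeros((len(a) + 1, len(b) + 1), dtype=int)
    for i, x in enumerate(a, 1):
        for j, y in enumerate(b, 1):
            dp[i, j] = dp[i-1, j-1] + 1 if x == y else max(dp[i-1, j], dp[i, j-1])
    return int(dp[-1, -1])

def similarity(a, b):
    """Normalized LCS similarity between two spike-symbol sequences."""
    return lcs_length(a, b) / max(len(a), len(b), 1)

# Each sequence lists which feature-detector neurons fired, in order (toy data).
templates = {"yes": [3, 7, 7, 2, 9, 1], "no": [4, 4, 8, 1, 0, 5]}
observed = [3, 7, 2, 6, 9, 1]                     # noisy utterance of "yes"

scores = {w: similarity(observed, t) for w, t in templates.items()}
print("recognized word:", max(scores, key=scores.get), scores)
```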

  10. Multiclass fMRI data decoding and visualization using supervised self-organizing maps.

    PubMed

    Hausfeld, Lars; Valente, Giancarlo; Formisano, Elia

    2014-08-01

    When multivariate pattern decoding is applied to fMRI studies entailing more than two experimental conditions, the most common approach is to transform the multiclass classification problem into a series of binary problems. Furthermore, for decoding analyses, classification accuracy is often the only outcome reported although the topology of activation patterns in the high-dimensional feature space may provide additional insights into underlying brain representations. Here we propose to decode and visualize voxel patterns of fMRI datasets consisting of multiple conditions with a supervised variant of self-organizing maps (SSOMs). Using simulations and real fMRI data, we evaluated the performance of our SSOM-based approach. Specifically, the analysis of simulated fMRI data with varying signal-to-noise and contrast-to-noise ratio suggested that SSOMs perform better than a k-nearest-neighbor classifier for medium and large numbers of features (i.e. 250 to 1000 or more voxels) and similar to support vector machines (SVMs) for small and medium numbers of features (i.e. 100 to 600 voxels). However, for a larger number of features (>800 voxels), SSOMs performed worse than SVMs. When applied to a challenging 3-class fMRI classification problem with datasets collected to examine the neural representation of three human voices at individual speaker level, the SSOM-based algorithm was able to decode speaker identity from auditory cortical activation patterns. Classification performances were similar between SSOMs and other decoding algorithms; however, the ability to visualize decoding models and underlying data topology of SSOMs promotes a more comprehensive understanding of classification outcomes. We further illustrated this visualization ability of SSOMs with a re-analysis of a dataset examining the representation of visual categories in the ventral visual cortex (Haxby et al., 2001). This analysis showed that SSOMs could retrieve and visualize topography and neighborhood relations of the brain representation of eight visual categories. We conclude that SSOMs are particularly suited for decoding datasets consisting of more than two classes and are optimally combined with approaches that reduce the number of voxels used for classification (e.g. region-of-interest or searchlight approaches). Copyright © 2014. Published by Elsevier Inc.

  11. The role of the medial temporal limbic system in processing emotions in voice and music.

    PubMed

    Frühholz, Sascha; Trost, Wiebke; Grandjean, Didier

    2014-12-01

    Subcortical brain structures of the limbic system, such as the amygdala, are thought to decode the emotional value of sensory information. Recent neuroimaging studies, as well as lesion studies in patients, have shown that the amygdala is sensitive to emotions in voice and music. Similarly, the hippocampus, another part of the temporal limbic system (TLS), is responsive to vocal and musical emotions, but its specific roles in emotional processing from music and especially from voices have been largely neglected. Here we review recent research on vocal and musical emotions, and outline commonalities and differences in the neural processing of emotions in the TLS in terms of emotional valence, emotional intensity and arousal, as well as in terms of acoustic and structural features of voices and music. We summarize the findings in a neural framework including several subcortical and cortical functional pathways between the auditory system and the TLS. This framework proposes that some vocal expressions might already receive a fast emotional evaluation via a subcortical pathway to the amygdala, whereas cortical pathways to the TLS are thought to be equally used for vocal and musical emotions. While the amygdala might be specifically involved in a coarse decoding of the emotional value of voices and music, the hippocampus might process more complex vocal and musical emotions, and might have an important role especially for the decoding of musical emotions by providing memory-based and contextual associations. Copyright © 2014 Elsevier Ltd. All rights reserved.

  12. "Who" is saying "what"? Brain-based decoding of human voice and speech.

    PubMed

    Formisano, Elia; De Martino, Federico; Bonte, Milene; Goebel, Rainer

    2008-11-07

    Can we decipher speech content ("what" is being said) and speaker identity ("who" is saying it) from observations of brain activity of a listener? Here, we combine functional magnetic resonance imaging with a data-mining algorithm and retrieve what and whom a person is listening to from the neural fingerprints that speech and voice signals elicit in the listener's auditory cortex. These cortical fingerprints are spatially distributed and insensitive to acoustic variations of the input so as to permit the brain-based recognition of learned speech from unknown speakers and of learned voices from previously unheard utterances. Our findings unravel the detailed cortical layout and computational properties of the neural populations at the basis of human speech recognition and speaker identification.

  13. Decoding facial expressions based on face-selective and motion-sensitive areas.

    PubMed

    Liang, Yin; Liu, Baolin; Xu, Junhai; Zhang, Gaoyan; Li, Xianglin; Wang, Peiyuan; Wang, Bin

    2017-06-01

    Humans can easily recognize others' facial expressions. Among the brain substrates that enable this ability, considerable attention has been paid to face-selective areas; in contrast, whether motion-sensitive areas, which clearly exhibit sensitivity to facial movements, are involved in facial expression recognition remained unclear. The present functional magnetic resonance imaging (fMRI) study used multi-voxel pattern analysis (MVPA) to explore facial expression decoding in both face-selective and motion-sensitive areas. In a block design experiment, participants viewed facial expressions of six basic emotions (anger, disgust, fear, joy, sadness, and surprise) in images, videos, and eyes-obscured videos. Due to the use of multiple stimulus types, the impacts of facial motion and eye-related information on facial expression decoding were also examined. It was found that motion-sensitive areas showed significant responses to emotional expressions and that dynamic expressions could be successfully decoded in both face-selective and motion-sensitive areas. Compared with static stimuli, dynamic expressions elicited consistently higher neural responses and decoding performance in all regions. A significant decrease in both activation and decoding accuracy due to the absence of eye-related information was also observed. Overall, the findings showed that emotional expressions are represented in motion-sensitive areas in addition to conventional face-selective areas, suggesting that motion-sensitive regions may also effectively contribute to facial expression recognition. The results also suggested that facial motion and eye-related information played important roles by carrying considerable expression information that could facilitate facial expression recognition. Hum Brain Mapp 38:3113-3125, 2017. © 2017 Wiley Periodicals, Inc.

  14. Action prediction based on anticipatory brain potentials during simulated driving.

    PubMed

    Khaliliardali, Zahra; Chavarriaga, Ricardo; Gheorghe, Lucian Andrei; Millán, José del R

    2015-12-01

    The ability of an automobile to infer the driver's upcoming actions directly from neural signals could enrich the interaction of the car with its driver. Intelligent vehicles fitted with an on-board brain-computer interface able to decode the driver's intentions can use this information to improve the driving experience. In this study we investigate the neural signatures of anticipation of specific actions, namely braking and accelerating. We investigated anticipatory slow cortical potentials in electroencephalograms recorded from 18 healthy participants in a driving simulator, using a variant of the contingent negative variation (CNV) paradigm with Go and No-go conditions: count-down numbers followed by a 'Start'/'Stop' cue. We report decoding performance before the action onset using a quadratic discriminant analysis classifier based on temporal features. (i) Despite the visual and driving-related cognitive distractions, we show the presence of anticipatory event-related potentials locked to the stimulus onset similar to the widely reported CNV signal (with an average peak value of -8 μV at electrode Cz). (ii) We demonstrate the discrimination between cases requiring an action upon the imperative subsequent stimulus (Go condition, e.g. a 'Red' traffic light) versus events that do not require such action (No-go condition; e.g. a 'Yellow' light); with an average single trial classification performance of 0.83 ± 0.13 for braking and 0.79 ± 0.12 for accelerating (area under the curve). (iii) We show that the centro-medial anticipatory potentials are observed as early as 320 ± 200 ms before the action with a detection rate of 0.77 ± 0.12 in offline analysis. We show for the first time the feasibility of predicting the driver's intention through decoding anticipation-related potentials during simulated car driving with high recognition rates.
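
    Classifying Go versus No-go trials from pre-stimulus slow potentials with a quadratic discriminant classifier on temporal features reduces to the following sketch. The synthetic CNV-like epochs, window lengths, sampling rate, and channel setup are assumptions, not the study's recordings.

```python
import numpy as np
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(12)
fs, n_trials = 64, 200                       # 64 Hz epochs, 1 s before the cue
t = np.arange(0, 1.0, 1 / fs)
y = rng.integers(0, 2, n_trials)             # 0 = No-go, 1 = Go

# Go trials carry a slow negative ramp (CNV-like); both classes carry noise.
ramp = -8e-6 * t                             # down to -8 microvolts
epochs = rng.normal(scale=4e-6, size=(n_trials, t.size)) + np.outer(y, ramp)

# Temporal features: mean amplitude in consecutive 125 ms windows.
X = epochs.reshape(n_trials, 8, -1).mean(axis=2) * 1e6   # in microvolts

qda = QuadraticDiscriminantAnalysis()
print("single-trial accuracy:", cross_val_score(qda, X, y, cv=5).mean().round(2))
```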

  15. Neural predictors of evaluative attitudes toward celebrities

    PubMed Central

    Shibata, Kazuhisa; Matsumoto, Kenji; Adolphs, Ralph

    2017-01-01

    Our attitudes toward others influence a wide range of everyday behaviors and have been the most extensively studied concept in the history of social psychology. Yet they remain difficult to measure reliably and objectively, since both explicit and implicit measures are typically confounded by other psychological processes. We here address the feasibility of decoding incidental attitudes based on brain activations. Participants were presented with pictures of members of a Japanese idol group inside a functional magnetic resonance imaging scanner while performing an unrelated detection task, and subsequently (outside the scanner) performed an incentive-compatible choice task that revealed their attitude toward each celebrity. We used a real-world election scheme that exists for this idol group, which confirmed both strongly negative and strongly positive attitudes toward specific individuals. Whole-brain multivariate analyses (searchlight-based support vector regression) showed that activation patterns in the anterior striatum predicted each participant’s revealed attitudes (choice behavior) using leave-one-out (as well as 4-fold) cross-validation across participants. In contrast, attitude extremity (unsigned magnitude) could be decoded from a distinct region in the posterior striatum. The findings demonstrate dissociable striatal representations of valenced attitude and attitude extremity and constitute a first step toward an objective and process-pure neural measure of attitudes. PMID:27651542
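
    A minimal sketch of the cross-participant prediction step described above, assuming simulated activation patterns and reducing the whole-brain searchlight to a single, pre-extracted sphere of voxels: support vector regression with leave-one-out cross-validation predicts a continuous attitude score from multi-voxel patterns. Data and hyperparameters are illustrative.

    import numpy as np
    from sklearn.svm import SVR
    from sklearn.model_selection import LeaveOneOut, cross_val_predict

    rng = np.random.default_rng(1)
    n_participants, n_voxels = 20, 50
    patterns = rng.normal(size=(n_participants, n_voxels))   # one pattern per participant
    weights = rng.normal(size=n_voxels)
    attitude = patterns @ weights + rng.normal(scale=2.0, size=n_participants)

    pred = cross_val_predict(SVR(kernel="linear", C=1.0), patterns, attitude,
                             cv=LeaveOneOut())
    r = np.corrcoef(pred, attitude)[0, 1]
    print(f"predicted vs revealed attitude correlation: {r:.2f}")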

  16. Dynamic frame resizing with convolutional neural network for efficient video compression

    NASA Astrophysics Data System (ADS)

    Kim, Jaehwan; Park, Youngo; Choi, Kwang Pyo; Lee, JongSeok; Jeon, Sunyoung; Park, JeongHoon

    2017-09-01

    In the past, video codecs such as VC-1 and H.263 used techniques that encode reduced-resolution video and restore the original resolution at the decoder to improve coding efficiency. These techniques, in VC-1 and H.263 Annex Q, are called dynamic frame resizing and reduced-resolution update mode, respectively. However, they have not been widely used because they offer limited performance improvements and operate well only under specific conditions. In this paper, a video frame resizing (reduce/restore) technique based on machine learning is proposed to improve coding efficiency. In the proposed method, a convolutional neural network (CNN) produces the low-resolution video at the encoder, and another CNN reconstructs the original resolution at the decoder. The proposed method shows improved subjective performance across the high-resolution videos that dominate current consumption. To assess the subjective quality of the proposed method, Video Multi-method Assessment Fusion (VMAF), which has shown high reliability among subjective measurement tools, was used as the subjective metric, and diverse bitrates were tested to assess general performance. Experimental results showed that the VMAF-based BD-rate was improved by about 51% compared to conventional HEVC. VMAF values improved especially at low bitrates, and in subjective testing the method produced better visual quality at similar bit rates.
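
    A toy sketch of the restore side of the resize/restore idea, written with PyTorch: a small CNN upscales a 2x-downscaled frame back to its original resolution. The network architecture, sizes, and the use of PixelShuffle are illustrative assumptions, not the paper's actual model; in the scheme above such a network would run after the standard decoder.

    import torch
    import torch.nn as nn

    class RestoreCNN(nn.Module):
        """Upscale a downscaled frame back to its original resolution."""
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
                nn.Conv2d(32, 32, kernel_size=3, padding=1), nn.ReLU(),
                nn.Conv2d(32, 3 * 4, kernel_size=3, padding=1),   # 4 = 2x2 upscale factor
                nn.PixelShuffle(2),                                # rearrange to 2x resolution
            )

        def forward(self, x):
            return self.net(x)

    frame = torch.rand(1, 3, 270, 480)          # toy "reduced-resolution" frame
    restored = RestoreCNN()(frame)
    print(restored.shape)                        # torch.Size([1, 3, 540, 960])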

  17. Cross-Modal Decoding of Neural Patterns Associated with Working Memory: Evidence for Attention-Based Accounts of Working Memory

    PubMed Central

    Majerus, Steve; Cowan, Nelson; Péters, Frédéric; Van Calster, Laurens; Phillips, Christophe; Schrouff, Jessica

    2016-01-01

    Recent studies suggest common neural substrates involved in verbal and visual working memory (WM), interpreted as reflecting shared attention-based, short-term retention mechanisms. We used a machine-learning approach to determine more directly the extent to which common neural patterns characterize retention in verbal WM and visual WM. Verbal WM was assessed via a standard delayed probe recognition task for letter sequences of variable length. Visual WM was assessed via a visual array WM task involving the maintenance of variable amounts of visual information in the focus of attention. We trained a classifier to distinguish neural activation patterns associated with high- and low-visual WM load and tested the ability of this classifier to predict verbal WM load (high–low) from their associated neural activation patterns, and vice versa. We observed significant between-task prediction of load effects during WM maintenance, in posterior parietal and superior frontal regions of the dorsal attention network; in contrast, between-task prediction in sensory processing cortices was restricted to the encoding stage. Furthermore, between-task prediction of load effects was strongest in those participants presenting the highest capacity for the visual WM task. This study provides novel evidence for common, attention-based neural patterns supporting verbal and visual WM. PMID:25146374
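
    A minimal sketch of the between-task prediction logic described above, assuming simulated voxel patterns: a linear classifier is trained on high- versus low-load patterns from one task and tested on the other task, and vice versa. The shared "load axis" built into the simulation is an assumption used purely for illustration.

    import numpy as np
    from sklearn.svm import LinearSVC

    rng = np.random.default_rng(2)
    n_trials, n_voxels = 120, 200
    shared = rng.normal(size=n_voxels)          # shared attention-related load axis

    def simulate_task(task_noise):
        y = rng.integers(0, 2, size=n_trials)   # 0 = low load, 1 = high load
        X = rng.normal(size=(n_trials, n_voxels)) + np.outer(y, shared) * 0.5
        X += rng.normal(scale=task_noise, size=X.shape)
        return X, y

    X_visual, y_visual = simulate_task(task_noise=0.5)
    X_verbal, y_verbal = simulate_task(task_noise=0.5)

    clf = LinearSVC(C=0.01, dual=False).fit(X_visual, y_visual)
    print("visual -> verbal accuracy:", clf.score(X_verbal, y_verbal))

    clf = LinearSVC(C=0.01, dual=False).fit(X_verbal, y_verbal)
    print("verbal -> visual accuracy:", clf.score(X_visual, y_visual))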

  18. Fast attainment of computer cursor control with noninvasively acquired brain signals

    NASA Astrophysics Data System (ADS)

    Bradberry, Trent J.; Gentili, Rodolphe J.; Contreras-Vidal, José L.

    2011-06-01

    Brain-computer interface (BCI) systems are allowing humans and non-human primates to drive prosthetic devices such as computer cursors and artificial arms with just their thoughts. Invasive BCI systems acquire neural signals with intracranial or subdural electrodes, while noninvasive BCI systems typically acquire neural signals with scalp electroencephalography (EEG). Some drawbacks of invasive BCI systems are the inherent risks of surgery and gradual degradation of signal integrity. A limitation of noninvasive BCI systems for two-dimensional control of a cursor, in particular those based on sensorimotor rhythms, is the lengthy training time required by users to achieve satisfactory performance. Here we describe a novel approach to continuously decoding imagined movements from EEG signals in a BCI experiment with reduced training time. We demonstrate that, using our noninvasive BCI system and observational learning, subjects were able to accomplish two-dimensional control of a cursor with performance levels comparable to those of invasive BCI systems. Compared to other studies of noninvasive BCI systems, training time was substantially reduced, requiring only a single session of decoder calibration (~20 min) and subject practice (~20 min). In addition, we used standardized low-resolution brain electromagnetic tomography to reveal that the neural sources that encoded observed cursor movement may implicate a human mirror neuron system. These findings offer the potential to continuously control complex devices such as robotic arms with one's mind without lengthy training or surgery.

  19. Robust Brain-Machine Interface Design Using Optimal Feedback Control Modeling and Adaptive Point Process Filtering

    PubMed Central

    Carmena, Jose M.

    2016-01-01

    Much progress has been made in brain-machine interfaces (BMI) using decoders such as Kalman filters and finding their parameters with closed-loop decoder adaptation (CLDA). However, current decoders do not model the spikes directly, and hence may limit the processing time-scale of BMI control and adaptation. Moreover, while specialized CLDA techniques for intention estimation and assisted training exist, a unified and systematic CLDA framework that generalizes across different setups is lacking. Here we develop a novel closed-loop BMI training architecture that allows for processing, control, and adaptation using spike events, enables robust control and extends to various tasks. Moreover, we develop a unified control-theoretic CLDA framework within which intention estimation, assisted training, and adaptation are performed. The architecture incorporates an infinite-horizon optimal feedback-control (OFC) model of the brain’s behavior in closed-loop BMI control, and a point process model of spikes. The OFC model infers the user’s motor intention during CLDA—a process termed intention estimation. OFC is also used to design an autonomous and dynamic assisted training technique. The point process model allows for neural processing, control and decoder adaptation with every spike event and at a faster time-scale than current decoders; it also enables dynamic spike-event-based parameter adaptation unlike current CLDA methods that use batch-based adaptation on much slower adaptation time-scales. We conducted closed-loop experiments in a non-human primate over tens of days to dissociate the effects of these novel CLDA components. The OFC intention estimation improved BMI performance compared with current intention estimation techniques. OFC assisted training allowed the subject to consistently achieve proficient control. Spike-event-based adaptation resulted in faster and more consistent performance convergence compared with batch-based methods, and was robust to parameter initialization. Finally, the architecture extended control to tasks beyond those used for CLDA training. These results have significant implications towards the development of clinically-viable neuroprosthetics. PMID:27035820

  20. Multiplicative mixing of object identity and image attributes in single inferior temporal neurons.

    PubMed

    Ratan Murty, N Apurva; Arun, S P

    2018-04-03

    Object recognition is challenging because the same object can produce vastly different images, mixing signals related to its identity with signals due to its image attributes, such as size, position, rotation, etc. Previous studies have shown that both signals are present in high-level visual areas, but precisely how they are combined has remained unclear. One possibility is that neurons might encode identity and attribute signals multiplicatively so that each can be efficiently decoded without interference from the other. Here, we show that, in high-level visual cortex, responses of single neurons can be explained better as a product rather than a sum of tuning for object identity and tuning for image attributes. This subtle effect in single neurons produced substantially better population decoding of object identity and image attributes in the neural population as a whole. This property was absent both in low-level vision models and in deep neural networks. It was also unique to invariances: when tested with two-part objects, neural responses were explained better as a sum than as a product of part tuning. Taken together, our results indicate that signals requiring separate decoding, such as object identity and image attributes, are combined multiplicatively in IT neurons, whereas signals that require integration (such as parts in an object) are combined additively. Copyright © 2018 the Author(s). Published by PNAS.
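
    A minimal sketch of the model comparison described above, assuming a simulated response matrix rather than recorded IT data: for a neuron tested on every identity-by-attribute combination, a sum model is fit from marginal means and a product model is fit as the best rank-1 approximation, and their fits are compared. The ground truth here is deliberately multiplicative so the product model should win.

    import numpy as np

    rng = np.random.default_rng(3)
    n_ids, n_attrs = 8, 6
    f = rng.uniform(0.5, 2.0, size=n_ids)        # identity tuning
    g = rng.uniform(0.5, 2.0, size=n_attrs)      # attribute (e.g. size) tuning
    R = np.outer(f, g) + rng.normal(scale=0.05, size=(n_ids, n_attrs))

    def r2(pred, data):
        ss_res = np.sum((data - pred) ** 2)
        ss_tot = np.sum((data - data.mean()) ** 2)
        return 1 - ss_res / ss_tot

    # Sum model: additive fit from marginal means (ANOVA-style).
    row = R.mean(axis=1, keepdims=True)
    col = R.mean(axis=0, keepdims=True)
    sum_pred = row + col - R.mean()

    # Product model: rank-1 fit via SVD (best least-squares multiplicative approximation).
    U, s, Vt = np.linalg.svd(R)
    prod_pred = s[0] * np.outer(U[:, 0], Vt[0, :])

    print(f"sum model R^2:     {r2(sum_pred, R):.3f}")
    print(f"product model R^2: {r2(prod_pred, R):.3f}")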

  1. Vacillation, indecision and hesitation in moment-by-moment decoding of monkey motor cortex

    PubMed Central

    Kaufman, Matthew T; Churchland, Mark M; Ryu, Stephen I; Shenoy, Krishna V

    2015-01-01

    When choosing actions, we can act decisively, vacillate, or suffer momentary indecision. Studying how individual decisions unfold requires moment-by-moment readouts of brain state. Here we provide such a view from dorsal premotor and primary motor cortex. Two monkeys performed a novel decision task while we recorded from many neurons simultaneously. We found that a decoder trained using ‘forced choices’ (one target viable) was highly reliable when applied to ‘free choices’. However, during free choices internal events formed three categories. Typically, neural activity was consistent with rapid, unwavering choices. Sometimes, though, we observed presumed ‘changes of mind’: the neural state initially reflected one choice before changing to reflect the final choice. Finally, we observed momentary ‘indecision’: delay forming any clear motor plan. Further, moments of neural indecision accompanied moments of behavioral indecision. Together, these results reveal the rich and diverse set of internal events long suspected to occur during free choice. DOI: http://dx.doi.org/10.7554/eLife.04677.001 PMID:25942352

  2. Decoding Visual Location From Neural Patterns in the Auditory Cortex of the Congenitally Deaf

    PubMed Central

    Almeida, Jorge; He, Dongjun; Chen, Quanjing; Mahon, Bradford Z.; Zhang, Fan; Gonçalves, Óscar F.; Fang, Fang; Bi, Yanchao

    2016-01-01

    Sensory cortices of individuals who are congenitally deprived of a sense can exhibit considerable plasticity and be recruited to process information from the senses that remain intact. Here, we explored whether the auditory cortex of congenitally deaf individuals represents visual field location of a stimulus—a dimension that is represented in early visual areas. We used functional MRI to measure neural activity in auditory and visual cortices of congenitally deaf and hearing humans while they observed stimuli typically used for mapping visual field preferences in visual cortex. We found that the location of a visual stimulus can be successfully decoded from the patterns of neural activity in auditory cortex of congenitally deaf but not hearing individuals. This is particularly true for locations within the horizontal plane and within peripheral vision. These data show that the representations stored within neuroplastically changed auditory cortex can align with dimensions that are typically represented in visual cortex. PMID:26423461

  3. Multineuron spike train analysis with R-convolution linear combination kernel.

    PubMed

    Tezuka, Taro

    2018-06-01

    A spike train kernel provides an effective way of decoding information represented by a spike train. Some spike train kernels have been extended to multineuron spike trains, which are simultaneously recorded spike trains obtained from multiple neurons. However, most of these multineuron extensions were carried out in a kernel-specific manner. In this paper, a general framework is proposed for extending any single-neuron spike train kernel to multineuron spike trains, based on the R-convolution kernel. Special subclasses of the proposed R-convolution linear combination kernel are explored. These subclasses have a smaller number of parameters and make optimization tractable when the size of data is limited. The proposed kernel was evaluated using Gaussian process regression for multineuron spike trains recorded from an animal brain. It was compared with the sum kernel and the population Spikernel, which are existing ways of decoding multineuron spike trains using kernels. The results showed that the proposed approach performs better than these kernels and also other commonly used neural decoding methods. Copyright © 2018 Elsevier Ltd. All rights reserved.
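
    A minimal sketch of the combination idea described above, under simplifying assumptions: a single-neuron Laplacian spike train kernel (a sum of exponentially decaying terms over spike pairs) is extended to multineuron trials as a weighted linear combination with one term per neuron. The specific base kernel, the weights, and the toy spike times are illustrative, not the paper's exact construction.

    import numpy as np

    def single_neuron_kernel(s, t, tau=0.05):
        """Laplacian spike train kernel: sum over spike pairs of exp(-|dt| / tau)."""
        if len(s) == 0 or len(t) == 0:
            return 0.0
        dt = np.abs(np.subtract.outer(np.asarray(s), np.asarray(t)))
        return float(np.exp(-dt / tau).sum())

    def multineuron_kernel(trial_a, trial_b, weights, tau=0.05):
        """Weighted sum of per-neuron kernels; trial_* is a list of spike-time arrays."""
        return sum(w * single_neuron_kernel(a, b, tau)
                   for w, a, b in zip(weights, trial_a, trial_b))

    # Two toy trials, each recorded from three neurons (spike times in seconds).
    trial1 = [np.array([0.05, 0.20, 0.31]), np.array([0.10]), np.array([])]
    trial2 = [np.array([0.06, 0.18]),       np.array([0.11, 0.40]), np.array([0.2])]
    weights = [1.0, 0.5, 0.5]   # combination weights; could be optimized, e.g. by cross-validation

    print(multineuron_kernel(trial1, trial2, weights))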

  4. Cortical processing of pitch: Model-based encoding and decoding of auditory fMRI responses to real-life sounds.

    PubMed

    De Angelis, Vittoria; De Martino, Federico; Moerel, Michelle; Santoro, Roberta; Hausfeld, Lars; Formisano, Elia

    2017-11-13

    Pitch is a perceptual attribute related to the fundamental frequency (or periodicity) of a sound. So far, the cortical processing of pitch has been investigated mostly using synthetic sounds. However, the complex harmonic structure of natural sounds may require different mechanisms for the extraction and analysis of pitch. This study investigated the neural representation of pitch in human auditory cortex using model-based encoding and decoding analyses of high field (7 T) functional magnetic resonance imaging (fMRI) data collected while participants listened to a wide range of real-life sounds. Specifically, we modeled the fMRI responses as a function of the sounds' perceived pitch height and salience (related to the fundamental frequency and the harmonic structure respectively), which we estimated with a computational algorithm of pitch extraction (de Cheveigné and Kawahara, 2002). First, using single-voxel fMRI encoding, we identified a pitch-coding region in the antero-lateral Heschl's gyrus (HG) and adjacent superior temporal gyrus (STG). In these regions, the pitch representation model combining height and salience predicted the fMRI responses comparatively better than other models of acoustic processing and, in the right hemisphere, better than pitch representations based on height/salience alone. Second, we assessed with model-based decoding that multi-voxel response patterns of the identified regions are more informative of perceived pitch than the remainder of the auditory cortex. Further multivariate analyses showed that complementing a multi-resolution spectro-temporal sound representation with pitch produces a small but significant improvement to the decoding of complex sounds from fMRI response patterns. In sum, this work extends model-based fMRI encoding and decoding methods - previously employed to examine the representation and processing of acoustic sound features in the human auditory system - to the representation and processing of a relevant perceptual attribute such as pitch. Taken together, the results of our model-based encoding and decoding analyses indicated that the pitch of complex real life sounds is extracted and processed in lateral HG/STG regions, at locations consistent with those indicated in several previous fMRI studies using synthetic sounds. Within these regions, pitch-related sound representations reflect the modulatory combination of height and the salience of the pitch percept. Copyright © 2017 Elsevier Inc. All rights reserved.
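
    A minimal sketch of the voxel-wise encoding-model comparison described above, assuming simulated data: ridge regression predicts a (toy) voxel response from different stimulus feature sets (pitch height, salience, their combination, and a stand-in acoustic feature), and the feature sets are compared by cross-validated prediction accuracy. All feature values and weights are illustrative.

    import numpy as np
    from sklearn.linear_model import Ridge
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(11)
    n_sounds = 160
    height = rng.uniform(0, 1, n_sounds)
    salience = rng.uniform(0, 1, n_sounds)
    spectral = rng.uniform(0, 1, n_sounds)          # stand-in acoustic feature

    # Toy voxel driven mostly by the combination of height and salience.
    voxel = 0.8 * height + 0.6 * salience + 0.1 * spectral \
            + rng.normal(scale=0.3, size=n_sounds)

    models = {
        "height only": np.column_stack([height]),
        "salience only": np.column_stack([salience]),
        "height + salience": np.column_stack([height, salience]),
        "acoustic only": np.column_stack([spectral]),
    }
    for name, X in models.items():
        score = cross_val_score(Ridge(alpha=1.0), X, voxel, cv=5, scoring="r2").mean()
        print(f"{name:<18} cross-validated R^2 = {score:.2f}")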

  5. Optimizing the learning rate for adaptive estimation of neural encoding models

    PubMed Central

    2018-01-01

    Closed-loop neurotechnologies often need to adaptively learn an encoding model that relates the neural activity to the brain state, and is used for brain state decoding. The speed and accuracy of adaptive learning algorithms are critically affected by the learning rate, which dictates how fast model parameters are updated based on new observations. Despite the importance of the learning rate, currently an analytical approach for its selection is largely lacking and existing signal processing methods vastly tune it empirically or heuristically. Here, we develop a novel analytical calibration algorithm for optimal selection of the learning rate in adaptive Bayesian filters. We formulate the problem through a fundamental trade-off that learning rate introduces between the steady-state error and the convergence time of the estimated model parameters. We derive explicit functions that predict the effect of learning rate on error and convergence time. Using these functions, our calibration algorithm can keep the steady-state parameter error covariance smaller than a desired upper-bound while minimizing the convergence time, or keep the convergence time faster than a desired value while minimizing the error. We derive the algorithm both for discrete-valued spikes modeled as point processes nonlinearly dependent on the brain state, and for continuous-valued neural recordings modeled as Gaussian processes linearly dependent on the brain state. Using extensive closed-loop simulations, we show that the analytical solution of the calibration algorithm accurately predicts the effect of learning rate on parameter error and convergence time. Moreover, the calibration algorithm allows for fast and accurate learning of the encoding model and for fast convergence of decoding to accurate performance. Finally, larger learning rates result in inaccurate encoding models and decoders, and smaller learning rates delay their convergence. The calibration algorithm provides a novel analytical approach to predictably achieve a desired level of error and convergence time in adaptive learning, with application to closed-loop neurotechnologies and other signal processing domains. PMID:29813069

  6. Optimizing the learning rate for adaptive estimation of neural encoding models.

    PubMed

    Hsieh, Han-Lin; Shanechi, Maryam M

    2018-05-01

    Closed-loop neurotechnologies often need to adaptively learn an encoding model that relates the neural activity to the brain state, and is used for brain state decoding. The speed and accuracy of adaptive learning algorithms are critically affected by the learning rate, which dictates how fast model parameters are updated based on new observations. Despite the importance of the learning rate, currently an analytical approach for its selection is largely lacking and existing signal processing methods vastly tune it empirically or heuristically. Here, we develop a novel analytical calibration algorithm for optimal selection of the learning rate in adaptive Bayesian filters. We formulate the problem through a fundamental trade-off that learning rate introduces between the steady-state error and the convergence time of the estimated model parameters. We derive explicit functions that predict the effect of learning rate on error and convergence time. Using these functions, our calibration algorithm can keep the steady-state parameter error covariance smaller than a desired upper-bound while minimizing the convergence time, or keep the convergence time faster than a desired value while minimizing the error. We derive the algorithm both for discrete-valued spikes modeled as point processes nonlinearly dependent on the brain state, and for continuous-valued neural recordings modeled as Gaussian processes linearly dependent on the brain state. Using extensive closed-loop simulations, we show that the analytical solution of the calibration algorithm accurately predicts the effect of learning rate on parameter error and convergence time. Moreover, the calibration algorithm allows for fast and accurate learning of the encoding model and for fast convergence of decoding to accurate performance. Finally, larger learning rates result in inaccurate encoding models and decoders, and smaller learning rates delay their convergence. The calibration algorithm provides a novel analytical approach to predictably achieve a desired level of error and convergence time in adaptive learning, with application to closed-loop neurotechnologies and other signal processing domains.
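
    A toy sketch of the steady-state error versus convergence time trade-off introduced by the learning rate, illustrated with a scalar stochastic-gradient estimator tracking a fixed encoding parameter from noisy observations. This is a stand-in illustration of the trade-off only, not the paper's analytical calibration algorithm; all parameters and the convergence criterion are illustrative.

    import numpy as np

    rng = np.random.default_rng(4)
    true_w, noise_sd, n_steps = 2.0, 1.0, 5000

    def run(learning_rate):
        w, errors = 0.0, []
        for _ in range(n_steps):
            x = rng.normal()
            y = true_w * x + rng.normal(scale=noise_sd)   # noisy observation
            w += learning_rate * (y - w * x) * x          # stochastic gradient step
            errors.append((w - true_w) ** 2)
        errors = np.array(errors)
        steady_state = errors[-1000:].mean()
        # Crude convergence time: first step where the parameter error drops below 2x steady state.
        below = np.where(errors < 2 * steady_state)[0]
        return steady_state, below[0] if below.size else n_steps

    for lr in (0.005, 0.02, 0.1):
        err, t_conv = run(lr)
        print(f"lr={lr:<6} steady-state MSE={err:.4f}  convergence step={t_conv}")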

  7. Real-time classification of auditory sentences using evoked cortical activity in humans

    NASA Astrophysics Data System (ADS)

    Moses, David A.; Leonard, Matthew K.; Chang, Edward F.

    2018-06-01

    Objective. Recent research has characterized the anatomical and functional basis of speech perception in the human auditory cortex. These advances have made it possible to decode speech information from activity in brain regions like the superior temporal gyrus, but no published work has demonstrated this ability in real-time, which is necessary for neuroprosthetic brain-computer interfaces. Approach. Here, we introduce a real-time neural speech recognition (rtNSR) software package, which was used to classify spoken input from high-resolution electrocorticography signals in real-time. We tested the system with two human subjects implanted with electrode arrays over the lateral brain surface. Subjects listened to multiple repetitions of ten sentences, and rtNSR classified what was heard in real-time from neural activity patterns using direct sentence-level and HMM-based phoneme-level classification schemes. Main results. We observed single-trial sentence classification accuracies of 90% or higher for each subject with less than 7 minutes of training data, demonstrating the ability of rtNSR to use cortical recordings to perform accurate real-time speech decoding in a limited vocabulary setting. Significance. Further development and testing of the package with different speech paradigms could influence the design of future speech neuroprosthetic applications.

  8. On the asynchronously continuous control of mobile robot movement by motor cortical spiking activity.

    PubMed

    Xu, Zhiming; So, Rosa Q; Toe, Kyaw Kyar; Ang, Kai Keng; Guan, Cuntai

    2014-01-01

    This paper presents an asynchronous intracortical brain-computer interface (BCI) that allows the subject to continuously drive a mobile robot, a capability with important implications for restoring mobility to disabled patients. Using a carefully designed multiclass support vector machine (SVM), the subject's self-paced, instantaneous movement intents are continuously decoded to control the mobile robot. In particular, we studied the stability of the neural representation of the movement directions. Experimental results in a nonhuman primate showed that overt movement directions were stably represented in the ensemble of recorded units, and our SVM classifier could successfully decode such movements continuously along the desired movement path. However, the stop state used for self-paced control was not stably represented and could drift.

  9. Active visual search in non-stationary scenes: coping with temporal variability and uncertainty

    NASA Astrophysics Data System (ADS)

    Ušćumlić, Marija; Blankertz, Benjamin

    2016-02-01

    Objective. State-of-the-art experiments for studying neural processes underlying visual cognition often constrain sensory inputs (e.g., static images) and our behavior (e.g., fixed eye-gaze, long eye fixations), isolating or simplifying the interaction of neural processes. Motivated by the non-stationarity of our natural visual environment, we investigated the electroencephalography (EEG) correlates of visual recognition while participants overtly performed visual search in non-stationary scenes. We hypothesized that visual effects (such as those typically used in human-computer interfaces) may increase temporal uncertainty (with reference to fixation onset) of cognition-related EEG activity in an active search task and therefore require novel techniques for single-trial detection. Approach. We addressed fixation-related EEG activity in an active search task with respect to stimulus-appearance styles and dynamics. Alongside popping-up stimuli, our experimental study embraces two composite appearance styles based on fading-in, enlarging, and motion effects. Additionally, we explored whether the knowledge obtained in the pop-up experimental setting can be exploited to boost the EEG-based intention-decoding performance when facing transitional changes of visual content. Main results. The results confirmed our initial hypothesis that the dynamic of visual content can increase temporal uncertainty of the cognition-related EEG activity in active search with respect to fixation onset. This temporal uncertainty challenges the pivotal aim to keep the decoding performance constant irrespective of visual effects. Importantly, the proposed approach for EEG decoding based on knowledge transfer between the different experimental settings gave a promising performance. Significance. Our study demonstrates that the non-stationarity of visual scenes is an important factor in the evolution of cognitive processes, as well as in the dynamic of ocular behavior (i.e., dwell time and fixation duration) in an active search task. In addition, our method to improve single-trial detection performance in this adverse scenario is an important step in making brain-computer interfacing technology available for human-computer interaction applications.

  10. Adaptive neuron-to-EMG decoder training for FES neuroprostheses

    NASA Astrophysics Data System (ADS)

    Ethier, Christian; Acuna, Daniel; Solla, Sara A.; Miller, Lee E.

    2016-08-01

    Objective. We have previously demonstrated a brain-machine interface neuroprosthetic system that provided continuous control of functional electrical stimulation (FES) and restoration of grasp in a primate model of spinal cord injury (SCI). Predicting intended EMG directly from cortical recordings provides a flexible high-dimensional control signal for FES. However, no peripheral signal such as force or EMG is available for training EMG decoders in paralyzed individuals. Approach. Here we present a method for training an EMG decoder in the absence of muscle activity recordings; the decoder relies on mapping behaviorally relevant cortical activity to the inferred EMG activity underlying an intended action. Monkeys were trained at a 2D isometric wrist force task to control a computer cursor by applying force in the flexion, extension, ulnar, and radial directions and execute a center-out task. We used a generic muscle force-to-endpoint force model based on muscle pulling directions to relate each target force to an optimal EMG pattern that attained the target force while minimizing overall muscle activity. We trained EMG decoders during the target hold periods using a gradient descent algorithm that compared EMG predictions to optimal EMG patterns. Main results. We tested this method both offline and online. We quantified both the accuracy of offline force predictions and the ability of a monkey to use these real-time force predictions for closed-loop cursor control. We compared both offline and online results to those obtained with several other direct force decoders, including an optimal decoder computed from concurrently measured neural and force signals. Significance. This novel approach to training an adaptive EMG decoder could make a brain-control FES neuroprosthesis an effective tool to restore the hand function of paralyzed individuals. Clinical implementation would make use of individualized EMG-to-force models. Broad generalization could be achieved by including data from multiple grasping tasks in the training of the neuron-to-EMG decoder. Our approach would make it possible for persons with SCI to grasp objects with their own hands, using near-normal motor intent.
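
    A minimal sketch of the training idea described above, under strong simplifying assumptions (the non-negativity of muscle activity is ignored and the minimum-norm solution stands in for the optimal EMG pattern): target EMG patterns are derived from a pulling-direction model, and a linear neuron-to-EMG decoder is then trained by gradient descent against those targets, with no recorded EMG. All matrices and rates are simulated.

    import numpy as np

    rng = np.random.default_rng(5)
    n_muscles, n_units, n_trials = 5, 40, 400

    # Muscle pulling directions map EMG to 2D endpoint force: f = M @ emg.
    M = rng.normal(size=(2, n_muscles))

    def optimal_emg(target_force):
        """Minimum-norm EMG attaining the target force (non-negativity ignored here)."""
        return np.linalg.pinv(M) @ target_force

    # Simulated trials: random target forces, toy cortical firing rates that
    # linearly encode the underlying optimal EMG pattern.
    targets = rng.normal(size=(n_trials, 2))
    emg_star = np.array([optimal_emg(f) for f in targets])      # trials x muscles
    encoder = rng.normal(size=(n_muscles, n_units))
    rates = emg_star @ encoder + rng.normal(scale=0.1, size=(n_trials, n_units))

    # Gradient descent on a linear decoder W: predicted EMG = rates @ W.
    W = np.zeros((n_units, n_muscles))
    lr = 1e-3
    for _ in range(2000):
        pred = rates @ W
        grad = rates.T @ (pred - emg_star) / n_trials
        W -= lr * grad

    print("final mean-squared EMG error:", np.mean((rates @ W - emg_star) ** 2))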

  11. A fuzzy integral method based on the ensemble of neural networks to analyze fMRI data for cognitive state classification across multiple subjects.

    PubMed

    Cacha, L A; Parida, S; Dehuri, S; Cho, S-B; Poznanski, R R

    2016-12-01

    The huge number of voxels in fMRI time series poses a major challenge to effective analysis. Fast, accurate, and reliable classifiers are required for estimating the decoding accuracy of brain activities. Although machine-learning classifiers seem promising, individual classifiers have their own limitations. To address these limitations, the present paper proposes a method based on an ensemble of neural networks to analyze fMRI data for cognitive state classification across multiple subjects. The fuzzy integral (FI) approach is employed as an efficient tool for combining the different classifiers. The FI approach yields a classifier ensemble technique that performs better than any single classifier by reducing misclassification, bias, and variance. The proposed method successfully classified the different cognitive states for multiple subjects with high classification accuracy. The performance improvement of the ensemble neural network method over individual neural networks strongly points toward the usefulness of the proposed approach.
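
    A toy sketch of an ensemble of small neural network classifiers for multi-voxel cognitive state classification. Note the assumption: the classifiers are fused here by simple majority voting as a stand-in for the fuzzy integral combination described above, and the data are synthetic.

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.ensemble import VotingClassifier
    from sklearn.model_selection import cross_val_score
    from sklearn.neural_network import MLPClassifier

    # Synthetic stand-in for voxel features pooled across subjects (2 cognitive states).
    X, y = make_classification(n_samples=300, n_features=100, n_informative=20,
                               random_state=0)

    ensemble = VotingClassifier(
        estimators=[
            ("mlp_small", MLPClassifier(hidden_layer_sizes=(20,), max_iter=2000, random_state=1)),
            ("mlp_medium", MLPClassifier(hidden_layer_sizes=(50,), max_iter=2000, random_state=2)),
            ("mlp_tanh", MLPClassifier(hidden_layer_sizes=(30,), activation="tanh",
                                       max_iter=2000, random_state=3)),
        ],
        voting="hard",   # majority vote over the three base networks
    )
    acc = cross_val_score(ensemble, X, y, cv=5)
    print(f"ensemble accuracy: {acc.mean():.2f}")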

  12. Deep hierarchical attention network for video description

    NASA Astrophysics Data System (ADS)

    Li, Shuohao; Tang, Min; Zhang, Jun

    2018-03-01

    Pairing video with a natural language description remains a challenge in computer vision and machine translation. Inspired by image description, which uses an encoder-decoder model to reduce a visual scene to a single sentence, we propose a deep hierarchical attention network for video description. The proposed model uses a convolutional neural network (CNN) and a bidirectional LSTM network as encoders, while a hierarchical attention network serves as the decoder. Compared to existing encoder-decoder models for video description, the bidirectional LSTM network can capture the temporal structure among video frames. Moreover, the hierarchical attention network has an advantage over a single-layer attention network for global context modeling. To make a fair comparison with other methods, we evaluate the proposed architecture with different types of CNN structures and decoders. Experimental results on standard datasets show that our model outperforms state-of-the-art techniques.

  13. Modeling task-specific neuronal ensembles improves decoding of grasp

    NASA Astrophysics Data System (ADS)

    Smith, Ryan J.; Soares, Alcimar B.; Rouse, Adam G.; Schieber, Marc H.; Thakor, Nitish V.

    2018-06-01

    Objective. Dexterous movement involves the activation and coordination of networks of neuronal populations across multiple cortical regions. Attempts to model firing of individual neurons commonly treat the firing rate as directly modulating with motor behavior. However, motor behavior may additionally be associated with modulations in the activity and functional connectivity of neurons in a broader ensemble. Accounting for variations in neural ensemble connectivity may provide additional information about the behavior being performed. Approach. In this study, we examined neural ensemble activity in primary motor cortex (M1) and premotor cortex (PM) of two male rhesus monkeys during performance of a center-out reach, grasp and manipulate task. We constructed point process encoding models of neuronal firing that incorporated task-specific variations in the baseline firing rate as well as variations in functional connectivity with the neural ensemble. Models were evaluated both in terms of their encoding capabilities and their ability to properly classify the grasp being performed. Main results. Task-specific ensemble models correctly predicted the performed grasp with over 95% accuracy and were shown to outperform models of neuronal activity that assume only a variable baseline firing rate. Task-specific ensemble models exhibited superior decoding performance in 82% of units in both monkeys (p < 0.01). Inclusion of ensemble activity also broadly improved the ability of models to describe observed spiking. Encoding performance of task-specific ensemble models, measured by spike timing predictability, improved upon baseline models in 62% of units. Significance. These results suggest that additional discriminative information about motor behavior found in the variations in functional connectivity of neuronal ensembles located in motor-related cortical regions is relevant for decoding complex tasks such as grasping objects, and may serve as the basis for more reliable and accurate neural prostheses.
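
    A simplified sketch of the classification-by-likelihood idea described above, with the point process encoding models reduced to binned Poisson GLMs: one model per "grasp" relates a unit's counts to its baseline and to ensemble-coupling covariates, and a held-out segment is classified by comparing model likelihoods. The simulation, coupling weights, and use of scikit-learn's PoissonRegressor (which requires a recent scikit-learn) are all illustrative assumptions.

    import numpy as np
    from sklearn.linear_model import PoissonRegressor

    rng = np.random.default_rng(6)
    n_bins, n_ensemble = 2000, 10

    def simulate(baseline, coupling):
        ens = rng.poisson(2.0, size=(n_bins, n_ensemble))         # ensemble spike counts
        rate = np.exp(baseline + ens @ coupling)
        return ens, rng.poisson(rate)                              # target unit counts

    # Two "grasps" differing in baseline rate and functional coupling.
    ens_a, y_a = simulate(-1.0, rng.normal(scale=0.1, size=n_ensemble))
    ens_b, y_b = simulate(-0.5, rng.normal(scale=0.1, size=n_ensemble))

    model_a = PoissonRegressor(alpha=1e-3).fit(ens_a[:1500], y_a[:1500])
    model_b = PoissonRegressor(alpha=1e-3).fit(ens_b[:1500], y_b[:1500])

    def loglik(model, X, y):
        mu = model.predict(X)
        return np.sum(y * np.log(mu) - mu)   # Poisson log-likelihood (constant log y! omitted)

    # Classify a held-out segment of grasp-A data by likelihood comparison.
    X_test, y_test = ens_a[1500:], y_a[1500:]
    choice = "A" if loglik(model_a, X_test, y_test) > loglik(model_b, X_test, y_test) else "B"
    print("decoded grasp:", choice)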

  14. Neural mechanisms underlying catastrophic failure in human-machine interaction during aerial navigation.

    PubMed

    Saproo, Sameer; Shih, Victor; Jangraw, David C; Sajda, Paul

    2016-12-01

    We investigated the neural correlates of workload buildup in a fine visuomotor task called the boundary avoidance task (BAT). The BAT has been known to induce naturally occurring failures of human-machine coupling in high performance aircraft that can potentially lead to a crash-these failures are termed pilot induced oscillations (PIOs). We recorded EEG and pupillometry data from human subjects engaged in a flight BAT simulated within a virtual 3D environment. We find that workload buildup in a BAT can be successfully decoded from oscillatory features in the electroencephalogram (EEG). Information in delta, theta, alpha, beta, and gamma spectral bands of the EEG all contribute to successful decoding, however gamma band activity with a lateralized somatosensory topography has the highest contribution, while theta band activity with a fronto-central topography has the most robust contribution in terms of real-world usability. We show that the output of the spectral decoder can be used to predict PIO susceptibility. We also find that workload buildup in the task induces pupil dilation, the magnitude of which is significantly correlated with the magnitude of the decoded EEG signals. These results suggest that PIOs may result from the dysregulation of cortical networks such as the locus coeruleus (LC)-anterior cingulate cortex (ACC) circuit. Our findings may generalize to similar control failures in other cases of tight man-machine coupling where gains and latencies in the control system must be inferred and compensated for by the human operators. A closed-loop intervention using neurophysiological decoding of workload buildup that targets the LC-ACC circuit may positively impact operator performance in such situations.
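
    A minimal sketch of decoding a workload level from EEG band power (delta through gamma) with a logistic regression classifier, assuming simulated epochs and a toy broadband power difference between conditions; band edges, sampling rate, and trial counts are illustrative.

    import numpy as np
    from scipy.signal import welch
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(7)
    fs, n_trials, n_channels, n_samples = 256, 150, 16, 512
    bands = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 13),
             "beta": (13, 30), "gamma": (30, 80)}

    eeg = rng.normal(size=(n_trials, n_channels, n_samples))
    y = rng.integers(0, 2, size=n_trials)            # 0 = low, 1 = high workload
    eeg[y == 1] *= 1.2                               # toy broadband power increase

    def band_power_features(epoch):
        freqs, psd = welch(epoch, fs=fs, nperseg=256, axis=-1)
        feats = []
        for lo, hi in bands.values():
            idx = (freqs >= lo) & (freqs < hi)
            feats.append(np.log(psd[:, idx].mean(axis=-1)))      # per-channel log band power
        return np.concatenate(feats)

    X = np.array([band_power_features(trial) for trial in eeg])
    acc = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5)
    print(f"workload decoding accuracy: {acc.mean():.2f}")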

  15. Neural mechanisms underlying catastrophic failure in human-machine interaction during aerial navigation

    NASA Astrophysics Data System (ADS)

    Saproo, Sameer; Shih, Victor; Jangraw, David C.; Sajda, Paul

    2016-12-01

    Objective. We investigated the neural correlates of workload buildup in a fine visuomotor task called the boundary avoidance task (BAT). The BAT has been known to induce naturally occurring failures of human-machine coupling in high performance aircraft that can potentially lead to a crash—these failures are termed pilot induced oscillations (PIOs). Approach. We recorded EEG and pupillometry data from human subjects engaged in a flight BAT simulated within a virtual 3D environment. Main results. We find that workload buildup in a BAT can be successfully decoded from oscillatory features in the electroencephalogram (EEG). Information in delta, theta, alpha, beta, and gamma spectral bands of the EEG all contribute to successful decoding, however gamma band activity with a lateralized somatosensory topography has the highest contribution, while theta band activity with a fronto-central topography has the most robust contribution in terms of real-world usability. We show that the output of the spectral decoder can be used to predict PIO susceptibility. We also find that workload buildup in the task induces pupil dilation, the magnitude of which is significantly correlated with the magnitude of the decoded EEG signals. These results suggest that PIOs may result from the dysregulation of cortical networks such as the locus coeruleus (LC)—anterior cingulate cortex (ACC) circuit. Significance. Our findings may generalize to similar control failures in other cases of tight man-machine coupling where gains and latencies in the control system must be inferred and compensated for by the human operators. A closed-loop intervention using neurophysiological decoding of workload buildup that targets the LC-ACC circuit may positively impact operator performance in such situations.

  16. Emergence of representations through repeated training on pronouncing novel letter combinations leads to efficient reading.

    PubMed

    Takashima, Atsuko; Hulzink, Iris; Wagensveld, Barbara; Verhoeven, Ludo

    2016-08-01

    Printed text can be decoded by utilizing different processing routes depending on the familiarity of the script. A predominant use of word-level decoding strategies can be expected in the case of a familiar script, and an almost exclusive use of letter-level decoding strategies for unfamiliar scripts. Behavioural studies have revealed that frequently occurring words are read more efficiently, suggesting that these words are read in a more holistic way at the word-level, than infrequent and unfamiliar words. To test whether repeated exposure to specific letter combinations leads to holistic reading, we monitored both behavioural and neural responses during novel script decoding and examined changes related to repeated exposure. We trained a group of Dutch university students to decode pseudowords written in an unfamiliar script, i.e., Korean Hangul characters. We compared behavioural and neural responses to pronouncing trained versus untrained two-character pseudowords (equivalent to two-syllable pseudowords). We tested once shortly after the initial training and again after a four days' delay that included another training session. We found that trained pseudowords were pronounced faster and more accurately than novel combinations of radicals (equivalent to letters). Imaging data revealed that pronunciation of trained pseudowords engaged the posterior temporo-parietal region, and engagement of this network was predictive of reading efficiency a month later. The results imply that repeated exposure to specific combinations of graphemes can lead to emergence of holistic representations that result in efficient reading. Furthermore, inter-individual differences revealed that good learners retained efficiency more than bad learners one month later. Copyright © 2016 Elsevier Ltd. All rights reserved.

  17. Cross-Modal Decoding of Neural Patterns Associated with Working Memory: Evidence for Attention-Based Accounts of Working Memory.

    PubMed

    Majerus, Steve; Cowan, Nelson; Péters, Frédéric; Van Calster, Laurens; Phillips, Christophe; Schrouff, Jessica

    2016-01-01

    Recent studies suggest common neural substrates involved in verbal and visual working memory (WM), interpreted as reflecting shared attention-based, short-term retention mechanisms. We used a machine-learning approach to determine more directly the extent to which common neural patterns characterize retention in verbal WM and visual WM. Verbal WM was assessed via a standard delayed probe recognition task for letter sequences of variable length. Visual WM was assessed via a visual array WM task involving the maintenance of variable amounts of visual information in the focus of attention. We trained a classifier to distinguish neural activation patterns associated with high- and low-visual WM load and tested the ability of this classifier to predict verbal WM load (high-low) from their associated neural activation patterns, and vice versa. We observed significant between-task prediction of load effects during WM maintenance, in posterior parietal and superior frontal regions of the dorsal attention network; in contrast, between-task prediction in sensory processing cortices was restricted to the encoding stage. Furthermore, between-task prediction of load effects was strongest in those participants presenting the highest capacity for the visual WM task. This study provides novel evidence for common, attention-based neural patterns supporting verbal and visual WM. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  18. Neural predictors of evaluative attitudes toward celebrities.

    PubMed

    Izuma, Keise; Shibata, Kazuhisa; Matsumoto, Kenji; Adolphs, Ralph

    2017-03-01

    Our attitudes toward others influence a wide range of everyday behaviors and have been the most extensively studied concept in the history of social psychology. Yet they remain difficult to measure reliably and objectively, since both explicit and implicit measures are typically confounded by other psychological processes. We here address the feasibility of decoding incidental attitudes based on brain activations. Participants were presented with pictures of members of a Japanese idol group inside a functional magnetic resonance imaging scanner while performing an unrelated detection task, and subsequently (outside the scanner) performed an incentive-compatible choice task that revealed their attitude toward each celebrity. We used a real-world election scheme that exists for this idol group, which confirmed both strongly negative and strongly positive attitudes toward specific individuals. Whole-brain multivariate analyses (searchlight-based support vector regression) showed that activation patterns in the anterior striatum predicted each participant's revealed attitudes (choice behavior) using leave-one-out (as well as 4-fold) cross-validation across participants. In contrast, attitude extremity (unsigned magnitude) could be decoded from a distinct region in the posterior striatum. The findings demonstrate dissociable striatal representations of valenced attitude and attitude extremity and constitute a first step toward an objective and process-pure neural measure of attitudes. © The Author (2016). Published by Oxford University Press. For Permissions, please email: journals.permissions@oup.com.

  19. Flexible categorization of relative stimulus strength by the optic tectum

    PubMed Central

    Mysore, Shreesh P.; Knudsen, Eric I.

    2011-01-01

    Categorization is the process by which the brain segregates continuously variable stimuli into discrete groups. We report that patterns of neural population activity in the owl optic tectum (OT) categorize stimuli based on their relative strengths into “strongest” versus “other”. The category boundary shifts adaptively to track changes in the absolute strength of the strongest stimulus. This population-wide categorization is mediated by the responses of a small subset of neurons. Our data constitute the first direct demonstration of an explicit categorization of stimuli by a neural network based on relative stimulus strength or salience. The finding of categorization by the population code relaxes constraints on the properties of downstream decoders that might read out the location of the strongest stimulus. These results indicate that the ensemble neural code in the OT could mediate bottom-up stimulus selection for gaze and attention, a form of stimulus categorization in which the category boundary often shifts within hundreds of milliseconds. PMID:21613487

  20. Perceptual learning modifies the functional specializations of visual cortical areas.

    PubMed

    Chen, Nihong; Cai, Peng; Zhou, Tiangang; Thompson, Benjamin; Fang, Fang

    2016-05-17

    Training can improve performance of perceptual tasks. This phenomenon, known as perceptual learning, is strongest for the trained task and stimulus, leading to a widely accepted assumption that the associated neuronal plasticity is restricted to brain circuits that mediate performance of the trained task. Nevertheless, learning does transfer to other tasks and stimuli, implying the presence of more widespread plasticity. Here, we trained human subjects to discriminate the direction of coherent motion stimuli. The behavioral learning effect substantially transferred to noisy motion stimuli. We used transcranial magnetic stimulation (TMS) and functional magnetic resonance imaging (fMRI) to investigate the neural mechanisms underlying the transfer of learning. The TMS experiment revealed dissociable, causal contributions of V3A (one of the visual areas in the extrastriate visual cortex) and MT+ (middle temporal/medial superior temporal cortex) to coherent and noisy motion processing. Surprisingly, the contribution of MT+ to noisy motion processing was replaced by V3A after perceptual training. The fMRI experiment complemented and corroborated the TMS finding. Multivariate pattern analysis showed that, before training, among visual cortical areas, coherent and noisy motion was decoded most accurately in V3A and MT+, respectively. After training, both kinds of motion were decoded most accurately in V3A. Our findings demonstrate that the effects of perceptual learning extend far beyond the retuning of specific neural populations for the trained stimuli. Learning could dramatically modify the inherent functional specializations of visual cortical areas and dynamically reweight their contributions to perceptual decisions based on their representational qualities. These neural changes might serve as the neural substrate for the transfer of perceptual learning.

  1. Representation of continuous hand and arm movements in macaque areas M1, F5, and AIP: a comparative decoding study

    NASA Astrophysics Data System (ADS)

    Menz, Veera Katharina; Schaffelhofer, Stefan; Scherberger, Hansjörg

    2015-10-01

    Objective. In the last decade, multiple brain areas have been investigated with respect to their decoding capability of continuous arm or hand movements. So far, these studies have mainly focused on motor or premotor areas like M1 and F5. However, there is accumulating evidence that anterior intraparietal area (AIP) in the parietal cortex also contains information about continuous movement. Approach. In this study, we decoded 27 degrees of freedom representing complete hand and arm kinematics during a delayed grasping task from simultaneously recorded activity in areas M1, F5, and AIP of two macaque monkeys (Macaca mulatta). Main results. We found that all three areas provided decoding performances that lay significantly above chance. In particular, M1 yielded highest decoding accuracy followed by F5 and AIP. Furthermore, we provide support for the notion that AIP does not only code categorical visual features of objects to be grasped, but also contains a substantial amount of temporal kinematic information. Significance. This fact could be utilized in future developments of neural interfaces restoring hand and arm movements.
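
    A minimal sketch of an area-by-area decoding comparison, assuming simulated firing rates that linearly encode the kinematics with different noise levels per "area" and using ridge regression as a simple continuous decoder. The area labels, unit counts, and noise levels are illustrative, not the study's recordings or its actual decoder.

    import numpy as np
    from sklearn.linear_model import Ridge
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(8)
    n_samples, n_dof = 3000, 27
    kinematics = np.cumsum(rng.normal(size=(n_samples, n_dof)), axis=0)  # smooth-ish trajectories
    kinematics -= kinematics.mean(axis=0)
    kinematics /= kinematics.std(axis=0)

    def simulate_area(n_units, noise_sd):
        tuning = rng.normal(size=(n_dof, n_units))
        return kinematics @ tuning + rng.normal(scale=noise_sd, size=(n_samples, n_units))

    areas = {"M1": simulate_area(50, 5.0),
             "F5": simulate_area(50, 15.0),
             "AIP": simulate_area(50, 30.0)}

    for name, rates in areas.items():
        r2 = cross_val_score(Ridge(alpha=10.0), rates, kinematics, cv=5, scoring="r2")
        print(f"{name}: mean decoding R^2 = {r2.mean():.2f}")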

  2. Bayesian population decoding of spiking neurons.

    PubMed

    Gerwinn, Sebastian; Macke, Jakob; Bethge, Matthias

    2009-01-01

    The timing of action potentials in spiking neurons depends on the temporal dynamics of their inputs and contains information about temporal fluctuations in the stimulus. Leaky integrate-and-fire neurons constitute a popular class of encoding models, in which spike times depend directly on the temporal structure of the inputs. However, optimal decoding rules for these models have only been studied explicitly in the noiseless case. Here, we study decoding rules for probabilistic inference of a continuous stimulus from the spike times of a population of leaky integrate-and-fire neurons with threshold noise. We derive three algorithms for approximating the posterior distribution over stimuli as a function of the observed spike trains. In addition to a reconstruction of the stimulus we thus obtain an estimate of the uncertainty as well. Furthermore, we derive a 'spike-by-spike' online decoding scheme that recursively updates the posterior with the arrival of each new spike. We use these decoding rules to reconstruct time-varying stimuli represented by a Gaussian process from spike trains of single neurons as well as neural populations.
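
    A simplified sketch of recursive ("spike-by-spike") Bayesian decoding of a constant stimulus, with the leaky integrate-and-fire encoding model replaced by independent Poisson tuning curves on a discretized stimulus grid; the log-posterior is updated bin by bin as counts arrive. Grid, tuning widths, rates, and bin size are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(9)
    grid = np.linspace(-2, 2, 201)                 # candidate stimulus values
    centers = np.linspace(-2, 2, 8)                # preferred stimuli of 8 neurons
    dt, max_rate, width = 0.001, 50.0, 0.6

    def rates(stim):
        return max_rate * np.exp(-0.5 * ((stim - centers[:, None]) / width) ** 2)

    true_stim = 0.7
    lam = rates(true_stim)[:, 0] * dt              # expected counts per 1 ms bin

    log_post = np.zeros_like(grid)                 # flat prior (log scale)
    grid_rates = rates(grid) * dt                  # neurons x grid
    for _ in range(2000):                          # 2 s of 1 ms bins
        spikes = rng.poisson(lam)                  # observed counts this bin
        # Poisson log-likelihood of this bin's counts for every candidate stimulus.
        log_post += (spikes[:, None] * np.log(grid_rates) - grid_rates).sum(axis=0)

    log_post -= log_post.max()
    post = np.exp(log_post)
    post /= post.sum()
    print("posterior mean estimate:", float(np.sum(grid * post)))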

  3. Retrosynthetic Reaction Prediction Using Neural Sequence-to-Sequence Models

    PubMed Central

    2017-01-01

    We describe a fully data driven model that learns to perform a retrosynthetic reaction prediction task, which is treated as a sequence-to-sequence mapping problem. The end-to-end trained model has an encoder–decoder architecture that consists of two recurrent neural networks, which has previously shown great success in solving other sequence-to-sequence prediction tasks such as machine translation. The model is trained on 50,000 experimental reaction examples from the United States patent literature, which span 10 broad reaction types that are commonly used by medicinal chemists. We find that our model performs comparably with a rule-based expert system baseline model, and also overcomes certain limitations associated with rule-based expert systems and with any machine learning approach that contains a rule-based expert system component. Our model provides an important first step toward solving the challenging problem of computational retrosynthetic analysis. PMID:29104927

  4. A decision-making model based on a spiking neural circuit and synaptic plasticity.

    PubMed

    Wei, Hui; Bu, Yijie; Dai, Dawei

    2017-10-01

    To adapt to the environment and survive, most animals can control their behaviors by making decisions. The process of decision-making and responding according to cues in the environment is stable, sustainable, and learnable. Understanding how behaviors are regulated by neural circuits and the encoding and decoding mechanisms from stimuli to responses are important goals in neuroscience. From results observed in Drosophila experiments, the underlying decision-making process is discussed, and a neural circuit that implements a two-choice decision-making model is proposed to explain and reproduce the observations. Compared with previous two-choice decision making models, our model uses synaptic plasticity to explain changes in decision output given the same environment. Moreover, biological meanings of parameters of our decision-making model are discussed. In this paper, we explain at the micro-level (i.e., neurons and synapses) how observable decision-making behavior at the macro-level is acquired and achieved.

  5. Using Reinforcement Learning to Provide Stable Brain-Machine Interface Control Despite Neural Input Reorganization

    PubMed Central

    Pohlmeyer, Eric A.; Mahmoudi, Babak; Geng, Shijia; Prins, Noeline W.; Sanchez, Justin C.

    2014-01-01

    Brain-machine interface (BMI) systems give users direct neural control of robotic, communication, or functional electrical stimulation systems. As BMI systems begin transitioning from laboratory settings into activities of daily living, an important goal is to develop neural decoding algorithms that can be calibrated with a minimal burden on the user, provide stable control for long periods of time, and can be responsive to fluctuations in the decoder’s neural input space (e.g. neurons appearing or being lost amongst electrode recordings). These are significant challenges for static neural decoding algorithms that assume stationary input/output relationships. Here we use an actor-critic reinforcement learning architecture to provide an adaptive BMI controller that can successfully adapt to dramatic neural reorganizations, can maintain its performance over long time periods, and which does not require the user to produce specific kinetic or kinematic activities to calibrate the BMI. Two marmoset monkeys used the Reinforcement Learning BMI (RLBMI) to successfully control a robotic arm during a two-target reaching task. The RLBMI was initialized using random initial conditions, and it quickly learned to control the robot from brain states using only a binary evaluative feedback regarding whether previously chosen robot actions were good or bad. The RLBMI was able to maintain control over the system throughout sessions spanning multiple weeks. Furthermore, the RLBMI was able to quickly adapt and maintain control of the robot despite dramatic perturbations to the neural inputs, including a series of tests in which the neuron input space was deliberately halved or doubled. PMID:24498055
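
    A toy sketch in the spirit of the actor-critic scheme described above, reduced to a one-step (contextual-bandit) setting: simulated neural states for two targets are mapped to one of two actions, and both actor and critic are updated from binary evaluative feedback only. The feature model, learning rates, and reward coding are illustrative assumptions, not the RLBMI implementation.

    import numpy as np

    rng = np.random.default_rng(10)
    n_features, n_actions = 20, 2
    prototypes = rng.normal(size=(2, n_features))        # neural patterns for the two targets

    actor_w = np.zeros((n_actions, n_features))           # policy parameters
    critic_w = np.zeros(n_features)                       # state-value parameters
    alpha_actor, alpha_critic = 0.05, 0.05

    def softmax(z):
        z = z - z.max()
        e = np.exp(z)
        return e / e.sum()

    rewards = []
    for trial in range(2000):
        target = rng.integers(0, 2)
        x = prototypes[target] + rng.normal(scale=0.5, size=n_features)  # noisy neural state
        probs = softmax(actor_w @ x)
        action = rng.choice(n_actions, p=probs)
        reward = 1.0 if action == target else -1.0        # binary evaluative feedback
        td_error = reward - critic_w @ x                  # one-step (bandit) TD error
        critic_w += alpha_critic * td_error * x
        grad_log_pi = -probs[:, None] * x                 # d log pi / d actor_w
        grad_log_pi[action] += x
        actor_w += alpha_actor * td_error * grad_log_pi
        rewards.append(reward)

    print("success rate, last 200 trials:", np.mean(np.array(rewards[-200:]) > 0))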

  6. A Novel Robot System Integrating Biological and Mechanical Intelligence Based on Dissociated Neural Network-Controlled Closed-Loop Environment.

    PubMed

    Li, Yongcheng; Sun, Rong; Wang, Yuechao; Li, Hongyi; Zheng, Xiongfei

    2016-01-01

    We propose the architecture of a novel robot system merging biological and artificial intelligence, based on a neural controller connected to an external agent. We initially built a framework that connected a dissociated neural network to a mobile robot system to implement a realistic vehicle. The mobile robot system, characterized by a camera and a two-wheeled robot, was designed to execute a target-searching task. We modified the software architecture and developed a home-made stimulation generator to build a bi-directional connection between the biological and artificial components via simple binomial coding/decoding schemes. In this paper, we utilized a specific hierarchical dissociated neural network for the first time as the neural controller. In our work, neural cultures were successfully employed to control an artificial agent with high performance. Notably, under tetanus stimulus training, the robot performed better and better as the number of training cycles increased, owing to the short-term plasticity of the neural network (a form of reinforcement learning). Compared to previously reported work, we adopted an effective experimental protocol (i.e., increasing the number of training cycles) to ensure the occurrence of short-term plasticity, and preliminarily demonstrated that the improvement in the robot's performance could be caused by the plasticity development of the dissociated neural network alone. This new framework may offer solutions for the learning abilities of intelligent robots through the engineering application of neural network plasticity, and may provide theoretical inspiration for next-generation neuro-prostheses based on the bi-directional exchange of information within hierarchical neural networks.

  7. A Novel Robot System Integrating Biological and Mechanical Intelligence Based on Dissociated Neural Network-Controlled Closed-Loop Environment

    PubMed Central

    Wang, Yuechao; Li, Hongyi; Zheng, Xiongfei

    2016-01-01

    We propose the architecture of a novel robot system merging biological and artificial intelligence based on a neural controller connected to an external agent. We initially built a framework that connected the dissociated neural network to a mobile robot system to implement a realistic vehicle. The mobile robot system, comprising a camera and a two-wheeled robot, was designed to execute the target-searching task. We modified the software architecture and developed a custom stimulation generator to build a bi-directional connection between the biological and the artificial components via simple binomial coding/decoding schemes. In this paper, we utilized a specific hierarchical dissociated neural network for the first time as the neural controller. In our work, neural cultures were successfully employed to control an artificial agent with high performance. Surprisingly, under tetanus stimulation training, the robot's performance improved progressively as the number of training cycles increased, owing to the short-term plasticity of the neural network (a form of reinforcement learning). Compared with previously reported work, we adopted an effective experimental protocol (i.e., increasing the number of training cycles) to ensure that short-term plasticity occurred, and preliminarily demonstrated that the improvement in the robot's performance could be attributed to the plasticity of the dissociated neural network alone. This new framework may offer solutions for endowing intelligent robots with learning abilities through the engineering application of neural network plasticity, and may also provide theoretical inspiration for next-generation neuroprostheses based on the bi-directional exchange of information within hierarchical neural networks. PMID:27806074

  8. Vertically aligned carbon nanofiber as nano-neuron interface for monitoring neural function

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ericson, Milton Nance; McKnight, Timothy E; Melechko, Anatoli Vasilievich

    2012-01-01

    Neural chips, which are capable of simultaneous, multi-site neural recording and stimulation, have been used to detect and modulate neural activity for almost 30 years. As a neural interface, neural chips provide dynamic functional information for neural decoding and neural control. By improving sensitivity and spatial resolution, nano-scale electrodes may revolutionize neural detection and modulation at cellular and molecular levels as nano-neuron interfaces. We developed a carbon-nanofiber neural chip with lithographically defined arrays of vertically aligned carbon nanofiber electrodes and demonstrated its capability of both stimulating and monitoring electrophysiological signals from brain tissues in vitro and monitoring dynamic information of neuroplasticity. This novel nano-neuron interface can potentially serve as a precise, informative, biocompatible, and dual-mode neural interface for monitoring of both neuroelectrical and neurochemical activity at the single cell level and even inside the cell.

  9. Detecting intention to execute the next movement while performing current movement from EEG using global optimal constrained ICA.

    PubMed

    Eilbeigi, Elnaz; Setarehdan, Seyed Kamaledin

    2018-05-26

    Brain-computer interfaces (BCIs) are a promising tool in neurorehabilitation. The intention to perform a motor action can be detected from brain signals and used to control robotic devices. Most previous studies have focused on the onset of movement from a resting state, whereas in daily life motions occur continuously, and the neural activities correlated with evolving movements are yet to be investigated. First, we investigate the existence of neural correlates of the intention to replace an object on the table during a holding phase. Next, we present a new method to extract movement-related cortical potentials (MRCPs) from single-trial EEG. A novel method, global optimal constrained ICA (GocICA), implemented using Particle Swarm Optimization (PSO) and Charged System Search (CSS) techniques, is proposed to overcome the limitations of cICA. GocICA is then utilized to decode the intention to grasp and lift and the intention to replace, and the results are compared. We found that GocICA significantly improves intention detection performance. The best offline detection results were obtained with CSS-cICA for both kinds of intention. Furthermore, pseudo-online decoding showed that GocICA was able to predict both intentions before the onset of the related movements with the highest probability. Decoding of the next movement intention during the current movement is therefore possible, which can be used to create more natural neuroprostheses. The results demonstrate that GocICA is a promising new algorithm for single-trial MRCP detection, which can also be used to detect other types of ERPs such as the P300.

  10. Action prediction based on anticipatory brain potentials during simulated driving

    NASA Astrophysics Data System (ADS)

    Khaliliardali, Zahra; Chavarriaga, Ricardo; Gheorghe, Lucian Andrei; Millán, José del R.

    2015-12-01

    Objective. The ability of an automobile to infer the driver’s upcoming actions directly from neural signals could enrich the interaction of the car with its driver. Intelligent vehicles fitted with an on-board brain-computer interface able to decode the driver’s intentions can use this information to improve the driving experience. In this study we investigate the neural signatures of anticipation of specific actions, namely braking and accelerating. Approach. We investigated anticipatory slow cortical potentials in the electroencephalogram recorded from 18 healthy participants in a driving simulator, using a variant of the contingent negative variation (CNV) paradigm with Go and No-go conditions: a count-down of numbers followed by a ‘Start’/‘Stop’ cue. We report decoding performance before action onset using a quadratic discriminant analysis classifier based on temporal features. Main results. (i) Despite the visual and driving-related cognitive distractions, we show the presence of anticipatory event-related potentials locked to stimulus onset, similar to the widely reported CNV signal (with an average peak value of -8 μV at electrode Cz). (ii) We demonstrate discrimination between events requiring an action upon the subsequent imperative stimulus (Go condition, e.g. a ‘Red’ traffic light) and events that do not require such an action (No-go condition, e.g. a ‘Yellow’ light), with an average single-trial classification performance of 0.83 ± 0.13 for braking and 0.79 ± 0.12 for accelerating (area under the curve). (iii) We show that the centro-medial anticipatory potentials are observed as early as 320 ± 200 ms before the action, with a detection rate of 0.77 ± 0.12 in offline analysis. Significance. We show for the first time the feasibility of predicting the driver’s intention by decoding anticipation-related potentials during simulated car driving, with high recognition rates.

  11. Is race erased? Decoding race from patterns of neural activity when skin color is not diagnostic of group boundaries.

    PubMed

    Ratner, Kyle G; Kaul, Christian; Van Bavel, Jay J

    2013-10-01

    Several theories suggest that people do not represent race when it does not signify group boundaries. However, race is often associated with visually salient differences in skin tone and facial features. In this study, we investigated whether race could be decoded from distributed patterns of neural activity in the fusiform gyri and early visual cortex when visual features that often covary with race were orthogonal to group membership. To this end, we used multivariate pattern analysis to examine an fMRI dataset that was collected while participants assigned to mixed-race groups categorized own-race and other-race faces as belonging to their newly assigned group. Whereas conventional univariate analyses provided no evidence of race-based responses in the fusiform gyri or early visual cortex, multivariate pattern analysis suggested that race was represented within these regions. Moreover, race was represented in the fusiform gyri to a greater extent than early visual cortex, suggesting that the fusiform gyri results do not merely reflect low-level perceptual information (e.g. color, contrast) from early visual cortex. These findings indicate that patterns of activation within specific regions of the visual cortex may represent race even when overall activation in these regions is not driven by racial information.

  12. A closed-loop neurobotic system for fine touch sensing

    NASA Astrophysics Data System (ADS)

    Bologna, L. L.; Pinoteau, J.; Passot, J.-B.; Garrido, J. A.; Vogel, J.; Ros Vidal, E.; Arleo, A.

    2013-08-01

    Objective. Fine touch sensing relies on peripheral-to-central neurotransmission of somesthetic percepts, as well as on active motion policies shaping tactile exploration. This paper presents a novel neuroengineering framework for robotic applications based on the multistage processing of fine tactile information in the closed action-perception loop. Approach. The integrated system modules focus on (i) neural coding principles of spatiotemporal spiking patterns at the periphery of the somatosensory pathway, (ii) probabilistic decoding mechanisms mediating cortical-like tactile recognition and (iii) decision-making and low-level motor adaptation underlying active touch sensing. We probed the resulting neural architecture through a Braille reading task. Main results. Our results on the peripheral encoding of primary contact features are consistent with experimental data on human slow-adapting type I mechanoreceptors. They also suggest that second-order processing by cuneate neurons may resolve perceptual ambiguities, contributing to a fast and highly performing online discrimination of Braille inputs by a downstream probabilistic decoder. The implemented multilevel adaptive control provides robustness to motion inaccuracy, while making the number of finger accelerations covary with Braille character complexity. The resulting modulation of fingertip kinematics is coherent with that observed in human Braille readers. Significance. This work provides a basis for the design and implementation of modular neuromimetic systems for fine touch discrimination in robotics.

  13. [Modulation of Metacognition with Decoded Neurofeedback].

    PubMed

    Koizumi, Ai; Cortese, Aurelio; Amano, Kaoru; Kawato, Mitsuo; Lau, Hakwan

    2017-12-01

    Humans often assess their confidence in their own perception, e.g., feeling "confident" or "certain" of having seen a friend, or feeling "uncertain" about whether the phone rang. The neural mechanism underlying the metacognitive function that reflects subjective perception still remains under debate. We have previously used decoded neurofeedback (DecNef) to demonstrate that manipulating the multivoxel activation patterns in the frontoparietal network modulates perceptual confidence without affecting perceptual performance. The results provided clear evidence for a dissociation between perceptual confidence and performance and suggested a distinct role of the frontoparietal network in metacognition.

  14. Robust Nonlinear Neural Codes

    NASA Astrophysics Data System (ADS)

    Yang, Qianli; Pitkow, Xaq

    2015-03-01

    Most interesting natural sensory stimuli are encoded in the brain in a form that can only be decoded nonlinearly. But despite being a core function of the brain, nonlinear population codes are rarely studied and poorly understood. Interestingly, the few existing models of nonlinear codes are inconsistent with known architectural features of the brain. In particular, these codes have information content that scales with the size of the cortical population, even though that would violate the data processing inequality by exceeding the amount of information entering the sensory system. Here we provide a valid theory of nonlinear population codes by generalizing recent work on information-limiting correlations in linear population codes. Although these generalized, nonlinear information-limiting correlations bound the performance of any decoder, they also make decoding more robust to suboptimal computation, allowing many suboptimal decoders to achieve nearly the same efficiency as an optimal decoder. Although these correlations are extremely difficult to measure directly, particularly for nonlinear codes, we provide a simple, practical test by which one can use choice-related activity in small populations of neurons to determine whether decoding is suboptimal or optimal and limited by correlated noise. We conclude by describing an example computation in the vestibular system where this theory applies. QY and XP were supported by a grant from the McNair Foundation.

  15. Measuring Fisher Information Accurately in Correlated Neural Populations

    PubMed Central

    Kohn, Adam; Pouget, Alexandre

    2015-01-01

    Neural responses are known to be variable. In order to understand how this neural variability constrains behavioral performance, we need to be able to measure the reliability with which a sensory stimulus is encoded in a given population. However, such measures are challenging for two reasons: First, they must take into account noise correlations, which can have a large influence on reliability. Second, they need to be as efficient as possible, since the number of trials available in a set of neural recordings is usually limited by experimental constraints. Traditionally, cross-validated decoding has been used as a reliability measure, but it only provides a lower bound on reliability and underestimates reliability substantially in small datasets. We show that, if the number of trials per condition is larger than the number of neurons, there is an alternative, direct estimate of reliability which consistently leads to smaller errors and is much faster to compute. The superior performance of the direct estimator is evident both for simulated data and for neuronal population recordings from macaque primary visual cortex. Furthermore, we propose generalizations of the direct estimator which measure changes in stimulus encoding across conditions and the impact of correlations on encoding and decoding, typically denoted by I_shuffle and I_diag, respectively. PMID:26030735
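
    As a rough illustration of the contrast between cross-validated decoding and a direct estimate, the sketch below computes a naive plug-in estimate of linear Fisher information from the empirical tuning difference and pooled noise covariance of simulated trials. This is only the uncorrected plug-in quantity on toy data; it is not the bias-corrected direct estimator proposed in the paper, and all population sizes are arbitrary.

```python
"""Naive plug-in estimate of linear Fisher information from simulated trials.

Illustrative only: computes I = df^T C^{-1} df, where df is the empirical
tuning difference per unit stimulus and C the pooled noise covariance. The
bias-corrected 'direct' estimator discussed in the paper is not reproduced.
"""
import numpy as np

rng = np.random.default_rng(1)
n_neurons, n_trials, ds = 30, 200, 1.0      # population size, trials per condition, stimulus step

# Ground-truth tuning derivative and a correlated noise covariance.
f_prime = rng.normal(size=n_neurons)
C = 0.2 * np.outer(f_prime, f_prime) + np.eye(n_neurons)

# Responses to two nearby stimulus values s and s + ds.
L = np.linalg.cholesky(C)
r1 = (L @ rng.normal(size=(n_neurons, n_trials))).T
r2 = ds * f_prime + (L @ rng.normal(size=(n_neurons, n_trials))).T

# Plug-in estimate from the empirical tuning difference and pooled covariance.
df = (r2.mean(axis=0) - r1.mean(axis=0)) / ds
C_hat = 0.5 * (np.cov(r1, rowvar=False) + np.cov(r2, rowvar=False))
I_plugin = df @ np.linalg.solve(C_hat, df)

I_true = f_prime @ np.linalg.solve(C, f_prime)
print(f"true linear Fisher information: {I_true:.2f}")
print(f"naive plug-in estimate:         {I_plugin:.2f} (upward-biased for small trial counts)")
```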

  16. Specific CA3 neurons decode neural information of dentate granule cells evoked by paired-pulse stimulation in co-cultured networks.

    PubMed

    Poli, Daniele; DeMarse, Thomas B; Wheeler, Bruce C; Brewer, Gregory J

    2017-07-01

    CA3 and dentate gyrus (DG) neurons are cultured in two-chamber devices on multi-electrode arrays (MEAs) and connected via micro-tunnels. In order to evoke time-locked activity, paired-pulse stimulation is applied to 22 different sites and repeated 25 times in each well in 5 MEA co-cultures, and the results are compared to homologous CA3-CA3 and DG-DG control networks. In these hippocampal sub-regions, we focus on the mechanisms underpinning a network's ability to decode the identity of site-specific stimulation from analysis of the evoked network responses using a support vector machine classifier. Our results indicate that a pool of CA3 neurons is able to reliably decode the identity of the DG stimulation site.
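
    The core analysis idea can be sketched as follows: build one feature vector per stimulation trial, label it with the stimulated site, and cross-validate a support vector machine on those labels. The per-electrode evoked spike-count features and the simulated data below are assumptions, not the paper's actual pipeline.

```python
"""Sketch: decode stimulation-site identity from evoked responses with an SVM.

Simulated stand-in data: each trial is a vector of evoked spike counts per
recording electrode; the label is the stimulated site. Sizes are arbitrary.
"""
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(2)
n_sites, n_repeats, n_electrodes = 22, 25, 60   # mirrors a 22-site, 25-repeat protocol

# Each site evokes a characteristic (noisy) response pattern.
site_templates = rng.normal(size=(n_sites, n_electrodes))
X = np.vstack([t + 0.8 * rng.normal(size=(n_repeats, n_electrodes)) for t in site_templates])
y = np.repeat(np.arange(n_sites), n_repeats)

clf = make_pipeline(StandardScaler(), SVC(kernel="linear", C=1.0))
scores = cross_val_score(clf, X, y, cv=5)
print(f"site-decoding accuracy: {scores.mean():.2f} +/- {scores.std():.2f} (chance = {1/n_sites:.2f})")
```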

  17. Spatial band-pass filtering aids decoding musical genres from auditory cortex 7T fMRI.

    PubMed

    Sengupta, Ayan; Pollmann, Stefan; Hanke, Michael

    2018-01-01

    Spatial filtering strategies, combined with multivariate decoding analysis of BOLD images, have been used to investigate the nature of the neural signal underlying the discriminability of brain activity patterns evoked by sensory stimulation, primarily in the visual cortex. Reported evidence indicates that such signals are spatially broadband in nature and are not primarily comprised of fine-grained activation patterns. However, it is unclear whether this is a general property of the BOLD signal, or whether it is specific to the details of the employed analyses and stimuli. Here we performed an analysis of publicly available, high-resolution 7T fMRI of the BOLD response to musical genres in primary auditory cortex that matches a previously conducted study on decoding visual orientation from V1. The results show that the pattern of decoding accuracies with respect to different types and levels of spatial filtering is comparable to that obtained from V1, despite considerable differences in the respective cortical circuitry.
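
    The spatial filtering step can be sketched as a difference of Gaussians applied to each volume before pattern decoding. The kernel widths, voxel size, and use of scipy's gaussian_filter below are illustrative choices rather than the exact filters used in the study.

```python
"""Sketch: spatial band-pass filtering of a BOLD volume via difference of Gaussians.

A narrow low-pass minus a wider low-pass keeps an intermediate band of spatial
frequencies; the FWHM values below are arbitrary examples.
"""
import numpy as np
from scipy.ndimage import gaussian_filter

def fwhm_to_sigma_vox(fwhm_mm, voxel_size_mm):
    return (fwhm_mm / voxel_size_mm) / (2.0 * np.sqrt(2.0 * np.log(2.0)))

def spatial_bandpass(volume, low_fwhm_mm, high_fwhm_mm, voxel_size_mm=1.5):
    """Keep spatial structure between the two smoothing scales (low < high)."""
    fine = gaussian_filter(volume, fwhm_to_sigma_vox(low_fwhm_mm, voxel_size_mm))
    coarse = gaussian_filter(volume, fwhm_to_sigma_vox(high_fwhm_mm, voxel_size_mm))
    return fine - coarse

# Toy volume standing in for a single BOLD image.
rng = np.random.default_rng(3)
vol = rng.normal(size=(40, 40, 30))
band = spatial_bandpass(vol, low_fwhm_mm=2.0, high_fwhm_mm=8.0)
print(band.shape, float(band.std()))
```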

  18. Decoding Face Information in Time, Frequency and Space from Direct Intracranial Recordings of the Human Brain

    PubMed Central

    Oya, Hiroyuki; Howard, Matthew A.; Adolphs, Ralph

    2008-01-01

    Faces are processed by a neural system with distributed anatomical components, but the roles of these components remain unclear. A dominant theory of face perception postulates independent representations of invariant aspects of faces (e.g., identity) in ventral temporal cortex, including the fusiform gyrus, and changeable aspects of faces (e.g., emotion) in lateral temporal cortex, including the superior temporal sulcus. Here we recorded neuronal activity directly from the cortical surface in 9 neurosurgical subjects undergoing epilepsy monitoring while they viewed static and dynamic facial expressions. Applying novel decoding analyses to the power spectrogram of electrocorticograms (ECoG) from over 100 contacts in ventral and lateral temporal cortex, we found better representation of both invariant and changeable aspects of faces in ventral than lateral temporal cortex. Critical information for discriminating faces from geometric patterns was carried by power modulations between 50 and 150 Hz. For both static and dynamic face stimuli, we obtained higher decoding performance in ventral than lateral temporal cortex. For discriminating fearful from happy expressions, critical information was carried by power modulations between 60 and 150 Hz and below 30 Hz, and was again better decoded in ventral than lateral temporal cortex. Task-relevant attention improved decoding accuracy by more than 10% across a wide frequency range in ventral, but not at all in lateral, temporal cortex. Spatial searchlight decoding showed that decoding performance was highest around the middle fusiform gyrus. Finally, we found that the right hemisphere, in general, showed superior decoding to the left hemisphere. Taken together, our results challenge the dominant model of independent representation of invariant and changeable face aspects: information about both face attributes was better decoded from a single region in the middle fusiform gyrus. PMID:19065268

  19. The brain's silent messenger: using selective attention to decode human thought for brain-based communication.

    PubMed

    Naci, Lorina; Cusack, Rhodri; Jia, Vivian Z; Owen, Adrian M

    2013-05-29

    The interpretation of human thought from brain activity, without recourse to speech or action, is one of the most provoking and challenging frontiers of modern neuroscience. In particular, patients who are fully conscious and awake, yet, due to brain damage, are unable to show any behavioral responsivity, expose the limits of the neuromuscular system and the necessity for alternate forms of communication. Although it is well established that selective attention can significantly enhance the neural representation of attended sounds, it remains, thus far, untested as a response modality for brain-based communication. We asked whether its effect could be reliably used to decode answers to binary (yes/no) questions. Fifteen healthy volunteers answered questions (e.g., "Do you have brothers or sisters?") in the fMRI scanner, by selectively attending to the appropriate word ("yes" or "no"). Ninety percent of the answers were decoded correctly based on activity changes within the attention network. The majority of volunteers conveyed their answers with less than 3 min of scanning, suggesting that this technique is suited for communication in a reasonable amount of time. Formal comparison with the current best-established fMRI technique for binary communication revealed improved individual success rates and scanning times required to detect responses. This novel fMRI technique is intuitive, easy to use in untrained participants, and reliably robust within brief scanning times. Possible applications include communication with behaviorally nonresponsive patients.

  20. A 16-Channel Nonparametric Spike Detection ASIC Based on EC-PC Decomposition.

    PubMed

    Wu, Tong; Xu, Jian; Lian, Yong; Khalili, Azam; Rastegarnia, Amir; Guan, Cuntai; Yang, Zhi

    2016-02-01

    In extracellular neural recording experiments, detecting neural spikes is an important step for reliable information decoding. A successful implementation in integrated circuits can achieve substantial data volume reduction, potentially enabling wireless operation and closed-loop systems. In this paper, we report a 16-channel neural spike detection chip based on a customized spike detection method named the exponential component-polynomial component (EC-PC) algorithm. This algorithm features a reliable prediction of spikes by applying a probability threshold. The chip takes raw data as input and outputs three data streams simultaneously: field potentials, band-pass filtered neural data, and spiking probability maps. The algorithm parameters are configured on-chip automatically based on the input data, which avoids manual parameter tuning. The chip has been tested with both in vivo experiments for functional verification and bench-top experiments for quantitative performance assessment. The system has a total power consumption of 1.36 mW and occupies an area of 6.71 mm² for 16 channels. When tested on synthesized datasets with spikes and noise segments extracted from in vivo preparations and scaled according to required precisions, the chip outperforms other detectors. A credit-card-sized prototype board has been developed to provide power and data management through a USB port.
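
    For context, the sketch below shows a conventional software baseline, an amplitude-threshold detector with a robust (median absolute deviation) noise estimate, of the kind such chips are typically compared against. It is not the EC-PC probability-map algorithm itself, and all numbers are arbitrary.

```python
"""Sketch: baseline amplitude-threshold spike detection on band-passed data.

Noise is estimated as sigma = median(|x|) / 0.6745 and the threshold is
k * sigma. This is a generic baseline, not the EC-PC detector described above.
"""
import numpy as np
from scipy.signal import butter, sosfiltfilt

def detect_spikes(raw, fs, band=(300.0, 3000.0), k=4.0, refractory_s=1e-3):
    sos = butter(4, band, btype="bandpass", fs=fs, output="sos")
    x = sosfiltfilt(sos, raw)
    sigma = np.median(np.abs(x)) / 0.6745              # robust noise estimate
    crossings = np.flatnonzero((x[1:] < -k * sigma) & (x[:-1] >= -k * sigma))
    keep, last = [], -np.inf
    for idx in crossings:                              # enforce a refractory period
        if idx - last >= refractory_s * fs:
            keep.append(idx)
            last = idx
    return np.asarray(keep), x

# Toy trace: noise plus a few injected negative-going spikes.
fs = 30000
rng = np.random.default_rng(4)
trace = rng.normal(scale=10.0, size=fs)                # 1 s of simulated 'recording'
for t in (5000, 12000, 21000):
    trace[t:t + 30] -= 80.0 * np.hanning(30)
spike_idx, _ = detect_spikes(trace, fs)
print("detected spike samples:", spike_idx)
```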

  1. Task-induced frequency modulation features for brain-computer interfacing.

    PubMed

    Jayaram, Vinay; Hohmann, Matthias; Just, Jennifer; Schölkopf, Bernhard; Grosse-Wentrup, Moritz

    2017-10-01

    Task-induced amplitude modulation of neural oscillations is routinely used in brain-computer interfaces (BCIs) for decoding subjects' intents, and underlies some of the most robust and common methods in the field, such as common spatial patterns and Riemannian geometry. While there has been some interest in phase-related features for classification, both approaches usually presuppose that the frequencies of neural oscillations remain stable across tasks. We investigate here whether features based on task-induced modulation of the frequency of neural oscillations enable decoding of subjects' intents with an accuracy comparable to task-induced amplitude modulation. We compare cross-validated classification accuracies using the amplitude-modulation and frequency-modulation features, as well as a joint feature space, across subjects in various paradigms and pre-processing conditions. We show results for a motor imagery task, a cognitive task, and preliminary results in patients with amyotrophic lateral sclerosis (ALS), using both common spatial patterns and Laplacian filtering. The frequency features alone do not significantly outperform traditional amplitude-modulation features, and in some cases perform significantly worse. However, across both tasks and pre-processing conditions in healthy subjects, the joint space significantly outperforms either the frequency or amplitude features alone. The only exception is the ALS patients, for whom the dataset is of insufficient size to draw statistically significant conclusions. Task-induced frequency modulation is robust and straightforward to compute, and increases performance when added to standard amplitude-modulation features across paradigms. This allows more information to be extracted from the EEG signal cheaply and can be used throughout the field of BCIs.
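
    One common way to obtain band-limited amplitude and frequency features is via the analytic signal: band-pass the EEG, apply the Hilbert transform, and take the envelope and the derivative of the unwrapped phase. The sketch below illustrates this generic recipe; the band edges, trial length, and feature summaries are assumptions and may differ from the features used in the paper.

```python
"""Sketch: amplitude- and frequency-modulation features from one EEG channel.

Band-pass around the mu/alpha band, form the analytic signal, then take the
mean envelope (amplitude feature) and mean instantaneous frequency
(frequency feature) per trial. Band edges and trial length are examples.
"""
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def trial_features(eeg, fs, band=(8.0, 13.0)):
    sos = butter(4, band, btype="bandpass", fs=fs, output="sos")
    x = sosfiltfilt(sos, eeg)
    analytic = hilbert(x)
    amplitude = np.abs(analytic)
    inst_freq = np.diff(np.unwrap(np.angle(analytic))) * fs / (2.0 * np.pi)
    return amplitude.mean(), inst_freq.mean()

fs = 250
rng = np.random.default_rng(5)
t = np.arange(0, 3.0, 1.0 / fs)                        # one 3 s trial
eeg = np.sin(2 * np.pi * 10.5 * t) + 0.5 * rng.normal(size=t.size)
amp, freq = trial_features(eeg, fs)
print(f"mean envelope: {amp:.2f}, mean instantaneous frequency: {freq:.1f} Hz")
```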

  2. The Neural Dynamics of Attentional Selection in Natural Scenes.

    PubMed

    Kaiser, Daniel; Oosterhof, Nikolaas N; Peelen, Marius V

    2016-10-12

    The human visual system can only represent a small subset of the many objects present in cluttered scenes at any given time, such that objects compete for representation. Despite these processing limitations, the detection of object categories in cluttered natural scenes is remarkably rapid. How does the brain efficiently select goal-relevant objects from cluttered scenes? In the present study, we used multivariate decoding of magneto-encephalography (MEG) data to track the neural representation of within-scene objects as a function of top-down attentional set. Participants detected categorical targets (cars or people) in natural scenes. The presence of these categories within a scene was decoded from MEG sensor patterns by training linear classifiers on differentiating cars and people in isolation and testing these classifiers on scenes containing one of the two categories. The presence of a specific category in a scene could be reliably decoded from MEG response patterns as early as 160 ms, despite substantial scene clutter and variation in the visual appearance of each category. Strikingly, we find that these early categorical representations fully depend on the match between visual input and top-down attentional set: only objects that matched the current attentional set were processed to the category level within the first 200 ms after scene onset. A sensor-space searchlight analysis revealed that this early attention bias was localized to lateral occipitotemporal cortex, reflecting top-down modulation of visual processing. These results show that attention quickly resolves competition between objects in cluttered natural scenes, allowing for the rapid neural representation of goal-relevant objects. Efficient attentional selection is crucial in many everyday situations. For example, when driving a car, we need to quickly detect obstacles, such as pedestrians crossing the street, while ignoring irrelevant objects. How can humans efficiently perform such tasks, given the multitude of objects contained in real-world scenes? Here we used multivariate decoding of magnetoencephalography data to characterize the neural underpinnings of attentional selection in natural scenes with high temporal precision. We show that brain activity quickly tracks the presence of objects in scenes, but crucially only for those objects that were immediately relevant for the participant. These results provide evidence for fast and efficient attentional selection that mediates the rapid detection of goal-relevant objects in real-world environments.
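
    The cross-decoding logic, training classifiers on isolated-object trials and testing them on cluttered-scene trials, can be sketched with any linear classifier on sensor patterns. The simulated data and the choice of logistic regression below are assumptions for illustration only.

```python
"""Sketch: cross-condition decoding of object category from MEG sensor patterns.

Train on responses to isolated objects (cars vs. people), test on responses to
scenes that contain one of the two categories. Data are simulated.
"""
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(6)
n_sensors, n_train, n_test = 100, 120, 80

# Shared category signal plus condition-specific nuisance variation.
category_axis = rng.normal(size=n_sensors)
y_train = rng.integers(0, 2, n_train)
y_test = rng.integers(0, 2, n_test)
X_train = np.outer(2 * y_train - 1, category_axis) + rng.normal(size=(n_train, n_sensors))
X_test = np.outer(2 * y_test - 1, 0.6 * category_axis) + 1.5 * rng.normal(size=(n_test, n_sensors))

clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
clf.fit(X_train, y_train)                 # isolated-object trials
acc = clf.score(X_test, y_test)           # scene trials (weaker, noisier category signal)
print(f"cross-decoding accuracy: {acc:.2f}")
```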

  3. Planning Ahead: Object-Directed Sequential Actions Decoded from Human Frontoparietal and Occipitotemporal Networks

    PubMed Central

    Gallivan, Jason P.; Johnsrude, Ingrid S.; Randall Flanagan, J.

    2016-01-01

    Object-manipulation tasks (e.g., drinking from a cup) typically involve sequencing together a series of distinct motor acts (e.g., reaching toward, grasping, lifting, and transporting the cup) in order to accomplish some overarching goal (e.g., quenching thirst). Although several studies in humans have investigated the neural mechanisms supporting the planning of visually guided movements directed toward objects (such as reaching or pointing), only a handful have examined how manipulatory sequences of actions—those that occur after an object has been grasped—are planned and represented in the brain. Here, using event-related functional MRI and pattern decoding methods, we investigated the neural basis of real-object manipulation using a delayed-movement task in which participants first prepared and then executed different object-directed action sequences that varied either in their complexity or final spatial goals. Consistent with previous reports of preparatory brain activity in non-human primates, we found that activity patterns in several frontoparietal areas reliably predicted entire action sequences in advance of movement. Notably, we found that similar sequence-related information could also be decoded from pre-movement signals in object- and body-selective occipitotemporal cortex (OTC). These findings suggest that both frontoparietal and occipitotemporal circuits are engaged in transforming object-related information into complex, goal-directed movements. PMID:25576538

  4. Hearing Scenes: A Neuromagnetic Signature of Auditory Source and Reverberant Space Separation

    PubMed Central

    Oliva, Aude

    2017-01-01

    Perceiving the geometry of surrounding space is a multisensory process, crucial to contextualizing object perception and guiding navigation behavior. Humans can make judgments about surrounding spaces from reverberation cues, caused by sounds reflecting off multiple interior surfaces. However, it remains unclear how the brain represents reverberant spaces separately from sound sources. Here, we report separable neural signatures of auditory space and source perception during magnetoencephalography (MEG) recording as subjects listened to brief sounds convolved with monaural room impulse responses (RIRs). The decoding signature of sound sources began at 57 ms after stimulus onset and peaked at 130 ms, while space decoding started at 138 ms and peaked at 386 ms. Importantly, these neuromagnetic responses were readily dissociable in form and time: while sound source decoding exhibited an early and transient response, the neural signature of space was sustained and independent of the original source that produced it. The reverberant space response was robust to variations in sound source, and vice versa, indicating a generalized response not tied to specific source-space combinations. These results provide the first neuromagnetic evidence for robust, dissociable auditory source and reverberant space representations in the human brain and reveal the temporal dynamics of how auditory scene analysis extracts percepts from complex naturalistic auditory signals. PMID:28451630

  5. Spatiotemporal Spike Coding of Behavioral Adaptation in the Dorsal Anterior Cingulate Cortex

    PubMed Central

    Logiaco, Laureline; Quilodran, René; Procyk, Emmanuel; Arleo, Angelo

    2015-01-01

    The frontal cortex controls behavioral adaptation in environments governed by complex rules. Many studies have established the relevance of firing rate modulation after informative events signaling whether and how to update the behavioral policy. However, whether the spatiotemporal features of these neuronal activities contribute to encoding imminent behavioral updates remains unclear. We investigated this issue in the dorsal anterior cingulate cortex (dACC) of monkeys while they adapted their behavior based on their memory of feedback from past choices. We analyzed spike trains of both single units and pairs of simultaneously recorded neurons using an algorithm that emulates different biologically plausible decoding circuits. This method permits the assessment of the performance of both spike-count and spike-timing sensitive decoders. In response to the feedback, single neurons emitted stereotypical spike trains whose temporal structure identified informative events with higher accuracy than mere spike count. The optimal decoding time scale was in the range of 70–200 ms, which is significantly shorter than the memory time scale required by the behavioral task. Importantly, the temporal spiking patterns of single units were predictive of the monkeys’ behavioral response time. Furthermore, some features of these spiking patterns often varied between jointly recorded neurons. Altogether, our results suggest that dACC drives behavioral adaptation through complex spatiotemporal spike coding. They also indicate that downstream networks, which decode dACC feedback signals, are unlikely to act as mere neural integrators. PMID:26266537

  6. Embedded System for Prosthetic Control Using Implanted Neuromuscular Interfaces Accessed Via an Osseointegrated Implant.

    PubMed

    Mastinu, Enzo; Doguet, Pascal; Botquin, Yohan; Hakansson, Bo; Ortiz-Catalan, Max

    2017-08-01

    Despite the technological progress in robotics achieved in the last decades, prosthetic limbs still lack functionality, reliability, and comfort. Recently, an implanted neuromusculoskeletal interface built upon osseointegration was developed and tested in humans, namely the Osseointegrated Human-Machine Gateway. Here, we present an embedded system to exploit the advantages of this technology. Our artificial limb controller allows for bioelectric signals acquisition, processing, decoding of motor intent, prosthetic control, and sensory feedback. It includes a neurostimulator to provide direct neural feedback based on sensory information. The system was validated using real-time task characterization, power consumption evaluation, and myoelectric pattern recognition performance. Functionality was proven in a first pilot patient from whom results of daily usage were obtained. The system was designed to be reliably used in activities of daily living, as well as a research platform to monitor prosthesis usage and training, machine-learning-based control algorithms, and neural stimulation paradigms.

  7. State-based decoding of hand and finger kinematics using neuronal ensemble and LFP activity during dexterous reach-to-grasp movements

    PubMed Central

    Mollazadeh, Mohsen; Davidson, Adam G.; Schieber, Marc H.; Thakor, Nitish V.

    2013-01-01

    The performance of brain-machine interfaces (BMIs) that continuously control upper limb neuroprostheses may benefit from distinguishing periods of posture and movement so as to prevent inappropriate movement of the prosthesis. Few studies, however, have investigated how decoding behavioral states and detecting the transitions between posture and movement could be used autonomously to trigger a kinematic decoder. We recorded simultaneous neuronal ensemble and local field potential (LFP) activity from microelectrode arrays in primary motor cortex (M1) and dorsal (PMd) and ventral (PMv) premotor areas of two male rhesus monkeys performing a center-out reach-and-grasp task, while upper limb kinematics were tracked with a motion capture system with markers on the dorsal aspect of the forearm, hand, and fingers. A state decoder was trained to distinguish four behavioral states (baseline, reaction, movement, hold), while a kinematic decoder was trained to continuously decode hand end point position and 18 joint angles of the wrist and fingers. LFP amplitude most accurately predicted transition into the reaction (62%) and movement (73%) states, while spikes most accurately decoded arm, hand, and finger kinematics during movement. Using an LFP-based state decoder to trigger a spike-based kinematic decoder [r = 0.72, root mean squared error (RMSE) = 0.15] significantly improved decoding of reach-to-grasp movements from baseline to final hold, compared with either a spike-based state decoder combined with a spike-based kinematic decoder (r = 0.70, RMSE = 0.17) or a spike-based kinematic decoder alone (r = 0.67, RMSE = 0.17). Combining LFP-based state decoding with spike-based kinematic decoding may be a valuable step toward the realization of BMI control of a multifingered neuroprosthesis performing dexterous manipulation. PMID:23536714
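
    The gating idea, a state decoder that triggers a continuous kinematic decoder, can be sketched as a classifier on LFP amplitude features whose 'movement' output enables a linear spike-to-kinematics map, with the output otherwise held at rest. All of the signals, features, and decoder choices below are simplified stand-ins for those used in the study.

```python
"""Sketch: an LFP-based state decoder gating a spike-based kinematic decoder.

A logistic classifier on LFP amplitude features flags 'movement' vs. 'posture';
only when movement is decoded does a ridge-regression spike decoder update the
decoded velocity. All signals here are simulated placeholders.
"""
import numpy as np
from sklearn.linear_model import LogisticRegression, Ridge

rng = np.random.default_rng(7)
n_bins, n_units, n_lfp = 600, 40, 16

moving = (np.arange(n_bins) % 100) > 50                     # alternating posture/movement epochs
velocity = np.where(moving, np.sin(np.arange(n_bins) / 7.0), 0.0)

# Spikes are tuned to velocity; LFP amplitude rises during movement.
rates = np.clip(2.0 + 1.5 * np.outer(velocity, rng.normal(size=n_units)), 0.1, None)
spikes = rng.poisson(rates)
lfp_amp = 1.0 + 2.0 * moving[:, None] + 0.5 * rng.normal(size=(n_bins, n_lfp))

half = n_bins // 2                                          # train / test split
state_clf = LogisticRegression(max_iter=1000).fit(lfp_amp[:half], moving[:half])
kin_dec = Ridge(alpha=1.0).fit(spikes[:half][moving[:half]], velocity[:half][moving[:half]])

# Online-style readout on the test half: hold at rest unless movement is decoded.
gate = state_clf.predict(lfp_amp[half:]).astype(bool)
decoded_vel = np.zeros(n_bins - half)
decoded_vel[gate] = kin_dec.predict(spikes[half:][gate])
print(f"correlation with true velocity on held-out bins: "
      f"{np.corrcoef(decoded_vel, velocity[half:])[0, 1]:.2f}")
```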

  8. Can Emotional and Behavioral Dysregulation in Youth Be Decoded from Functional Neuroimaging?

    PubMed

    Portugal, Liana C L; Rosa, Maria João; Rao, Anil; Bebko, Genna; Bertocci, Michele A; Hinze, Amanda K; Bonar, Lisa; Almeida, Jorge R C; Perlman, Susan B; Versace, Amelia; Schirda, Claudiu; Travis, Michael; Gill, Mary Kay; Demeter, Christine; Diwadkar, Vaibhav A; Ciuffetelli, Gary; Rodriguez, Eric; Forbes, Erika E; Sunshine, Jeffrey L; Holland, Scott K; Kowatch, Robert A; Birmaher, Boris; Axelson, David; Horwitz, Sarah M; Arnold, Eugene L; Fristad, Mary A; Youngstrom, Eric A; Findling, Robert L; Pereira, Mirtes; Oliveira, Leticia; Phillips, Mary L; Mourao-Miranda, Janaina

    2016-01-01

    High comorbidity among pediatric disorders characterized by behavioral and emotional dysregulation poses problems for diagnosis and treatment, and suggests that these disorders may be better conceptualized as dimensions of abnormal behaviors. Furthermore, identifying neuroimaging biomarkers related to dimensional measures of behavior may provide targets to guide individualized treatment. We aimed to use functional neuroimaging and pattern regression techniques to determine whether patterns of brain activity could accurately decode individual-level severity on a dimensional scale measuring behavioral and emotional dysregulation at two different time points. A sample of fifty-seven youth (mean age: 14.5 years; 32 males) was selected from a multi-site study of youth with parent-reported behavioral and emotional dysregulation. Participants performed a block-design reward paradigm during functional Magnetic Resonance Imaging (fMRI). Pattern regression analyses consisted of Relevance Vector Regression (RVR) and two cross-validation strategies implemented in the Pattern Recognition for Neuroimaging toolbox (PRoNTo). Medication was treated as a binary confounding variable. Decoded and actual clinical scores were compared using Pearson's correlation coefficient (r) and mean squared error (MSE) to evaluate the models. A permutation test was applied to estimate significance levels. Relevance Vector Regression identified patterns of neural activity associated with symptoms of behavioral and emotional dysregulation at the initial study screen and close to the fMRI scanning session. The correlation and the mean squared error between actual and decoded symptoms were significant at the initial study screen and close to the fMRI scanning session. However, after controlling for potential medication effects, results remained significant only for decoding symptoms at the initial study screen. The neural regions with the highest contribution to the pattern regression model included the cerebellum, sensory-motor, and fronto-limbic areas. The combination of pattern regression models and neuroimaging can help to determine the severity of behavioral and emotional dysregulation in youth at different time points.
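
    The analysis pattern, whole-brain pattern regression with cross-validation and a permutation test, can be sketched with scikit-learn. RVR itself is not part of scikit-learn, so the sketch substitutes ARDRegression, a related sparse Bayesian linear model; the activation patterns and symptom scores are simulated.

```python
"""Sketch: decoding a dimensional symptom score from activation patterns.

ARDRegression (a sparse Bayesian linear model) stands in here for relevance
vector regression, which is not included in scikit-learn. Data are simulated:
57 'participants', one activation-pattern vector and one symptom score each.
"""
import numpy as np
from sklearn.linear_model import ARDRegression
from sklearn.model_selection import KFold, cross_val_predict, permutation_test_score

rng = np.random.default_rng(8)
n_subjects, n_features = 57, 100
weights = np.zeros(n_features)
weights[:10] = rng.normal(size=10)                     # a few informative features
X = rng.normal(size=(n_subjects, n_features))
y = X @ weights + 0.5 * rng.normal(size=n_subjects)    # dimensional severity score

cv = KFold(n_splits=5, shuffle=True, random_state=0)
model = ARDRegression()
y_pred = cross_val_predict(model, X, y, cv=cv)
r = np.corrcoef(y, y_pred)[0, 1]
mse = np.mean((y - y_pred) ** 2)

# Permutation test of the cross-validated score against shuffled labels.
score, _, p_value = permutation_test_score(model, X, y, cv=cv, n_permutations=50, random_state=0)
print(f"decoded vs. actual: r = {r:.2f}, MSE = {mse:.2f}, permutation p = {p_value:.3f}")
```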

  9. A Real-Time Brain-Machine Interface Combining Motor Target and Trajectory Intent Using an Optimal Feedback Control Design

    PubMed Central

    Shanechi, Maryam M.; Williams, Ziv M.; Wornell, Gregory W.; Hu, Rollin C.; Powers, Marissa; Brown, Emery N.

    2013-01-01

    Real-time brain-machine interfaces (BMI) have focused on either estimating the continuous movement trajectory or target intent. However, natural movement often incorporates both. Additionally, BMIs can be modeled as a feedback control system in which the subject modulates the neural activity to move the prosthetic device towards a desired target while receiving real-time sensory feedback of the state of the movement. We develop a novel real-time BMI using an optimal feedback control design that jointly estimates the movement target and trajectory of monkeys in two stages. First, the target is decoded from neural spiking activity before movement initiation. Second, the trajectory is decoded by combining the decoded target with the peri-movement spiking activity using an optimal feedback control design. This design exploits a recursive Bayesian decoder that uses an optimal feedback control model of the sensorimotor system to take into account the intended target location and the sensory feedback in its trajectory estimation from spiking activity. The real-time BMI processes the spiking activity directly using point process modeling. We implement the BMI in experiments consisting of an instructed-delay center-out task in which monkeys are presented with a target location on the screen during a delay period and then have to move a cursor to it without touching the incorrect targets. We show that the two-stage BMI performs more accurately than either stage alone. Correct target prediction can compensate for inaccurate trajectory estimation and vice versa. The optimal feedback control design also results in trajectories that are smoother and have lower estimation error. The two-stage decoder also performs better than linear regression approaches in offline cross-validation analyses. Our results demonstrate the advantage of a BMI design that jointly estimates the target and trajectory of movement and more closely mimics the sensorimotor control system. PMID:23593130

  10. Error Control Coding Techniques for Space and Satellite Communications

    NASA Technical Reports Server (NTRS)

    Lin, Shu

    2000-01-01

    This paper presents a concatenated turbo coding system in which a Reed-Solomon outer code is concatenated with a binary turbo inner code. In the proposed system, the outer code decoder and the inner turbo code decoder interact to achieve both good bit error and frame error performances. The outer code decoder helps the inner turbo code decoder to terminate its decoding iteration while the inner turbo code decoder provides soft-output information to the outer code decoder to carry out a reliability-based soft-decision decoding. In the case that the outer code decoding fails, the outer code decoder instructs the inner code decoder to continue its decoding iterations until the outer code decoding is successful or a preset maximum number of decoding iterations is reached. This interaction between outer and inner code decoders reduces decoding delay. Also presented in the paper are an effective criterion for stopping the iteration process of the inner code decoder and a new reliability-based decoding algorithm for nonbinary codes.
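
    The outer/inner interaction can be sketched purely at the level of control flow, as below. The TurboInner and ReedSolomonOuter classes are stubs that only mimic the interface described above; real turbo and Reed-Solomon decoders would replace them.

```python
"""Control-flow sketch of the outer/inner decoder interaction described above.

TurboInner and ReedSolomonOuter are placeholders, not real decoders. The loop
shows the stopping logic: keep iterating the inner turbo decoder until the
outer RS decoder succeeds or a maximum iteration count is reached.
"""
import random

class TurboInner:
    """Stub inner decoder: each iteration 'improves' its soft output."""
    def __init__(self, noisy_frame):
        self.frame = noisy_frame
        self.quality = 0.0
    def iterate(self):
        self.quality += random.uniform(0.1, 0.3)       # pretend reliability grows
        return list(self.frame)                        # soft output (stub: just the bits)

class ReedSolomonOuter:
    """Stub outer decoder: succeeds once the inner soft output is reliable enough."""
    def decode(self, soft_output, quality):
        return ("decoded frame", True) if quality >= 1.0 else (None, False)

def decode_frame(noisy_frame, max_inner_iterations=10):
    inner = TurboInner(noisy_frame)
    outer = ReedSolomonOuter()
    for iteration in range(1, max_inner_iterations + 1):
        soft = inner.iterate()                         # one inner turbo iteration
        frame, ok = outer.decode(soft, inner.quality)  # reliability-based outer attempt
        if ok:                                         # outer success terminates early,
            return frame, iteration                    # reducing decoding delay
    return None, max_inner_iterations                  # declare a frame error

random.seed(0)
result, used = decode_frame([0, 1, 1, 0, 1])
print(f"result: {result!r} after {used} inner iteration(s)")
```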

  11. An Interactive Concatenated Turbo Coding System

    NASA Technical Reports Server (NTRS)

    Liu, Ye; Tang, Heng; Lin, Shu; Fossorier, Marc

    1999-01-01

    This paper presents a concatenated turbo coding system in which a Reed-Solomon outer code is concatenated with a binary turbo inner code. In the proposed system, the outer code decoder and the inner turbo code decoder interact to achieve both good bit error and frame error performances. The outer code decoder helps the inner turbo code decoder to terminate its decoding iteration while the inner turbo code decoder provides soft-output information to the outer code decoder to carry out a reliability-based soft-decision decoding. In the case that the outer code decoding fails, the outer code decoder instructs the inner code decoder to continue its decoding iterations until the outer code decoding is successful or a preset maximum number of decoding iterations is reached. This interaction between outer and inner code decoders reduces decoding delay. Also presented in the paper are an effective criterion for stopping the iteration process of the inner code decoder and a new reliability-based decoding algorithm for nonbinary codes.

  12. Brain-Machine Interface control of a robot arm using actor-critic reinforcement learning.

    PubMed

    Pohlmeyer, Eric A; Mahmoudi, Babak; Geng, Shijia; Prins, Noeline; Sanchez, Justin C

    2012-01-01

    Here we demonstrate how a marmoset monkey can use a reinforcement learning (RL) Brain-Machine Interface (BMI) to effectively control the movements of a robot arm for a reaching task. In this work, an actor-critic RL algorithm used neural ensemble activity in the monkey's motor cortex to control the robot movements during a two-target decision task. This novel approach to decoding offers unique advantages for BMI control applications. Compared to supervised learning decoding methods, the actor-critic RL algorithm does not require an explicit set of training data to create a static control model, but rather it incrementally adapts the model parameters according to its current performance, in this case requiring only a very basic feedback signal. We show how this algorithm achieved high performance (94%) when mapping the monkey's neural states to robot actions, and only needed to experience a few trials before obtaining accurate real-time control of the robot arm. Since RL methods responsively adapt and adjust their parameters, they can provide a method to create BMIs that are robust against perturbations caused by changes in either the neural input space or the output actions they generate under different task requirements or goals.

  13. Clique-Based Neural Associative Memories with Local Coding and Precoding.

    PubMed

    Mofrad, Asieh Abolpour; Parker, Matthew G; Ferdosi, Zahra; Tadayon, Mohammad H

    2016-08-01

    Techniques from coding theory are able to improve the efficiency of neuroinspired and neural associative memories by forcing some construction and constraints on the network. In this letter, the approach is to embed coding techniques into neural associative memory in order to increase their performance in the presence of partial erasures. The motivation comes from recent work by Gripon, Berrou, and coauthors, which revisited Willshaw networks and presented a neural network with interacting neurons that partitioned into clusters. The model introduced stores patterns as small-size cliques that can be retrieved in spite of partial error. We focus on improving the success of retrieval by applying two techniques: doing a local coding in each cluster and then applying a precoding step. We use a slightly different decoding scheme, which is appropriate for partial erasures and converges faster. Although the ideas of local coding and precoding are not new, the way we apply them is different. Simulations show an increase in the pattern retrieval capacity for both techniques. Moreover, we use self-dual additive codes over field [Formula: see text], which have very interesting properties and a simple-graph representation.

  14. Multiunit Activity-Based Real-Time Limb-State Estimation from Dorsal Root Ganglion Recordings

    PubMed Central

    Han, Sungmin; Chu, Jun-Uk; Kim, Hyungmin; Park, Jong Woong; Youn, Inchan

    2017-01-01

    Proprioceptive afferent activities could be useful for providing sensory feedback signals for closed-loop control during functional electrical stimulation (FES). However, most previous studies have used the single-unit activity of individual neurons to extract sensory information from proprioceptive afferents. This study proposes a new decoding method to estimate ankle and knee joint angles using multiunit activity data. Proprioceptive afferent signals were recorded from a dorsal root ganglion with a single-shank microelectrode during passive movements of the ankle and knee joints, and joint angles were measured as kinematic data. The mean absolute value (MAV) was extracted from the multiunit activity data, and a dynamically driven recurrent neural network (DDRNN) was used to estimate ankle and knee joint angles. The multiunit activity-based MAV feature was sufficiently informative to estimate limb states, and the DDRNN showed a better decoding performance than conventional linear estimators. In addition, processing time delay satisfied real-time constraints. These results demonstrated that the proposed method could be applicable for providing real-time sensory feedback signals in closed-loop FES systems. PMID:28276474
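
    The feature-extraction step can be sketched as mean absolute value (MAV) computed over sliding windows of the multiunit signal, feeding a small regressor. The window length, the synthetic signals, and the use of an MLP as a stand-in for the DDRNN are all assumptions.

```python
"""Sketch: MAV features from multiunit activity driving a joint-angle estimator.

Mean absolute value (MAV) is computed over non-overlapping 100 ms windows of a
simulated multiunit signal; an MLP regressor stands in for the dynamically
driven recurrent network (DDRNN) used in the paper. All signals are synthetic.
"""
import numpy as np
from sklearn.neural_network import MLPRegressor

def mav(signal, win):
    """Mean absolute value per non-overlapping window, per channel."""
    n_win = signal.shape[0] // win
    return np.abs(signal[: n_win * win]).reshape(n_win, win, -1).mean(axis=1)

fs, win = 1000, 100                                      # 1 kHz signal, 100 ms windows
rng = np.random.default_rng(9)
t = np.arange(0, 60.0, 1.0 / fs)
angle = 30.0 + 20.0 * np.sin(2 * np.pi * 0.5 * t)        # slowly varying 'ankle angle' (deg)
# Multiunit activity on 8 channels whose amplitude follows the joint angle.
mua = rng.normal(size=(t.size, 8)) * (0.5 + angle[:, None] / 60.0)

X = mav(mua, win)                                        # one MAV row per window
y = angle[: X.shape[0] * win : win]                      # angle sampled at window starts

half = X.shape[0] // 2
model = MLPRegressor(hidden_layer_sizes=(20,), max_iter=2000, random_state=0)
model.fit(X[:half], y[:half])
r = np.corrcoef(model.predict(X[half:]), y[half:])[0, 1]
print(f"held-out correlation between decoded and true angle: {r:.2f}")
```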

  15. A high performing brain-machine interface driven by low-frequency local field potentials alone and together with spikes

    NASA Astrophysics Data System (ADS)

    Stavisky, Sergey D.; Kao, Jonathan C.; Nuyujukian, Paul; Ryu, Stephen I.; Shenoy, Krishna V.

    2015-06-01

    Objective. Brain-machine interfaces (BMIs) seek to enable people with movement disabilities to directly control prosthetic systems with their neural activity. Current high-performance BMIs are driven by action potentials (spikes), but access to this signal often diminishes as sensors degrade over time. Decoding local field potentials (LFPs) as an alternative or complementary BMI control signal may improve performance when there is a paucity of spike signals. To date only a small handful of LFP decoding methods have been tested online; there remains a need to test different LFP decoding approaches and improve LFP-driven performance. There has also not been a reported demonstration of a hybrid BMI that decodes kinematics from both LFP and spikes. Here we first evaluate a BMI driven by the local motor potential (LMP), a low-pass filtered time-domain LFP amplitude feature. We then combine decoding of both LMP and spikes to implement a hybrid BMI. Approach. Spikes and LFP were recorded from two macaques implanted with multielectrode arrays in primary and premotor cortex while they performed a reaching task. We then evaluated closed-loop BMI control using biomimetic decoders driven by LMP, spikes, or both signals together. Main results. LMP decoding enabled quick and accurate cursor control which surpassed previously reported LFP BMI performance. Hybrid decoding of both spikes and LMP improved performance when spike signal quality was mediocre to poor. Significance. These findings show that LMP is an effective BMI control signal which requires minimal power to extract and can substitute for or augment impoverished spike signals. Use of this signal may lengthen the useful lifespan of BMIs and is therefore an important step towards clinically viable BMIs.
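
    The LMP feature itself is simple to sketch: low-pass filter the time-domain LFP on each channel and average within decoder bins. The 5 Hz cutoff, 15 ms bin width, and simulated data below are illustrative assumptions rather than the parameters used in the study.

```python
"""Sketch: extracting a local motor potential (LMP)-style feature from LFP.

The LMP here is taken as the low-pass filtered time-domain LFP amplitude on
each channel, binned to the decoder update rate. The cutoff, bin width, and
data are illustrative only.
"""
import numpy as np
from scipy.signal import butter, sosfiltfilt

def lmp_feature(lfp, fs, cutoff_hz=5.0, bin_s=0.015):
    """Low-pass filter each channel, then average within decoder bins."""
    sos = butter(4, cutoff_hz, btype="lowpass", fs=fs, output="sos")
    smooth = sosfiltfilt(sos, lfp, axis=0)
    bin_len = int(bin_s * fs)
    n_bins = smooth.shape[0] // bin_len
    return smooth[: n_bins * bin_len].reshape(n_bins, bin_len, -1).mean(axis=1)

fs, n_channels = 1000, 96
rng = np.random.default_rng(10)
lfp = rng.normal(size=(2 * fs, n_channels))          # 2 s of simulated LFP
features = lmp_feature(lfp, fs)
print(features.shape)                                # (bins, channels) -> decoder input
```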

  16. Anisotropic connectivity implements motion-based prediction in a spiking neural network.

    PubMed

    Kaplan, Bernhard A; Lansner, Anders; Masson, Guillaume S; Perrinet, Laurent U

    2013-01-01

    Predictive coding hypothesizes that the brain explicitly infers upcoming sensory input to establish a coherent representation of the world. Although it is becoming generally accepted, it is not clear on which level spiking neural networks may implement predictive coding and what function their connectivity may have. We present a network model of conductance-based integrate-and-fire neurons inspired by the architecture of retinotopic cortical areas that assumes predictive coding is implemented through network connectivity, namely in the connection delays and in selectiveness for the tuning properties of source and target cells. We show that the applied connection pattern leads to motion-based prediction in an experiment tracking a moving dot. In contrast to our proposed model, a network with random or isotropic connectivity fails to predict the path when the moving dot disappears. Furthermore, we show that a simple linear decoding approach is sufficient to transform neuronal spiking activity into a probabilistic estimate for reading out the target trajectory.

  17. Attentional Selection in a Cocktail Party Environment Can Be Decoded from Single-Trial EEG

    PubMed Central

    O'Sullivan, James A.; Power, Alan J.; Mesgarani, Nima; Rajaram, Siddharth; Foxe, John J.; Shinn-Cunningham, Barbara G.; Slaney, Malcolm; Shamma, Shihab A.; Lalor, Edmund C.

    2015-01-01

    How humans solve the cocktail party problem remains unknown. However, progress has been made recently thanks to the realization that cortical activity tracks the amplitude envelope of speech. This has led to the development of regression methods for studying the neurophysiology of continuous speech. One such method, known as stimulus-reconstruction, has been successfully utilized with cortical surface recordings and magnetoencephalography (MEG). However, the former is invasive and gives a relatively restricted view of processing along the auditory hierarchy, whereas the latter is expensive and rare. Thus it would be extremely useful for research in many populations if stimulus-reconstruction was effective using electroencephalography (EEG), a widely available and inexpensive technology. Here we show that single-trial (≈60 s) unaveraged EEG data can be decoded to determine attentional selection in a naturalistic multispeaker environment. Furthermore, we show a significant correlation between our EEG-based measure of attention and performance on a high-level attention task. In addition, by attempting to decode attention at individual latencies, we identify neural processing at ∼200 ms as being critical for solving the cocktail party problem. These findings open up new avenues for studying the ongoing dynamics of cognition using EEG and for developing effective and natural brain–computer interfaces. PMID:24429136
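
    The stimulus-reconstruction approach can be sketched as a ridge-regression 'backward model' that maps time-lagged EEG to a speech envelope; the attended speaker is taken to be the one whose envelope correlates best with the reconstruction. The simulated data, lag range, and regularization below are illustrative, not the paper's parameters.

```python
"""Sketch: stimulus-reconstruction attention decoding from EEG.

A ridge-regression 'backward model' maps time-lagged EEG to a speech envelope;
the attended speaker is the one whose envelope best matches the reconstruction.
Data, lags, and regularization are illustrative.
"""
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(11)
fs, dur_s, n_channels, n_lags = 64, 60, 32, 16         # 16 lags span 250 ms at 64 Hz
n = fs * dur_s

def toy_envelope(n, fs):
    """Smooth, positive stand-in for a speech amplitude envelope."""
    return np.convolve(np.abs(rng.normal(size=n)), np.ones(fs) / fs, mode="same")

env_a, env_b = toy_envelope(n, fs), toy_envelope(n, fs)
mixing = rng.normal(size=n_channels)
eeg = 10.0 * np.outer(env_a - env_a.mean(), mixing) + rng.normal(size=(n, n_channels))

def lagged(x, n_lags):
    """Stack time-lagged copies of every channel as regression features."""
    cols = [np.roll(x, lag, axis=0) for lag in range(n_lags)]
    return np.concatenate(cols, axis=1)[n_lags:]        # drop rows affected by wrap-around

X = lagged(eeg, n_lags)
targets = env_a[n_lags:]                                # train to reconstruct the attended envelope
half = X.shape[0] // 2
model = Ridge(alpha=100.0).fit(X[:half], targets[:half])
recon = model.predict(X[half:])

r_a = np.corrcoef(recon, env_a[n_lags:][half:])[0, 1]
r_b = np.corrcoef(recon, env_b[n_lags:][half:])[0, 1]
print(f"r(attended) = {r_a:.2f}, r(unattended) = {r_b:.2f} -> "
      f"decoded attention: {'A' if r_a > r_b else 'B'}")
```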

  18. Successful decoding of famous faces in the fusiform face area.

    PubMed

    Axelrod, Vadim; Yovel, Galit

    2015-01-01

    What are the neural mechanisms of face recognition? It is believed that the network of face-selective areas, which spans the occipital, temporal, and frontal cortices, is important in face recognition. A number of previous studies indeed reported that face identity could be discriminated based on patterns of multivoxel activity in the fusiform face area and the anterior temporal lobe. However, given the difficulty in localizing the face-selective area in the anterior temporal lobe, its role in face recognition is still unknown. Furthermore, previous studies limited their analysis to occipito-temporal regions without testing identity decoding in more anterior face-selective regions, such as the amygdala and prefrontal cortex. In the current high-resolution functional Magnetic Resonance Imaging study, we systematically examined the decoding of the identity of famous faces in the temporo-frontal network of face-selective and adjacent non-face-selective regions. A special focus has been put on the face area in the anterior temporal lobe, which was reliably localized using an optimized scanning protocol. We found that face identity could be discriminated above chance level only in the fusiform face area. Our results corroborate the role of the fusiform face area in face recognition. Future studies are needed to further explore the role of the more recently discovered anterior face-selective areas in face recognition.

  19. Complementary Reliability-Based Decodings of Binary Linear Block Codes

    NASA Technical Reports Server (NTRS)

    Fossorier, Marc P. C.; Lin, Shu

    1997-01-01

    This correspondence presents a hybrid reliability-based decoding algorithm which combines the reprocessing method based on the most reliable basis and a generalized Chase-type algebraic decoder based on the least reliable positions. It is shown that reprocessing with a simple additional algebraic decoding effort achieves significant coding gain. For long codes, the order of reprocessing required to achieve asymptotic optimum error performance is reduced by approximately 1/3. This significantly reduces the computational complexity, especially for long codes. Also, a more efficient criterion for stopping the decoding process is derived based on the knowledge of the algebraic decoding solution.

  20. Neural decoding of attentional selection in multi-speaker environments without access to clean sources

    NASA Astrophysics Data System (ADS)

    O'Sullivan, James; Chen, Zhuo; Herrero, Jose; McKhann, Guy M.; Sheth, Sameer A.; Mehta, Ashesh D.; Mesgarani, Nima

    2017-10-01

    Objective. People who suffer from hearing impairments can find it difficult to follow a conversation in a multi-speaker environment. Current hearing aids can suppress background noise; however, there is little that can be done to help a user attend to a single conversation amongst many without knowing which speaker the user is attending to. Cognitively controlled hearing aids that use auditory attention decoding (AAD) methods are the next step in offering help. Translating the successes in AAD research to real-world applications poses a number of challenges, including the lack of access to the clean sound sources in the environment with which to compare with the neural signals. We propose a novel framework that combines single-channel speech separation algorithms with AAD. Approach. We present an end-to-end system that (1) receives a single audio channel containing a mixture of speakers that is heard by a listener along with the listener’s neural signals, (2) automatically separates the individual speakers in the mixture, (3) determines the attended speaker, and (4) amplifies the attended speaker’s voice to assist the listener. Main results. Using invasive electrophysiology recordings, we identified the regions of the auditory cortex that contribute to AAD. Given appropriate electrode locations, our system is able to decode the attention of subjects and amplify the attended speaker using only the mixed audio. Our quality assessment of the modified audio demonstrates a significant improvement in both subjective and objective speech quality measures. Significance. Our novel framework for AAD bridges the gap between the most recent advancements in speech processing technologies and speech prosthesis research and moves us closer to the development of cognitively controlled hearable devices for the hearing impaired.

  1. Modeling and possible implementation of self-learning equivalence-convolutional neural structures for auto-encoding-decoding and clusterization of images

    NASA Astrophysics Data System (ADS)

    Krasilenko, Vladimir G.; Lazarev, Alexander A.; Nikitovich, Diana V.

    2017-08-01

    Self-learning equivalent-convolutional neural structures (SLECNS) for auto-encoding-decoding and image clustering are discussed. We propose SLECNS architectures and their spatially invariant equivalent models (SI EMs), which use matrix-matrix procedures built on basic operations of continuous logic and non-linear processing. These SI EMs offer several advantages, including more efficient recognition of image fragments and strong cross-correlation properties. The proposed method clusters fragments according to their structural features, is suitable for color as well as binary images, and combines self-learning with the formation of weighted cluster matrix-patterns. The model is built on recursive processing algorithms and a k-means-style averaging method. Experimental results confirm that large images and 2D binary fragments with many elements can be clustered, and the generalization of these models to the space-invariant case is shown for the first time. A clustering experiment was carried out on a 256x256 reference image with fragments of 7x7 and 21x21 elements. Simulations in the Mathcad environment showed that the proposed method is general, converges in a small number of iterations, maps naturally onto the matrix structure, and is promising. Understanding the mechanisms of self-learning equivalence-convolutional clustering, the accompanying competitive processes among neurons, and the principles of neural auto-encoding-decoding and recognition with self-learned cluster patterns therefore rests on the proposed algorithm and on non-linear processing of two-dimensional spatial image-comparison functions. The SI EMs compactly describe signal processing during all training and recognition stages and are suitable for unipolar-coded multilevel signals. We show that SLECNS can be implemented with known equivalentors or traditional correlators, provided these are based on the proposed equivalental two-dimensional image-similarity functions. Clustering efficiency in such models, and their implementation, depend on the discriminant properties of the neural elements in the hidden layers; the main model and architecture parameters therefore depend on the types of non-linear processing and on the functions used for image comparison or for adaptive-equivalental weighting of input patterns. Model experiments in Mathcad confirm that non-linear processing with equivalent functions makes it possible to determine the winning neurons and to adjust the weight matrix. The results also show that such models can be used for auto- and hetero-associative recognition and can help explain mechanisms known as "focus" and the "competing gain-inhibition concept". Finally, the SLECNS architecture and hardware implementations of its basic nodes, based on multi-channel convolvers and correlators with time integration, are proposed, and the parameters and performance of such architectures are estimated.

  2. Simultaneous scalp electroencephalography (EEG), electromyography (EMG), and whole-body segmental inertial recording for multi-modal neural decoding.

    PubMed

    Bulea, Thomas C; Kilicarslan, Atilla; Ozdemir, Recep; Paloski, William H; Contreras-Vidal, Jose L

    2013-07-26

    Recent studies support the involvement of supraspinal networks in the control of bipedal human walking. Part of this evidence encompasses studies, including our previous work, demonstrating that gait kinematics and limb coordination during treadmill walking can be inferred from the scalp electroencephalogram (EEG) with reasonably high decoding accuracies. These results provide impetus for the development of non-invasive brain-machine interface (BMI) systems for use in restoration and/or augmentation of gait, a primary goal of rehabilitation research. To date, studies examining EEG decoding of activity during gait have been limited to treadmill walking in a controlled environment. However, to be practically viable, a BMI system must be applicable to everyday locomotor tasks such as over ground walking and turning. Here, we present a novel protocol for non-invasive collection of brain activity (EEG), muscle activity (electromyography (EMG)), and whole-body kinematic data (head, torso, and limb trajectories) during both treadmill and over ground walking tasks. By collecting these data in an uncontrolled environment, insight can be gained into the feasibility of decoding unconstrained gait and surface EMG from scalp EEG.

  3. Encoding and decoding amplitude-modulated cochlear implant stimuli—a point process analysis

    PubMed Central

    Shea-Brown, Eric; Rubinstein, Jay T.

    2010-01-01

    Cochlear implant speech processors stimulate the auditory nerve by delivering amplitude-modulated electrical pulse trains to intracochlear electrodes. Studying how auditory nerve cells encode modulation information is of fundamental importance, therefore, to understanding cochlear implant function and improving speech perception in cochlear implant users. In this paper, we analyze simulated responses of the auditory nerve to amplitude-modulated cochlear implant stimuli using a point process model. First, we quantify the information encoded in the spike trains by testing an ideal observer’s ability to detect amplitude modulation in a two-alternative forced-choice task. We vary the amount of information available to the observer to probe how spike timing and averaged firing rate encode modulation. Second, we construct a neural decoding method that predicts several qualitative trends observed in psychophysical tests of amplitude modulation detection in cochlear implant listeners. We find that modulation information is primarily available in the sequence of spike times. The performance of an ideal observer, however, is inconsistent with observed trends in psychophysical data. Using a neural decoding method that jitters spike times to degrade its temporal resolution and then computes a common measure of phase locking from spike trains of a heterogeneous population of model nerve cells, we predict the correct qualitative dependence of modulation detection thresholds on modulation frequency and stimulus level. The decoder does not predict the observed loss of modulation sensitivity at high carrier pulse rates, but this framework can be applied to future models that better represent auditory nerve responses to high carrier pulse rate stimuli. The supplemental material of this article contains the article’s data in an active, re-usable format. PMID:20177761
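
    The decoding step described above jitters spike times and then computes a common measure of phase locking; vector strength is the standard such measure, and the sketch below assumes it (the authors' exact decoder and parameters may differ).

      # Jitter-then-phase-lock sketch (assumption: the phase-locking measure is
      # vector strength; jitter width and modulation frequency are illustrative).
      import numpy as np

      def vector_strength(spike_times, mod_freq_hz):
          """Vector strength of spike times relative to a sinusoidal modulator."""
          phases = 2.0 * np.pi * mod_freq_hz * np.asarray(spike_times)
          return np.abs(np.mean(np.exp(1j * phases)))

      def jittered_vector_strength(spike_times, mod_freq_hz, jitter_sd_s, rng=None):
          """Degrade temporal resolution with Gaussian jitter before computing phase locking."""
          rng = np.random.default_rng() if rng is None else rng
          jittered = np.asarray(spike_times) + rng.normal(0.0, jitter_sd_s, len(spike_times))
          return vector_strength(jittered, mod_freq_hz)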

  4. Reproducibility and discriminability of brain patterns of semantic categories enhanced by congruent audiovisual stimuli.

    PubMed

    Li, Yuanqing; Wang, Guangyi; Long, Jinyi; Yu, Zhuliang; Huang, Biao; Li, Xiaojian; Yu, Tianyou; Liang, Changhong; Li, Zheng; Sun, Pei

    2011-01-01

    One of the central questions in cognitive neuroscience is the precise neural representation, or brain pattern, associated with a semantic category. In this study, we explored the influence of audiovisual stimuli on the brain patterns of concepts or semantic categories through a functional magnetic resonance imaging (fMRI) experiment. We used a pattern search method to extract brain patterns corresponding to two semantic categories: "old people" and "young people." These brain patterns were elicited by semantically congruent audiovisual, semantically incongruent audiovisual, unimodal visual, and unimodal auditory stimuli belonging to the two semantic categories. We calculated the reproducibility index, which measures the similarity of the patterns within the same category. We also decoded the semantic categories from these brain patterns. The decoding accuracy reflects the discriminability of the brain patterns between two categories. The results showed that both the reproducibility index of brain patterns and the decoding accuracy were significantly higher for semantically congruent audiovisual stimuli than for unimodal visual and unimodal auditory stimuli, while the semantically incongruent stimuli did not elicit brain patterns with a significantly higher reproducibility index or decoding accuracy. Thus, the semantically congruent audiovisual stimuli enhanced the within-class reproducibility and the between-class discriminability of brain patterns, thereby facilitating neural representations of semantic categories or concepts. Furthermore, we analyzed the brain activity in the superior temporal sulcus and middle temporal gyrus (STS/MTG). The strength of the fMRI signal and the reproducibility index were enhanced by the semantically congruent audiovisual stimuli. Our results support the use of the reproducibility index as a potential tool to supplement the fMRI signal amplitude for evaluating multimodal integration.

  5. Reproducibility and Discriminability of Brain Patterns of Semantic Categories Enhanced by Congruent Audiovisual Stimuli

    PubMed Central

    Long, Jinyi; Yu, Zhuliang; Huang, Biao; Li, Xiaojian; Yu, Tianyou; Liang, Changhong; Li, Zheng; Sun, Pei

    2011-01-01

    One of the central questions in cognitive neuroscience is the precise neural representation, or brain pattern, associated with a semantic category. In this study, we explored the influence of audiovisual stimuli on the brain patterns of concepts or semantic categories through a functional magnetic resonance imaging (fMRI) experiment. We used a pattern search method to extract brain patterns corresponding to two semantic categories: “old people” and “young people.” These brain patterns were elicited by semantically congruent audiovisual, semantically incongruent audiovisual, unimodal visual, and unimodal auditory stimuli belonging to the two semantic categories. We calculated the reproducibility index, which measures the similarity of the patterns within the same category. We also decoded the semantic categories from these brain patterns. The decoding accuracy reflects the discriminability of the brain patterns between two categories. The results showed that both the reproducibility index of brain patterns and the decoding accuracy were significantly higher for semantically congruent audiovisual stimuli than for unimodal visual and unimodal auditory stimuli, while the semantically incongruent stimuli did not elicit brain patterns with a significantly higher reproducibility index or decoding accuracy. Thus, the semantically congruent audiovisual stimuli enhanced the within-class reproducibility and the between-class discriminability of brain patterns, thereby facilitating neural representations of semantic categories or concepts. Furthermore, we analyzed the brain activity in the superior temporal sulcus and middle temporal gyrus (STS/MTG). The strength of the fMRI signal and the reproducibility index were enhanced by the semantically congruent audiovisual stimuli. Our results support the use of the reproducibility index as a potential tool to supplement the fMRI signal amplitude for evaluating multimodal integration. PMID:21750692

  6. Neuroprosthetic Decoder Training as Imitation Learning.

    PubMed

    Merel, Josh; Carlson, David; Paninski, Liam; Cunningham, John P

    2016-05-01

    Neuroprosthetic brain-computer interfaces function via an algorithm which decodes neural activity of the user into movements of an end effector, such as a cursor or robotic arm. In practice, the decoder is often learned by updating its parameters while the user performs a task. When the user's intention is not directly observable, recent methods have demonstrated value in training the decoder against a surrogate for the user's intended movement. Here we show that training a decoder in this way is a novel variant of an imitation learning problem, where an oracle or expert is employed for supervised training in lieu of direct observations, which are not available. Specifically, we describe how a generic imitation learning meta-algorithm, dataset aggregation (DAgger), can be adapted to train a generic brain-computer interface. By deriving existing learning algorithms for brain-computer interfaces in this framework, we provide a novel analysis of regret (an important metric of learning efficacy) for brain-computer interfaces. This analysis allows us to characterize the space of algorithmic variants and bounds on their regret rates. Existing approaches for decoder learning have been performed in the cursor control setting, but the available design principles for these decoders are such that it has been impossible to scale them to naturalistic settings. Leveraging our findings, we then offer an algorithm that combines imitation learning with optimal control, which should allow for training of arbitrary effectors for which optimal control can generate goal-oriented control. We demonstrate this novel and general BCI algorithm with simulated neuroprosthetic control of a 26 degree-of-freedom model of an arm, a sophisticated and realistic end effector.
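
    As a rough illustration of the decoder-training loop described above, the sketch below runs a generic DAgger-style iteration: collect closed-loop data under the current decoder, have an oracle relabel the intended movements, aggregate the data, and refit. The linear (ridge) decoder and the helper names are assumptions for illustration, not the paper's algorithm.

      # Generic DAgger-style decoder-training loop (a sketch, not the paper's exact
      # algorithm): aggregate closed-loop data relabeled by an oracle, then refit.
      import numpy as np

      def fit_linear_decoder(X, Y, reg=1e-3):
          """Ridge regression from neural features X (T x D) to intended movement Y (T x K)."""
          return np.linalg.solve(X.T @ X + reg * np.eye(X.shape[1]), X.T @ Y)

      def dagger_train(run_trial, oracle, n_iters=10):
          """run_trial(decoder) -> neural features recorded while the user drives the
          effector with `decoder` (assisted/observation mode when decoder is None);
          oracle(features) -> surrogate intended movements for those samples."""
          X_all, Y_all, decoder = [], [], None
          for _ in range(n_iters):
              X = run_trial(decoder)             # closed-loop data under current decoder
              Y = oracle(X)                      # expert relabels the intended movement
              X_all.append(X)                    # dataset-aggregation step of DAgger
              Y_all.append(Y)
              decoder = fit_linear_decoder(np.vstack(X_all), np.vstack(Y_all))
          return decoder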

  7. Decoding Speech With Integrated Hybrid Signals Recorded From the Human Ventral Motor Cortex.

    PubMed

    Ibayashi, Kenji; Kunii, Naoto; Matsuo, Takeshi; Ishishita, Yohei; Shimada, Seijiro; Kawai, Kensuke; Saito, Nobuhito

    2018-01-01

    Restoration of speech communication for locked-in patients by means of brain computer interfaces (BCIs) is currently an important area of active research. Among the neural signals obtained from intracranial recordings, single/multi-unit activity (SUA/MUA), local field potential (LFP), and electrocorticography (ECoG) are good candidates for an input signal for BCIs. However, the question of which signal or which combination of the three signal modalities is best suited for decoding speech production remains unverified. In order to record SUA, LFP, and ECoG simultaneously from a highly localized area of human ventral sensorimotor cortex (vSMC), we fabricated an electrode the size of which was 7 by 13 mm containing sparsely arranged microneedle and conventional macro contacts. We determined which signal modality is the most capable of decoding speech production, and tested if the combination of these signals could improve the decoding accuracy of spoken phonemes. Feature vectors were constructed from spike frequency obtained from SUAs and event-related spectral perturbation derived from ECoG and LFP signals, then input to the decoder. The results showed that the decoding accuracy for five spoken vowels was highest when features from multiple signals were combined and optimized for each subject, and reached 59% when averaged across all six subjects. This result suggests that multi-scale signals convey complementary information for speech articulation. The current study demonstrated that simultaneous recording of multi-scale neuronal activities could raise decoding accuracy even though the recording area is limited to a small portion of cortex, which is advantageous for future implementation of speech-assisting BCIs.

  8. Decoding Speech With Integrated Hybrid Signals Recorded From the Human Ventral Motor Cortex

    PubMed Central

    Ibayashi, Kenji; Kunii, Naoto; Matsuo, Takeshi; Ishishita, Yohei; Shimada, Seijiro; Kawai, Kensuke; Saito, Nobuhito

    2018-01-01

    Restoration of speech communication for locked-in patients by means of brain computer interfaces (BCIs) is currently an important area of active research. Among the neural signals obtained from intracranial recordings, single/multi-unit activity (SUA/MUA), local field potential (LFP), and electrocorticography (ECoG) are good candidates for an input signal for BCIs. However, the question of which signal or which combination of the three signal modalities is best suited for decoding speech production remains unverified. In order to record SUA, LFP, and ECoG simultaneously from a highly localized area of human ventral sensorimotor cortex (vSMC), we fabricated an electrode the size of which was 7 by 13 mm containing sparsely arranged microneedle and conventional macro contacts. We determined which signal modality is the most capable of decoding speech production, and tested if the combination of these signals could improve the decoding accuracy of spoken phonemes. Feature vectors were constructed from spike frequency obtained from SUAs and event-related spectral perturbation derived from ECoG and LFP signals, then input to the decoder. The results showed that the decoding accuracy for five spoken vowels was highest when features from multiple signals were combined and optimized for each subject, and reached 59% when averaged across all six subjects. This result suggests that multi-scale signals convey complementary information for speech articulation. The current study demonstrated that simultaneous recording of multi-scale neuronal activities could raise decoding accuracy even though the recording area is limited to a small portion of cortex, which is advantageous for future implementation of speech-assisting BCIs. PMID:29674950

  9. Voluntary Enhancement of Neural Signatures of Affiliative Emotion Using fMRI Neurofeedback

    PubMed Central

    Moll, Jorge; Weingartner, Julie H.; Bado, Patricia; Basilio, Rodrigo; Sato, João R.; Melo, Bruno R.; Bramati, Ivanei E.; de Oliveira-Souza, Ricardo; Zahn, Roland

    2014-01-01

    In Ridley Scott’s film “Blade Runner”, empathy-detection devices are employed to measure affiliative emotions. Despite recent neurocomputational advances, it is unknown whether brain signatures of affiliative emotions, such as tenderness/affection, can be decoded and voluntarily modulated. Here, we employed multivariate voxel pattern analysis and real-time fMRI to address this question. We found that participants were able to use visual feedback based on decoded fMRI patterns as a neurofeedback signal to increase brain activation characteristic of tenderness/affection relative to pride, an equally complex control emotion. Such improvement was not observed in a control group performing the same fMRI task without neurofeedback. Furthermore, the neurofeedback-driven enhancement of tenderness/affection-related distributed patterns was associated with local fMRI responses in the septohypothalamic area and frontopolar cortex, regions previously implicated in affiliative emotion. This demonstrates that humans can voluntarily enhance brain signatures of tenderness/affection, unlocking new possibilities for promoting prosocial emotions and countering antisocial behavior. PMID:24847819

  10. Methods for Assessment of Memory Reactivation.

    PubMed

    Liu, Shizhao; Grosmark, Andres D; Chen, Zhe

    2018-04-13

    It has been suggested that reactivation of previously acquired experiences or stored information in declarative memories in the hippocampus and neocortex contributes to memory consolidation and learning. Understanding memory consolidation depends crucially on the development of robust statistical methods for assessing memory reactivation. To date, several statistical methods have been established for assessing memory reactivation based on bursts of ensemble neural spike activity during offline states. Using population-decoding methods, we propose a new statistical metric, the weighted distance correlation, to assess hippocampal memory reactivation (i.e., spatial memory replay) during quiet wakefulness and slow-wave sleep. The new metric can be combined with an unsupervised population decoding analysis, which is invariant to latent state labeling and allows us to detect statistical dependency beyond linearity in memory traces. We validate the new metric using two rat hippocampal recordings in spatial navigation tasks. Our proposed analysis framework may have a broader impact on assessing memory reactivations in other brain regions under different behavioral tasks.
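
    For readers unfamiliar with the underlying statistic, the sketch below computes a plain (unweighted) distance correlation between two multivariate time series, for example decoded positions versus a spatial template; the weighting scheme and the unsupervised decoding step proposed in the paper are not reproduced here.

      # Plain (unweighted) distance correlation between two multivariate series;
      # the paper's weighted variant and decoding pipeline are not shown.
      import numpy as np

      def _centered_dist(x):
          x = np.atleast_2d(x.T).T                      # ensure (n_samples, n_dims)
          d = np.linalg.norm(x[:, None, :] - x[None, :, :], axis=-1)
          return d - d.mean(0) - d.mean(1)[:, None] + d.mean()

      def distance_correlation(x, y):
          A, B = _centered_dist(x), _centered_dist(y)
          dcov2 = (A * B).mean()
          denom = np.sqrt((A * A).mean() * (B * B).mean())
          return np.sqrt(max(dcov2, 0.0) / denom) if denom > 0 else 0.0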

  11. EMG-Based Estimation of Limb Movement Using Deep Learning With Recurrent Convolutional Neural Networks.

    PubMed

    Xia, Peng; Hu, Jie; Peng, Yinghong

    2017-10-25

    A novel model based on deep learning is proposed to estimate kinematic information for myoelectric control from multi-channel electromyogram (EMG) signals. The neural information of limb movement is embedded in EMG signals that are influenced by many factors. In order to overcome the negative effects of variability in the signals, the proposed model employs a deep architecture combining convolutional neural networks (CNNs) and recurrent neural networks (RNNs). The EMG signals are transformed into time-frequency frames as input to the model. The limb movement is estimated by the model, which is trained with gradient descent and backpropagation. We tested the model for simultaneous and proportional estimation of limb movement in eight healthy subjects and compared it with support vector regression (SVR) and CNNs on the same data set. The experimental studies show that the proposed model has higher estimation accuracy and better robustness with respect to time. The combination of CNNs and RNNs improves model performance compared with using CNNs alone. The deep-architecture model is promising for EMG decoding, and optimization of the network structure can further increase accuracy and robustness.
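
    A minimal sketch of such a recurrent-convolutional regressor is given below (PyTorch); the layer sizes, pooling, and input shape are illustrative assumptions rather than the architecture reported in the paper.

      # Illustrative CNN+RNN regressor for time-frequency EMG frames (PyTorch);
      # layer sizes and input shape are assumptions, not the reported architecture.
      import torch
      import torch.nn as nn

      class EMGDecoder(nn.Module):
          def __init__(self, hidden=64, n_outputs=2):
              super().__init__()
              # CNN over the (channel x frequency) frame at each time step.
              self.cnn = nn.Sequential(
                  nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
                  nn.AdaptiveAvgPool2d((4, 4)), nn.Flatten())
              self.rnn = nn.GRU(input_size=16 * 4 * 4, hidden_size=hidden, batch_first=True)
              self.head = nn.Linear(hidden, n_outputs)

          def forward(self, x):                  # x: (batch, time, channels, freqs)
              b, t, c, f = x.shape
              feats = self.cnn(x.reshape(b * t, 1, c, f)).reshape(b, t, -1)
              out, _ = self.rnn(feats)           # temporal modeling across frames
              return self.head(out)              # (batch, time, n_outputs) kinematics

      # y_hat = EMGDecoder()(torch.randn(4, 100, 8, 32))   # -> shape (4, 100, 2)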

  12. Task-induced frequency modulation features for brain-computer interfacing

    NASA Astrophysics Data System (ADS)

    Jayaram, Vinay; Hohmann, Matthias; Just, Jennifer; Schölkopf, Bernhard; Grosse-Wentrup, Moritz

    2017-10-01

    Objective. Task-induced amplitude modulation of neural oscillations is routinely used in brain-computer interfaces (BCIs) for decoding subjects’ intents, and underlies some of the most robust and common methods in the field, such as common spatial patterns and Riemannian geometry. While there has been some interest in phase-related features for classification, both techniques usually presuppose that the frequencies of neural oscillations remain stable across various tasks. We investigate here whether features based on task-induced modulation of the frequency of neural oscillations enable decoding of subjects’ intents with an accuracy comparable to task-induced amplitude modulation. Approach. We compare cross-validated classification accuracies using the amplitude- and frequency-modulated features, as well as a joint feature space, across subjects in various paradigms and pre-processing conditions. We show results with a motor imagery task, a cognitive task, and also preliminary results in patients with amyotrophic lateral sclerosis (ALS), as well as using common spatial patterns and Laplacian filtering. Main results. The frequency features alone do not significantly outperform traditional amplitude modulation features, and in some cases perform significantly worse. However, across both tasks and pre-processing conditions in healthy subjects, the joint space significantly outperforms either the frequency or amplitude features alone. The only exception is the ALS patients, for whom the dataset is of insufficient size to draw any statistically significant conclusions. Significance. Task-induced frequency modulation is robust and straightforward to compute, and increases performance when added to standard amplitude modulation features across paradigms. This allows more information to be extracted from the EEG signal cheaply and can be used throughout the field of BCIs.
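
    A simple way to obtain both feature types from a band-limited signal is the analytic (Hilbert) representation: its magnitude gives the amplitude envelope and the derivative of its unwrapped phase gives the instantaneous frequency. The sketch below assumes that approach; the band, filter order, and summary statistics are illustrative and may differ from the paper's feature definitions.

      # Amplitude and instantaneous-frequency features from one band-passed EEG
      # channel (band, filter order, and summaries are illustrative assumptions).
      import numpy as np
      from scipy.signal import butter, filtfilt, hilbert

      def am_fm_features(x, fs, band=(8.0, 12.0)):
          """Return mean band amplitude and mean instantaneous frequency (Hz)."""
          b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
          analytic = hilbert(filtfilt(b, a, x))
          amplitude = np.abs(analytic)                      # amplitude-modulation feature
          inst_freq = np.diff(np.unwrap(np.angle(analytic))) * fs / (2.0 * np.pi)
          return amplitude.mean(), inst_freq.mean()         # frequency-modulation feature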

  13. Affective Brain-Computer Interfaces As Enabling Technology for Responsive Psychiatric Stimulation

    PubMed Central

    Widge, Alik S.; Dougherty, Darin D.; Moritz, Chet T.

    2014-01-01

    There is a pressing clinical need for responsive neurostimulators, which sense a patient’s brain activity and deliver targeted electrical stimulation to suppress unwanted symptoms. This is particularly true in psychiatric illness, where symptoms can fluctuate throughout the day. Affective BCIs, which decode emotional experience from neural activity, are a candidate control signal for responsive stimulators targeting the limbic circuit. Present affective decoders, however, cannot yet distinguish pathologic from healthy emotional extremes. Indiscriminate stimulus delivery would reduce quality of life and may be actively harmful. We argue that the key to overcoming this limitation is to specifically decode volition, in particular the patient’s intention to experience emotional regulation. Those emotion-regulation signals already exist in prefrontal cortex (PFC), and could be extracted with relatively simple BCI algorithms. We describe preliminary data from an animal model of PFC-controlled limbic brain stimulation and discuss next steps for pre-clinical testing and possible translation. PMID:25580443

  14. Reward Motivation Enhances Task Coding in Frontoparietal Cortex

    PubMed Central

    Etzel, Joset A.; Cole, Michael W.; Zacks, Jeffrey M.; Kay, Kendrick N.; Braver, Todd S.

    2016-01-01

    Reward motivation often enhances task performance, but the neural mechanisms underlying such cognitive enhancement remain unclear. Here, we used a multivariate pattern analysis (MVPA) approach to test the hypothesis that motivation-related enhancement of cognitive control results from improved encoding and representation of task set information. Participants underwent two fMRI sessions of cued task switching, the first under baseline conditions, and the second with randomly intermixed reward incentive and no-incentive trials. Information about the upcoming task could be successfully decoded from cue-related activation patterns in a set of frontoparietal regions typically associated with task control. More critically, MVPA classifiers trained on the baseline session had significantly higher decoding accuracy on incentive than non-incentive trials, with decoding improvement mediating reward-related enhancement of behavioral performance. These results strongly support the hypothesis that reward motivation enhances cognitive control, by improving the discriminability of task-relevant information coded and maintained in frontoparietal brain regions. PMID:25601237

  15. Categorical discrimination of human body parts by magnetoencephalography

    PubMed Central

    Nakamura, Misaki; Yanagisawa, Takufumi; Okamura, Yumiko; Fukuma, Ryohei; Hirata, Masayuki; Araki, Toshihiko; Kamitani, Yukiyasu; Yorifuji, Shiro

    2015-01-01

    Humans recognize body parts in categories. Previous studies have shown that responses in the fusiform body area (FBA) and extrastriate body area (EBA) are evoked by the perception of the human body, whether presented as a whole or as isolated parts. These responses occur approximately 190 ms after body images are visualized. The extent to which body-sensitive responses show specificity for different body-part categories remains largely unclear. We used a decoding method to quantify neural responses associated with the perception of different categories of body parts. Nine subjects underwent magnetoencephalography (MEG) recordings of their brain activity while viewing 14 images of feet, hands, mouths, and objects. We decoded the categories of the presented images from the MEG signals using a support vector machine (SVM) and estimated decoding accuracy by 10-fold cross-validation. For each subject, an apparently body-sensitive response was observed, and the MEG signals corresponding to the three body categories were classified based on signals in the occipitotemporal cortex. The accuracy in decoding body-part categories (with a peak at approximately 48%) was above chance (33.3%) and significantly higher than that for random categories. The time course and location suggest that these responses are body-sensitive and carry information about body-part category. Finally, this non-invasive method can decode the category of a visual object with high temporal and spatial resolution, a result that may have a significant impact on brain–machine interface research. PMID:26582986
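
    The classification analysis described above (a linear SVM evaluated with 10-fold cross-validation against a 33.3% chance level) can be expressed compactly; the sketch below uses scikit-learn with placeholder data standing in for the MEG feature matrix.

      # SVM decoding with 10-fold cross-validation against a 33.3% chance level
      # (placeholder data stand in for the MEG feature matrix).
      import numpy as np
      from sklearn.model_selection import cross_val_score
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler
      from sklearn.svm import SVC

      X = np.random.randn(180, 300)            # trials x MEG features (placeholder)
      y = np.random.randint(0, 3, 180)         # labels: 0 = foot, 1 = hand, 2 = mouth
      clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
      scores = cross_val_score(clf, X, y, cv=10)
      print(f"decoding accuracy: {scores.mean():.2f} (chance = 0.33)")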

  16. Categorical discrimination of human body parts by magnetoencephalography.

    PubMed

    Nakamura, Misaki; Yanagisawa, Takufumi; Okamura, Yumiko; Fukuma, Ryohei; Hirata, Masayuki; Araki, Toshihiko; Kamitani, Yukiyasu; Yorifuji, Shiro

    2015-01-01

    Humans recognize body parts in categories. Previous studies have shown that responses in the fusiform body area (FBA) and extrastriate body area (EBA) are evoked by the perception of the human body, whether presented as a whole or as isolated parts. These responses occur approximately 190 ms after body images are visualized. The extent to which body-sensitive responses show specificity for different body-part categories remains largely unclear. We used a decoding method to quantify neural responses associated with the perception of different categories of body parts. Nine subjects underwent magnetoencephalography (MEG) recordings of their brain activity while viewing 14 images of feet, hands, mouths, and objects. We decoded the categories of the presented images from the MEG signals using a support vector machine (SVM) and estimated decoding accuracy by 10-fold cross-validation. For each subject, an apparently body-sensitive response was observed, and the MEG signals corresponding to the three body categories were classified based on signals in the occipitotemporal cortex. The accuracy in decoding body-part categories (with a peak at approximately 48%) was above chance (33.3%) and significantly higher than that for random categories. The time course and location suggest that these responses are body-sensitive and carry information about body-part category. Finally, this non-invasive method can decode the category of a visual object with high temporal and spatial resolution, a result that may have a significant impact on brain-machine interface research.

  17. Dopamine D2-receptor blockade enhances decoding of prefrontal signals in humans.

    PubMed

    Kahnt, Thorsten; Weber, Susanna C; Haker, Helene; Robbins, Trevor W; Tobler, Philippe N

    2015-03-04

    The prefrontal cortex houses representations critical for ongoing and future behavior expressed in the form of patterns of neural activity. Dopamine has long been suggested to play a key role in the integrity of such representations, with D2-receptor activation rendering them flexible but weak. However, it is currently unknown whether and how D2-receptor activation affects prefrontal representations in humans. In the current study, we use dopamine receptor-specific pharmacology and multivoxel pattern-based functional magnetic resonance imaging to test the hypothesis that blocking D2-receptor activation enhances prefrontal representations. Human subjects performed a simple reward prediction task after double-blind and placebo controlled administration of the D2-receptor antagonist amisulpride. Using a whole-brain searchlight decoding approach we show that D2-receptor blockade enhances decoding of reward signals in the medial orbitofrontal cortex. Examination of activity patterns suggests that amisulpride increases the separation of activity patterns related to reward versus no reward. Moreover, consistent with the cortical distribution of D2 receptors, post hoc analyses showed enhanced decoding of motor signals in motor cortex, but not of visual signals in visual cortex. These results suggest that D2-receptor blockade enhances content-specific representations in frontal cortex, presumably by a dopamine-mediated increase in pattern separation. These findings are in line with a dual-state model of prefrontal dopamine, and provide new insights into the potential mechanism of action of dopaminergic drugs.
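
    A whole-brain searchlight simply repeats a small decoding analysis in a sphere around every voxel and maps the resulting accuracies back onto the brain. The sketch below is a conceptual NumPy/scikit-learn version with illustrative parameters, not the authors' pipeline.

      # Conceptual whole-brain searchlight (illustrative radius and classifier;
      # not the authors' pipeline): decode reward vs. no-reward from the sphere
      # of voxels around each voxel and keep one accuracy per voxel.
      import numpy as np
      from sklearn.model_selection import cross_val_score
      from sklearn.svm import SVC

      def searchlight(data, labels, coords, radius=3.0, cv=5):
          """data: (n_trials, n_voxels); coords: (n_voxels, 3) voxel coordinates."""
          acc = np.zeros(coords.shape[0])
          for i, center in enumerate(coords):
              sphere = np.linalg.norm(coords - center, axis=1) <= radius
              acc[i] = cross_val_score(SVC(kernel="linear"),
                                       data[:, sphere], labels, cv=cv).mean()
          return acc                            # mappable back onto the brain volume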

  18. Tracking Deceased-Related Thinking with Neural Pattern Decoding of a Cortical-Basal Ganglia Circuit.

    PubMed

    Schneck, Noam; Haufe, Stefan; Tu, Tao; Bonanno, George A; Ochsner, Kevin; Sajda, Paul; Mann, J John

    2017-07-01

    Deceased-related thinking is central to grieving and potentially critical to processing of the loss. Self-report measurements might fail to capture important elements of deceased-related thinking and processing. Here, we used a machine learning approach applied to fMRI, known as neural decoding, to develop a measure of ongoing deceased-related processing. Twenty-three subjects who had lost a first-degree relative, spouse, or partner within the previous 14 months underwent two fMRI tasks. They first viewed pictures and stories related to the deceased, a living control, and a demographic control figure while providing ongoing valence and arousal ratings. Second, they performed a 10-minute Sustained Attention to Response Task (SART) with thought probes every 25-35 seconds to identify deceased, living and self-related thoughts. A conjunction analysis, controlling for valence/arousal, identified neural clusters in basal ganglia, orbital prefrontal cortex and insula associated with both types of deceased-related stimuli vs. the two control conditions in the first task. This pattern was applied to fMRI data collected during the SART, and discriminated deceased-related but not living or self-related thoughts, independently of grief severity and time since loss. Deceased-related thoughts on the SART correlated with self-reported avoidance. The neural model predicted avoidance over and above deceased-related thoughts. A neural pattern trained to identify mental representations of the deceased tracked deceased-related thinking during a sustained attention task and also predicted subject-level avoidance. This approach provides a new imaging tool to be used as an index of processing the deceased for future studies of complicated grief.

  19. Trellises and Trellis-Based Decoding Algorithms for Linear Block Codes. Part 3

    NASA Technical Reports Server (NTRS)

    Lin, Shu

    1998-01-01

    Decoding algorithms based on the trellis representation of a code (block or convolutional) drastically reduce decoding complexity. The best known and most commonly used trellis-based decoding algorithm is the Viterbi algorithm. It is a maximum likelihood decoding algorithm. Convolutional codes with the Viterbi decoding have been widely used for error control in digital communications over the last two decades. This chapter is concerned with the application of the Viterbi decoding algorithm to linear block codes. First, the Viterbi algorithm is presented. Then, optimum sectionalization of a trellis to minimize the computational complexity of a Viterbi decoder is discussed and an algorithm is presented. Some design issues for IC (integrated circuit) implementation of a Viterbi decoder are considered and discussed. Finally, a new decoding algorithm based on the principle of compare-select-add is presented. This new algorithm can be applied to both block and convolutional codes and is more efficient than the conventional Viterbi algorithm based on the add-compare-select principle. This algorithm is particularly efficient for rate 1/n antipodal convolutional codes and their high-rate punctured codes. It reduces computational complexity by one-third compared with the Viterbi algorithm.
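
    For readers who want the add-compare-select recursion in concrete form, the sketch below implements a hard-decision Viterbi decoder for the classic rate-1/2, constraint-length-3 convolutional code (generators 7 and 5 in octal). It illustrates conventional trellis decoding only; the sectionalized block-code trellises and the new compare-select-add variant discussed in the chapter are not reproduced here.

      # Hard-decision Viterbi decoder for the rate-1/2, constraint-length-3
      # convolutional code with generators 7 and 5 (octal); illustrates the
      # conventional add-compare-select recursion on a four-state trellis.
      def conv_encode(bits):
          s1 = s2 = 0
          out = []
          for u in bits:
              out += [u ^ s1 ^ s2, u ^ s2]      # generator polynomials 111 and 101
              s1, s2 = u, s1
          return out

      def viterbi_decode(received, n_bits):
          INF = float("inf")
          metric = {(0, 0): 0.0, (0, 1): INF, (1, 0): INF, (1, 1): INF}
          paths = {s: [] for s in metric}
          for t in range(n_bits):
              r = received[2 * t:2 * t + 2]
              new_metric = {s: INF for s in metric}
              new_paths = {}
              for (s1, s2), m in metric.items():
                  if m == INF:
                      continue
                  for u in (0, 1):                           # one branch per input bit
                      o = [u ^ s1 ^ s2, u ^ s2]
                      bm = (o[0] != r[0]) + (o[1] != r[1])   # Hamming branch metric
                      nxt = (u, s1)
                      if m + bm < new_metric[nxt]:           # add-compare-select
                          new_metric[nxt] = m + bm
                          new_paths[nxt] = paths[(s1, s2)] + [u]
              metric, paths = new_metric, new_paths
          return paths[min(metric, key=metric.get)]          # survivor of the best state

      # msg = [1, 0, 1, 1, 0]; viterbi_decode(conv_encode(msg), len(msg)) == msg

    The compare-select-add reformulation described above reorders these same per-state operations to reduce the work per trellis section.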

  20. Improving brain-machine interface performance by decoding intended future movements

    NASA Astrophysics Data System (ADS)

    Willett, Francis R.; Suminski, Aaron J.; Fagg, Andrew H.; Hatsopoulos, Nicholas G.

    2013-04-01

    Objective. A brain-machine interface (BMI) records neural signals in real time from a subject's brain, interprets them as motor commands, and reroutes them to a device such as a robotic arm, so as to restore lost motor function. Our objective here is to improve BMI performance by minimizing the deleterious effects of delay in the BMI control loop. We mitigate the effects of delay by decoding the subject's intended movements a short time lead in the future. Approach. We use the decoded, intended future movements of the subject as the control signal that drives the movement of our BMI. This should allow the user's intended trajectory to be implemented more quickly by the BMI, reducing the amount of delay in the system. In our experiment, a monkey (Macaca mulatta) uses a future prediction BMI to control a simulated arm to hit targets on a screen. Main Results. Results from experiments with BMIs possessing different system delays (100, 200 and 300 ms) show that the monkey can make significantly straighter, faster and smoother movements when the decoder predicts the user's future intent. We also characterize how BMI performance changes as a function of delay, and explore offline how the accuracy of future prediction decoders varies at different time leads. Significance. This study is the first to characterize the effects of control delays in a BMI and to show that decoding the user's future intent can compensate for the negative effect of control delay on BMI performance.
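
    One simple way to decode intended future movement is to regress the kinematic state a fixed lead ahead onto a window of recent neural activity. The sketch below does this with ridge regression; the lead, lag window, and regularization are illustrative assumptions, not the decoder used in the study.

      # Ridge regression from a window of recent neural activity to the kinematic
      # state a fixed lead ahead (lead, lag window, and regularization are
      # illustrative assumptions).
      import numpy as np

      def fit_future_decoder(spikes, kinematics, lead_bins=3, n_lags=10, reg=1.0):
          """spikes: (T, n_units) binned counts; kinematics: (T, k) position/velocity."""
          T = spikes.shape[0]
          rows = range(n_lags, T - lead_bins)
          X = np.array([spikes[t - n_lags:t].ravel() for t in rows])   # recent history
          Y = kinematics[n_lags + lead_bins:T]                         # state lead_bins ahead
          return np.linalg.solve(X.T @ X + reg * np.eye(X.shape[1]), X.T @ Y)

      def predict_future(recent_spikes, W):
          """recent_spikes: (n_lags, n_units) latest bins -> predicted future state."""
          return recent_spikes.ravel() @ W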

  1. Low Power LDPC Code Decoder Architecture Based on Intermediate Message Compression Technique

    NASA Astrophysics Data System (ADS)

    Shimizu, Kazunori; Togawa, Nozomu; Ikenaga, Takeshi; Goto, Satoshi

    Reducing the power dissipation of LDPC code decoders is a major challenge in applying them to practical digital communication systems. In this paper, we propose a low power LDPC code decoder architecture based on an intermediate message-compression technique with the following features: (i) an intermediate message compression technique enables the decoder to reduce the required memory capacity and write power dissipation; (ii) a clock-gated shift-register-based intermediate message memory architecture enables the decoder to decompress the compressed messages in a single clock cycle while reducing the read power dissipation. The combination of these two techniques enables the decoder to reduce power dissipation while maintaining decoding throughput. Simulation results show that the proposed architecture improves power efficiency by up to 52% and 18% compared with decoders based on the overlapped schedule and on the rapid convergence schedule without the proposed techniques, respectively.

  2. Multisession, noninvasive closed-loop neuroprosthetic control of grasping by upper limb amputees.

    PubMed

    Agashe, H A; Paek, A Y; Contreras-Vidal, J L

    2016-01-01

    Upper limb amputation results in a severe reduction in the quality of life of affected individuals due to their inability to easily perform activities of daily living. Brain-machine interfaces (BMIs) that translate grasping intent from the brain's neural activity into prosthetic control may increase the level of natural control currently available in myoelectric prostheses. Current BMI techniques demonstrate accurate arm position and single degree-of-freedom grasp control but are invasive and require daily recalibration. In this study we tested if transradial amputees (A1 and A2) could control grasp preshaping in a prosthetic device using a noninvasive electroencephalography (EEG)-based closed-loop BMI system. Participants attempted to grasp presented objects by controlling two grasping synergies, in 12 sessions performed over 5 weeks. Prior to closed-loop control, the first six sessions included a decoder calibration phase using action observation by the participants; thereafter, the decoder was fixed to examine neuroprosthetic performance in the absence of decoder recalibration. The ability of participants to control the prosthetic was measured by the success rate of grasping, i.e., the percentage of trials within a session in which presented objects were successfully grasped. Participant A1 maintained a steady success rate (63±3%) across sessions (significantly above chance [41±5%] for 11 sessions). Participant A2, who was under the influence of pharmacological treatment for depression, hormone imbalance, pain management (for phantom pain as well as shoulder joint inflammation), and drug dependence, achieved a success rate of 32±2% across sessions (significantly above chance [27±5%] in only two sessions). EEG signal quality was stable across sessions, but the decoders created during the first six sessions showed variation, indicating that EEG features relevant to decoding at a smaller timescale (100 ms) may not be stable. Overall, our results show that (a) an EEG-based BMI for grasping is a feasible strategy for further investigation of prosthetic control by amputees, and (b) factors that may affect brain activity, such as medication, need further examination to improve accuracy and stability of BMI performance.

  3. Trellises and Trellis-Based Decoding Algorithms for Linear Block Codes. Part 3; An Iterative Decoding Algorithm for Linear Block Codes Based on a Low-Weight Trellis Search

    NASA Technical Reports Server (NTRS)

    Lin, Shu; Fossorier, Marc

    1998-01-01

    For long linear block codes, maximum likelihood decoding based on full code trellises would be very hard to implement if not impossible. In this case, we may wish to trade error performance for the reduction in decoding complexity. Sub-optimum soft-decision decoding of a linear block code based on a low-weight sub-trellis can be devised to provide an effective trade-off between error performance and decoding complexity. This chapter presents such a suboptimal decoding algorithm for linear block codes. This decoding algorithm is iterative in nature and based on an optimality test. It has the following important features: (1) a simple method to generate a sequence of candidate code-words, one at a time, for test; (2) a sufficient condition for testing a candidate code-word for optimality; and (3) a low-weight sub-trellis search for finding the most likely (ML) code-word.

  4. Neural correlates and neural computations in posterior parietal cortex during perceptual decision-making

    PubMed Central

    Huk, Alexander C.; Meister, Miriam L. R.

    2012-01-01

    A recent line of work has found remarkable success in relating perceptual decision-making and the spiking activity in the macaque lateral intraparietal area (LIP). In this review, we focus on questions about the neural computations in LIP that are not answered by demonstrations of neural correlates of psychological processes. We highlight three areas of limitations in our current understanding of the precise neural computations that might underlie neural correlates of decisions: (1) empirical questions not yet answered by existing data; (2) implementation issues related to how neural circuits could actually implement the mechanisms suggested by both extracellular neurophysiology and psychophysics; and (3) ecological constraints related to the use of well-controlled laboratory tasks and whether they provide an accurate window on sensorimotor computation. These issues motivate the adoption of a more general “encoding-decoding framework” that will be fruitful for more detailed contemplation of how neural computations in LIP relate to the formation of perceptual decisions. PMID:23087623

  5. In Vivo Neuromechanics: Decoding Causal Motor Neuron Behavior with Resulting Musculoskeletal Function.

    PubMed

    Sartori, Massimo; Yavuz, Utku Ş; Farina, Dario

    2017-10-18

    Human motor function emerges from the interaction between the neuromuscular and the musculoskeletal systems. Despite the knowledge of the mechanisms underlying neural and mechanical functions, there is no relevant understanding of the neuro-mechanical interplay in the neuro-musculo-skeletal system. This currently represents the major challenge to the understanding of human movement. We address this challenge by proposing a paradigm for investigating spinal motor neuron contribution to skeletal joint mechanical function in the intact human in vivo. We employ multi-muscle spatial sampling and deconvolution of high-density fiber electrical activity to decode accurate α-motor neuron discharges across five lumbosacral segments in the human spinal cord. We use complete α-motor neuron discharge series to drive forward subject-specific models of the musculoskeletal system in open-loop with no corrective feedback. We perform validation tests where mechanical moments are estimated with no knowledge of reference data over unseen conditions. This enables accurate blinded estimation of ankle function purely from motor neuron information. Remarkably, this enables observing causal associations between spinal motor neuron activity and joint moment control. We provide a new class of neural data-driven musculoskeletal modeling formulations for bridging between movement neural and mechanical levels in vivo with implications for understanding motor physiology, pathology, and recovery.

  6. Neural mechanisms of limb position estimation in the primate brain.

    PubMed

    Shi, Ying; Buneo, Christopher A

    2011-01-01

    Understanding the neural mechanisms of limb position estimation is important both for comprehending the neural control of goal directed arm movements and for developing neuroprosthetic systems designed to replace lost limb function. Here we examined the role of area 5 of the posterior parietal cortex in estimating limb position based on visual and somatic (proprioceptive, efference copy) signals. Single unit recordings were obtained as monkeys reached to visual targets presented in a semi-immersive virtual reality environment. On half of the trials animals were required to maintain their limb position at these targets while receiving both visual and non-visual feedback of their arm position, while on the other trials visual feedback was withheld. When examined individually, many area 5 neurons were tuned to the position of the limb in the workspace but very few neurons modulated their firing rates based on the presence/absence of visual feedback. At the population level however decoding of limb position was somewhat more accurate when visual feedback was provided. These findings support a role for area 5 in limb position estimation but also suggest that visual signals regarding limb position are only weakly represented in this area, and only at the population level.

  7. Neuroprosthetic Decoder Training as Imitation Learning

    PubMed Central

    Merel, Josh; Paninski, Liam; Cunningham, John P.

    2016-01-01

    Neuroprosthetic brain-computer interfaces function via an algorithm which decodes neural activity of the user into movements of an end effector, such as a cursor or robotic arm. In practice, the decoder is often learned by updating its parameters while the user performs a task. When the user’s intention is not directly observable, recent methods have demonstrated value in training the decoder against a surrogate for the user’s intended movement. Here we show that training a decoder in this way is a novel variant of an imitation learning problem, where an oracle or expert is employed for supervised training in lieu of direct observations, which are not available. Specifically, we describe how a generic imitation learning meta-algorithm, dataset aggregation (DAgger), can be adapted to train a generic brain-computer interface. By deriving existing learning algorithms for brain-computer interfaces in this framework, we provide a novel analysis of regret (an important metric of learning efficacy) for brain-computer interfaces. This analysis allows us to characterize the space of algorithmic variants and bounds on their regret rates. Existing approaches for decoder learning have been performed in the cursor control setting, but the available design principles for these decoders are such that it has been impossible to scale them to naturalistic settings. Leveraging our findings, we then offer an algorithm that combines imitation learning with optimal control, which should allow for training of arbitrary effectors for which optimal control can generate goal-oriented control. We demonstrate this novel and general BCI algorithm with simulated neuroprosthetic control of a 26 degree-of-freedom model of an arm, a sophisticated and realistic end effector. PMID:27191387

  8. Robustness of neuroprosthetic decoding algorithms.

    PubMed

    Serruya, Mijail; Hatsopoulos, Nicholas; Fellows, Matthew; Paninski, Liam; Donoghue, John

    2003-03-01

    We assessed the ability of two algorithms to predict hand kinematics from neural activity as a function of the amount of data used to determine the algorithm parameters. Using chronically implanted intracortical arrays, single- and multineuron discharge was recorded during trained step tracking and slow continuous tracking tasks in macaque monkeys. The effect of increasing the amount of data used to build a neural decoding model on the ability of that model to predict hand kinematics accurately was examined. We evaluated how well a maximum-likelihood model classified discrete reaching directions and how well a linear filter model reconstructed continuous hand positions over time within and across days. For each of these two models we asked two questions: (1) How does classification performance change as the amount of data the model is built upon increases? (2) How does varying the time interval between the data used to build the model and the data used to test the model affect reconstruction? Less than 1 min of data for the discrete task (8 to 13 neurons) and less than 3 min (8 to 18 neurons) for the continuous task were required to build optimal models. Optimal performance was defined by a cost function we derived that reflects both the ability of the model to predict kinematics accurately and the cost of taking more time to build such models. For both the maximum-likelihood classifier and the linear filter model, increasing the duration between the time of building and testing the model within a day did not cause any significant trend of degradation or improvement in performance. Linear filters built on one day and tested on neural data on a subsequent day generated error-measure distributions that were not significantly different from those generated when the linear filters were tested on neural data from the initial day (p<0.05, Kolmogorov-Smirnov test). These data show that only a small amount of data from a limited number of cortical neurons appears to be necessary to construct robust models to predict kinematic parameters for the subsequent hours. Motor-control signals derived from neurons in motor cortex can be reliably acquired for use in neural prosthetic devices. Adequate decoding models can be built rapidly from small numbers of cells and maintained with daily calibration sessions.
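
    As an illustration of the maximum-likelihood classification of discrete reach directions described above, the sketch below assumes independent Poisson spike counts per neuron; the study's exact likelihood model is not specified here.

      # Maximum-likelihood classification of reach direction from spike counts,
      # assuming independent Poisson firing per neuron (an illustrative model).
      import numpy as np

      def fit_ml_classifier(counts, directions):
          """counts: (n_trials, n_neurons); directions: (n_trials,) integer labels."""
          labels = np.unique(directions)
          rates = np.array([counts[directions == d].mean(axis=0) + 1e-6 for d in labels])
          return labels, rates                  # per-direction mean counts per bin

      def classify(trial_counts, labels, rates):
          """Pick the direction maximizing the Poisson log-likelihood of the counts."""
          loglik = trial_counts @ np.log(rates).T - rates.sum(axis=1)
          return labels[np.argmax(loglik)]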

  9. Multivariate Pattern Analysis Reveals Category-Related Organization of Semantic Representations in Anterior Temporal Cortex.

    PubMed

    Malone, Patrick S; Glezer, Laurie S; Kim, Judy; Jiang, Xiong; Riesenhuber, Maximilian

    2016-09-28

    The neural substrates of semantic representation have been the subject of much controversy. The study of semantic representations is complicated by difficulty in disentangling perceptual and semantic influences on neural activity, as well as in identifying stimulus-driven, "bottom-up" semantic selectivity unconfounded by top-down task-related modulations. To address these challenges, we trained human subjects to associate pseudowords (TPWs) with various animal and tool categories. To decode semantic representations of these TPWs, we used multivariate pattern classification of fMRI data acquired while subjects performed a semantic oddball detection task. Crucially, the classifier was trained and tested on disjoint sets of TPWs, so that the classifier had to use the semantic information from the training set to correctly classify the test set. Animal and tool TPWs were successfully decoded based on fMRI activity in spatially distinct subregions of the left medial anterior temporal lobe (LATL). In addition, tools (but not animals) were successfully decoded from activity in the left inferior parietal lobule. The tool-selective LATL subregion showed greater functional connectivity with left inferior parietal lobule and ventral premotor cortex, indicating that each LATL subregion exhibits distinct patterns of connectivity. Our findings demonstrate category-selective organization of semantic representations in LATL into spatially distinct subregions, continuing the lateral-medial segregation of activation in posterior temporal cortex previously observed in response to images of animals and tools, respectively. Together, our results provide evidence for segregation of processing hierarchies for different classes of objects and the existence of multiple, category-specific semantic networks in the brain. The location and specificity of semantic representations in the brain are still widely debated. We trained human participants to associate specific pseudowords with various animal and tool categories, and used multivariate pattern classification of fMRI data to decode the semantic representations of the trained pseudowords. We found that: (1) animal and tool information was organized in category-selective subregions of medial left anterior temporal lobe (LATL); (2) tools, but not animals, were encoded in left inferior parietal lobe; and (3) LATL subregions exhibited distinct patterns of functional connectivity with category-related regions across cortex. Our findings suggest that semantic knowledge in LATL is organized in category-related subregions, providing evidence for the existence of multiple, category-specific semantic representations in the brain.

  10. Decoding the Charitable Brain: Empathy, Perspective Taking, and Attention Shifts Differentially Predict Altruistic Giving.

    PubMed

    Tusche, Anita; Böckler, Anne; Kanske, Philipp; Trautwein, Fynn-Mathis; Singer, Tania

    2016-04-27

    Altruistic behavior varies considerably across people and decision contexts. The relevant computational and motivational mechanisms that underlie its heterogeneity, however, are poorly understood. Using a charitable giving task together with multivariate decoding techniques, we identified three distinct psychological mechanisms underlying altruistic decision-making (empathy, perspective taking, and attentional reorienting) and linked them to dissociable neural computations. Neural responses in the anterior insula (AI) (but not temporoparietal junction [TPJ]) encoded trial-wise empathy for beneficiaries, whereas the TPJ (but not AI) predicted the degree of perspective taking. Importantly, the relative influence of both socio-cognitive processes differed across individuals: participants whose donation behavior was heavily influenced by affective empathy exhibited higher predictive accuracies for generosity in AI, whereas those who strongly relied on cognitive perspective taking showed improved predictions of generous donations in TPJ. Furthermore, subject-specific contributions of both processes for donations were reflected in participants' empathy and perspective taking responses in a separate fMRI task (EmpaToM), suggesting that process-specific inputs into altruistic choices may reflect participants' general propensity to either empathize or mentalize. Finally, using independent attention task data, we identified shared neural codes for attentional reorienting and generous donations in the posterior superior temporal sulcus, suggesting that domain-general attention shifts also contribute to generous behavior (but not in TPJ or AI). Overall, our findings demonstrate highly specific roles of AI for affective empathy and TPJ for cognitive perspective taking as precursors of prosocial behavior and suggest that these discrete routes of social cognition differentially drive intraindividual and interindividual differences in altruistic behavior. Human societies depend on the altruistic behavior of their members, but teasing apart its underlying motivations and neural mechanisms poses a serious challenge. Using multivariate decoding techniques, we delineated three distinct processes for altruistic decision-making (affective empathy, cognitive perspective taking, and domain-general attention shifts), linked them to dissociable neural computations, and identified their relative influence across individuals. Distinguishing process-specific computations both behaviorally and neurally is crucial for developing complete theoretical and neuroscientific accounts of altruistic behavior and more effective means of increasing it. Moreover, information on the relative influence of subprocesses across individuals and its link to people's more general propensity to engage empathy or perspective taking can inform training programs to increase prosociality, considering their "fit" with different individuals.

  11. Successful Decoding of Famous Faces in the Fusiform Face Area

    PubMed Central

    Axelrod, Vadim; Yovel, Galit

    2015-01-01

    What are the neural mechanisms of face recognition? It is believed that the network of face-selective areas, which spans the occipital, temporal, and frontal cortices, is important in face recognition. A number of previous studies indeed reported that face identity could be discriminated based on patterns of multivoxel activity in the fusiform face area and the anterior temporal lobe. However, given the difficulty in localizing the face-selective area in the anterior temporal lobe, its role in face recognition is still unknown. Furthermore, previous studies limited their analysis to occipito-temporal regions without testing identity decoding in more anterior face-selective regions, such as the amygdala and prefrontal cortex. In the current high-resolution functional Magnetic Resonance Imaging study, we systematically examined the decoding of the identity of famous faces in the temporo-frontal network of face-selective and adjacent non-face-selective regions. A special focus has been put on the face-area in the anterior temporal lobe, which was reliably localized using an optimized scanning protocol. We found that face-identity could be discriminated above chance level only in the fusiform face area. Our results corroborate the role of the fusiform face area in face recognition. Future studies are needed to further explore the role of the more recently discovered anterior face-selective areas in face recognition. PMID:25714434

  12. Population decoding of motor cortical activity using a generalized linear model with hidden states.

    PubMed

    Lawhern, Vernon; Wu, Wei; Hatsopoulos, Nicholas; Paninski, Liam

    2010-06-15

    Generalized linear models (GLMs) have been developed for modeling and decoding population neuronal spiking activity in the motor cortex. These models provide reasonable characterizations between neural activity and motor behavior. However, they lack a description of movement-related terms which are not observed directly in these experiments, such as muscular activation, the subject's level of attention, and other internal or external states. Here we propose to include a multi-dimensional hidden state to address these states in a GLM framework where the spike count at each time is described as a function of the hand state (position, velocity, and acceleration), truncated spike history, and the hidden state. The model can be identified by an Expectation-Maximization algorithm. We tested this new method in two datasets where spikes were simultaneously recorded using a multi-electrode array in the primary motor cortex of two monkeys. It was found that this method significantly improves the model-fitting over the classical GLM, for hidden dimensions varying from 1 to 4. This method also provides more accurate decoding of hand state (reducing the mean square error by up to 29% in some cases), while retaining real-time computational efficiency. These improvements on representation and decoding over the classical GLM model suggest that this new approach could contribute as a useful tool to motor cortical decoding and prosthetic applications. Copyright (c) 2010 Elsevier B.V. All rights reserved.
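
    The classical GLM that this work extends models the binned spike count as a Poisson variable whose log-rate is a linear function of hand kinematics and truncated spike history; the paper's contribution is to add a multi-dimensional hidden state fit with Expectation-Maximization, which is beyond this sketch. The simulated data and regularization below are illustrative assumptions only.

```python
# Classical Poisson GLM for motor cortical spike counts: lambda_t = exp(w . x_t),
# where x_t stacks hand kinematics and truncated spike history. The hidden-state
# variant described in the paper adds an unobserved multi-dimensional state
# estimated with EM; only the observable part is sketched here. Simulated data.
import numpy as np
from sklearn.linear_model import PoissonRegressor

rng = np.random.default_rng(1)
T, hist = 2000, 3                                   # time bins, spike-history lags

kin = rng.normal(size=(T, 6))                       # pos(2), vel(2), acc(2), simulated
true_w = rng.normal(scale=0.3, size=6)
rate = np.exp(kin @ true_w - 1.0)                   # ground-truth intensity
spikes = rng.poisson(rate)                          # observed spike counts

# design matrix: kinematics plus truncated spike history
history = np.column_stack([np.roll(spikes, k) for k in range(1, hist + 1)])
X, y = np.hstack([kin, history])[hist:], spikes[hist:]

glm = PoissonRegressor(alpha=1e-3, max_iter=500).fit(X, y)
print("deviance-based pseudo-R^2 on training data:", round(glm.score(X, y), 3))
```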

  13. Population Decoding of Motor Cortical Activity using a Generalized Linear Model with Hidden States

    PubMed Central

    Lawhern, Vernon; Wu, Wei; Hatsopoulos, Nicholas G.; Paninski, Liam

    2010-01-01

    Generalized linear models (GLMs) have been developed for modeling and decoding population neuronal spiking activity in the motor cortex. These models provide reasonable characterizations between neural activity and motor behavior. However, they lack a description of movement-related terms which are not observed directly in these experiments, such as muscular activation, the subject's level of attention, and other internal or external states. Here we propose to include a multi-dimensional hidden state to address these states in a GLM framework where the spike count at each time is described as a function of the hand state (position, velocity, and acceleration), truncated spike history, and the hidden state. The model can be identified by an Expectation-Maximization algorithm. We tested this new method in two datasets where spikes were simultaneously recorded using a multi-electrode array in the primary motor cortex of two monkeys. It was found that this method significantly improves the model-fitting over the classical GLM, for hidden dimensions varying from 1 to 4. This method also provides more accurate decoding of hand state (lowering the Mean Square Error by up to 29% in some cases), while retaining real-time computational efficiency. These improvements on representation and decoding over the classical GLM model suggest that this new approach could contribute as a useful tool to motor cortical decoding and prosthetic applications. PMID:20359500

  14. Simultaneous Scalp Electroencephalography (EEG), Electromyography (EMG), and Whole-body Segmental Inertial Recording for Multi-modal Neural Decoding

    PubMed Central

    Bulea, Thomas C.; Kilicarslan, Atilla; Ozdemir, Recep; Paloski, William H.; Contreras-Vidal, Jose L.

    2013-01-01

    Recent studies support the involvement of supraspinal networks in control of bipedal human walking. Part of this evidence encompasses studies, including our previous work, demonstrating that gait kinematics and limb coordination during treadmill walking can be inferred from the scalp electroencephalogram (EEG) with reasonably high decoding accuracies. These results provide impetus for development of non-invasive brain-machine-interface (BMI) systems for use in restoration and/or augmentation of gait, a primary goal of rehabilitation research. To date, studies examining EEG decoding of activity during gait have been limited to treadmill walking in a controlled environment. However, to be practically viable, a BMI system must be applicable for use in everyday locomotor tasks such as over ground walking and turning. Here, we present a novel protocol for non-invasive collection of brain activity (EEG), muscle activity (electromyography (EMG)), and whole-body kinematic data (head, torso, and limb trajectories) during both treadmill and over ground walking tasks. By collecting these data in the uncontrolled environment, insight can be gained regarding the feasibility of decoding unconstrained gait and surface EMG from scalp EEG. PMID:23912203

  15. Decoding the future from past experience: learning shapes predictions in early visual cortex.

    PubMed

    Luft, Caroline D B; Meeson, Alan; Welchman, Andrew E; Kourtzi, Zoe

    2015-05-01

    Learning the structure of the environment is critical for interpreting the current scene and predicting upcoming events. However, the brain mechanisms that support our ability to translate knowledge about scene statistics to sensory predictions remain largely unknown. Here we provide evidence that learning of temporal regularities shapes representations in early visual cortex that relate to our ability to predict sensory events. We tested the participants' ability to predict the orientation of a test stimulus after exposure to sequences of leftward- or rightward-oriented gratings. Using fMRI decoding, we identified brain patterns related to the observers' visual predictions rather than stimulus-driven activity. Decoding of predicted orientations following structured sequences was enhanced after training, while decoding of cued orientations following exposure to random sequences did not change. These predictive representations appear to be driven by the same large-scale neural populations that encode actual stimulus orientation and to be specific to the learned sequence structure. Thus our findings provide evidence that learning temporal structures supports our ability to predict future events by reactivating selective sensory representations as early as in primary visual cortex. Copyright © 2015 the American Physiological Society.

  16. Meaning in the avian auditory cortex: Neural representation of communication calls

    PubMed Central

    Elie, Julie E; Theunissen, Frédéric E

    2014-01-01

    Understanding how the brain extracts the behavioral meaning carried by specific vocalization types that can be emitted by various vocalizers and in different conditions is a central question in auditory research. This semantic categorization is a fundamental process required for acoustic communication and presupposes discriminative and invariance properties of the auditory system for conspecific vocalizations. Songbirds have been used extensively to study vocal learning, but the communicative function of all their vocalizations and their neural representation has yet to be examined. In our research, we first generated a library containing almost the entire zebra finch vocal repertoire and organized communication calls along 9 different categories based on their behavioral meaning. We then investigated the neural representations of these semantic categories in the primary and secondary auditory areas of 6 anesthetized zebra finches. To analyze how single units encode these call categories, we described neural responses in terms of their discrimination, selectivity and invariance properties. Quantitative measures for these neural properties were obtained using an optimal decoder based both on spike counts and spike patterns. Information theoretic metrics show that almost half of the single units encode semantic information. Neurons achieve higher discrimination of these semantic categories by being more selective and more invariant. These results demonstrate that computations necessary for semantic categorization of meaningful vocalizations are already present in the auditory cortex and emphasize the value of a neuro-ethological approach to understand vocal communication. PMID:25728175
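
    A decoder "based on spike counts" of the kind described can be sketched as a naive Bayes classifier that assigns each trial to the call category whose training-set mean counts maximize the Poisson log-likelihood of the observed population response; the authors' decoder additionally used spike timing patterns, which this sketch omits. All numbers below are simulated placeholders.

```python
# Spike-count decoder: assign each trial to the call category whose estimated
# mean counts maximize the Poisson log-likelihood of the observed population
# response. (Spike timing patterns, also used in the paper, are omitted here.)
import numpy as np
from scipy.stats import poisson

rng = np.random.default_rng(2)
n_categories, n_neurons, n_train, n_test = 9, 20, 40, 20

true_rates = rng.uniform(1, 10, size=(n_categories, n_neurons))
train = rng.poisson(true_rates[:, None, :], size=(n_categories, n_train, n_neurons))
test = rng.poisson(true_rates[:, None, :], size=(n_categories, n_test, n_neurons))

mean_counts = train.mean(axis=1) + 1e-6              # one template per call category

correct = 0
for cat in range(n_categories):
    for trial in test[cat]:
        loglik = poisson.logpmf(trial, mean_counts).sum(axis=1)  # one value per category
        correct += int(np.argmax(loglik) == cat)
print("decoding accuracy:", correct / (n_categories * n_test), "| chance:", 1 / n_categories)
```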

  17. Relationship between speed and EEG activity during imagined and executed hand movements

    NASA Astrophysics Data System (ADS)

    Yuan, Han; Perdoni, Christopher; He, Bin

    2010-04-01

    The relationship between primary motor cortex and movement kinematics has been shown in nonhuman primate studies of hand reaching or drawing tasks. Studies have demonstrated that the neural activities accompanying or immediately preceding the movement encode the direction, speed and other information. Here we investigated the relationship between the kinematics of imagined and actual hand movement, i.e. the clenching speed, and the EEG activity in ten human subjects. Study participants were asked to perform and imagine clenching of the left hand and right hand at various speeds. The EEG activity in the alpha (8-12 Hz) and beta (18-28 Hz) frequency bands were found to be linearly correlated with the speed of imagery clenching. Similar parametric modulation was also found during the execution of hand movements. A single equation relating the EEG activity to the speed and the hand (left versus right) was developed. This equation, which contained a linear independent combination of the two parameters, described the time-varying neural activity during the tasks. Based on the model, a regression approach was developed to decode the two parameters from the multiple-channel EEG signals. We demonstrated the continuous decoding of dynamic hand and speed information of the imagined clenching. In particular, the time-varying clenching speed was reconstructed in a bell-shaped profile. Our findings suggest an application to providing continuous and complex control of noninvasive brain-computer interface for movement-impaired paralytics.
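
    The decoding step described, a regression from band-limited EEG power to clenching speed, can be sketched as follows with ordinary least squares on alpha- and beta-band Welch power estimates; the channel count, the injected effect, and the omission of the hand (left/right) regressor are simplifying assumptions, not the authors' model.

```python
# Linear regression decoding of (imagined) clenching speed from EEG band power.
# Features: per-channel power in the alpha (8-12 Hz) and beta (18-28 Hz) bands.
# Simulated signals: alpha power in two "motor" channels decreases with speed.
import numpy as np
from scipy.signal import welch
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(3)
fs, n_trials, n_channels, n_samples = 250, 120, 8, 500

speed = rng.uniform(0.5, 3.0, n_trials)              # target clenching speed per trial
eeg = rng.normal(size=(n_trials, n_channels, n_samples))
t = np.arange(n_samples) / fs
for i, s in enumerate(speed):                         # speed-dependent desynchronization
    eeg[i, :2] += (2.0 - 0.5 * s) * np.sin(2 * np.pi * 10 * t)

def band_power(x, lo, hi):
    f, pxx = welch(x, fs=fs, nperseg=250, axis=-1)
    return pxx[..., (f >= lo) & (f <= hi)].mean(axis=-1)

features = np.concatenate([band_power(eeg, 8, 12), band_power(eeg, 18, 28)], axis=1)
model = LinearRegression().fit(features[:80], speed[:80])
print("held-out R^2:", round(model.score(features[80:], speed[80:]), 2))
```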

  18. Stereo-vision-based cooperative-vehicle positioning using OCC and neural networks

    NASA Astrophysics Data System (ADS)

    Ifthekhar, Md. Shareef; Saha, Nirzhar; Jang, Yeong Min

    2015-10-01

    Vehicle positioning has been subjected to extensive research regarding driving safety measures and assistance as well as autonomous navigation. The most common positioning technique used in automotive positioning is the global positioning system (GPS). However, GPS is not reliably accurate because of signal blockage caused by high-rise buildings. In addition, GPS is error prone when a vehicle is inside a tunnel. Moreover, GPS and other radio-frequency-based approaches cannot provide orientation information or the position of neighboring vehicles. In this study, we propose a cooperative-vehicle positioning (CVP) technique by using the newly developed optical camera communications (OCC). The OCC technique utilizes image sensors and cameras to receive and decode light-modulated information from light-emitting diodes (LEDs). A vehicle equipped with an OCC transceiver can receive positioning and other information such as speed, lane change, driver's condition, etc., through optical wireless links of neighboring vehicles. Thus, the target vehicle position that is too far away to establish an OCC link can be determined by a computer-vision-based technique combined with the cooperation of neighboring vehicles. In addition, we have devised a back-propagation (BP) neural-network learning method for positioning and range estimation for CVP. The proposed neural-network-based technique can estimate target vehicle position from only two image points of target vehicles using stereo vision. For this, we use rear LEDs on target vehicles as image points. We show from simulation results that our neural-network-based method achieves better accuracy than that of the computer-vision method.
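
    The positioning step can be sketched as a back-propagation-trained regression network that maps the image coordinates of a target vehicle's two rear LEDs, seen by a stereo camera pair, to its relative position. The pinhole camera model, geometry, and network below are illustrative assumptions rather than the authors' OCC setup.

```python
# Back-propagation network mapping stereo image points of two rear LEDs to the
# target vehicle's relative position (x, z). Simplified pinhole geometry and
# simulated data; focal length, baseline and LED spacing are assumed values.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(4)
focal, baseline, led_spacing = 800.0, 1.2, 1.5        # pixels, metres, metres (assumed)

n = 5000
x = rng.uniform(-5, 5, n)                              # lateral offset of target (m)
z = rng.uniform(5, 60, n)                              # longitudinal range (m)

def project(x_world, z_world, cam_offset):
    # pixel coordinate of a world point in a camera displaced by cam_offset
    return focal * (x_world - cam_offset) / z_world

u = np.column_stack([project(x - led_spacing / 2, z, 0), project(x + led_spacing / 2, z, 0),
                     project(x - led_spacing / 2, z, baseline), project(x + led_spacing / 2, z, baseline)])
u += rng.normal(0, 1.0, u.shape)                       # pixel noise

targets = np.column_stack([x, z])
net = make_pipeline(StandardScaler(),
                    MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0))
net.fit(u[:4000], targets[:4000])
err = np.abs(net.predict(u[4000:]) - targets[4000:])
print("mean |error| in x, z (m):", err.mean(axis=0).round(2))
```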

  19. Computerised working memory based cognitive remediation therapy does not affect Reading the Mind in the Eyes test performance or neural activity during a Facial Emotion Recognition test in psychosis.

    PubMed

    Mothersill, David; Dillon, Rachael; Hargreaves, April; Castorina, Marco; Furey, Emilia; Fagan, Andrew J; Meaney, James F; Fitzmaurice, Brian; Hallahan, Brian; McDonald, Colm; Wykes, Til; Corvin, Aiden; Robertson, Ian H; Donohoe, Gary

    2018-05-27

    Working memory based cognitive remediation therapy (CT) for psychosis has recently been associated with broad improvements in performance on untrained tasks measuring working memory, episodic memory and IQ, and changes in associated brain regions. However, it is unclear if these improvements transfer to the domain of social cognition and neural activity related to performance on social cognitive tasks. We examined performance on the Reading the Mind in the Eyes test (Eyes test) in a large sample of participants with psychosis who underwent working memory based CT (N = 43) compared to a Control Group of participants with psychosis (N = 35). In a subset of this sample, we used functional magnetic resonance imaging (fMRI) to examine changes in neural activity during a facial emotion recognition task in participants who underwent CT (N = 15) compared to a Control Group (N = 15). No significant effects of CT were observed on Eyes test performance or on neural activity during facial emotion recognition, either at p<0.05 family-wise error, or at a p<0.001 uncorrected threshold, within a priori social cognitive regions of interest. This study suggests that working memory based CT does not significantly impact an aspect of social cognition which was measured behaviourally and neurally. It provides further evidence that deficits in the ability to decode mental state from facial expressions are dissociable from working memory deficits, and suggests that future CT programs should target social cognition in addition to working memory for the purposes of further enhancing social function. This article is protected by copyright. All rights reserved.

  20. Group-level spatio-temporal pattern recovery in MEG decoding using multi-task joint feature learning.

    PubMed

    Kia, Seyed Mostafa; Pedregosa, Fabian; Blumenthal, Anna; Passerini, Andrea

    2017-06-15

    The use of machine learning models to discriminate between patterns of neural activity has become in recent years a standard analysis approach in neuroimaging studies. Whenever these models are linear, the estimated parameters can be visualized in the form of brain maps which can aid in understanding how brain activity in space and time underlies a cognitive function. However, the recovered brain maps often suffer from lack of interpretability, especially in group analysis of multi-subject data. To facilitate the application of brain decoding in group-level analysis, we present an application of multi-task joint feature learning for group-level multivariate pattern recovery in single-trial magnetoencephalography (MEG) decoding. The proposed method allows for recovering sparse yet consistent patterns across different subjects, and therefore enhances the interpretability of the decoding model. Our experimental results demonstrate that the multi-task joint feature learning framework is capable of recovering more meaningful patterns of varying spatio-temporally distributed brain activity across individuals while still maintaining excellent generalization performance. We compare the performance of the multi-task joint feature learning in terms of generalization, reproducibility, and quality of pattern recovery against traditional single-subject and pooling approaches on both simulated and real MEG datasets. These results can facilitate the usage of brain decoding for the characterization of fine-level distinctive patterns in group-level inference. Considering the importance of group-level analysis, the proposed approach can provide a methodological shift towards more interpretable brain decoding models. Copyright © 2017 Elsevier B.V. All rights reserved.
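
    Multi-task joint feature learning of this kind is commonly formulated as a least-squares decoding problem per subject with an L2,1 penalty that couples each feature's weights across subjects, so that features are selected jointly for the whole group. A minimal proximal-gradient (ISTA) sketch on simulated data is shown below; it is not the authors' implementation.

```python
# Multi-task joint feature learning sketch: per-subject least-squares decoding
# with an L2,1 penalty over rows of the weight matrix (features x subjects),
# which yields a sparse, group-consistent pattern. Solved with ISTA on
# simulated data; step size and regularization are illustrative choices.
import numpy as np

rng = np.random.default_rng(5)
n_subjects, n_trials, n_features, n_informative = 6, 120, 300, 15

support = rng.choice(n_features, n_informative, replace=False)   # shared informative features
Xs, ys = [], []
for s in range(n_subjects):
    w = np.zeros(n_features)
    w[support] = rng.normal(1.0, 0.2, n_informative)              # subject-specific weights, shared support
    X = rng.normal(size=(n_trials, n_features))
    Xs.append(X)
    ys.append(X @ w + rng.normal(0, 1.0, n_trials))

def l21_prox(W, thresh):
    # row-wise group soft-thresholding: shrink each feature's weight row jointly
    norms = np.linalg.norm(W, axis=1, keepdims=True)
    scale = np.maximum(0.0, 1.0 - thresh / np.maximum(norms, 1e-12))
    return W * scale

lam, step, n_iter = 0.5, 0.02, 500
W = np.zeros((n_features, n_subjects))
for _ in range(n_iter):                                           # ISTA iterations
    grad = np.column_stack([Xs[s].T @ (Xs[s] @ W[:, s] - ys[s]) / n_trials
                            for s in range(n_subjects)])
    W = l21_prox(W - step * grad, lam * step)

selected = np.flatnonzero(np.linalg.norm(W, axis=1) > 1e-6)
print("selected features:", len(selected),
      "| overlap with true support:", len(set(selected) & set(support)))
```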

  1. Multiplexing in the primate motion pathway.

    PubMed

    Huk, Alexander C

    2012-06-01

    This article begins by reviewing recent work on 3D motion processing in the primate visual system. Some of these results suggest that 3D motion signals may be processed in the same circuitry already known to compute 2D motion signals. Such "multiplexing" has implications for the study of visual cortical circuits and neural signals. A more explicit appreciation of multiplexing--and the computations required for demultiplexing--may enrich the study of the visual system by emphasizing the importance of a structured and balanced "encoding/decoding" framework. In addition to providing a fresh perspective on how successive stages of visual processing might be approached, multiplexing also raises caveats about the value of "neural correlates" for understanding neural computation.

  2. A loop-based neural architecture for structured behavior encoding and decoding.

    PubMed

    Gisiger, Thomas; Boukadoum, Mounir

    2018-02-01

    We present a new type of artificial neural network that generalizes on anatomical and dynamical aspects of the mammal brain. Its main novelty lies in its topological structure which is built as an array of interacting elementary motifs shaped like loops. These loops come in various types and can implement functions such as gating, inhibitory or executive control, or encoding of task elements to name a few. Each loop features two sets of neurons and a control region, linked together by non-recurrent projections. The two neural sets do the bulk of the loop's computations while the control unit specifies the timing and the conditions under which the computations implemented by the loop are to be performed. By functionally linking many such loops together, a neural network is obtained that may perform complex cognitive computations. To demonstrate the potential offered by such a system, we present two neural network simulations. The first illustrates the structure and dynamics of a single loop implementing a simple gating mechanism. The second simulation shows how connecting four loops in series can produce neural activity patterns that are sufficient to pass a simplified delayed-response task. We also show that this network reproduces electrophysiological measurements gathered in various regions of the brain of monkeys performing similar tasks. We also demonstrate connections between this type of neural network and recurrent or long short-term memory network models, and suggest ways to generalize them for future artificial intelligence research. Copyright © 2017 Elsevier Ltd. All rights reserved.

  3. Near Real-Time Comprehension Classification with Artificial Neural Networks: Decoding e-Learner Non-Verbal Behavior

    ERIC Educational Resources Information Center

    Holmes, Mike; Latham, Annabel; Crockett, Keeley; O'Shea, James D.

    2018-01-01

    Comprehension is an important cognitive state for learning. Human tutors recognize comprehension and non-comprehension states by interpreting learner non-verbal behavior (NVB). Experienced tutors adapt pedagogy, materials, and instruction to provide additional learning scaffold in the context of perceived learner comprehension. Near real-time…

  4. Neural Strategies for Reading Japanese and Chinese Sentences: A Cross-Linguistic fMRI Study of Character-Decoding and Morphosyntax

    ERIC Educational Resources Information Center

    Huang, Koongliang; Itoh, Kosuke; Kwee, Ingrid L.; Nakada, Tsutomu

    2012-01-01

    Japanese and Chinese share virtually identical morphographic characters invented in ancient China. Whereas modern Chinese retained the original morphographic functionality of these characters (hanzi), modern Japanese utilizes these characters (kanji) as complex syllabograms. This divergence provides a unique opportunity to systematically…

  5. Brain Routes for Reading in Adults with and without Autism: EMEG Evidence

    ERIC Educational Resources Information Center

    Moseley, Rachel L.; Pulvermüller, Friedemann; Mohr, Bettina; Lombardo, Michael V.; Baron-Cohen, Simon; Shtyrov, Yury

    2014-01-01

    Reading utilises at least two neural pathways. The temporal lexical route visually maps whole words to their lexical entries, whilst the nonlexical route decodes words phonologically via parietal cortex. Readers typically employ the lexical route for familiar words, but poor comprehension plus precocity at mechanically "sounding out"…

  6. Vertically aligned carbon nanofiber as nano-neuron interface for monitoring neural function.

    PubMed

    Yu, Zhe; McKnight, Timothy E; Ericson, M Nance; Melechko, Anatoli V; Simpson, Michael L; Morrison, Barclay

    2012-05-01

    Neural chips, which are capable of simultaneous multisite neural recording and stimulation, have been used to detect and modulate neural activity for almost thirty years. As neural interfaces, neural chips provide dynamic functional information for neural decoding and neural control. By improving sensitivity and spatial resolution, nano-scale electrodes may revolutionize neural detection and modulation at cellular and molecular levels as nano-neuron interfaces. We developed a carbon-nanofiber neural chip with lithographically defined arrays of vertically aligned carbon nanofiber electrodes and demonstrated its capability of both stimulating and monitoring electrophysiological signals from brain tissues in vitro and monitoring dynamic information of neuroplasticity. This novel nano-neuron interface may potentially serve as a precise, informative, biocompatible, and dual-mode neural interface for monitoring of both neuroelectrical and neurochemical activity at the single-cell level and even inside the cell. The authors demonstrate the utility of a neural chip with lithographically defined arrays of vertically aligned carbon nanofiber electrodes. The new device can be used to stimulate and/or monitor signals from brain tissue in vitro and for monitoring dynamic information of neuroplasticity both intracellularly and at the single cell level including neuroelectrical and neurochemical activities. Copyright © 2012 Elsevier Inc. All rights reserved.

  7. Immersive audiomotor game play enhances neural and perceptual salience of weak signals in noise

    PubMed Central

    Whitton, Jonathon P.; Hancock, Kenneth E.; Polley, Daniel B.

    2014-01-01

    All sensory systems face the fundamental challenge of encoding weak signals in noisy backgrounds. Although discrimination abilities can improve with practice, these benefits rarely generalize to untrained stimulus dimensions. Inspired by recent findings that action video game training can impart a broader spectrum of benefits than traditional perceptual learning paradigms, we trained adult humans and mice in an immersive audio game that challenged them to forage for hidden auditory targets in a 2D soundscape. Both species learned to modulate their angular search vectors and target approach velocities based on real-time changes in the level of a weak tone embedded in broadband noise. In humans, mastery of this tone in noise task generalized to an improved ability to comprehend spoken sentences in speech babble noise. Neural plasticity in the auditory cortex of trained mice supported improved decoding of low-intensity sounds at the training frequency and an enhanced resistance to interference from background masking noise. These findings highlight the potential to improve the neural and perceptual salience of degraded sensory stimuli through immersive computerized games. PMID:24927596

  8. Immersive audiomotor game play enhances neural and perceptual salience of weak signals in noise.

    PubMed

    Whitton, Jonathon P; Hancock, Kenneth E; Polley, Daniel B

    2014-06-24

    All sensory systems face the fundamental challenge of encoding weak signals in noisy backgrounds. Although discrimination abilities can improve with practice, these benefits rarely generalize to untrained stimulus dimensions. Inspired by recent findings that action video game training can impart a broader spectrum of benefits than traditional perceptual learning paradigms, we trained adult humans and mice in an immersive audio game that challenged them to forage for hidden auditory targets in a 2D soundscape. Both species learned to modulate their angular search vectors and target approach velocities based on real-time changes in the level of a weak tone embedded in broadband noise. In humans, mastery of this tone in noise task generalized to an improved ability to comprehend spoken sentences in speech babble noise. Neural plasticity in the auditory cortex of trained mice supported improved decoding of low-intensity sounds at the training frequency and an enhanced resistance to interference from background masking noise. These findings highlight the potential to improve the neural and perceptual salience of degraded sensory stimuli through immersive computerized games.

  9. Human Orbitofrontal Cortex Represents a Cognitive Map of State Space.

    PubMed

    Schuck, Nicolas W; Cai, Ming Bo; Wilson, Robert C; Niv, Yael

    2016-09-21

    Although the orbitofrontal cortex (OFC) has been studied intensely for decades, its precise functions have remained elusive. We recently hypothesized that the OFC contains a "cognitive map" of task space in which the current state of the task is represented, and this representation is especially critical for behavior when states are unobservable from sensory input. To test this idea, we apply pattern-classification techniques to neuroimaging data from humans performing a decision-making task with 16 states. We show that unobservable task states can be decoded from activity in OFC, and decoding accuracy is related to task performance and the occurrence of individual behavioral errors. Moreover, similarity between the neural representations of consecutive states correlates with behavioral accuracy in corresponding state transitions. These results support the idea that OFC represents a cognitive map of task space and establish the feasibility of decoding state representations in humans using non-invasive neuroimaging. Copyright © 2016 Elsevier Inc. All rights reserved.

  10. Unsupervised learning of facial emotion decoding skills.

    PubMed

    Huelle, Jan O; Sack, Benjamin; Broer, Katja; Komlewa, Irina; Anders, Silke

    2014-01-01

    Research on the mechanisms underlying human facial emotion recognition has long focussed on genetically determined neural algorithms and often neglected the question of how these algorithms might be tuned by social learning. Here we show that facial emotion decoding skills can be significantly and sustainably improved by practice without an external teaching signal. Participants saw video clips of dynamic facial expressions of five different women and were asked to decide which of four possible emotions (anger, disgust, fear, and sadness) was shown in each clip. Although no external information about the correctness of the participant's response or the sender's true affective state was provided, participants showed a significant increase of facial emotion recognition accuracy both within and across two training sessions two days to several weeks apart. We discuss several similarities and differences between the unsupervised improvement of facial decoding skills observed in the current study, unsupervised perceptual learning of simple stimuli described in previous studies and practice effects often observed in cognitive tasks.

  11. Unsupervised learning of facial emotion decoding skills

    PubMed Central

    Huelle, Jan O.; Sack, Benjamin; Broer, Katja; Komlewa, Irina; Anders, Silke

    2013-01-01

    Research on the mechanisms underlying human facial emotion recognition has long focussed on genetically determined neural algorithms and often neglected the question of how these algorithms might be tuned by social learning. Here we show that facial emotion decoding skills can be significantly and sustainably improved by practice without an external teaching signal. Participants saw video clips of dynamic facial expressions of five different women and were asked to decide which of four possible emotions (anger, disgust, fear, and sadness) was shown in each clip. Although no external information about the correctness of the participant’s response or the sender’s true affective state was provided, participants showed a significant increase of facial emotion recognition accuracy both within and across two training sessions two days to several weeks apart. We discuss several similarities and differences between the unsupervised improvement of facial decoding skills observed in the current study, unsupervised perceptual learning of simple visual stimuli described in previous studies and practice effects often observed in cognitive tasks. PMID:24578686

  12. Trellises and Trellis-Based Decoding Algorithms for Linear Block Codes

    NASA Technical Reports Server (NTRS)

    Lin, Shu

    1998-01-01

    A code trellis is a graphical representation of a code, block or convolutional, in which every path represents a codeword (or a code sequence for a convolutional code). This representation makes it possible to implement Maximum Likelihood Decoding (MLD) of a code with reduced decoding complexity. The most well known trellis-based MLD algorithm is the Viterbi algorithm. The trellis representation was first introduced and used for convolutional codes [23]. This representation, together with the Viterbi decoding algorithm, has resulted in a wide range of applications of convolutional codes for error control in digital communications over the last two decades. In contrast, trellis representations of linear block codes received little attention for many years. There are two major reasons for this inactive period of research in this area. First, most coding theorists at that time believed that block codes did not have simple trellis structure like convolutional codes and maximum likelihood decoding of linear block codes using the Viterbi algorithm was practically impossible, except for very short block codes. Second, since almost all of the linear block codes are constructed algebraically or based on finite geometries, it was the belief of many coding theorists that algebraic decoding was the only way to decode these codes. These two reasons seriously hindered the development of efficient soft-decision decoding methods for linear block codes and their applications to error control in digital communications. This led to a general belief that block codes are inferior to convolutional codes and hence, that they were not useful. Chapter 2 gives a brief review of linear block codes. The goal is to provide the essential background material for the development of trellis structure and trellis-based decoding algorithms for linear block codes in the later chapters. Chapters 3 through 6 present the fundamental concepts, finite-state machine model, state space formulation, basic structural properties, state labeling, construction procedures, complexity, minimality, and sectionalization of trellises. Chapter 7 discusses trellis decomposition and subtrellises for low-weight codewords. Chapter 8 first presents well known methods for constructing long powerful codes from short component codes or component codes of smaller dimensions, and then provides methods for constructing their trellises which include Shannon and Cartesian product techniques. Chapter 9 deals with convolutional codes, puncturing, zero-tail termination and tail-biting. Chapters 10 through 13 present various trellis-based decoding algorithms, old and new. Chapter 10 first discusses the application of the well known Viterbi decoding algorithm to linear block codes, optimum sectionalization of a code trellis to minimize computation complexity, and design issues for IC (integrated circuit) implementation of a Viterbi decoder. Then it presents a new decoding algorithm for convolutional codes, named the Differential Trellis Decoding (DTD) algorithm. Chapter 12 presents a suboptimum reliability-based iterative decoding algorithm with a low-weight trellis search for the most likely codeword. This decoding algorithm provides a good trade-off between error performance and decoding complexity. All the decoding algorithms presented in Chapters 10 through 12 are devised to minimize word error probability. Chapter 13 presents decoding algorithms that minimize bit error probability and provide the corresponding soft (reliability) information at the output of the decoder. Decoding algorithms presented are the MAP (maximum a posteriori probability) decoding algorithm and the Soft-Output Viterbi Algorithm (SOVA). Finally, the minimization of bit error probability in trellis-based MLD is discussed.
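
    The core idea surveyed here, trellis-based maximum likelihood decoding, can be illustrated with a hard-decision Viterbi decoder for the textbook rate-1/2, constraint-length-3 convolutional code with generator polynomials 7 and 5 (octal); the sketch below is illustrative and is not code from the report.

```python
# Hard-decision Viterbi decoding of a rate-1/2, constraint-length-3 convolutional
# code (generators 7 and 5, octal): each trellis path is a code sequence, and the
# decoder keeps the minimum-Hamming-distance survivor into every state.
import itertools, random

G = [0b111, 0b101]                     # generator polynomials (octal 7, 5)
N_STATES = 4                           # 2^memory states, memory = 2

def branch(state, b):
    """Next state and the two output bits for input bit b taken from 'state'."""
    reg = (b << 2) | state             # shift register [b_t, b_{t-1}, b_{t-2}]
    return reg >> 1, [bin(reg & g).count("1") & 1 for g in G]

def encode(bits):
    state, out = 0, []
    for b in bits + [0, 0]:            # zero tail terminates the trellis in state 0
        state, symbols = branch(state, b)
        out += symbols
    return out

def viterbi(received):
    INF = float("inf")
    metrics = [0] + [INF] * (N_STATES - 1)          # start in the all-zero state
    paths = [[] for _ in range(N_STATES)]
    for i in range(0, len(received), 2):
        r = received[i:i + 2]
        new_metrics, new_paths = [INF] * N_STATES, [None] * N_STATES
        for s, b in itertools.product(range(N_STATES), (0, 1)):
            if metrics[s] == INF:
                continue
            ns, expected = branch(s, b)
            m = metrics[s] + sum(x != y for x, y in zip(expected, r))  # Hamming metric
            if m < new_metrics[ns]:
                new_metrics[ns], new_paths[ns] = m, paths[s] + [b]
        metrics, paths = new_metrics, new_paths
    return paths[0][:-2]                            # survivor ending in state 0, tail removed

random.seed(0)
msg = [random.randint(0, 1) for _ in range(32)]
codeword = encode(msg)
noisy = [bit ^ (random.random() < 0.05) for bit in codeword]   # flip ~5% of the bits
decoded = viterbi(noisy)
print("channel bit flips:", sum(a != b for a, b in zip(codeword, noisy)),
      "| residual message errors:", sum(a != b for a, b in zip(decoded, msg)))
```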

  13. Restoring motor control and sensory feedback in people with upper extremity amputations using arrays of 96 microelectrodes implanted in the median and ulnar nerves.

    PubMed

    Davis, T S; Wark, H A C; Hutchinson, D T; Warren, D J; O'Neill, K; Scheinblum, T; Clark, G A; Normann, R A; Greger, B

    2016-06-01

    An important goal of neuroprosthetic research is to establish bidirectional communication between the user and new prosthetic limbs that are capable of controlling >20 different movements. One strategy for achieving this goal is to interface the prosthetic limb directly with efferent and afferent fibres in the peripheral nervous system using an array of intrafascicular microelectrodes. This approach would provide access to a large number of independent neural pathways for controlling high degree-of-freedom prosthetic limbs, as well as evoking multiple-complex sensory percepts. Utah Slanted Electrode Arrays (USEAs, 96 recording/stimulating electrodes) were implanted for 30 days into the median (Subject 1-M, 31 years post-amputation) or ulnar (Subject 2-U, 1.5 years post-amputation) nerves of two amputees. Neural activity was recorded during intended movements of the subject's phantom fingers and a linear Kalman filter was used to decode the neural data. Microelectrode stimulation of varying amplitudes and frequencies was delivered via single or multiple electrodes to investigate the number, size and quality of sensory percepts that could be evoked. Device performance over time was assessed by measuring: electrode impedances, signal-to-noise ratios (SNRs), stimulation thresholds, number and stability of evoked percepts. The subjects were able to proportionally control individual fingers of a virtual robotic hand, with 13 different movements decoded offline (r = 0.48) and two movements decoded online. Electrical stimulation across one USEA evoked >80 sensory percepts. Varying the stimulation parameters modulated percept quality. Devices remained intrafascicularly implanted for the duration of the study with no significant changes in the SNRs or percept thresholds. This study demonstrated that an array of 96 microelectrodes can be implanted into the human peripheral nervous system for durations of up to 1 month. Such an array could provide intuitive control of a virtual prosthetic hand with broad sensory feedback.
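
    The decoding step mentioned, a linear Kalman filter mapping binned firing rates to finger kinematics, can be sketched as below, with the state-transition and observation matrices fit from training data by least squares. The simulated kinematics and unit counts are placeholders, not the study's recordings.

```python
# Linear Kalman filter neural decoder: state x_t = finger kinematics (pos, vel),
# observations z_t = binned firing rates. A, W, H, Q are estimated from training
# data by least squares; the filter then decodes held-out data. Simulated data.
import numpy as np

rng = np.random.default_rng(6)
T, n_units = 3000, 48

# simulate a smooth 1-D finger trajectory and linearly tuned firing rates
vel = np.cumsum(rng.normal(0, 0.05, T)); vel -= vel.mean()
pos = np.cumsum(vel)
X = np.column_stack([pos, vel])                        # true state sequence, shape (T, 2)
H_true = rng.normal(0, 1.0, (n_units, 2))
Z = X @ H_true.T + rng.normal(0, 2.0, (T, n_units))    # observed firing rates

train = slice(0, 2000)
A = np.linalg.lstsq(X[train][:-1], X[train][1:], rcond=None)[0].T   # state transition
W = np.cov((X[train][1:] - X[train][:-1] @ A.T).T)                  # process noise
H = np.linalg.lstsq(X[train], Z[train], rcond=None)[0].T            # observation model
Q = np.cov((Z[train] - X[train] @ H.T).T)                           # observation noise

x, P, decoded = np.zeros(2), np.eye(2), []
for z in Z[2000:]:
    x, P = A @ x, A @ P @ A.T + W                      # predict
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + Q)       # Kalman gain
    x, P = x + K @ (z - H @ x), (np.eye(2) - K @ H) @ P
    decoded.append(x.copy())

decoded = np.array(decoded)
cc = np.corrcoef(decoded[:, 0], X[2000:, 0])[0, 1]
print("decoded-vs-true position correlation (held-out):", round(cc, 2))
```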

  14. Restoring motor control and sensory feedback in people with upper extremity amputations using arrays of 96 microelectrodes implanted in the median and ulnar nerves

    NASA Astrophysics Data System (ADS)

    Davis, T. S.; Wark, H. A. C.; Hutchinson, D. T.; Warren, D. J.; O'Neill, K.; Scheinblum, T.; Clark, G. A.; Normann, R. A.; Greger, B.

    2016-06-01

    Objective. An important goal of neuroprosthetic research is to establish bidirectional communication between the user and new prosthetic limbs that are capable of controlling >20 different movements. One strategy for achieving this goal is to interface the prosthetic limb directly with efferent and afferent fibres in the peripheral nervous system using an array of intrafascicular microelectrodes. This approach would provide access to a large number of independent neural pathways for controlling high degree-of-freedom prosthetic limbs, as well as evoking multiple-complex sensory percepts. Approach. Utah Slanted Electrode Arrays (USEAs, 96 recording/stimulating electrodes) were implanted for 30 days into the median (Subject 1-M, 31 years post-amputation) or ulnar (Subject 2-U, 1.5 years post-amputation) nerves of two amputees. Neural activity was recorded during intended movements of the subject’s phantom fingers and a linear Kalman filter was used to decode the neural data. Microelectrode stimulation of varying amplitudes and frequencies was delivered via single or multiple electrodes to investigate the number, size and quality of sensory percepts that could be evoked. Device performance over time was assessed by measuring: electrode impedances, signal-to-noise ratios (SNRs), stimulation thresholds, number and stability of evoked percepts. Main results. The subjects were able to proportionally control individual fingers of a virtual robotic hand, with 13 different movements decoded offline (r = 0.48) and two movements decoded online. Electrical stimulation across one USEA evoked >80 sensory percepts. Varying the stimulation parameters modulated percept quality. Devices remained intrafascicularly implanted for the duration of the study with no significant changes in the SNRs or percept thresholds. Significance. This study demonstrated that an array of 96 microelectrodes can be implanted into the human peripheral nervous system for durations of up to 1 month. Such an array could provide intuitive control of a virtual prosthetic hand with broad sensory feedback.

  15. Multi-stage decoding for multi-level block modulation codes

    NASA Technical Reports Server (NTRS)

    Lin, Shu

    1991-01-01

    In this paper, we investigate various types of multi-stage decoding for multi-level block modulation codes, in which the decoding of a component code at each stage can be either soft-decision or hard-decision, maximum likelihood or bounded-distance. Error performance of codes is analyzed for a memoryless additive channel based on various types of multi-stage decoding, and upper bounds on the probability of an incorrect decoding are derived. Based on our study and computation results, we find that, if component codes of a multi-level modulation code and types of decoding at various stages are chosen properly, high spectral efficiency and large coding gain can be achieved with reduced decoding complexity. In particular, we find that the difference in performance between the suboptimum multi-stage soft-decision maximum likelihood decoding of a modulation code and the single-stage optimum decoding of the overall code is very small: only a fraction of a dB loss in SNR at a probability of an incorrect decoding for a block of 10^-6. Multi-stage decoding of multi-level modulation codes really offers a way to achieve the best of three worlds: bandwidth efficiency, coding gain, and decoding complexity.

  16. Decoding "us" and "them": Neural representations of generalized group concepts.

    PubMed

    Cikara, Mina; Van Bavel, Jay J; Ingbretsen, Zachary A; Lau, Tatiana

    2017-05-01

    Humans form social coalitions in every society on earth, yet we know very little about how the general concepts us and them are represented in the brain. Evolutionary psychologists have argued that the human capacity for group affiliation is a byproduct of adaptations that evolved for tracking coalitions in general. These theories suggest that humans possess a common neural code for the concepts in-group and out-group, regardless of the category by which group boundaries are instantiated. The authors used multivoxel pattern analysis to identify the neural substrates of generalized group concept representations. They trained a classifier to encode how people represented the most basic instantiation of a specific social group (i.e., arbitrary teams created in the lab with no history of interaction or associated stereotypes) and tested how well the neural data decoded membership along an objectively orthogonal, real-world category (i.e., political parties). The dorsal anterior cingulate cortex/middle cingulate cortex and anterior insula were associated with representing groups across multiple social categories. Restricting the analyses to these regions in a separate sample of participants performing an explicit categorization task, the authors replicated cross-categorization classification in anterior insula. Classification accuracy across categories was driven predominantly by the correct categorization of in-group targets, consistent with theories indicating in-group preference is more central than out-group derogation to group perception and cognition. These findings highlight the extent to which social group concepts rely on domain-general circuitry associated with encoding stimuli's functional significance. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  17. Limb-state information encoded by peripheral and central somatosensory neurons: Implications for an afferent interface

    PubMed Central

    Weber, Douglas J.; London, Brian M.; Hokanson, James A.; Ayers, Christopher A.; Gaunt, Robert A.; Torres, Ricardo R.; Zaaimi, Boubker; Miller, Lee E.

    2013-01-01

    A major issue to be addressed in the development of neural interfaces for prosthetic control is the need for somatosensory feedback. Here, we investigate two possible strategies: electrical stimulation of either dorsal root ganglia (DRG) or primary somatosensory cortex (S1). In each approach, we must determine a model that reflects the representation of limb state in terms of neural discharge. This model can then be used to design stimuli that artificially activate the nervous system to convey information about limb state to the subject. Electrically activating DRG neurons using naturalistic stimulus patterns, modeled on recordings made during passive limb movement, evoked activity in S1 that was similar to that of the original movement. We also found that S1 neural populations could accurately discriminate different patterns of DRG stimulation across a wide range of stimulus pulse-rates. In studying the neural coding of limb-state in S1, we also decoded the kinematics of active limb movement using multi-electrode recordings in the monkey. Neurons having both proprioceptive and cutaneous receptive fields contributed equally to this decoding. Some neurons were most informative of limb state in the recent past, but many others appeared to signal upcoming movements suggesting that they also were modulated by an efference copy signal. Finally, we show that a monkey was able to detect stimulation through a large percentage of electrodes implanted in area 2. We discuss the design of appropriate stimulus paradigms for conveying time-varying limb state information, and the relative merits and limitations of central and peripheral approaches. PMID:21878419

  18. “Doctor” or “darling”? Decoding the communication partner from ECoG of the anterior temporal lobe during non-experimental, real-life social interaction

    PubMed Central

    Derix, Johanna; Iljina, Olga; Schulze-Bonhage, Andreas; Aertsen, Ad; Ball, Tonio

    2012-01-01

    Human brain processes underlying real-life social interaction in everyday situations have been difficult to study and have, until now, remained largely unknown. Here, we investigated whether electrocorticography (ECoG) recorded for pre-neurosurgical diagnostics during the daily hospital life of epilepsy patients could provide a way to elucidate the neural correlates of non-experimental social interaction. We identified time periods in which patients were involved in conversations with either their respective life partners (Condition 1; C1) or attending physicians (Condition 2; C2). These two conditions can be expected to differentially involve subfunctions of social interaction which have been associated with activity in the anterior temporal lobe (ATL), including the temporal pole (TP). Therefore, we specifically focused on ECoG recordings from this brain region and investigated spectral power modulations in the alpha (8–12 Hz) and theta (3–5 Hz) frequency ranges, which have been previously assumed to play an important role in the processing of social interaction. We hypothesized that brain activity in this region might be sensitive to differences in the two interaction situations and tested whether these differences can be detected by single-trial decoding. Condition-specific effects in both theta and alpha bands were observed: the left and right TP exclusively showed increased power in C1 compared to C2, whereas more posterior parts of the ATL exhibited similar (C1 > C2) and also contrary (C2 > C1) effects. Single-trial decoding accuracies for classification of these effects were highly above chance. Our findings demonstrate that it is possible to study the neural correlates of human social interaction in non-experimental conditions. Decoding the identity of the communication partner and adjusting the speech output accordingly may be useful in the emerging field of brain-machine interfacing for restoration of expressive speech. PMID:22973215
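
    Single-trial decoding of the conversation condition from theta- and alpha-band power, as described here, can be sketched with Welch band-power features and a linear discriminant under cross-validation; the channel count, epoch length, and injected effect below are simulated assumptions, not the patients' ECoG data.

```python
# Single-trial decoding sketch: classify the conversation condition (partner vs
# physician) from theta (3-5 Hz) and alpha (8-12 Hz) band power of simulated
# anterior temporal ECoG channels. Effect size and epoching are illustrative only.
import numpy as np
from scipy.signal import welch
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(7)
fs, n_trials, n_channels, n_samples = 500, 100, 6, 1000   # 2 s epochs

labels = np.repeat([0, 1], n_trials // 2)                  # 0 = partner, 1 = physician
ecog = rng.normal(size=(n_trials, n_channels, n_samples))
t = np.arange(n_samples) / fs
# inject condition-dependent theta power into the two most anterior channels
for i, c in enumerate(labels):
    ecog[i, :2] += (1.5 if c == 0 else 0.8) * np.sin(2 * np.pi * 4 * t + rng.uniform(0, 2 * np.pi))

def band_power(x, lo, hi):
    f, pxx = welch(x, fs=fs, nperseg=fs, axis=-1)
    return np.log(pxx[..., (f >= lo) & (f <= hi)].mean(axis=-1))

features = np.concatenate([band_power(ecog, 3, 5), band_power(ecog, 8, 12)], axis=1)
acc = cross_val_score(LinearDiscriminantAnalysis(), features, labels, cv=5)
print("single-trial decoding accuracy: %.2f +/- %.2f" % (acc.mean(), acc.std()))
```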

  19. An fMRI Investigation of Preparatory Set in the Human Cerebral Cortex and Superior Colliculus for Pro- and Anti-Saccades

    PubMed Central

    Furlan, Michele; Smith, Andrew T.; Walker, Robin

    2016-01-01

    Previous studies have identified several cortical regions that show larger BOLD responses during preparation and execution of anti-saccades than pro-saccades. We confirmed this finding with a greater BOLD response for anti-saccades than pro-saccades during the preparation phase in the FEF, IPS and DLPFC and in the FEF and IPS in the execution phase. We then applied multi-voxel pattern analysis (MVPA) to establish whether different neural populations are involved in the two types of saccade. Pro-saccades and anti-saccades were reliably decoded during saccade execution in all three cortical regions (FEF, DLPFC and IPS) and in IPS during saccade preparation. This indicates neural specialization, for programming the desired response depending on the task rule, in these regions. In a further study tailored for imaging the superior colliculus in the midbrain a similar magnitude BOLD response was observed for pro-saccades and anti-saccades and the two saccade types could not be decoded with MVPA. This was the case both for activity related to the preparation phase and also for that elicited during the execution phase. We conclude that separate cortical neural populations are involved in the task-specific programming of a saccade while in contrast, the SC has a role in response preparation but may be less involved in high-level, task-specific aspects of the control of saccades. PMID:27391390

  20. Singer product apertures-A coded aperture system with a fast decoding algorithm

    NASA Astrophysics Data System (ADS)

    Byard, Kevin; Shutler, Paul M. E.

    2017-06-01

    A new type of coded aperture configuration that enables fast decoding of the coded aperture shadowgram data is presented. Based on the products of incidence vectors generated from the Singer difference sets, we call these Singer product apertures. For a range of aperture dimensions, we compare experimentally the performance of three decoding methods: standard decoding, induction decoding and direct vector decoding. In all cases the induction and direct vector methods are several orders of magnitude faster than the standard method, with direct vector decoding being significantly faster than induction decoding. For apertures of the same dimensions the increase in speed offered by direct vector decoding over induction decoding is better for lower throughput apertures.

  1. An Optimized Three-Level Design of Decoder Based on Nanoscale Quantum-Dot Cellular Automata

    NASA Astrophysics Data System (ADS)

    Seyedi, Saeid; Navimipour, Nima Jafari

    2018-03-01

    Quantum-dot Cellular Automata (QCA) has been considered a potential successor to Complementary Metal-Oxide-Semiconductor (CMOS) technology because of its inherent advantages. Many QCA-based logic circuits with smaller feature size, improved operating frequency, and lower power consumption than CMOS have been proposed. This technology operates on the basis of electrostatic interactions between electrons confined in quantum dots. Due to the importance of designing an optimized decoder in any digital circuit, in this paper, we design, implement and simulate a new 2-to-4 decoder based on QCA with low delay, area, and complexity. The logic functionality of the 2-to-4 decoder is verified using the QCADesigner tool. The results have shown that the proposed QCA-based decoder has high performance in terms of cell count, covered area, and time delay. Due to the lower clock pulse frequency, the proposed 2-to-4 decoder is helpful for building QCA-based sequential digital circuits with high performance.
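
    Independent of its QCA realization, the circuit being designed implements the standard 2-to-4 decoder truth table: exactly one of the four outputs is asserted for each combination of the two inputs. A behavioural sketch of that logic function follows; the optional enable input is an illustrative addition, not necessarily part of the proposed design.

```python
# Behavioural model of a 2-to-4 decoder: output D_i is 1 exactly when the two-bit
# input (a1 a0) equals i. A QCA implementation would realize the same truth table
# with majority gates and inverters; this sketch only checks the logic function.
def decoder_2to4(a1: int, a0: int, enable: int = 1):
    d = [0, 0, 0, 0]
    if enable:
        d[(a1 << 1) | a0] = 1          # one-hot output selected by the input value
    return d

for a1 in (0, 1):
    for a0 in (0, 1):
        print(f"a1a0={a1}{a0} ->", decoder_2to4(a1, a0))
```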

  2. A Parallel Decoding Algorithm for Short Polar Codes Based on Error Checking and Correcting

    PubMed Central

    Pan, Xiaofei; Pan, Kegang; Ye, Zhan; Gong, Chao

    2014-01-01

    We propose a parallel decoding algorithm based on error checking and correcting to improve the performance of the short polar codes. In order to enhance the error-correcting capacity of the decoding algorithm, we first derive the error-checking equations generated on the basis of the frozen nodes, and then we introduce the method to check the errors in the input nodes of the decoder by the solutions of these equations. In order to further correct those checked errors, we adopt the method of modifying the probability messages of the error nodes with constant values according to the maximization principle. Due to the existence of multiple solutions of the error-checking equations, we formulate a CRC-aided optimization problem of finding the optimal solution with three different target functions, so as to improve the accuracy of error checking. Besides, in order to increase the throughput of decoding, we use a parallel method based on the decoding tree to calculate probability messages of all the nodes in the decoder. Numerical results show that the proposed decoding algorithm achieves better performance than that of some existing decoding algorithms with the same code length. PMID:25540813

  3. Multi- and Unisensory Decoding of Words and Nonwords Result in Differential Brain Responses in Dyslexic and Nondyslexic Adults

    ERIC Educational Resources Information Center

    Kast, Monika; Bezzola, Ladina; Jancke, Lutz; Meyer, Martin

    2011-01-01

    The present functional magnetic resonance imaging (fMRI) study was designed, in order to investigate the neural substrates involved in the audiovisual processing of disyllabic German words and pseudowords. Twelve dyslexic and 13 nondyslexic adults performed a lexical decision task while stimuli were presented unimodally (either aurally or…

  4. An Integrated Gait and Balance Analysis System to Define Human Locomotor Control

    DTIC Science & Technology

    2016-04-29

    …common in the “real-world”. Furthermore, BCI controllers need some sort of direct link into neural signals and this requires invasive surgery and…

  5. Gaussian Process Autoregression for Simultaneous Proportional Multi-Modal Prosthetic Control With Natural Hand Kinematics.

    PubMed

    Xiloyannis, Michele; Gavriel, Constantinos; Thomik, Andreas A C; Faisal, A Aldo

    2017-10-01

    Matching the dexterity, versatility, and robustness of the human hand is still an unachieved goal in bionics, robotics, and neural engineering. A major limitation for hand prosthetics lies in the challenges of reliably decoding user intention from muscle signals when controlling complex robotic hands. Most of the commercially available prosthetic hands use muscle-related signals to decode a finite number of predefined motions and some offer proportional control of open/close movements of the whole hand. Here, in contrast, we aim to offer users flexible control of individual joints of their artificial hand. We propose a novel framework for decoding neural information that enables a user to independently control 11 joints of the hand in a continuous manner, much like we control our natural hands. Toward this end, we instructed six able-bodied subjects to perform everyday object manipulation tasks combining both dynamic, free movements (e.g., grasping) and isometric force tasks (e.g., squeezing). We recorded the electromyographic and mechanomyographic activities of five extrinsic muscles of the hand in the forearm, while simultaneously monitoring 11 joints of the hand and fingers using a sensorized data glove. Instead of learning just a direct mapping from current muscle activity to intended hand movement, we formulated a novel autoregressive approach that combines the context of previous hand movements with instantaneous muscle activity to predict future hand movements. Specifically, we evaluated a linear vector autoregressive moving average model with exogenous inputs and a novel Gaussian process (GP) autoregressive framework to learn the continuous mapping from hand joint dynamics and muscle activity to decode intended hand movement. Our approach achieves high levels of performance (RMSE of 8°/s). Crucially, we use a small set of sensors that allows us to control a larger set of independently actuated degrees of freedom of a hand. This novel undersensored control is enabled through the combination of nonlinear autoregressive continuous mapping between muscle activity and joint angles. The system evaluates the muscle signals in the context of previous natural hand movements, which enables it to resolve ambiguities in situations where muscle signals alone cannot determine the correct action. GP autoregression is a particularly powerful approach because it not only makes a prediction based on the context but also represents the associated uncertainty of its predictions, thus enabling the novel notion of risk-based control in neuroprosthetics. Our results suggest that autoregressive approaches with exogenous inputs lend themselves to natural, intuitive, and continuous control in neurotechnology, with a particular focus on prosthetic restoration of natural limb function, where high dexterity is required for complex movements.
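
    The autoregressive idea, predicting the next joint angle from a short history of joint angles (context) together with current muscle-activity features (exogenous input), can be sketched with a Gaussian process regressor that also returns the predictive uncertainty used for risk-aware control. The kernel, lag order, and simulated signals below are assumptions, not the authors' model.

```python
# GP autoregression with exogenous inputs: predict the next joint angle from a
# short history of joint angles plus current muscle-activity features. The GP
# also returns a predictive standard deviation, i.e. the uncertainty that a
# risk-based controller could exploit. Simulated data; illustrative only.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(8)
T, lags = 600, 3

t = np.arange(T) * 0.02                                          # 50 Hz sampling
muscle = np.column_stack([np.sin(2 * np.pi * 0.5 * t), np.cos(2 * np.pi * 0.5 * t)])
muscle += rng.normal(0, 0.05, muscle.shape)
angle = 30 * np.sin(2 * np.pi * 0.5 * t - 0.4) + rng.normal(0, 0.5, T)   # joint angle (deg)

# autoregressive design: [angle_{t-1..t-lags}, muscle_t] -> angle_t
X = np.hstack([np.column_stack([angle[lags - k - 1:T - k - 1] for k in range(lags)]),
               muscle[lags:]])
y = angle[lags:]

gp = GaussianProcessRegressor(kernel=RBF(length_scale=10.0) + WhiteKernel(1.0),
                              normalize_y=True).fit(X[:400], y[:400])
pred, std = gp.predict(X[400:], return_std=True)
rmse = np.sqrt(np.mean((pred - y[400:]) ** 2))
print("one-step-ahead RMSE (deg): %.2f | mean predictive std: %.2f" % (rmse, std.mean()))
```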

  6. Prior Knowledge about Objects Determines Neural Color Representation in Human Visual Cortex.

    PubMed

    Vandenbroucke, A R E; Fahrenfort, J J; Meuwese, J D I; Scholte, H S; Lamme, V A F

    2016-04-01

    To create subjective experience, our brain must translate physical stimulus input by incorporating prior knowledge and expectations. For example, we perceive color and not wavelength information, and this in part depends on our past experience with colored objects ( Hansen et al. 2006; Mitterer and de Ruiter 2008). Here, we investigated the influence of object knowledge on the neural substrates underlying subjective color vision. In a functional magnetic resonance imaging experiment, human subjects viewed a color that lay midway between red and green (ambiguous with respect to its distance from red and green) presented on either typical red (e.g., tomato), typical green (e.g., clover), or semantically meaningless (nonsense) objects. Using decoding techniques, we could predict whether subjects viewed the ambiguous color on typical red or typical green objects based on the neural response of veridical red and green. This shift of neural response for the ambiguous color did not occur for nonsense objects. The modulation of neural responses was observed in visual areas (V3, V4, VO1, lateral occipital complex) involved in color and object processing, as well as frontal areas. This demonstrates that object memory influences wavelength information relatively early in the human visual system to produce subjective color vision. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  7. Decoding bipedal locomotion from the rat sensorimotor cortex.

    PubMed

    Rigosa, J; Panarese, A; Dominici, N; Friedli, L; van den Brand, R; Carpaneto, J; DiGiovanna, J; Courtine, G; Micera, S

    2015-10-01

    Decoding forelimb movements from the firing activity of cortical neurons has been interfaced with robotic and prosthetic systems to replace lost upper limb functions in humans. Despite the potential of this approach to improve locomotion and facilitate gait rehabilitation, decoding lower limb movement from the motor cortex has received comparatively little attention. Here, we performed experiments to identify the type and amount of information that can be decoded from neuronal ensemble activity in the hindlimb area of the rat motor cortex during bipedal locomotor tasks. Rats were trained to stand, step on a treadmill, walk overground and climb staircases in a bipedal posture. To impose this gait, the rats were secured in a robotic interface that provided support against the direction of gravity and in the mediolateral direction, but behaved transparently in the forward direction. After completion of training, rats were chronically implanted with a micro-wire array spanning the left hindlimb motor cortex to record single and multi-unit activity, and bipolar electrodes into 10 muscles of the right hindlimb to monitor electromyographic signals. Whole-body kinematics, muscle activity, and neural signals were simultaneously recorded during execution of the trained tasks over multiple days of testing. Hindlimb kinematics, muscle activity, gait phases, and locomotor tasks were decoded using offline classification algorithms. We found that the stance and swing phases of gait and the locomotor tasks were robustly detected, with accuracies of about 90% in all rats. Decoded hindlimb kinematics and muscle activity exhibited a larger variability across rats and tasks. Our study shows that the rodent motor cortex contains useful information for lower limb neuroprosthetic development. However, brain-machine interfaces estimating gait phases or locomotor behaviors, instead of continuous variables such as limb joint positions or speeds, are likely to provide more robust control strategies for the design of such neuroprostheses.

  8. Iterative channel decoding of FEC-based multiple-description codes.

    PubMed

    Chang, Seok-Ho; Cosman, Pamela C; Milstein, Laurence B

    2012-03-01

    Multiple description coding has been receiving attention as a robust transmission framework for multimedia services. This paper studies the iterative decoding of FEC-based multiple description codes. The proposed decoding algorithms take advantage of the error detection capability of Reed-Solomon (RS) erasure codes. The information of correctly decoded RS codewords is exploited to enhance the error correction capability of the Viterbi algorithm at the next iteration of decoding. In the proposed algorithm, an intradescription interleaver is synergistically combined with the iterative decoder. The interleaver does not affect the performance of noniterative decoding but greatly enhances the performance when the system is iteratively decoded. We also address the optimal allocation of RS parity symbols for unequal error protection. For the optimal allocation in iterative decoding, we derive mathematical equations from which the probability distributions of description erasures can be generated in a simple way. The performance of the algorithm is evaluated over an orthogonal frequency-division multiplexing system. The results show that the performance of the multiple description codes is significantly enhanced.

  9. Beyond mind-reading: multi-voxel pattern analysis of fMRI data.

    PubMed

    Norman, Kenneth A; Polyn, Sean M; Detre, Greg J; Haxby, James V

    2006-09-01

    A key challenge for cognitive neuroscience is determining how mental representations map onto patterns of neural activity. Recently, researchers have started to address this question by applying sophisticated pattern-classification algorithms to distributed (multi-voxel) patterns of functional MRI data, with the goal of decoding the information that is represented in the subject's brain at a particular point in time. This multi-voxel pattern analysis (MVPA) approach has led to several impressive feats of mind reading. More importantly, MVPA methods constitute a useful new tool for advancing our understanding of neural information processing. We review how researchers are using MVPA methods to characterize neural coding and information processing in domains ranging from visual perception to memory search.
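
    A minimal MVPA-style sketch, assuming trial-wise voxel patterns have already been extracted into a (trials x voxels) matrix with one condition label per trial. The linear SVM and the leave-one-run-out cross-validation are common choices in this literature, not steps prescribed by the review; all data below are random stand-ins.

    ```python
    import numpy as np
    from sklearn.model_selection import GroupKFold, cross_val_score
    from sklearn.svm import LinearSVC

    rng = np.random.default_rng(1)
    n_trials, n_voxels = 120, 500
    X = rng.normal(size=(n_trials, n_voxels))      # multi-voxel activity patterns (one row per trial)
    y = rng.integers(0, 2, size=n_trials)          # condition labels (e.g., two stimulus classes)
    runs = np.repeat(np.arange(6), n_trials // 6)  # scanner runs, used as cross-validation groups

    clf = LinearSVC(max_iter=10000)
    scores = cross_val_score(clf, X, y, groups=runs, cv=GroupKFold(n_splits=6))
    print("decoding accuracy per left-out run:", np.round(scores, 2))
    ```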

  10. EEG-based usability assessment of 3D shutter glasses

    NASA Astrophysics Data System (ADS)

    Wenzel, Markus A.; Schultze-Kraft, Rafael; Meinecke, Frank C.; Cardinaux, Fabien; Kemp, Thomas; Müller, Klaus-Robert; Curio, Gabriel; Blankertz, Benjamin

    2016-02-01

    Objective. Neurotechnology can contribute to the usability assessment of products by providing objective measures of neural workload and can uncover usability impediments that are not consciously perceived by test persons. In this study, the neural processing effort imposed on the viewer of 3D television by shutter glasses was quantified as a function of shutter frequency. In particular, we sought to determine the critical shutter frequency at which the ‘neural flicker’ vanishes, such that visual fatigue due to this additional neural effort can be prevented by increasing the frequency of the system. Approach. Twenty-three participants viewed an image through 3D shutter glasses, while multichannel electroencephalogram (EEG) was recorded. In total ten shutter frequencies were employed, selected individually for each participant to cover the range below, at and above the threshold of flicker perception. The source of the neural flicker correlate was extracted using independent component analysis and the flicker impact on the visual cortex was quantified by decoding the state of the shutter from the EEG. Main Result. Effects of the shutter glasses were traced in the EEG up to around 67 Hz—about 20 Hz over the flicker perception threshold—and vanished at the subsequent frequency level of 77 Hz. Significance. The impact of the shutter glasses on the visual cortex can be detected by neurotechnology even when a flicker is not reported by the participants. Potential impact. Increasing the shutter frequency from the usual 50 Hz or 60 Hz to 77 Hz reduces the risk of visual fatigue and thus improves shutter-glass-based 3D usability.

  11. EEG-based usability assessment of 3D shutter glasses.

    PubMed

    Wenzel, Markus A; Schultze-Kraft, Rafael; Meinecke, Frank C; Fabien Cardinaux; Kemp, Thomas; Klaus-Robert Müller; Gabriel Curio; Benjamin Blankertz

    2016-02-01

    Neurotechnology can contribute to the usability assessment of products by providing objective measures of neural workload and can uncover usability impediments that are not consciously perceived by test persons. In this study, the neural processing effort imposed on the viewer of 3D television by shutter glasses was quantified as a function of shutter frequency. In particular, we sought to determine the critical shutter frequency at which the 'neural flicker' vanishes, such that visual fatigue due to this additional neural effort can be prevented by increasing the frequency of the system. Twenty-three participants viewed an image through 3D shutter glasses, while multichannel electroencephalogram (EEG) was recorded. In total ten shutter frequencies were employed, selected individually for each participant to cover the range below, at and above the threshold of flicker perception. The source of the neural flicker correlate was extracted using independent component analysis and the flicker impact on the visual cortex was quantified by decoding the state of the shutter from the EEG. Effects of the shutter glasses were traced in the EEG up to around 67 Hz-about 20 Hz over the flicker perception threshold-and vanished at the subsequent frequency level of 77 Hz. The impact of the shutter glasses on the visual cortex can be detected by neurotechnology even when a flicker is not reported by the participants. Increasing the shutter frequency from the usual 50 Hz or 60 Hz to 77 Hz reduces the risk of visual fatigue and thus improves shutter-glass-based 3D usability.

  12. The serial message-passing schedule for LDPC decoding algorithms

    NASA Astrophysics Data System (ADS)

    Liu, Mingshan; Liu, Shanshan; Zhou, Yuan; Jiang, Xue

    2015-12-01

    The conventional message-passing schedule for LDPC decoding algorithms is the so-called flooding schedule. Its disadvantage is that updated messages cannot be used until the next iteration, which slows convergence. To address this, the layered decoding algorithm (LBP), based on a serial message-passing schedule, has been proposed. In this paper, the decoding principle of the LBP algorithm is briefly introduced, and two improved algorithms are proposed: the grouped serial decoding algorithm (Grouped LBP) and the semi-serial decoding algorithm. They improve the decoding speed of LBP while maintaining good decoding performance.
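
    A minimal sketch of the serial (layered) schedule contrasted with flooding: each check row computes its min-sum messages and updates the posterior LLRs immediately, so later rows within the same iteration already benefit from them. The toy parity-check matrix and channel LLRs are illustrative only.

    ```python
    import numpy as np

    def layered_min_sum(H, llr, iters=10):
        """H: (m, n) binary parity-check matrix; llr: channel LLRs (positive favours bit 0)."""
        m, n = H.shape
        L = llr.astype(float).copy()              # posterior LLRs
        R = np.zeros((m, n))                      # check-to-variable messages
        rows = [np.flatnonzero(H[i]) for i in range(m)]
        for _ in range(iters):
            for i, idx in enumerate(rows):        # serial schedule: one check row at a time
                q = L[idx] - R[i, idx]            # variable-to-check (extrinsic) messages
                signs = np.prod(np.sign(q)) * np.sign(q)   # sign of the other edges (assumes q != 0)
                mags = np.abs(q)
                order = np.argsort(mags)
                m1, m2 = mags[order[0]], mags[order[1]]
                other_min = np.where(np.arange(len(idx)) == order[0], m2, m1)  # min over the other edges
                R[i, idx] = signs * other_min
                L[idx] = q + R[i, idx]            # immediate posterior update used by later rows
            hard = (L < 0).astype(int)
            if not np.any((H @ hard) % 2):        # all parity checks satisfied: stop early
                break
        return (L < 0).astype(int)

    # toy example: the all-zero codeword is valid for any linear code; bit 2 is received unreliably
    H = np.array([[1, 1, 0, 1, 0, 0],
                  [0, 1, 1, 0, 1, 0],
                  [1, 0, 0, 0, 1, 1],
                  [0, 0, 1, 1, 0, 1]])
    llr = np.array([2.0, 2.0, -0.5, 2.0, 2.0, 2.0])
    print(layered_min_sum(H, llr))                # expected: [0 0 0 0 0 0]
    ```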

  13. Resolving the neural dynamics of visual and auditory scene processing in the human brain: a methodological approach

    PubMed Central

    Teng, Santani

    2017-01-01

    In natural environments, visual and auditory stimulation elicit responses across a large set of brain regions in a fraction of a second, yielding representations of the multimodal scene and its properties. The rapid and complex neural dynamics underlying visual and auditory information processing pose major challenges to human cognitive neuroscience. Brain signals measured non-invasively are inherently noisy, the format of neural representations is unknown, and transformations between representations are complex and often nonlinear. Further, no single non-invasive brain measurement technique provides a spatio-temporally integrated view. In this opinion piece, we argue that progress can be made by a concerted effort based on three pillars of recent methodological development: (i) sensitive analysis techniques such as decoding and cross-classification, (ii) complex computational modelling using models such as deep neural networks, and (iii) integration across imaging methods (magnetoencephalography/electroencephalography, functional magnetic resonance imaging) and models, e.g. using representational similarity analysis. We showcase two recent efforts that have been undertaken in this spirit and provide novel results about visual and auditory scene analysis. Finally, we discuss the limits of this perspective and sketch a concrete roadmap for future research. This article is part of the themed issue ‘Auditory and visual scene analysis’. PMID:28044019

  14. Resolving the neural dynamics of visual and auditory scene processing in the human brain: a methodological approach.

    PubMed

    Cichy, Radoslaw Martin; Teng, Santani

    2017-02-19

    In natural environments, visual and auditory stimulation elicit responses across a large set of brain regions in a fraction of a second, yielding representations of the multimodal scene and its properties. The rapid and complex neural dynamics underlying visual and auditory information processing pose major challenges to human cognitive neuroscience. Brain signals measured non-invasively are inherently noisy, the format of neural representations is unknown, and transformations between representations are complex and often nonlinear. Further, no single non-invasive brain measurement technique provides a spatio-temporally integrated view. In this opinion piece, we argue that progress can be made by a concerted effort based on three pillars of recent methodological development: (i) sensitive analysis techniques such as decoding and cross-classification, (ii) complex computational modelling using models such as deep neural networks, and (iii) integration across imaging methods (magnetoencephalography/electroencephalography, functional magnetic resonance imaging) and models, e.g. using representational similarity analysis. We showcase two recent efforts that have been undertaken in this spirit and provide novel results about visual and auditory scene analysis. Finally, we discuss the limits of this perspective and sketch a concrete roadmap for future research. This article is part of the themed issue 'Auditory and visual scene analysis'.

  15. Augmenting intracortical brain-machine interface with neurally driven error detectors

    NASA Astrophysics Data System (ADS)

    Even-Chen, Nir; Stavisky, Sergey D.; Kao, Jonathan C.; Ryu, Stephen I.; Shenoy, Krishna V.

    2017-12-01

    Objective. Making mistakes is inevitable, but identifying them allows us to correct or adapt our behavior to improve future performance. Current brain-machine interfaces (BMIs) make errors that need to be explicitly corrected by the user, thereby consuming time and thus hindering performance. We hypothesized that neural correlates of the user perceiving the mistake could be used by the BMI to automatically correct errors. However, it was unknown whether intracortical outcome error signals were present in the premotor and primary motor cortices, brain regions successfully used for intracortical BMIs. Approach. We report here for the first time a putative outcome error signal in spiking activity within these cortices when rhesus macaques performed an intracortical BMI computer cursor task. Main results. We decoded BMI trial outcomes shortly after and even before a trial ended with 96% and 84% accuracy, respectively. This led us to develop and implement in real-time a first-of-its-kind intracortical BMI error ‘detect-and-act’ system that attempts to automatically ‘undo’ or ‘prevent’ mistakes. The detect-and-act system works independently and in parallel to a kinematic BMI decoder. In a challenging task that resulted in substantial errors, this approach improved the performance of a BMI employing two variants of the ubiquitous Kalman velocity filter, including a state-of-the-art decoder (ReFIT-KF). Significance. Detecting errors in real-time from the same brain regions that are commonly used to control BMIs should improve the clinical viability of BMIs aimed at restoring motor function to people with paralysis.

  16. Multivariate Pattern Classification of Facial Expressions Based on Large-Scale Functional Connectivity.

    PubMed

    Liang, Yin; Liu, Baolin; Li, Xianglin; Wang, Peiyuan

    2018-01-01

    How human beings achieve efficient recognition of others' facial expressions is an important question in cognitive neuroscience, and previous studies have identified specific cortical regions that show preferential activation to facial expressions. However, the potential contributions of connectivity patterns to the processing of facial expressions remain unclear. The present functional magnetic resonance imaging (fMRI) study explored whether facial expressions could be decoded from functional connectivity (FC) patterns using multivariate pattern analysis combined with machine learning algorithms (fcMVPA). We employed a block design experiment and collected neural activities while participants viewed facial expressions of six basic emotions (anger, disgust, fear, joy, sadness, and surprise). Both static and dynamic expression stimuli were included in our study. A behavioral experiment after scanning confirmed the validity of the facial stimuli presented during the fMRI experiment with classification accuracies and emotional intensities. We obtained whole-brain FC patterns for each facial expression and found that both static and dynamic facial expressions could be successfully decoded from the FC patterns. Moreover, we identified the expression-discriminative networks for the static and dynamic facial expressions, which span beyond the conventional face-selective areas. Overall, these results reveal that large-scale FC patterns may also contain rich expression information to accurately decode facial expressions, suggesting a novel mechanism, involving general interactions between distributed brain regions, that contributes to human facial expression recognition.

  17. Multivariate Pattern Classification of Facial Expressions Based on Large-Scale Functional Connectivity

    PubMed Central

    Liang, Yin; Liu, Baolin; Li, Xianglin; Wang, Peiyuan

    2018-01-01

    How human beings achieve efficient recognition of others’ facial expressions is an important question in cognitive neuroscience, and previous studies have identified specific cortical regions that show preferential activation to facial expressions. However, the potential contributions of connectivity patterns to the processing of facial expressions remain unclear. The present functional magnetic resonance imaging (fMRI) study explored whether facial expressions could be decoded from functional connectivity (FC) patterns using multivariate pattern analysis combined with machine learning algorithms (fcMVPA). We employed a block design experiment and collected neural activities while participants viewed facial expressions of six basic emotions (anger, disgust, fear, joy, sadness, and surprise). Both static and dynamic expression stimuli were included in our study. A behavioral experiment after scanning confirmed the validity of the facial stimuli presented during the fMRI experiment with classification accuracies and emotional intensities. We obtained whole-brain FC patterns for each facial expression and found that both static and dynamic facial expressions could be successfully decoded from the FC patterns. Moreover, we identified the expression-discriminative networks for the static and dynamic facial expressions, which span beyond the conventional face-selective areas. Overall, these results reveal that large-scale FC patterns may also contain rich expression information to accurately decode facial expressions, suggesting a novel mechanism, involving general interactions between distributed brain regions, that contributes to human facial expression recognition. PMID:29615882

  18. Towards an unconscious neural reinforcement intervention for common fears

    PubMed Central

    Taschereau-Dumouchel, Vincent; Cortese, Aurelio; Chiba, Toshinori; Knotts, J. D.; Kawato, Mitsuo; Lau, Hakwan

    2018-01-01

    Can “hardwired” physiological fear responses (e.g., for spiders and snakes) be reprogrammed unconsciously in the human brain? Currently, exposure therapy is among the most effective treatments for anxiety disorders, but this intervention is subjectively aversive to patients, causing many to drop out of treatment prematurely. Here we introduce a method to bypass the subjective unpleasantness in conscious exposure, by directly pairing monetary reward with unconscious occurrences of decoded representations of naturally feared animals in the brain. To decode physiological fear representations without triggering excessively aversive reactions, we capitalize on recent advancements in functional magnetic resonance imaging decoding techniques, and use a method called hyperalignment to infer the relevant representations of feared animals for a designated participant based on data from other “surrogate” participants. In this way, the procedure completely bypasses the need for a conscious encounter with feared animals. We demonstrate that our method can lead to reliable reductions in physiological fear responses, as measured by skin conductance as well as amygdala hemodynamic activity. Not only do these results raise the intriguing possibility that naturally occurring fear responses can be “reprogrammed” outside of conscious awareness, importantly, they also create the rare opportunity to rigorously test a psychological intervention of this nature in a double-blind, placebo-controlled fashion. This may pave the way for a new approach combining the appealing rationale and proven efficacy of conventional psychotherapy with the rigor and leverage of clinical neuroscience. PMID:29511106

  19. Adaptive quantization of local field potentials for wireless implants in freely moving animals: an open-source neural recording device

    NASA Astrophysics Data System (ADS)

    Martinez, Dominique; Clément, Maxime; Messaoudi, Belkacem; Gervasoni, Damien; Litaudon, Philippe; Buonviso, Nathalie

    2018-04-01

    Objective. Modern neuroscience research requires electrophysiological recording of local field potentials (LFPs) in moving animals. Wireless transmission has the advantage of removing the wires between the animal and the recording equipment but is hampered by the large amount of data to be sent at a relatively high rate. Approach. To reduce transmission bandwidth, we propose an encoder/decoder scheme based on adaptive non-uniform quantization. Our algorithm uses the current transmitted codeword to adapt the quantization intervals to changing statistics in LFP signals. It is thus backward adaptive and does not require the sending of side information. The computational complexity is low and similar at the encoder and decoder sides. These features allow for real-time signal recovery and facilitate hardware implementation with low-cost commercial microcontrollers. Main results. As proof-of-concept, we developed an open-source neural recording device called NeRD. The NeRD prototype digitally transmits eight channels encoded at 10 kHz with 2 bits per sample. It occupies a volume of 2 × 2 × 2 cm3 and weighs 8 g with a small battery allowing for 2 h 40 min of autonomy. The power dissipation is 59.4 mW for a communication range of 8 m and transmission losses below 0.1%. The small weight and low power consumption offer the possibility of mounting the entire device on the head of a rodent without resorting to a separate head-stage and battery backpack. The NeRD prototype is validated in recording LFPs in freely moving rats at 2 bits per sample while maintaining an acceptable signal-to-noise ratio (>30 dB) over a range of noisy channels. Significance. Adaptive quantization in neural implants allows for lower transmission bandwidths while retaining high signal fidelity and preserving fundamental frequencies in LFPs.
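
    A minimal sketch of a backward-adaptive (Jayant-style) 2-bit quantizer, which captures the key property described above: the step size is adapted from the previously transmitted codeword only, so the decoder can track it without side information. The multiplier values and clamping limits are illustrative assumptions, not the NeRD firmware parameters.

    ```python
    import numpy as np

    LEVELS = np.array([-1.5, -0.5, 0.5, 1.5])   # 2-bit mid-rise reconstruction points (times step)
    MULT = np.array([1.6, 0.9, 0.9, 1.6])       # outer codewords grow the step, inner ones shrink it

    def adapt(step, code):
        return float(np.clip(step * MULT[code], 1e-3, 10.0))   # clamp the step for stability

    def encode(signal, step0=1.0):
        step, codes = step0, []
        for x in signal:
            code = int(np.clip(np.floor(x / step) + 2, 0, 3))  # four uniform bins around zero
            codes.append(code)
            step = adapt(step, code)                           # backward adaptation (encoder side)
        return codes

    def decode(codes, step0=1.0):
        step, out = step0, []
        for code in codes:
            out.append(LEVELS[code] * step)                    # decoder tracks the same step state
            step = adapt(step, code)
        return np.array(out)

    t = np.linspace(0, 1, 1000)
    lfp = np.sin(2 * np.pi * 8 * t) + 0.2 * np.random.default_rng(2).normal(size=t.size)
    rec = decode(encode(lfp))
    snr = 10 * np.log10(np.var(lfp) / np.var(lfp - rec))
    print(f"toy reconstruction SNR: {snr:.1f} dB")
    ```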

  20. Decoding spectrotemporal features of overt and covert speech from the human cortex

    PubMed Central

    Martin, Stéphanie; Brunner, Peter; Holdgraf, Chris; Heinze, Hans-Jochen; Crone, Nathan E.; Rieger, Jochem; Schalk, Gerwin; Knight, Robert T.; Pasley, Brian N.

    2014-01-01

    Auditory perception and auditory imagery have been shown to activate overlapping brain regions. We hypothesized that these phenomena also share a common underlying neural representation. To assess this, we used electrocorticography intracranial recordings from epileptic patients performing an out loud or a silent reading task. In these tasks, short stories scrolled across a video screen in two conditions: subjects read the same stories both aloud (overt) and silently (covert). In a control condition the subject remained in a resting state. We first built a high gamma (70–150 Hz) neural decoding model to reconstruct spectrotemporal auditory features of self-generated overt speech. We then evaluated whether this same model could reconstruct auditory speech features in the covert speech condition. Two speech models were tested: a spectrogram and a modulation-based feature space. For the overt condition, reconstruction accuracy was evaluated as the correlation between original and predicted speech features, and was significant in each subject (p < 10−5; paired two-sample t-test). For the covert speech condition, dynamic time warping was first used to realign the covert speech reconstruction with the corresponding original speech from the overt condition. Reconstruction accuracy was then evaluated as the correlation between original and reconstructed speech features. Covert reconstruction accuracy was compared to the accuracy obtained from reconstructions in the baseline control condition. Reconstruction accuracy for the covert condition was significantly better than for the control condition (p < 0.005; paired two-sample t-test). The superior temporal gyrus, pre- and post-central gyrus provided the highest reconstruction information. The relationship between overt and covert speech reconstruction depended on anatomy. These results provide evidence that auditory representations of covert speech can be reconstructed from models that are built from an overt speech data set, supporting a partially shared neural substrate. PMID:24904404
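
    A minimal sketch of the evaluation step described for the covert condition: realign a reconstructed feature sequence to its reference with dynamic time warping (DTW) and then score the aligned pair with a correlation coefficient. The 1-D sequences below are synthetic stand-ins for the spectrotemporal features.

    ```python
    import numpy as np

    def dtw_path(a, b):
        """Classic O(len(a)*len(b)) DTW on 1-D sequences; returns the warping path."""
        n, m = len(a), len(b)
        D = np.full((n + 1, m + 1), np.inf)
        D[0, 0] = 0.0
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                D[i, j] = abs(a[i - 1] - b[j - 1]) + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
        i, j, path = n, m, []                     # backtrack from the end of both sequences
        while i > 0 and j > 0:
            path.append((i - 1, j - 1))
            move = np.argmin([D[i - 1, j - 1], D[i - 1, j], D[i, j - 1]])
            if move == 0:
                i, j = i - 1, j - 1
            elif move == 1:
                i -= 1
            else:
                j -= 1
        return path[::-1]

    rng = np.random.default_rng(3)
    reference = np.sin(np.linspace(0, 6, 80))                                       # "overt" feature trace
    reconstruction = np.sin(np.linspace(0.4, 6.4, 90)) + 0.1 * rng.normal(size=90)  # shifted, noisy estimate
    path = dtw_path(reconstruction, reference)
    aligned_rec = np.array([reconstruction[i] for i, j in path])
    aligned_ref = np.array([reference[j] for i, j in path])
    print(f"correlation after DTW realignment: {np.corrcoef(aligned_rec, aligned_ref)[0, 1]:.2f}")
    ```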

  1. Adaptive quantization of local field potentials for wireless implants in freely moving animals: an open-source neural recording device.

    PubMed

    Martinez, Dominique; Clément, Maxime; Messaoudi, Belkacem; Gervasoni, Damien; Litaudon, Philippe; Buonviso, Nathalie

    2018-04-01

    Modern neuroscience research requires electrophysiological recording of local field potentials (LFPs) in moving animals. Wireless transmission has the advantage of removing the wires between the animal and the recording equipment but is hampered by the large number of data to be sent at a relatively high rate. To reduce transmission bandwidth, we propose an encoder/decoder scheme based on adaptive non-uniform quantization. Our algorithm uses the current transmitted codeword to adapt the quantization intervals to changing statistics in LFP signals. It is thus backward adaptive and does not require the sending of side information. The computational complexity is low and similar at the encoder and decoder sides. These features allow for real-time signal recovery and facilitate hardware implementation with low-cost commercial microcontrollers. As proof-of-concept, we developed an open-source neural recording device called NeRD. The NeRD prototype digitally transmits eight channels encoded at 10 kHz with 2 bits per sample. It occupies a volume of 2  ×  2  ×  2 cm 3 and weighs 8 g with a small battery allowing for 2 h 40 min of autonomy. The power dissipation is 59.4 mW for a communication range of 8 m and transmission losses below 0.1%. The small weight and low power consumption offer the possibility of mounting the entire device on the head of a rodent without resorting to a separate head-stage and battery backpack. The NeRD prototype is validated in recording LFPs in freely moving rats at 2 bits per sample while maintaining an acceptable signal-to-noise ratio (>30 dB) over a range of noisy channels. Adaptive quantization in neural implants allows for lower transmission bandwidths while retaining high signal fidelity and preserving fundamental frequencies in LFPs.

  2. Detecting and interpreting conscious experiences in behaviorally non-responsive patients.

    PubMed

    Naci, Lorina; Sinai, Leah; Owen, Adrian M

    2017-01-15

    Decoding the contents of consciousness from brain activity is one of the most challenging frontiers of cognitive neuroscience. The ability to interpret mental content without recourse to behavior is most relevant for understanding patients who may be demonstrably conscious, but entirely unable to speak or move willfully in any way, precluding any systematic investigation of their conscious experience. The lack of consistent behavioral responsivity engenders unique challenges to decoding any conscious experiences these patients may have solely based on their brain activity. For this reason, paradigms that have been successful in healthy individuals cannot serve to interpret conscious mental states in this patient group. Until recently, patient studies have used structured instructions to elicit willful modulation of brain activity according to command, in order to decode the presence of willful brain-based responses in this patient group. In recent work, we have used naturalistic paradigms, such as watching a movie or listening to an audio-story, to demonstrate that a common neural code supports conscious experiences in different individuals. Moreover, we have demonstrated that this code can be used to interpret the conscious experiences of a patient who had remained non-responsive for several years. This approach is easy to administer, brief, and does not require compliance with task instructions. Rather, it engages attention naturally through meaningful stimuli that are similar to the real-world sensory information in a patient's environment. Therefore, it may be particularly suited to probing consciousness and revealing residual brain function in highly impaired, acute patients in a comatose state, thus helping to improve diagnosis and prognostication for this vulnerable patient group from the critical early stages of severe brain injury.

  3. A confidence metric for using neurobiological feedback in actor-critic reinforcement learning based brain-machine interfaces

    PubMed Central

    Prins, Noeline W.; Sanchez, Justin C.; Prasad, Abhishek

    2014-01-01

    Brain-Machine Interfaces (BMIs) can be used to restore function in people living with paralysis. Current BMIs require extensive calibration, which increases set-up time, and external inputs for decoder training that may be difficult for paralyzed individuals to produce. Both factors have presented challenges in transitioning the technology from research environments to activities of daily living (ADL). For BMIs to be used seamlessly in ADL, these issues should be handled with minimal external input, reducing the need for a technician/caregiver to calibrate the system. Reinforcement learning (RL) based BMIs are a good tool when no external training signal is available and can provide an adaptive modality to train BMI decoders. However, RL-based BMIs are sensitive to the feedback provided to adapt the BMI. In actor-critic BMIs, this feedback is provided by the critic, and the overall system performance is limited by the critic's accuracy. In this work, we developed an adaptive BMI that can handle inaccuracies in the critic feedback, in an effort to produce more accurate RL-based BMIs. We developed a confidence measure that indicates how appropriate the feedback is for updating the decoding parameters of the actor. The results show that, with the new update formulation, the critic accuracy is no longer a limiting factor for the overall performance. We tested and validated the system on three different data sets: synthetic data generated by an Izhikevich neural spiking model, synthetic data with a Gaussian noise distribution, and data collected from a non-human primate engaged in a reaching task. All results indicated that the system with the critic confidence built in always outperformed the system without it. Results of this study suggest the potential application of the technique in developing an autonomous BMI that does not need an external signal for training or extensive calibration. PMID:24904257
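
    A toy, bandit-style sketch of the core idea: an actor (a linear policy over neural features) is updated from binary critic feedback, with the update scaled by a confidence term so that unreliable feedback moves the decoder less. The feature model, the critic error rate, and the stand-in confidence value are all illustrative assumptions, not the authors' formulation.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    n_features, n_actions = 20, 4
    W = np.zeros((n_actions, n_features))            # actor weights (linear policy)
    lr = 0.05

    def target_action(x):                            # hidden rule the decoder should learn
        return int(np.argmax(x[:n_actions]))

    for trial in range(2000):
        x = rng.normal(size=n_features)              # neural feature vector for this trial
        if rng.random() < 0.1:                       # occasional exploration
            action = int(rng.integers(n_actions))
        else:
            action = int(np.argmax(W @ x))
        correct = action == target_action(x)
        critic_says_good = correct if rng.random() > 0.2 else not correct  # 80%-accurate critic
        confidence = rng.uniform(0.5, 1.0)           # stand-in for the proposed confidence metric
        feedback = 1.0 if critic_says_good else -1.0
        W[action] += lr * confidence * feedback * x  # confidence-scaled actor update

    accuracy = np.mean([target_action(x) == int(np.argmax(W @ x))
                        for x in rng.normal(size=(500, n_features))])
    print(f"toy policy accuracy after training: {accuracy:.2f}")
    ```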

  4. Decoding continuous three-dimensional hand trajectories from epidural electrocorticographic signals in Japanese macaques

    NASA Astrophysics Data System (ADS)

    Shimoda, Kentaro; Nagasaka, Yasuo; Chao, Zenas C.; Fujii, Naotaka

    2012-06-01

    Brain-machine interface (BMI) technology captures brain signals to enable control of prosthetic or communication devices with the goal of assisting patients who have limited or no ability to perform voluntary movements. Decoding of inherent information in brain signals to interpret the user's intention is one of the main approaches for developing BMI technology. Subdural electrocorticography (sECoG)-based decoding provides good accuracy, but surgical complications are one of the major concerns for this approach to be applied in BMIs. In contrast, epidural electrocorticography (eECoG) is less invasive, thus it is theoretically more suitable for long-term implementation, although it is unclear whether eECoG signals carry sufficient information for decoding natural movements. We successfully decoded continuous three-dimensional hand trajectories from eECoG signals in Japanese macaques. A steady quantity of information about continuous hand movements could be acquired from the decoding system for at least several months, and a decoding model could be used for ~10 days without significant degradation in accuracy or recalibration. The correlation coefficients between observed and predicted trajectories were lower than those for sECoG-based decoding experiments we previously reported, owing to a greater degree of chewing artifacts in eECoG-based decoding than is found in sECoG-based decoding. As one of the safest invasive recording methods available, eECoG provides an acceptable level of performance. With the ease of replacement and upgrades, eECoG systems could become the first-choice interface for real-life BMI applications.

  5. Multiformat decoder for a DSP-based IP set-top box

    NASA Astrophysics Data System (ADS)

    Pescador, F.; Garrido, M. J.; Sanz, C.; Juárez, E.; Samper, D.; Antoniello, R.

    2007-05-01

    Internet Protocol Set-Top Boxes (IP STBs) based on single-processor architectures have recently been introduced in the market. In this paper, the implementation of an MPEG-4 SP/ASP video decoder for a multi-format IP STB based on a TMS320DM641 DSP is presented. An initial decoder for the PC platform was fully tested and ported to the DSP. Starting from this code, an optimization process achieved a 90% speedup, which allows real-time MPEG-4 SP/ASP decoding. The MPEG-4 decoder has been integrated in an IP STB and tested in a real environment using DVD movies and TV channels with excellent results.

  6. Soft-Decision Decoding of Binary Linear Block Codes Based on an Iterative Search Algorithm

    NASA Technical Reports Server (NTRS)

    Lin, Shu; Kasami, Tadao; Moorthy, H. T.

    1997-01-01

    This correspondence presents a suboptimum soft-decision decoding scheme for binary linear block codes based on an iterative search algorithm. The scheme uses an algebraic decoder to iteratively generate a sequence of candidate codewords one at a time using a set of test error patterns that are constructed based on the reliability information of the received symbols. When a candidate codeword is generated, it is tested based on an optimality condition. If it satisfies the optimality condition, then it is the most likely (ML) codeword and the decoding stops. If it fails the optimality test, a search for the ML codeword is conducted in a region which contains the ML codeword. The search region is determined by the current candidate codeword and the reliability of the received symbols. The search is conducted through a purged trellis diagram for the given code using the Viterbi algorithm. If the search fails to find the ML codeword, a new candidate is generated using a new test error pattern, and the optimality test and search are renewed. The process of testing and search continues until either the ML codeword is found or all the test error patterns are exhausted and the decoding process is terminated. Numerical results show that the proposed decoding scheme achieves either practically optimal performance or a performance only a fraction of a decibel away from the optimal maximum-likelihood decoding with a significant reduction in decoding complexity compared with the Viterbi decoding based on the full trellis diagram of the codes.
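
    A hedged sketch of the outer loop only: test error patterns are generated over the least reliable received positions, each tentative word is passed to an algebraic decoder, and the candidate with the highest correlation to the received soft values is kept. The purged-trellis search and the exact optimality test of the paper are omitted, and `algebraic_decode` is a hypothetical placeholder.

    ```python
    import itertools
    import numpy as np

    def algebraic_decode(hard_word):
        """Hypothetical placeholder for a bounded-distance algebraic decoder of the block code.
        It returns its input unchanged here; a real decoder would map it to a nearby codeword."""
        return hard_word

    def chase_like_search(soft, n_unreliable=4):
        """soft: received values; the sign encodes the bit (>0 -> 0), the magnitude is its reliability."""
        hard = (soft < 0).astype(int)
        weak = np.argsort(np.abs(soft))[:n_unreliable]          # least reliable positions
        best, best_metric = None, -np.inf
        for flips in itertools.product([0, 1], repeat=n_unreliable):
            trial = hard.copy()
            trial[weak] ^= np.array(flips)                       # apply one test error pattern
            candidate = algebraic_decode(trial)
            metric = float((1.0 - 2.0 * candidate) @ soft)       # correlation with the received word
            if metric > best_metric:
                best, best_metric = candidate, metric
        return best

    soft = np.array([1.2, -0.8, 0.1, 0.9, -0.05, 1.1, -1.3, 0.4])
    print(chase_like_search(soft))
    ```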

  7. Human motor cortical activity recorded with Micro-ECoG electrodes, during individual finger movements.

    PubMed

    Wang, W; Degenhart, A D; Collinger, J L; Vinjamuri, R; Sudre, G P; Adelson, P D; Holder, D L; Leuthardt, E C; Moran, D W; Boninger, M L; Schwartz, A B; Crammond, D J; Tyler-Kabara, E C; Weber, D J

    2009-01-01

    In this study human motor cortical activity was recorded with a customized micro-ECoG grid during individual finger movements. The quality of the recorded neural signals was characterized in the frequency domain from three different perspectives: (1) coherence between neural signals recorded from different electrodes, (2) modulation of neural signals by finger movement, and (3) accuracy of finger movement decoding. It was found that, for the high frequency band (60-120 Hz), coherence between neighboring micro-ECoG electrodes was 0.3. In addition, the high frequency band showed significant modulation by finger movement both temporally and spatially, and a classification accuracy of 73% (chance level: 20%) was achieved for individual finger movement using neural signals recorded from the micro-ECoG grid. These results suggest that the micro-ECoG grid presented here offers sufficient spatial and temporal resolution for the development of minimally-invasive brain-computer interface applications.

  8. Cognitive Neural Prosthetics

    PubMed Central

    Andersen, Richard A.; Hwang, Eun Jung; Mulliken, Grant H.

    2010-01-01

    The cognitive neural prosthetic (CNP) is a very versatile method for assisting paralyzed patients and patients with amputations. The CNP records the cognitive state of the subject, rather than signals strictly related to motor execution or sensation. We review a number of high-level cortical signals and their application for CNPs, including intention, motor imagery, decision making, forward estimation, executive function, attention, learning, and multi-effector movement planning. CNPs are defined by the cognitive function they extract, not the cortical region from which the signals are recorded. However, some cortical areas may be better than others for particular applications. Signals can also be extracted in parallel from multiple cortical areas using multiple implants, which in many circumstances can increase the range of applications of CNPs. The CNP approach relies on scientific understanding of the neural processes involved in cognition, and many of the decoding algorithms it uses also have parallels to underlying neural circuit functions. PMID:19575625

  9. On initial Brain Activity Mapping of episodic and semantic memory code in the hippocampus.

    PubMed

    Tsien, Joe Z; Li, Meng; Osan, Remus; Chen, Guifen; Lin, Longian; Wang, Phillip Lei; Frey, Sabine; Frey, Julietta; Zhu, Dajiang; Liu, Tianming; Zhao, Fang; Kuang, Hui

    2013-10-01

    It has been widely recognized that understanding the brain code will require large-scale recording and decoding of brain activity patterns. In 2007, with support from the Georgia Research Alliance, we launched the Brain Decoding Project Initiative with a basic idea that is now similarly advocated by the BRAIN project and the Brain Activity Map proposal. As the planning of the BRAIN project is currently underway, we share our insights and lessons from our efforts in mapping real-time episodic memory traces in the hippocampus of freely behaving mice. We show that appropriate large-scale statistical methods are essential to decipher and measure real-time memory traces and neural dynamics. We also provide an example of how carefully designed, sometimes thinking-outside-the-box behavioral paradigms can be highly instrumental in unraveling the memory-coding cell-assembly organizing principle in the hippocampus. Our observations to date have led us to conclude that the specific-to-general categorical and combinatorial feature-coding cell-assembly mechanism represents an emergent property that enables neural networks to generate and organize not only episodic memory, but also semantic knowledge and imagination.

  10. An extensible infrastructure for fully automated spike sorting during online experiments.

    PubMed

    Santhanam, Gopal; Sahani, Maneesh; Ryu, Stephen; Shenoy, Krishna

    2004-01-01

    When recording extracellular neural activity, it is often necessary to distinguish action potentials arising from distinct cells near the electrode tip, a process commonly referred to as "spike sorting." In a number of experiments, notably those that involve direct neuroprosthetic control of an effector, this cell-by-cell classification of the incoming signal must be achieved in real time. Several commercial offerings are available for this task, but all of these require some manual supervision per electrode, making each scheme cumbersome with large electrode counts. We present a new infrastructure that leverages existing unsupervised algorithms to sort and subsequently implement the resulting signal classification rules for each electrode using a commercially available Cerebus neural signal processor. We demonstrate an implementation of this infrastructure to classify signals from a cortical electrode array, using a probabilistic clustering algorithm (described elsewhere). The data were collected from a rhesus monkey performing a delayed center-out reach task. We used both sorted and unsorted (thresholded) action potentials from an array implanted in pre-motor cortex to "predict" the reach target, a common decoding operation in neuroprosthetic research. The use of sorted spikes led to an improvement in decoding accuracy of between 3.6 and 6.4%.

  11. On Initial Brain Activity Mapping of Associative Memory Code in the Hippocampus

    PubMed Central

    Tsien, Joe Z.; Li, Meng; Osan, Remus; Chen, Guifen; Lin, Longian; Lei Wang, Phillip; Frey, Sabine; Frey, Julietta; Zhu, Dajiang; Liu, Tianming; Zhao, Fang; Kuang, Hui

    2013-01-01

    It has been widely recognized that understanding the brain code will require large-scale recording and decoding of brain activity patterns. In 2007, with support from the Georgia Research Alliance, we launched the Brain Decoding Project Initiative with a basic idea that is now similarly advocated by the BRAIN project and the Brain Activity Map proposal. As the planning of the BRAIN project is currently underway, we share our insights and lessons from our efforts in mapping real-time episodic memory traces in the hippocampus of freely behaving mice. We show that appropriate large-scale statistical methods are essential to decipher and measure real-time memory traces and neural dynamics. We also provide an example of how carefully designed, sometimes thinking-outside-the-box behavioral paradigms can be highly instrumental in unraveling the memory-coding cell-assembly organizing principle in the hippocampus. Our observations to date have led us to conclude that the specific-to-general categorical and combinatorial feature-coding cell-assembly mechanism represents an emergent property that enables neural networks to generate and organize not only episodic memory, but also semantic knowledge and imagination. PMID:23838072

  12. A Spiking Neural Network System for Robust Sequence Recognition.

    PubMed

    Yu, Qiang; Yan, Rui; Tang, Huajin; Tan, Kay Chen; Li, Haizhou

    2016-03-01

    This paper proposes a biologically plausible network architecture with spiking neurons for sequence recognition. This architecture is a unified and consistent system with functional parts for sensory encoding, learning, and decoding. It is the first systematic model attempting to reveal the neural mechanisms by considering both the upstream and the downstream neurons together. The whole system is a consistent temporal framework in which the precise timing of spikes is employed for information processing and cognitive computing. Experimental results show that the system is competent to perform sequence recognition, being robust to noisy sensory inputs and invariant to changes in the intervals between input stimuli within a certain range. The classification ability of the temporal learning rule used in the system is investigated through two benchmark tasks, in which it outperforms two other widely used learning rules for classification. The results also demonstrate the computational power of spiking neurons over perceptrons for processing spatiotemporal patterns. In summary, the system provides a general way with spiking neurons to encode external stimuli into spatiotemporal spikes, to learn the encoded spike patterns with temporal learning rules, and to decode the sequence order with downstream neurons. The system structure would be beneficial for developments in both hardware and software.

  13. Locating and decoding barcodes in fuzzy images captured by smart phones

    NASA Astrophysics Data System (ADS)

    Deng, Wupeng; Hu, Jiwei; Liu, Quan; Lou, Ping

    2017-07-01

    With the growing commercial use of barcodes, the demand for detecting barcodes with smart phones has become increasingly pressing. The low quality of barcode images captured by mobile phones often degrades decoding and recognition rates. This paper focuses on locating and decoding EAN-13 barcodes in fuzzy images. We present a more accurate locating algorithm based on segment length and a highly fault-tolerant algorithm for decoding barcodes. Unlike existing approaches, the locating algorithm is based on the edge segment lengths of EAN-13 barcodes, while our decoding algorithm tolerates fuzzy regions in the barcode image. Experiments are performed on damaged, contaminated, and scratched digital images, and provide quite promising results for EAN-13 barcode location and decoding.
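
    A minimal sketch of the last step of EAN-13 decoding once the 13 digits have been read from the bar/space widths: verifying the checksum, which is what makes high fault tolerance in the earlier stages acceptable. The locating and bar-width classification stages described above are not reproduced here.

    ```python
    def ean13_checksum_ok(digits):
        """digits: list of 13 integers read from the barcode; the last one is the check digit."""
        if len(digits) != 13:
            return False
        # odd positions (0-indexed even) weigh 1, even positions weigh 3
        total = sum(d * (3 if i % 2 else 1) for i, d in enumerate(digits[:12]))
        return (10 - total % 10) % 10 == digits[12]

    print(ean13_checksum_ok([4, 0, 0, 6, 3, 8, 1, 3, 3, 3, 9, 3, 1]))  # True: a valid EAN-13
    print(ean13_checksum_ok([4, 0, 0, 6, 3, 8, 1, 3, 3, 3, 9, 3, 2]))  # False: corrupted check digit
    ```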

  14. Spatially Compact Neural Clusters in the Dorsal Striatum Encode Locomotion Relevant Information.

    PubMed

    Barbera, Giovanni; Liang, Bo; Zhang, Lifeng; Gerfen, Charles R; Culurciello, Eugenio; Chen, Rong; Li, Yun; Lin, Da-Ting

    2016-10-05

    An influential striatal model postulates that neural activities in the striatal direct and indirect pathways promote and inhibit movement, respectively. Normal behavior requires coordinated activity in the direct pathway to facilitate intended locomotion and indirect pathway to inhibit unwanted locomotion. In this striatal model, neuronal population activity is assumed to encode locomotion relevant information. Here, we propose a novel encoding mechanism for the dorsal striatum. We identified spatially compact neural clusters in both the direct and indirect pathways. Detailed characterization revealed similar cluster organization between the direct and indirect pathways, and cluster activities from both pathways were correlated with mouse locomotion velocities. Using machine-learning algorithms, cluster activities could be used to decode locomotion relevant behavioral states and locomotion velocity. We propose that neural clusters in the dorsal striatum encode locomotion relevant information and that coordinated activities of direct and indirect pathway neural clusters are required for normal striatal controlled behavior.

  15. Neuronal integration in visual cortex elevates face category tuning to conscious face perception

    PubMed Central

    Fahrenfort, Johannes J.; Snijders, Tineke M.; Heinen, Klaartje; van Gaal, Simon; Scholte, H. Steven; Lamme, Victor A. F.

    2012-01-01

    The human brain has the extraordinary capability to transform cluttered sensory input into distinct object representations. For example, it is able to rapidly and seemingly without effort detect object categories in complex natural scenes. Surprisingly, category tuning is not sufficient to achieve conscious recognition of objects. What neural process beyond category extraction might elevate neural representations to the level where objects are consciously perceived? Here we show that visible and invisible faces produce similar category-selective responses in the ventral visual cortex. The pattern of neural activity evoked by visible faces could be used to decode the presence of invisible faces and vice versa. However, only visible faces caused extensive response enhancements and changes in neural oscillatory synchronization, as well as increased functional connectivity between higher and lower visual areas. We conclude that conscious face perception is more tightly linked to neural processes of sustained information integration and binding than to processes accommodating face category tuning. PMID:23236162

  16. Bounded-Angle Iterative Decoding of LDPC Codes

    NASA Technical Reports Server (NTRS)

    Dolinar, Samuel; Andrews, Kenneth; Pollara, Fabrizio; Divsalar, Dariush

    2009-01-01

    Bounded-angle iterative decoding is a modified version of conventional iterative decoding, conceived as a means of reducing undetected-error rates for short low-density parity-check (LDPC) codes. For a given code, bounded-angle iterative decoding can be implemented by means of a simple modification of the decoder algorithm, without redesigning the code. Bounded-angle iterative decoding is based on a representation of received words and code words as vectors in an n-dimensional Euclidean space (where n is an integer).
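
    A hedged sketch of the geometric test suggested by the abstract: the received word and a decoded codeword are treated as vectors in Euclidean space, and the decoder output is accepted only if the angle between them stays below a bound, otherwise a detected failure is declared. The threshold below is an arbitrary illustration rather than a value derived for any specific code.

    ```python
    import numpy as np

    def bounded_angle_accept(received, codeword_bits, max_angle_deg=30.0):
        """received: soft channel values; codeword_bits: decoded bits, mapped to +/-1 BPSK."""
        bpsk = 1.0 - 2.0 * np.asarray(codeword_bits, dtype=float)
        cos_angle = received @ bpsk / (np.linalg.norm(received) * np.linalg.norm(bpsk))
        angle = np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0)))
        return angle <= max_angle_deg          # False means "detected failure" instead of a risky output

    received = np.array([0.9, 1.1, -1.2, 0.8, -0.7, 1.0])
    print(bounded_angle_accept(received, [0, 0, 1, 0, 1, 0]))   # well aligned -> accept
    print(bounded_angle_accept(received, [1, 1, 0, 1, 0, 1]))   # opposite codeword -> reject
    ```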

  17. Reward Motivation Enhances Task Coding in Frontoparietal Cortex.

    PubMed

    Etzel, Joset A; Cole, Michael W; Zacks, Jeffrey M; Kay, Kendrick N; Braver, Todd S

    2016-04-01

    Reward motivation often enhances task performance, but the neural mechanisms underlying such cognitive enhancement remain unclear. Here, we used a multivariate pattern analysis (MVPA) approach to test the hypothesis that motivation-related enhancement of cognitive control results from improved encoding and representation of task set information. Participants underwent two fMRI sessions of cued task switching, the first under baseline conditions, and the second with randomly intermixed reward incentive and no-incentive trials. Information about the upcoming task could be successfully decoded from cue-related activation patterns in a set of frontoparietal regions typically associated with task control. More critically, MVPA classifiers trained on the baseline session had significantly higher decoding accuracy on incentive than non-incentive trials, with decoding improvement mediating reward-related enhancement of behavioral performance. These results strongly support the hypothesis that reward motivation enhances cognitive control, by improving the discriminability of task-relevant information coded and maintained in frontoparietal brain regions.

  18. Perceived freedom of choice is associated with neural encoding of option availability.

    PubMed

    Rens, Natalie; Bode, Stefan; Cunnington, Ross

    2018-05-03

    Freedom of choice has been defined as the opportunity to choose alternative plans of action. In this fMRI study, we investigated how the perceived freedom of choice and the underlying neural correlates are influenced by the availability of options. Participants made an initial free choice between left or right doors before beginning a virtual walk along a corridor. At the mid-point of the corridor, lock cues appeared to reveal whether one or both doors remained available, requiring participants either to select a particular door or allowing them to freely choose to stay or switch their choice. We found that participants rated trials as free when they were able to carry out their initial choice, but even more so when both doors remained available. Multi-voxel pattern analysis showed that upcoming choices could initially be decoded from visual cortices before the appearance of the lock cues, and additionally from the motor cortex after the lock cues had confirmed which doors were open. When participants were able to maintain the same choice that they originally selected, the availability of alternative options was represented in fine-grained patterns of activity in the dorsolateral prefrontal cortex. Further, decoding accuracy in this region correlated with the subjective level of freedom that participants reported. These results suggest that there is neural encoding of the availability of alternative options in the dorsolateral prefrontal cortex, and the degree of this encoding predicts an individual's perceived freedom of choice.

  19. Early visual responses predict conscious face perception within and between subjects during binocular rivalry

    PubMed Central

    Sandberg, Kristian; Bahrami, Bahador; Kanai, Ryota; Barnes, Gareth Robert; Overgaard, Morten; Rees, Geraint

    2014-01-01

    Previous studies indicate that conscious face perception may be related to neural activity in a large time window around 170-800ms after stimulus presentation, yet in the majority of these studies changes in conscious experience are confounded with changes in physical stimulation. Using multivariate classification on MEG data recorded when participants reported changes in conscious perception evoked by binocular rivalry between a face and a grating, we showed that only MEG signals in the 120-320ms time range, peaking at the M170 around 180ms and the P2m at around 260ms, reliably predicted conscious experience. Conscious perception could not only be decoded significantly better than chance from the sensors that showed the largest average difference, as previous studies suggest, but also from patterns of activity across groups of occipital sensors that individually were unable to predict perception better than chance. Additionally, source space analyses showed that sources in the early and late visual system predicted conscious perception more accurately than frontal and parietal sites, although conscious perception could also be decoded there. Finally, the patterns of neural activity associated with conscious face perception generalized from one participant to another around the times of maximum prediction accuracy. Our work thus demonstrates that the neural correlates of particular conscious contents (here, faces) are highly consistent in time and space within individuals and that these correlates are shared to some extent between individuals. PMID:23281780

  20. A cortical-spinal prosthesis for targeted limb movement in paralyzed primate avatars

    PubMed Central

    Shanechi, Maryam M.; Hu, Rollin C.; Williams, Ziv M.

    2014-01-01

    Motor paralysis is among the most disabling aspects of injury to the central nervous system. Here we develop and test a target-based cortical-spinal neural prosthesis that employs neural activity recorded from pre-motor neurons to control limb movements in functionally paralyzed primate avatars. Given the complexity by which muscle contractions are naturally controlled, we approach the problem of eliciting goal-directed limb movement in paralyzed animals by focusing on the intended targets of movement rather than their intermediate trajectories. We then match this information in real-time with spinal cord and muscle stimulation parameters that produce free planar limb movements to those intended target locations. We demonstrate that both the decoded activities of pre-motor populations and their adaptive responses can be used, after brief training, to effectively direct an avatar’s limb to distinct targets variably displayed on a screen. These findings advance the future possibility of reconstituting targeted limb movement in paralyzed subjects. PMID:24549394

  1. Video time encoding machines.

    PubMed

    Lazar, Aurel A; Pnevmatikakis, Eftychios A

    2011-03-01

    We investigate architectures for time encoding and time decoding of visual stimuli such as natural and synthetic video streams (movies, animation). The architecture for time encoding is akin to models of the early visual system. It consists of a bank of filters in cascade with single-input multi-output neural circuits. Neuron firing is based on either a threshold-and-fire or an integrate-and-fire spiking mechanism with feedback. We show that analog information is represented by the neural circuits as projections on a set of band-limited functions determined by the spike sequence. Under Nyquist-type and frame conditions, the encoded signal can be recovered from these projections with arbitrary precision. For the video time encoding machine architecture, we demonstrate that band-limited video streams of finite energy can be faithfully recovered from the spike trains and provide a stable algorithm for perfect recovery. The key condition for recovery calls for the number of neurons in the population to be above a threshold value.
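
    A minimal sketch of a single integrate-and-fire time encoding channel: the input plus a bias is integrated, a spike time is emitted whenever the integral crosses a threshold, and the integrator resets by subtraction. Parameter values are illustrative; the full video architecture feeds a bank of such circuits from spatiotemporal filters.

    ```python
    import numpy as np

    def iaf_encode(u, dt, bias=1.5, kappa=1.0, delta=0.02):
        """Return spike times for input samples u with step dt (bias > max|u| guarantees firing)."""
        integral, spikes = 0.0, []
        for k, uk in enumerate(u):
            integral += dt * (uk + bias) / kappa   # leak-free integration of input plus bias
            if integral >= delta:
                spikes.append(k * dt)              # threshold crossing encodes the signal in time
                integral -= delta                  # reset by subtraction keeps the residue
        return np.array(spikes)

    t = np.arange(0, 0.2, 1e-4)
    u = 0.8 * np.sin(2 * np.pi * 25 * t)           # a band-limited test stimulus
    spikes = iaf_encode(u, dt=1e-4)
    print(f"{len(spikes)} spikes; mean inter-spike interval {np.mean(np.diff(spikes)) * 1e3:.2f} ms")
    ```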

  2. Video Time Encoding Machines

    PubMed Central

    Lazar, Aurel A.; Pnevmatikakis, Eftychios A.

    2013-01-01

    We investigate architectures for time encoding and time decoding of visual stimuli such as natural and synthetic video streams (movies, animation). The architecture for time encoding is akin to models of the early visual system. It consists of a bank of filters in cascade with single-input multi-output neural circuits. Neuron firing is based on either a threshold-and-fire or an integrate-and-fire spiking mechanism with feedback. We show that analog information is represented by the neural circuits as projections on a set of band-limited functions determined by the spike sequence. Under Nyquist-type and frame conditions, the encoded signal can be recovered from these projections with arbitrary precision. For the video time encoding machine architecture, we demonstrate that band-limited video streams of finite energy can be faithfully recovered from the spike trains and provide a stable algorithm for perfect recovery. The key condition for recovery calls for the number of neurons in the population to be above a threshold value. PMID:21296708

  3. Toward more versatile and intuitive cortical brain machine interfaces

    PubMed Central

    Andersen, Richard A.; Kellis, Spencer; Klaes, Christian; Aflalo, Tyson

    2015-01-01

    Brain machine interfaces have great potential in neuroprosthetic applications to assist patients with brain injury and neurodegenerative diseases. One type of BMI is a cortical motor prosthetic which is used to assist paralyzed subjects. Motor prosthetics to date have typically used the motor cortex as a source of neural signals for controlling external devices. The review will focus on several new topics in the arena of cortical prosthetics. These include using 1) recordings from cortical areas outside motor cortex; 2) local field potentials (LFPs) as a source of recorded signals; 3) somatosensory feedback for more dexterous control of robotics; and 4) new decoding methods that work in concert to form an ecology of decode algorithms. These new advances hold promise in greatly accelerating the applicability and ease of operation of motor prosthetics. PMID:25247368

  4. Adaptive decoding of convolutional codes

    NASA Astrophysics Data System (ADS)

    Hueske, K.; Geldmacher, J.; Götze, J.

    2007-06-01

    Convolutional codes, which are frequently used as error correction codes in digital transmission systems, are generally decoded using the Viterbi Decoder. On the one hand, the Viterbi Decoder is an optimum maximum likelihood decoder, i.e. the most probable transmitted code sequence is obtained. On the other hand, the mathematical complexity of the algorithm depends only on the code used, not on the number of transmission errors. To reduce the complexity of the decoding process under good transmission conditions, an alternative syndrome-based decoder is presented. The reduction in complexity is realized by two different approaches: syndrome zero-sequence deactivation and path metric equalization. The two approaches enable easy adaptation of the decoding complexity to different transmission conditions, which results in a trade-off between decoding complexity and error correction performance.
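
    The syndrome idea that underlies such reduced-complexity decoders can be sketched in a few lines: for a rate-1/2 convolutional code with generator taps g1 and g2, the two transmitted streams satisfy v1*g2 + v2*g1 = 0 (mod 2), so an all-zero syndrome computed from the hard-decision received streams indicates that no detectable errors occurred and the full trellis decoder can stay idle. The Python sketch below is a generic illustration of this check, not the paper's syndrome decoder; the generators and message are placeholders.

      import numpy as np

      def conv_encode(u, g1, g2):
          """Rate-1/2 convolutional encoder: the two output streams are the mod-2
          convolutions of the message bits with the generator taps g1 and g2."""
          return np.convolve(u, g1) % 2, np.convolve(u, g2) % 2

      def syndrome(r1, r2, g1, g2):
          """Syndrome former: s = r1*g2 + r2*g1 (mod 2) is all-zero exactly when the
          received hard decisions form a valid codeword pair, independent of the message."""
          return (np.convolve(r1, g2) + np.convolve(r2, g1)) % 2

      g1, g2 = [1, 1, 1], [1, 0, 1]                    # the classic (7, 5) generators, K = 3
      u = np.random.randint(0, 2, 50)                  # random message bits
      v1, v2 = conv_encode(u, g1, g2)

      print(syndrome(v1, v2, g1, g2).any())            # False: zero syndrome, decoder can stay idle
      v1[10] ^= 1                                      # a single transmission error
      print(syndrome(v1, v2, g1, g2).any())            # True: run the full error-correcting decoder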

  5. An information-theoretic approach to motor action decoding with a reconfigurable parallel architecture.

    PubMed

    Craciun, Stefan; Brockmeier, Austin J; George, Alan D; Lam, Herman; Príncipe, José C

    2011-01-01

    Methods for decoding movements from neural spike counts using adaptive filters often rely on minimizing the mean-squared error. However, for non-Gaussian distribution of errors, this approach is not optimal for performance. Therefore, rather than using probabilistic modeling, we propose an alternate non-parametric approach. In order to extract more structure from the input signal (neuronal spike counts) we propose using minimum error entropy (MEE), an information-theoretic approach that minimizes the error entropy as part of an iterative cost function. However, the disadvantage of using MEE as the cost function for adaptive filters is the increase in computational complexity. In this paper we present a comparison between the decoding performance of the analytic Wiener filter and a linear filter trained with MEE, which is then mapped to a parallel architecture in reconfigurable hardware tailored to the computational needs of the MEE filter. We observe considerable speedup from the hardware design. The adaptation of filter weights for the multiple-input, multiple-output linear filters, necessary in motor decoding, is a highly parallelizable algorithm. It can be decomposed into many independent computational blocks with a parallel architecture readily mapped to a field-programmable gate array (FPGA) and scales to large numbers of neurons. By pipelining and parallelizing independent computations in the algorithm, the proposed parallel architecture has sublinear increases in execution time with respect to both window size and filter order.
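
    A minimal sketch of the minimum error entropy idea, assuming a batch setting and a Gaussian Parzen window (the paper's windowed, FPGA-parallel implementation is not reproduced): the weights of a linear decoder are adapted by gradient ascent on the information potential of the errors, V(e) = (1/N^2) * sum_ij exp(-(e_i - e_j)^2 / (2*sigma^2)), starting from the analytic least-squares (Wiener) solution that the paper uses as a baseline. The kernel width, step size and synthetic data are illustrative only.

      import numpy as np

      def mee_train(X, d, w0, sigma=1.0, lr=0.1, epochs=300):
          """Refine linear weights by gradient ascent on the information potential
          V(e) = mean_ij exp(-(e_i - e_j)^2 / (2*sigma^2)) of the errors e = d - Xw,
          i.e. batch minimum-error-entropy training (constants folded into lr)."""
          n = X.shape[0]
          w = w0.astype(float).copy()
          for _ in range(epochs):
              e = d - X @ w
              de = e[:, None] - e[None, :]                    # pairwise error differences
              k = np.exp(-de ** 2 / (2 * sigma ** 2))         # Gaussian kernel on differences
              dx = X[:, None, :] - X[None, :, :]              # pairwise input differences
              grad = (k * de)[:, :, None] * dx                # dV/dw, up to constants
              w += lr * grad.sum(axis=(0, 1)) / (n ** 2 * sigma ** 2)
          return w

      # toy decoding problem: spike-count features -> velocity, with heavy-tailed noise
      rng = np.random.default_rng(0)
      X = rng.poisson(3.0, size=(200, 8)).astype(float)
      w_true = rng.normal(size=8)
      d = X @ w_true + rng.standard_t(df=2, size=200)         # non-Gaussian errors

      w_wiener = np.linalg.lstsq(X, d, rcond=None)[0]         # analytic least-squares (Wiener) filter
      w_mee = mee_train(X, d, w0=w_wiener)                    # MEE-refined filter
      print(np.round(w_true, 2))
      print(np.round(w_wiener, 2))
      print(np.round(w_mee, 2))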

  6. The Bayesian Decoding of Force Stimuli from Slowly Adapting Type I Fibers in Humans.

    PubMed

    Kasi, Patrick; Wright, James; Khamis, Heba; Birznieks, Ingvars; van Schaik, André

    2016-01-01

    It is well known that signals encoded by mechanoreceptors facilitate precise object manipulation in humans. It is therefore of interest to study signals encoded by the mechanoreceptors because this will contribute further towards the understanding of fundamental sensory mechanisms that are responsible for coordinating force components during object manipulation. From a practical point of view, this may suggest strategies for designing sensory-controlled biomedical devices and robotic manipulators. We use a two-stage nonlinear decoding paradigm to reconstruct the force stimulus given signals from slowly adapting type one (SA-I) tactile afferents. First, we describe a nonhomogeneous Poisson encoding model which is a function of the force stimulus and the force's rate of change. In the decoding phase, we use a recursive nonlinear Bayesian filter to reconstruct the force profile, given the SA-I spike patterns and parameters described by the encoding model. Under the current encoding model, the mode ratio of force to its derivative is 1.26 to 1.02. This indicates that the force derivative contributes significantly to the rate of change of the SA-I afferent spike modulation. Furthermore, recursive Bayesian decoding algorithms are advantageous because they can incorporate past and current information to make predictions, consistent with neural systems, with few computational resources. This makes them suitable for interfacing with prostheses.
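
    A minimal sketch of the two-stage paradigm, assuming a simple random-walk prior and a bootstrap particle filter in place of the paper's recursive Bayesian filter: spike counts are generated from a nonhomogeneous Poisson rate driven by force and its derivative, and the force profile is then reconstructed recursively from those counts. All coefficients and noise scales below are illustrative placeholders, not fitted SA-I parameters.

      import numpy as np

      rng = np.random.default_rng(1)
      dt, T = 0.005, 4.0
      t = np.arange(0, T, dt)
      force = 0.75 * (1 - np.cos(2 * np.pi * t / T))          # smooth press-and-release profile
      dforce = np.gradient(force, dt)

      # stage 1 -- encoding: nonhomogeneous Poisson rate driven by force and its rate of change
      a, b, c = 1.0, 1.26, 1.02                               # illustrative coefficients
      spikes = rng.poisson(np.exp(a + b * force + c * dforce) * dt)

      # stage 2 -- decoding: bootstrap particle filter over latent force and its derivative
      n_part = 2000
      f, df = np.zeros(n_part), np.zeros(n_part)
      estimate = np.zeros_like(t)
      for k in range(len(t)):
          df = df + 0.05 * rng.standard_normal(n_part)        # random-walk prior on the derivative
          f = f + df * dt
          lam = np.exp(a + b * f + c * df) * dt
          w = np.exp(spikes[k] * np.log(lam) - lam)           # Poisson likelihood of the bin count
          w /= w.sum()
          estimate[k] = np.sum(w * f)                         # posterior-mean force
          idx = rng.choice(n_part, n_part, p=w)               # resample
          f, df = f[idx], df[idx]

      print("reconstruction RMSE:", round(float(np.sqrt(np.mean((estimate - force) ** 2))), 3))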

  7. The Bayesian Decoding of Force Stimuli from Slowly Adapting Type I Fibers in Humans

    PubMed Central

    Wright, James; Khamis, Heba; Birznieks, Ingvars; van Schaik, André

    2016-01-01

    It is well known that signals encoded by mechanoreceptors facilitate precise object manipulation in humans. It is therefore of interest to study signals encoded by the mechanoreceptors because this will contribute further towards the understanding of fundamental sensory mechanisms that are responsible for coordinating force components during object manipulation. From a practical point of view, this may suggest strategies for designing sensory-controlled biomedical devices and robotic manipulators. We use a two-stage nonlinear decoding paradigm to reconstruct the force stimulus given signals from slowly adapting type one (SA-I) tactile afferents. First, we describe a nonhomogeneous Poisson encoding model which is a function of the force stimulus and the force’s rate of change. In the decoding phase, we use a recursive nonlinear Bayesian filter to reconstruct the force profile, given the SA-I spike patterns and parameters described by the encoding model. Under the current encoding model, the mode ratio of force to its derivative is 1.26 to 1.02. This indicates that the force derivative contributes significantly to the rate of change of the SA-I afferent spike modulation. Furthermore, recursive Bayesian decoding algorithms are advantageous because they can incorporate past and current information to make predictions, consistent with neural systems, with few computational resources. This makes them suitable for interfacing with prostheses. PMID:27077750

  8. LDPC-based iterative joint source-channel decoding for JPEG2000.

    PubMed

    Pu, Lingling; Wu, Zhenyu; Bilgin, Ali; Marcellin, Michael W; Vasic, Bane

    2007-02-01

    A framework is proposed for iterative joint source-channel decoding of JPEG2000 codestreams. At the encoder, JPEG2000 is used to perform source coding with certain error-resilience (ER) modes, and LDPC codes are used to perform channel coding. During decoding, the source decoder uses the ER modes to identify corrupt sections of the codestream and provides this information to the channel decoder. Decoding is carried out jointly in an iterative fashion. Experimental results indicate that the proposed method requires fewer iterations and improves overall system performance.

  9. Adaptive Distributed Video Coding with Correlation Estimation using Expectation Propagation

    PubMed Central

    Cui, Lijuan; Wang, Shuang; Jiang, Xiaoqian; Cheng, Samuel

    2013-01-01

    Distributed video coding (DVC) is rapidly increasing in popularity because it shifts complexity from the encoder to the decoder while, at least in theory, sacrificing no compression performance. In contrast with conventional video codecs, the inter-frame correlation in DVC is exploited at the decoder, based on the received syndromes of the Wyner-Ziv (WZ) frame and a side information (SI) frame generated from other frames available only at the decoder. However, the ultimate decoding performance of DVC rests on the assumption that perfect knowledge of the correlation statistics between the WZ and SI frames is available at the decoder. Therefore, the ability to obtain a good correlation estimate is becoming increasingly important in practical DVC implementations. Generally, existing correlation estimation methods in DVC can be classified into two main types: pre-estimation, where estimation starts before decoding, and on-the-fly (OTF) estimation, where the estimate is refined iteratively during decoding. As changes between frames can be unpredictable or dynamic, OTF estimation methods usually outperform pre-estimation techniques, at the cost of increased decoding complexity (e.g., with sampling-based methods). In this paper, we propose a low-complexity adaptive DVC scheme using expectation propagation (EP), where correlation estimation is performed OTF, jointly with decoding of the factor-graph-based DVC code. Among approximate inference methods, EP generally offers a better tradeoff between accuracy and complexity. Experimental results show that the proposed scheme outperforms the benchmark state-of-the-art DISCOVER codec and other schemes without correlation tracking, and achieves decoding performance comparable to the sampling method at significantly lower complexity. PMID:23750314

  10. Adaptive distributed video coding with correlation estimation using expectation propagation

    NASA Astrophysics Data System (ADS)

    Cui, Lijuan; Wang, Shuang; Jiang, Xiaoqian; Cheng, Samuel

    2012-10-01

    Distributed video coding (DVC) is rapidly increasing in popularity because it shifts complexity from the encoder to the decoder while, at least in theory, sacrificing no compression performance. In contrast with conventional video codecs, the inter-frame correlation in DVC is exploited at the decoder, based on the received syndromes of the Wyner-Ziv (WZ) frame and a side information (SI) frame generated from other frames available only at the decoder. However, the ultimate decoding performance of DVC rests on the assumption that perfect knowledge of the correlation statistics between the WZ and SI frames is available at the decoder. Therefore, the ability to obtain a good correlation estimate is becoming increasingly important in practical DVC implementations. Generally, existing correlation estimation methods in DVC can be classified into two main types: pre-estimation, where estimation starts before decoding, and on-the-fly (OTF) estimation, where the estimate is refined iteratively during decoding. As changes between frames can be unpredictable or dynamic, OTF estimation methods usually outperform pre-estimation techniques, at the cost of increased decoding complexity (e.g., with sampling-based methods). In this paper, we propose a low-complexity adaptive DVC scheme using expectation propagation (EP), where correlation estimation is performed OTF, jointly with decoding of the factor-graph-based DVC code. Among approximate inference methods, EP generally offers a better tradeoff between accuracy and complexity. Experimental results show that the proposed scheme outperforms the benchmark state-of-the-art DISCOVER codec and other schemes without correlation tracking, and achieves decoding performance comparable to the sampling method at significantly lower complexity.

  11. Adaptive Distributed Video Coding with Correlation Estimation using Expectation Propagation.

    PubMed

    Cui, Lijuan; Wang, Shuang; Jiang, Xiaoqian; Cheng, Samuel

    2012-10-15

    Distributed video coding (DVC) is rapidly increasing in popularity because it shifts complexity from the encoder to the decoder while, at least in theory, sacrificing no compression performance. In contrast with conventional video codecs, the inter-frame correlation in DVC is exploited at the decoder, based on the received syndromes of the Wyner-Ziv (WZ) frame and a side information (SI) frame generated from other frames available only at the decoder. However, the ultimate decoding performance of DVC rests on the assumption that perfect knowledge of the correlation statistics between the WZ and SI frames is available at the decoder. Therefore, the ability to obtain a good correlation estimate is becoming increasingly important in practical DVC implementations. Generally, existing correlation estimation methods in DVC can be classified into two main types: pre-estimation, where estimation starts before decoding, and on-the-fly (OTF) estimation, where the estimate is refined iteratively during decoding. As changes between frames can be unpredictable or dynamic, OTF estimation methods usually outperform pre-estimation techniques, at the cost of increased decoding complexity (e.g., with sampling-based methods). In this paper, we propose a low-complexity adaptive DVC scheme using expectation propagation (EP), where correlation estimation is performed OTF, jointly with decoding of the factor-graph-based DVC code. Among approximate inference methods, EP generally offers a better tradeoff between accuracy and complexity. Experimental results show that the proposed scheme outperforms the benchmark state-of-the-art DISCOVER codec and other schemes without correlation tracking, and achieves decoding performance comparable to the sampling method at significantly lower complexity.

  12. Factors that affect error potentials during a grasping task: toward a hybrid natural movement decoding BCI.

    PubMed

    Omedes, Jason; Schwarz, Andreas; Müller-Putz, Gernot R; Montesano, Luis

    2018-05-01

    This paper presents a hybrid BCI combining neural correlates of natural movements and interaction error-related potentials (ErrP) to perform a 3D reaching task. It focuses on the impact that design factors of such a hybrid BCI have on the ErrP signatures and on their classification. Approach. Users attempted to control a 3D virtual interface that simulated their own hand, to reach and grasp two different objects. Three factors of interest were modulated during the experimentation: (1) execution speed of the grasping, (2) type of grasping and (3) motor commands generated by motor imagery or real motion. Thirteen healthy subjects carried out the protocol. The peaks and latencies of the ErrP were analyzed for the different factors as well as the classification performance. Main results. ErrP are evoked for erroneous commands decoded from neural correlates of natural movements. The ANOVA analyses revealed that latency and magnitude of the most characteristic ErrP peaks were significantly influenced by the speed at which the grasping was executed, but not by the type of grasp. This resulted in a greater accuracy of single-trial decoding of errors for fast movements (75.65%) compared to slow ones (68.99%). Significance. Invariance of ErrP to different types of grasping movements and mental strategies proves this type of hybrid interface to be useful for the design of out-of-the-lab applications such as the operation/control of prostheses. Factors such as the speed of the movements have to be carefully tuned in order to optimize the performance of the system. © 2018 IOP Publishing Ltd.

  13. Mapping and Deciphering Neural Codes of NMDA Receptor-Dependent Fear Memory Engrams in the Hippocampus

    PubMed Central

    Tsien, Joe Z.

    2013-01-01

    Mapping and decoding brain activity patterns underlying learning and memory is of great interest and poses an immense challenge. At present, very little is known about many of the basic questions regarding the neural codes of memory: are fear memories retrieved during the freezing state or non-freezing state of the animals? How do individual memory traces give rise to a holistic, real-time associative memory engram? How are memory codes regulated by synaptic plasticity? Here, by applying high-density electrode arrays and dimensionality-reduction decoding algorithms, we investigate hippocampal CA1 activity patterns of trace fear conditioning memory code in inducible NMDA receptor knockout mice and their control littermates. Our analyses showed that the conditioned tone (CS) and unconditioned foot-shock (US) can evoke hippocampal ensemble responses in control and mutant mice. Yet, the temporal formats and contents of CA1 fear memory engrams differ significantly between the genotypes. The mutant mice with disabled NMDA receptor plasticity failed to generate CS-to-US or US-to-CS associative memory traces. Moreover, the mutant CA1 region lacked memory traces for “what at when” information that predicts the timing relationship between the conditioned tone and the foot shock. The degraded associative fear memory engram is further manifested in its lack of the intertwined and alternating temporal association between CS and US memory traces that is characteristic of the holistic memory recall in the wild-type animals. Therefore, our study has decoded real-time memory contents, the timing relationship between CS and US, and the temporal organizing patterns of fear memory engrams, and demonstrated how hippocampal memory codes are regulated by NMDA receptor synaptic plasticity. PMID:24302990

  14. Signal processing methods for reducing artifacts in microelectrode brain recordings caused by functional electrical stimulation

    NASA Astrophysics Data System (ADS)

    Young, D.; Willett, F.; Memberg, W. D.; Murphy, B.; Walter, B.; Sweet, J.; Miller, J.; Hochberg, L. R.; Kirsch, R. F.; Ajiboye, A. B.

    2018-04-01

    Objective. Functional electrical stimulation (FES) is a promising technology for restoring movement to paralyzed limbs. Intracortical brain-computer interfaces (iBCIs) have enabled intuitive control over virtual and robotic movements, and more recently over upper extremity FES neuroprostheses. However, electrical stimulation of muscles creates artifacts in intracortical microelectrode recordings that could degrade iBCI performance. Here, we investigate methods for reducing the cortically recorded artifacts that result from peripheral electrical stimulation. Approach. One participant in the BrainGate2 pilot clinical trial had two intracortical microelectrode arrays placed in the motor cortex, and thirty-six stimulating intramuscular electrodes placed in the muscles of the contralateral limb. We characterized intracortically recorded electrical artifacts during both intramuscular and surface stimulation. We compared the performance of three artifact reduction methods: blanking, common average reference (CAR) and linear regression reference (LRR), which creates channel-specific reference signals, composed of weighted sums of other channels. Main results. Electrical artifacts resulting from surface stimulation were 175× larger than baseline neural recordings (which were 110 µV peak-to-peak), while intramuscular stimulation artifacts were only 4× larger. The artifact waveforms were highly consistent across electrodes within each array. Application of LRR reduced artifact magnitudes to less than 10 µV and largely preserved the original neural feature values used for decoding. Unmitigated stimulation artifacts decreased iBCI decoding performance, but performance was almost completely recovered using LRR, which outperformed CAR and blanking and extracted useful neural information during stimulation artifact periods. Significance. The LRR method was effective at reducing electrical artifacts resulting from both intramuscular and surface FES, and almost completely restored iBCI decoding performance (>90% recovery for surface stimulation and full recovery for intramuscular stimulation). The results demonstrate that FES-induced artifacts can be easily mitigated in FES + iBCI systems by using LRR for artifact reduction, and suggest that the LRR method may also be useful in other noise reduction applications.
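
    A minimal sketch of the linear regression reference (LRR) idea, under simplifying assumptions: for each channel, a channel-specific reference is built as a least-squares-weighted sum of all other channels and subtracted, which removes an artifact that is (nearly) common across the array while largely leaving channel-specific activity intact. The channel grouping, fitting windows and other details of the study's implementation are not reproduced, and the data are synthetic.

      import numpy as np

      def lrr_clean(data, fit_mask):
          """Linear regression reference: for each channel, build a channel-specific
          reference as a weighted sum of all other channels (weights fit by least
          squares on the samples flagged by fit_mask) and subtract it.
          data: (n_samples, n_channels); fit_mask: boolean (n_samples,)."""
          cleaned = np.empty_like(data, dtype=float)
          for ch in range(data.shape[1]):
              others = np.delete(data, ch, axis=1)
              w, *_ = np.linalg.lstsq(others[fit_mask], data[fit_mask, ch], rcond=None)
              cleaned[:, ch] = data[:, ch] - others @ w
          return cleaned

      # toy demo: a large shared stimulation artifact on top of channel-specific activity
      rng = np.random.default_rng(0)
      n_samples, n_channels = 5000, 16
      artifact = 50.0 * np.sin(2 * np.pi * 0.02 * np.arange(n_samples))
      gains = rng.uniform(0.8, 1.2, n_channels)                 # per-channel artifact gain
      neural = rng.normal(0.0, 1.0, (n_samples, n_channels))    # channel-specific "neural" background
      raw = neural + artifact[:, None] * gains[None, :]

      clean = lrr_clean(raw, fit_mask=np.ones(n_samples, dtype=bool))
      print("RMS before / after LRR:", round(float(raw.std()), 1), round(float(clean.std()), 1))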

  15. Multi-stage decoding for multi-level block modulation codes

    NASA Technical Reports Server (NTRS)

    Lin, Shu; Kasami, Tadao

    1991-01-01

    Various types of multistage decoding for multilevel block modulation codes, in which the decoding of a component code at each stage can be either soft decision or hard decision, maximum likelihood or bounded distance, are discussed. Error performance for codes is analyzed for a memoryless additive channel based on various types of multi-stage decoding, and upper bounds on the probability of an incorrect decoding are derived. It was found that, if component codes of a multi-level modulation code and types of decoding at various stages are chosen properly, high spectral efficiency and large coding gain can be achieved with reduced decoding complexity. It was also found that the difference in performance between the suboptimum multi-stage soft decision maximum likelihood decoding of a modulation code and the single stage optimum decoding of the overall code is very small: only a fraction of a dB loss in SNR at a block error probability of 10^-6. Multi-stage decoding of multi-level modulation codes thus offers a way to achieve the best of three worlds: bandwidth efficiency, coding gain, and decoding complexity.

  16. Tail Biting Trellis Representation of Codes: Decoding and Construction

    NASA Technical Reports Server (NTRS)

    Shao, Rose Y.; Lin, Shu; Fossorier, Marc

    1999-01-01

    This paper presents two new iterative algorithms for decoding linear codes based on their tail biting trellises: one is unidirectional and the other bidirectional. Both algorithms are computationally efficient and achieve virtually optimum error performance with a small number of decoding iterations. They outperform all previous suboptimal decoding algorithms. The bidirectional algorithm also reduces decoding delay. Also presented in the paper is a method for constructing tail biting trellises for linear block codes.

  17. Classification of Land Cover and Land Use Based on Convolutional Neural Networks

    NASA Astrophysics Data System (ADS)

    Yang, Chun; Rottensteiner, Franz; Heipke, Christian

    2018-04-01

    Land cover describes the physical material of the earth's surface, whereas land use describes the socio-economic function of a piece of land. Land use information is typically collected in geospatial databases. As such databases become outdated quickly, an automatic update process is required. This paper presents a new approach to determine land cover and to classify land use objects based on convolutional neural networks (CNN). The input data are aerial images and derived data such as digital surface models. Firstly, we apply a CNN to determine the land cover for each pixel of the input image. We compare different CNN structures, all of them based on an encoder-decoder structure for obtaining dense class predictions. Secondly, we propose a new CNN-based methodology for the prediction of the land use label of objects from a geospatial database. In this context, we present a strategy for generating image patches of identical size from the input data, which are classified by a CNN. Again, we compare different CNN architectures. Our experiments show that an overall accuracy of up to 85.7% and 77.4% can be achieved for land cover and land use, respectively. The land cover classification makes a positive contribution to the classification of land use.
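
    A minimal encoder-decoder CNN of the kind described, sketched in PyTorch under placeholder assumptions (four input bands such as RGB plus a surface model, six land-cover classes, arbitrary channel widths); it is not the authors' network, but it shows how dense per-pixel class predictions are obtained from an image patch.

      import torch
      import torch.nn as nn

      class TinyEncoderDecoder(nn.Module):
          """Minimal encoder-decoder for dense (per-pixel) land-cover prediction."""
          def __init__(self, in_channels=4, n_classes=6):      # e.g. RGB + DSM bands, 6 cover classes
              super().__init__()
              self.encoder = nn.Sequential(
                  nn.Conv2d(in_channels, 32, 3, stride=2, padding=1), nn.ReLU(),
                  nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
              )
              self.decoder = nn.Sequential(
                  nn.ConvTranspose2d(64, 32, 2, stride=2), nn.ReLU(),
                  nn.ConvTranspose2d(32, n_classes, 2, stride=2),   # per-pixel class scores
              )

          def forward(self, x):
              return self.decoder(self.encoder(x))

      model = TinyEncoderDecoder()
      patch = torch.randn(1, 4, 128, 128)           # one image patch: 4 bands, 128 x 128 pixels
      logits = model(patch)                         # shape (1, 6, 128, 128)
      label_map = logits.argmax(dim=1)              # dense land-cover map for the patch
      print(logits.shape, label_map.shape)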

  18. Efficient digital implementation of a conductance-based globus pallidus neuron and the dynamics analysis

    NASA Astrophysics Data System (ADS)

    Yang, Shuangming; Wei, Xile; Deng, Bin; Liu, Chen; Li, Huiyan; Wang, Jiang

    2018-03-01

    Balance between the biological plausibility of dynamical activities and computational efficiency is one of the challenging problems in computational neuroscience and neural system engineering. This paper proposes a set of efficient methods for the hardware realization of a conductance-based neuron model with relevant dynamics, targeting the reproduction of biological behaviors with a low-cost implementation on a digital programmable platform; the methods can be applied to a wide range of conductance-based neuron models. Modified GP neuron models for efficient hardware implementation are presented to reproduce reliable pallidal dynamics, which decode the information of the basal ganglia and regulate movement-disorder-related voluntary activities. Implementation results on a field-programmable gate array (FPGA) demonstrate that the proposed techniques and models can reduce the resource cost significantly and reproduce the biological dynamics accurately. In addition, biological behaviors under weak network coupling are explored on the proposed platform, and a theoretical analysis is made to investigate the biological characteristics of the structured pallidal oscillator and network. The implementation techniques provide an essential step towards large-scale neural networks for exploring dynamical mechanisms in real time. Furthermore, the proposed methodology makes the FPGA-based system a powerful platform for the investigation of neurodegenerative diseases and the real-time control of bio-inspired neuro-robotics.
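
    To illustrate the kind of arithmetic that such a hardware realization must map to fixed-point pipelines, the sketch below integrates a generic conductance-based (Hodgkin-Huxley-type) neuron with the explicit Euler method; the classic squid-axon parameters are used only as a stand-in for the modified globus pallidus model, which is not reproduced here.

      import numpy as np

      # classic Hodgkin-Huxley parameters (squid axon), used as a generic stand-in
      C, gNa, gK, gL = 1.0, 120.0, 36.0, 0.3          # uF/cm^2, mS/cm^2
      ENa, EK, EL = 50.0, -77.0, -54.4                # mV

      def alpha_beta(V):
          """Voltage-dependent rate constants of the gating variables n, m, h."""
          an = 0.01 * (V + 55) / (1 - np.exp(-(V + 55) / 10))
          bn = 0.125 * np.exp(-(V + 65) / 80)
          am = 0.1 * (V + 40) / (1 - np.exp(-(V + 40) / 10))
          bm = 4.0 * np.exp(-(V + 65) / 18)
          ah = 0.07 * np.exp(-(V + 65) / 20)
          bh = 1.0 / (1 + np.exp(-(V + 35) / 10))
          return an, bn, am, bm, ah, bh

      dt, steps, I_inj = 0.01, 50000, 10.0            # ms, step count, uA/cm^2
      V, n, m, h = -65.0, 0.317, 0.053, 0.596         # resting state
      spikes = 0
      for _ in range(steps):
          an, bn, am, bm, ah, bh = alpha_beta(V)
          INa = gNa * m**3 * h * (V - ENa)
          IK = gK * n**4 * (V - EK)
          IL = gL * (V - EL)
          V_new = V + dt * (I_inj - INa - IK - IL) / C    # explicit Euler, as on hardware
          n += dt * (an * (1 - n) - bn * n)
          m += dt * (am * (1 - m) - bm * m)
          h += dt * (ah * (1 - h) - bh * h)
          if V < 0.0 <= V_new:                             # upward zero crossing = spike
              spikes += 1
          V = V_new

      print("spikes in", steps * dt, "ms:", spikes)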

  19. Decoding bipedal locomotion from the rat sensorimotor cortex

    NASA Astrophysics Data System (ADS)

    Rigosa, J.; Panarese, A.; Dominici, N.; Friedli, L.; van den Brand, R.; Carpaneto, J.; DiGiovanna, J.; Courtine, G.; Micera, S.

    2015-10-01

    Objective. Decoding forelimb movements from the firing activity of cortical neurons has been interfaced with robotic and prosthetic systems to replace lost upper limb functions in humans. Despite the potential of this approach to improve locomotion and facilitate gait rehabilitation, decoding lower limb movement from the motor cortex has received comparatively little attention. Here, we performed experiments to identify the type and amount of information that can be decoded from neuronal ensemble activity in the hindlimb area of the rat motor cortex during bipedal locomotor tasks. Approach. Rats were trained to stand, step on a treadmill, walk overground and climb staircases in a bipedal posture. To impose this gait, the rats were secured in a robotic interface that provided support against the direction of gravity and in the mediolateral direction, but behaved transparently in the forward direction. After completion of training, rats were chronically implanted with a micro-wire array spanning the left hindlimb motor cortex to record single and multi-unit activity, and bipolar electrodes into 10 muscles of the right hindlimb to monitor electromyographic signals. Whole-body kinematics, muscle activity, and neural signals were simultaneously recorded during execution of the trained tasks over multiple days of testing. Hindlimb kinematics, muscle activity, gait phases, and locomotor tasks were decoded using offline classification algorithms. Main results. We found that the stance and swing phases of gait and the locomotor tasks were detected with accuracies as robust as 90% in all rats. Decoded hindlimb kinematics and muscle activity exhibited a larger variability across rats and tasks. Significance. Our study shows that the rodent motor cortex contains useful information for lower limb neuroprosthetic development. However, brain-machine interfaces estimating gait phases or locomotor behaviors, instead of continuous variables such as limb joint positions or speeds, are likely to provide more robust control strategies for the design of such neuroprostheses.
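
    A minimal sketch of the offline gait-phase classification step, with synthetic binned spike counts in place of the recorded ensembles and linear discriminant analysis as a generic stand-in for the unspecified classification algorithms.

      import numpy as np
      from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
      from sklearn.model_selection import cross_val_score

      rng = np.random.default_rng(0)
      n_bins, n_units = 2000, 40
      phase = rng.integers(0, 2, n_bins)                     # 0 = stance, 1 = swing
      base = rng.uniform(1, 8, n_units)                      # baseline rate per unit (spikes/bin)
      mod = rng.normal(0, 0.4, n_units)                      # phase modulation per unit
      rates = base[None, :] * np.exp(mod[None, :] * phase[:, None])
      counts = rng.poisson(rates)                            # binned multi-unit spike counts

      acc = cross_val_score(LinearDiscriminantAnalysis(), counts, phase, cv=5)
      print("stance/swing decoding accuracy: %.2f +/- %.2f" % (acc.mean(), acc.std()))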

  20. The impact of command signal power distribution, processing delays, and speed scaling on neurally-controlled devices.

    PubMed

    Marathe, A R; Taylor, D M

    2015-08-01

    Decoding algorithms for brain-machine interfacing (BMI) are typically only optimized to reduce the magnitude of decoding errors. Our goal was to systematically quantify how four characteristics of BMI command signals impact closed-loop performance: (1) error magnitude, (2) distribution of different frequency components in the decoding errors, (3) processing delays, and (4) command gain. To systematically evaluate these different command features and their interactions, we used a closed-loop BMI simulator where human subjects used their own wrist movements to command the motion of a cursor to targets on a computer screen. Random noise with three different power distributions and four different relative magnitudes was added to the ongoing cursor motion in real time to simulate imperfect decoding. These error characteristics were tested with four different visual feedback delays and two velocity gains. Participants had significantly more trouble correcting for errors with a larger proportion of low-frequency, slow-time-varying components than they did with jittery, higher-frequency errors, even when the error magnitudes were equivalent. When errors were present, a movement delay often increased the time needed to complete the movement by an order of magnitude more than the delay itself. Scaling down the overall speed of the velocity command can actually speed up target acquisition time when low-frequency errors and delays are present. This study is the first to systematically evaluate how the combination of these four key command signal features (including the relatively-unexplored error power distribution) and their interactions impact closed-loop performance independent of any specific decoding method. The equations we derive relating closed-loop movement performance to these command characteristics can provide guidance on how best to balance these different factors when designing BMI systems. The equations reported here also provide an efficient way to compare a diverse range of decoding options offline.
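
    The command-signal manipulations can be illustrated open-loop (the human-in-the-loop element of the actual simulator is not reproduced): noise whose power sits at low or higher frequencies is added to a clean velocity command, and a feedback delay and gain are applied. The cutoff frequencies, noise power, delay and gain below are placeholders.

      import numpy as np
      from scipy.signal import butter, lfilter

      rng = np.random.default_rng(0)
      fs, T = 100, 10                                       # sample rate (Hz), duration (s)
      t = np.arange(0, T, 1 / fs)
      clean_vel = np.sin(2 * np.pi * 0.5 * t)               # intended cursor velocity (arbitrary units)

      def shaped_noise(cutoff_hz, rel_power):
          """White noise low-pass filtered at cutoff_hz and scaled so its power is
          rel_power times the power of the clean command."""
          b, a = butter(4, cutoff_hz / (fs / 2))
          n = lfilter(b, a, rng.standard_normal(len(t)))
          return n * np.sqrt(rel_power) * clean_vel.std() / n.std()

      delay = int(0.2 * fs)                                 # 200 ms visual feedback delay
      gain = 0.7                                            # speed scaling of the command

      for name, err in [("slow, drifting errors", shaped_noise(1.0, 0.5)),
                        ("jittery, fast errors", shaped_noise(20.0, 0.5))]:
          command = gain * (clean_vel + err)
          command = np.roll(command, delay)                 # crude model of the display delay
          command[:delay] = 0.0
          rms = np.sqrt(np.mean((command - clean_vel) ** 2))
          print(name, "-> RMS deviation from intent:", round(float(rms), 2))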

  1. The impact of command signal power distribution, processing delays, and speed scaling on neurally-controlled devices

    NASA Astrophysics Data System (ADS)

    Marathe, A. R.; Taylor, D. M.

    2015-08-01

    Objective. Decoding algorithms for brain-machine interfacing (BMI) are typically only optimized to reduce the magnitude of decoding errors. Our goal was to systematically quantify how four characteristics of BMI command signals impact closed-loop performance: (1) error magnitude, (2) distribution of different frequency components in the decoding errors, (3) processing delays, and (4) command gain. Approach. To systematically evaluate these different command features and their interactions, we used a closed-loop BMI simulator where human subjects used their own wrist movements to command the motion of a cursor to targets on a computer screen. Random noise with three different power distributions and four different relative magnitudes was added to the ongoing cursor motion in real time to simulate imperfect decoding. These error characteristics were tested with four different visual feedback delays and two velocity gains. Main results. Participants had significantly more trouble correcting for errors with a larger proportion of low-frequency, slow-time-varying components than they did with jittery, higher-frequency errors, even when the error magnitudes were equivalent. When errors were present, a movement delay often increased the time needed to complete the movement by an order of magnitude more than the delay itself. Scaling down the overall speed of the velocity command can actually speed up target acquisition time when low-frequency errors and delays are present. Significance. This study is the first to systematically evaluate how the combination of these four key command signal features (including the relatively-unexplored error power distribution) and their interactions impact closed-loop performance independent of any specific decoding method. The equations we derive relating closed-loop movement performance to these command characteristics can provide guidance on how best to balance these different factors when designing BMI systems. The equations reported here also provide an efficient way to compare a diverse range of decoding options offline.

  2. NP-hardness of decoding quantum error-correction codes

    NASA Astrophysics Data System (ADS)

    Hsieh, Min-Hsiu; Le Gall, François

    2011-05-01

    Although the theory of quantum error correction is intimately related to classical coding theory and, in particular, one can construct quantum error-correction codes (QECCs) from classical codes with the dual-containing property, this does not necessarily imply that the computational complexity of decoding QECCs is the same as that of their classical counterparts. Instead, decoding QECCs can be very different from decoding classical codes due to the degeneracy property. Intuitively, one expects that degeneracy would simplify decoding, since two different errors might not and need not be distinguished in order to correct them. However, we show that the general quantum decoding problem is NP-hard regardless of whether the quantum codes are degenerate or nondegenerate. This finding implies that no considerably fast decoding algorithm exists for the general quantum decoding problem, and it suggests the existence of a quantum cryptosystem based on the hardness of decoding QECCs.

  3. Trellises and Trellis-Based Decoding Algorithms for Linear Block Codes. Part 3: The MAP and Related Decoding Algorithms

    NASA Technical Reports Server (NTRS)

    Lin, Shu; Fossorier, Marc

    1998-01-01

    In a coded communication system with equiprobable signaling, MLD minimizes the word error probability and delivers the most likely codeword associated with the corresponding received sequence. This decoding has two drawbacks. First, minimization of the word error probability is not equivalent to minimization of the bit error probability. Therefore, MLD becomes suboptimum with respect to the bit error probability. Second, MLD delivers a hard-decision estimate of the received sequence, so that information is lost between the input and output of the ML decoder. This information is important in coded schemes where the decoded sequence is further processed, such as concatenated coding schemes and multi-stage and iterative decoding schemes. In this chapter, we first present a decoding algorithm which both minimizes the bit error probability and provides the corresponding soft information at the output of the decoder. This algorithm is referred to as the MAP (maximum a posteriori probability) decoding algorithm.

  4. Dissociation of Self-Motion and Object Motion by Linear Population Decoding That Approximates Marginalization

    PubMed Central

    Sasaki, Ryo; Angelaki, Dora E.

    2017-01-01

    We use visual image motion to judge the movement of objects, as well as our own movements through the environment. Generally, image motion components caused by object motion and self-motion are confounded in the retinal image. Thus, to estimate heading, the brain would ideally marginalize out the effects of object motion (or vice versa), but little is known about how this is accomplished neurally. Behavioral studies suggest that vestibular signals play a role in dissociating object motion and self-motion, and recent computational work suggests that a linear decoder can approximate marginalization by taking advantage of diverse multisensory representations. By measuring responses of MSTd neurons in two male rhesus monkeys and by applying a recently-developed method to approximate marginalization by linear population decoding, we tested the hypothesis that vestibular signals help to dissociate self-motion and object motion. We show that vestibular signals stabilize tuning for heading in neurons with congruent visual and vestibular heading preferences, whereas they stabilize tuning for object motion in neurons with discrepant preferences. Thus, vestibular signals enhance the separability of joint tuning for object motion and self-motion. We further show that a linear decoder, designed to approximate marginalization, allows the population to represent either self-motion or object motion with good accuracy. Decoder weights are broadly consistent with a readout strategy, suggested by recent computational work, in which responses are decoded according to the vestibular preferences of multisensory neurons. These results demonstrate, at both single neuron and population levels, that vestibular signals help to dissociate self-motion and object motion. SIGNIFICANCE STATEMENT The brain often needs to estimate one property of a changing environment while ignoring others. This can be difficult because multiple properties of the environment may be confounded in sensory signals. The brain can solve this problem by marginalizing over irrelevant properties to estimate the property-of-interest. We explore this problem in the context of self-motion and object motion, which are inherently confounded in the retinal image. We examine how diversity in a population of multisensory neurons may be exploited to decode self-motion and object motion from the population activity of neurons in macaque area MSTd. PMID:29030435
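
    A toy sketch of the linear-decoding idea, with simplified linear "tuning" standing in for MSTd responses: a heterogeneous population mixes heading and object-motion signals, and a linear readout for heading is fit while object motion varies across training trials, so the learned weights approximately marginalize out the nuisance variable. The tuning model and fitting procedure are illustrative assumptions, not the paper's analysis.

      import numpy as np

      rng = np.random.default_rng(0)
      n_neurons, n_trials = 120, 4000

      # each neuron mixes heading (self-motion) and object-motion signals with its own weights;
      # congruent and opposite cells arise from the sign relations between the two weights
      w_head = rng.normal(0, 1, n_neurons)
      w_obj = rng.normal(0, 1, n_neurons)

      heading = rng.uniform(-30, 30, n_trials)          # deg
      obj_motion = rng.uniform(-30, 30, n_trials)       # deg, nuisance variable
      R = (heading[:, None] * w_head[None, :]
           + obj_motion[:, None] * w_obj[None, :]
           + rng.normal(0, 5, (n_trials, n_neurons)))   # trial-by-neuron responses

      # linear readout for heading, trained while object motion varies freely:
      # the fitted weights approximately marginalize out the nuisance dimension
      W, *_ = np.linalg.lstsq(R, heading, rcond=None)

      # test on new trials with a fixed, non-zero object motion
      heading_test = rng.uniform(-30, 30, 500)
      R_test = (heading_test[:, None] * w_head[None, :]
                + 20.0 * w_obj[None, :]
                + rng.normal(0, 5, (500, n_neurons)))
      err = R_test @ W - heading_test
      print("heading decoding RMSE with 20 deg object motion:",
            round(float(np.sqrt(np.mean(err ** 2))), 2))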

  5. Adaptive estimation of hand movement trajectory in an EEG based brain-computer interface system

    NASA Astrophysics Data System (ADS)

    Robinson, Neethu; Guan, Cuntai; Vinod, A. P.

    2015-12-01

    Objective. The various parameters that define a hand movement, such as its trajectory, speed, etc., are encoded in distinct brain activities. Decoding this information from neurophysiological recordings is a less explored area of brain-computer interface (BCI) research. Applying non-invasive recordings such as electroencephalography (EEG) for decoding makes the problem more challenging, as the encoding is assumed to be deep within the brain and not easily accessible by scalp recordings. Approach. EEG based BCI systems can be developed to identify the neural features underlying movement parameters that can be further utilized to provide a detailed and well defined control command set to a BCI output device. Real-time continuous control is better suited for practical BCI systems, and can be achieved by continuous adaptive reconstruction of the movement trajectory rather than by discrete brain activity classification. In this work, we adaptively reconstruct/estimate the parameters of two-dimensional hand movement trajectory, namely movement speed and position, from multi-channel EEG recordings. The data for analysis are collected by performing an experiment that involved center-out right-hand movement tasks in four different directions at two different speeds in random order. We estimate movement trajectory using a Kalman filter that models the relation between brain activity and the recorded parameters based on a set of defined predictors. We propose a method to define these predictor variables, including spatially, spectrally and temporally localized neural information, and to select optimally informative variables. Main results. The proposed method yielded a correlation of 0.60 ± 0.07 between recorded and estimated data. Further, incorporating the proposed predictor subset selection, the correlation achieved is 0.57 ± 0.07 (p < 0.004), with a significant gain in the stability of the system as well as a dramatic reduction (76%) in the number of predictors, which saves computational time. Significance. The proposed approach provides a real-time movement control system using EEG-BCI with control over movement speed and position. These results are higher than those of existing techniques in EEG based systems and are statistically significant, and thus promise the applicability of the proposed method for efficient estimation of movement parameters and for continuous motor control.
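
    A minimal sketch of the Kalman-filter decoding step, assuming a linear observation model fit by least squares on simulated data (the paper's EEG predictor definition and subset selection are not reproduced): the filter recursively reconstructs two-dimensional position and velocity from a vector of neural predictor features.

      import numpy as np

      rng = np.random.default_rng(0)
      dt, n_steps, n_feat = 0.05, 600, 12

      # simulate a smooth 2-D hand trajectory (state: x, y, vx, vy)
      A = np.eye(4); A[0, 2] = A[1, 3] = dt                       # constant-velocity dynamics
      Q = np.diag([1e-4, 1e-4, 1e-2, 1e-2])                       # process noise
      X = np.zeros((n_steps, 4))
      for k in range(1, n_steps):
          X[k] = A @ X[k - 1] + rng.multivariate_normal(np.zeros(4), Q)

      # simulate neural predictor features linearly related to the state
      H_true = rng.normal(0, 1, (n_feat, 4))
      Z = X @ H_true.T + rng.normal(0, 0.5, (n_steps, n_feat))

      # fit the observation model from "training" data, as a decoder would
      H, *_ = np.linalg.lstsq(X, Z, rcond=None)
      H = H.T
      R = np.cov((Z - X @ H.T).T)

      # Kalman filter: adaptively reconstruct position/velocity from the features
      x, P = np.zeros(4), np.eye(4)
      est = np.zeros_like(X)
      for k in range(n_steps):
          x, P = A @ x, A @ P @ A.T + Q                           # predict
          S = H @ P @ H.T + R
          K = P @ H.T @ np.linalg.inv(S)                          # Kalman gain
          x = x + K @ (Z[k] - H @ x)                              # update with neural features
          P = (np.eye(4) - K @ H) @ P
          est[k] = x

      corr = np.corrcoef(est[:, 0], X[:, 0])[0, 1]
      print("correlation between decoded and true x-position:", round(float(corr), 2))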

  6. Low complexity Reed-Solomon-based low-density parity-check design for software defined optical transmission system based on adaptive puncturing decoding algorithm

    NASA Astrophysics Data System (ADS)

    Pan, Xiaolong; Liu, Bo; Zheng, Jianglong; Tian, Qinghua

    2016-08-01

    We propose and demonstrate a low-complexity Reed-Solomon-based low-density parity-check (RS-LDPC) code with an adaptive puncturing decoding algorithm for elastic optical transmission systems. Part of the received code and the relevant columns of the parity-check matrix can be punctured to reduce the calculation complexity by adapting the parity-check matrix during the decoding process. The results show that the complexity of the proposed decoding algorithm is reduced by 30% compared with the regular RS-LDPC system. The optimized code rate of the RS-LDPC code can be obtained after five iterations.

  7. Superior arm-movement decoding from cortex with a new, unsupervised-learning algorithm

    NASA Astrophysics Data System (ADS)

    Makin, Joseph G.; O'Doherty, Joseph E.; Cardoso, Mariana M. B.; Sabes, Philip N.

    2018-04-01

    Objective. The aim of this work is to improve the state of the art for motor-control with a brain-machine interface (BMI). BMIs use neurological recording devices and decoding algorithms to transform brain activity directly into real-time control of a machine, archetypically a robotic arm or a cursor. The standard procedure treats neural activity—vectors of spike counts in small temporal windows—as noisy observations of the kinematic state (position, velocity, acceleration) of the fingertip. Inferring the state from the observations then takes the form of a dynamical filter, typically some variant on Kalman’s (KF). The KF, however, although fairly robust in practice, is optimal only when the relationships between variables are linear and the noise is Gaussian, conditions usually violated in practice. Approach. To overcome these limitations we introduce a new filter, the ‘recurrent exponential-family harmonium’ (rEFH), that models the spike counts explicitly as Poisson-distributed, and allows for arbitrary nonlinear dynamics and observation models. Furthermore, the model underlying the filter is acquired through unsupervised learning, which allows temporal correlations in spike counts to be explained by latent dynamics that do not necessarily correspond to the kinematic state of the fingertip. Main results. We test the rEFH on offline reconstruction of the kinematics of reaches in the plane. The rEFH outperforms the standard, as well as three other state-of-the-art, decoders, across three monkeys, two different tasks, most kinematic variables, and a range of bin widths, amounts of training data, and numbers of neurons. Significance. Our algorithm establishes a new state of the art for offline decoding of reaches—in particular, for fingertip velocities, the variable used for control in most online decoders.

  8. Toward more versatile and intuitive cortical brain-machine interfaces.

    PubMed

    Andersen, Richard A; Kellis, Spencer; Klaes, Christian; Aflalo, Tyson

    2014-09-22

    Brain-machine interfaces have great potential for the development of neuroprosthetic applications to assist patients suffering from brain injury or neurodegenerative disease. One type of brain-machine interface is a cortical motor prosthetic, which is used to assist paralyzed subjects. Motor prosthetics to date have typically used the motor cortex as a source of neural signals for controlling external devices. The review will focus on several new topics in the arena of cortical prosthetics. These include using: recordings from cortical areas outside motor cortex; local field potentials as a source of recorded signals; somatosensory feedback for more dexterous control of robotics; and new decoding methods that work in concert to form an ecology of decode algorithms. These new advances promise to greatly accelerate the applicability and ease of operation of motor prosthetics. Copyright © 2014 Elsevier Ltd. All rights reserved.

  9. Non-tables look-up search algorithm for efficient H.264/AVC context-based adaptive variable length coding decoding

    NASA Astrophysics Data System (ADS)

    Han, Yishi; Luo, Zhixiao; Wang, Jianhua; Min, Zhixuan; Qin, Xinyu; Sun, Yunlong

    2014-09-01

    In general, context-based adaptive variable length coding (CAVLC) decoding in the H.264/AVC standard requires frequent access to unstructured variable length coding tables (VLCTs), consuming a significant number of memory accesses. Heavy memory access causes high power consumption and time delays, which are serious problems for applications in portable multimedia devices. We propose a method for high-efficiency CAVLC decoding that uses a program instead of all the VLCTs. The decoded codeword can be obtained without any table look-up or memory access. The experimental results show that the proposed algorithm achieves a 100% memory access saving and a 40% decoding time saving without degrading video quality. Additionally, the proposed algorithm shows better performance compared with conventional CAVLC decoding methods, such as table look-up by sequential search, table look-up by binary search, Moon's method, and Kim's method.

  10. An Implantable Wireless Neural Interface for Recording Cortical Circuit Dynamics in Moving Primates

    PubMed Central

    Borton, David A.; Yin, Ming; Aceros, Juan; Nurmikko, Arto

    2013-01-01

    Objective Neural interface technology suitable for clinical translation has the potential to significantly impact the lives of amputees, spinal cord injury victims, and those living with severe neuromotor disease. Such systems must be chronically safe, durable, and effective. Approach We have designed and implemented a neural interface microsystem, housed in a compact, subcutaneous, and hermetically sealed titanium enclosure. The implanted device interfaces the brain with a 510k-approved, 100-element silicon-based MEA via a custom hermetic feedthrough design. Full spectrum neural signals were amplified (0.1 Hz to 7.8 kHz, 200× gain) and multiplexed by a custom application specific integrated circuit, digitized, and then packaged for transmission. The neural data (24 Mbps) were transmitted by a wireless data link carried on a frequency-shift-key-modulated signal at 3.2 GHz and 3.8 GHz to a receiver 1 meter away, by design a point-to-point communication link for human clinical use. The system was powered by an embedded medical grade rechargeable Li-ion battery for 7-hour continuous operation between recharges via an inductive transcutaneous wireless power link at 2 MHz. Main results Device verification and early validation were performed in both swine and non-human primate freely-moving animal models and showed that the wireless implant was electrically stable, effective in capturing and delivering broadband neural data, and safe for over one year of testing. In addition, we have used the multichannel data from these mobile animal models to demonstrate the ability to decode neural population dynamics associated with motor activity. Significance We have developed an implanted wireless broadband neural recording device evaluated in non-human primate and swine. The use of this new implantable neural interface technology can provide insight on how to advance human neuroprostheses beyond the present early clinical trials. Further, such tools enable mobile patient use, have the potential for wider diagnosis of neurological conditions, and will advance brain research. PMID:23428937

  11. An implantable wireless neural interface for recording cortical circuit dynamics in moving primates

    NASA Astrophysics Data System (ADS)

    Borton, David A.; Yin, Ming; Aceros, Juan; Nurmikko, Arto

    2013-04-01

    Objective. Neural interface technology suitable for clinical translation has the potential to significantly impact the lives of amputees, spinal cord injury victims and those living with severe neuromotor disease. Such systems must be chronically safe, durable and effective. Approach. We have designed and implemented a neural interface microsystem, housed in a compact, subcutaneous and hermetically sealed titanium enclosure. The implanted device interfaces the brain with a 510k-approved, 100-element silicon-based microelectrode array via a custom hermetic feedthrough design. Full spectrum neural signals were amplified (0.1 Hz to 7.8 kHz, 200× gain) and multiplexed by a custom application specific integrated circuit, digitized and then packaged for transmission. The neural data (24 Mbps) were transmitted by a wireless data link carried on a frequency-shift-key-modulated signal at 3.2 and 3.8 GHz to a receiver 1 m away by design as a point-to-point communication link for human clinical use. The system was powered by an embedded medical grade rechargeable Li-ion battery for 7 h continuous operation between recharge via an inductive transcutaneous wireless power link at 2 MHz. Main results. Device verification and early validation were performed in both swine and non-human primate freely-moving animal models and showed that the wireless implant was electrically stable, effective in capturing and delivering broadband neural data, and safe for over one year of testing. In addition, we have used the multichannel data from these mobile animal models to demonstrate the ability to decode neural population dynamics associated with motor activity. Significance. We have developed an implanted wireless broadband neural recording device evaluated in non-human primate and swine. The use of this new implantable neural interface technology can provide insight into how to advance human neuroprostheses beyond the present early clinical trials. Further, such tools enable mobile patient use, have the potential for wider diagnosis of neurological conditions and will advance brain research.

  12. Creating Concepts from Converging Features in Human Cortex

    PubMed Central

    Coutanche, Marc N.; Thompson-Schill, Sharon L.

    2015-01-01

    To make sense of the world around us, our brain must remember the overlapping features of millions of objects. Crucially, it must also represent each object's unique feature-convergence. Some theories propose that an integration area (or “convergence zone”) binds together separate features. We report an investigation of our knowledge of objects' features and identity, and the link between them. We used functional magnetic resonance imaging to record neural activity, as humans attempted to detect a cued fruit or vegetable in visual noise. Crucially, we analyzed brain activity before a fruit or vegetable was present, allowing us to interrogate top-down activity. We found that pattern-classification algorithms could be used to decode the detection target's identity in the left anterior temporal lobe (ATL), its shape in lateral occipital cortex, and its color in right V4. A novel decoding-dependency analysis revealed that identity information in left ATL was specifically predicted by the temporal convergence of shape and color codes in early visual regions. People with stronger feature-and-identity dependencies had more similar top-down and bottom-up activity patterns. These results fulfill three key requirements for a neural convergence zone: a convergence result (object identity), ingredients (color and shape), and the link between them. PMID:24692512

  13. Word pair classification during imagined speech using direct brain recordings

    NASA Astrophysics Data System (ADS)

    Martin, Stephanie; Brunner, Peter; Iturrate, Iñaki; Millán, José Del R.; Schalk, Gerwin; Knight, Robert T.; Pasley, Brian N.

    2016-05-01

    People that cannot communicate due to neurological disorders would benefit from an internal speech decoder. Here, we showed the ability to classify individual words during imagined speech from electrocorticographic signals. In a word imagery task, we used high gamma (70-150 Hz) time features with a support vector machine model to classify individual words from a pair of words. To account for temporal irregularities during speech production, we introduced a non-linear time alignment into the SVM kernel. Classification accuracy reached 88% in a two-class classification framework (50% chance level), and average classification accuracy across fifteen word-pairs was significant across five subjects (mean = 58%; p < 0.05). We also compared classification accuracy between imagined speech, overt speech and listening. As predicted, higher classification accuracy was obtained in the listening and overt speech conditions (mean = 89% and 86%, respectively; p < 0.0001), where speech stimuli were directly presented. The results provide evidence for a neural representation for imagined words in the temporal lobe, frontal lobe and sensorimotor cortex, consistent with previous findings in speech perception and production. These data represent a proof of concept study for basic decoding of speech imagery, and delineate a number of key challenges to usage of speech imagery neural representations for clinical applications.
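
    A minimal sketch of the kernel idea highlighted here, with the assumptions stated: a Gaussian kernel over a dynamic-time-warping distance stands in for the authors' non-linear time alignment, synthetic high-gamma envelopes stand in for the electrocorticographic features, and the kernel matrix is passed to scikit-learn's SVC as precomputed. Note that DTW-based kernels are not guaranteed to be positive definite; this is an illustration, not the published pipeline.

      import numpy as np
      from sklearn.svm import SVC
      from sklearn.model_selection import cross_val_score

      def dtw(a, b):
          """Dynamic-time-warping distance between two 1-D feature time courses."""
          n, m = len(a), len(b)
          D = np.full((n + 1, m + 1), np.inf)
          D[0, 0] = 0.0
          for i in range(1, n + 1):
              for j in range(1, m + 1):
                  cost = abs(a[i - 1] - b[j - 1])
                  D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
          return D[n, m]

      def dtw_kernel(X1, X2, gamma=0.05):
          """Gaussian kernel over DTW distances: similarity after time alignment."""
          return np.array([[np.exp(-gamma * dtw(a, b)) for b in X2] for a in X1])

      # toy word-pair task: each trial is a high-gamma envelope with a word-specific
      # temporal profile, jittered in time to mimic variable speech production
      rng = np.random.default_rng(0)
      t = np.linspace(0.0, 1.0, 50)
      trials, labels = [], []
      for word, center in ((0, 0.35), (1, 0.65)):
          for _ in range(20):
              shift = rng.uniform(-0.1, 0.1)
              env = np.exp(-((t - center - shift) ** 2) / 0.01) + 0.1 * rng.standard_normal(len(t))
              trials.append(env)
              labels.append(word)
      X, y = np.array(trials), np.array(labels)

      K = dtw_kernel(X, X)                                   # precomputed trial-by-trial kernel
      acc = cross_val_score(SVC(kernel="precomputed", C=1.0), K, y, cv=5)
      print("word-pair classification accuracy: %.2f (chance 0.50)" % acc.mean())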

  14. Neural population-level memory traces in the mouse hippocampus.

    PubMed

    Chen, Guifen; Wang, L Phillip; Tsien, Joe Z

    2009-12-16

    One of the fundamental goals in neurosciences is to elucidate the formation and retrieval of the brain's associative memory traces in real time. Here, we describe real-time neural ensemble transient dynamics in the mouse hippocampal CA1 region and demonstrate their relationships with behavioral performances during both learning and recall. We employed the classic trace fear conditioning paradigm involving a neutral tone followed by a mild foot-shock 20 seconds later. Our large-scale recording and decoding methods revealed that conditioned tone responses and tone-shock association patterns were not present in CA1 during the first pairing, but emerged quickly after multiple pairings. These encoding patterns showed increased immediate-replay, correlating tightly with increased immediate-freezing during learning. Moreover, during contextual recall, these patterns reappeared in tandem six-to-fourteen times per minute, again correlating tightly with behavioral recall. Upon traced tone recall, while various fear memories were retrieved, the shock traces exhibited a unique recall-peak around the 20-second trace interval, further signifying the memory of time for the expected shock. Therefore, our study has revealed various real-time associative memory traces during learning and recall in CA1, and demonstrates that real-time memory traces can be decoded on a moment-to-moment basis over any single trial.

  15. Word pair classification during imagined speech using direct brain recordings

    PubMed Central

    Martin, Stephanie; Brunner, Peter; Iturrate, Iñaki; Millán, José del R.; Schalk, Gerwin; Knight, Robert T.; Pasley, Brian N.

    2016-01-01

    People that cannot communicate due to neurological disorders would benefit from an internal speech decoder. Here, we showed the ability to classify individual words during imagined speech from electrocorticographic signals. In a word imagery task, we used high gamma (70–150 Hz) time features with a support vector machine model to classify individual words from a pair of words. To account for temporal irregularities during speech production, we introduced a non-linear time alignment into the SVM kernel. Classification accuracy reached 88% in a two-class classification framework (50% chance level), and average classification accuracy across fifteen word-pairs was significant across five subjects (mean = 58%; p < 0.05). We also compared classification accuracy between imagined speech, overt speech and listening. As predicted, higher classification accuracy was obtained in the listening and overt speech conditions (mean = 89% and 86%, respectively; p < 0.0001), where speech stimuli were directly presented. The results provide evidence for a neural representation for imagined words in the temporal lobe, frontal lobe and sensorimotor cortex, consistent with previous findings in speech perception and production. These data represent a proof of concept study for basic decoding of speech imagery, and delineate a number of key challenges to usage of speech imagery neural representations for clinical applications. PMID:27165452

  16. Semantic attributes are encoded in human electrocorticographic signals during visual object recognition.

    PubMed

    Rupp, Kyle; Roos, Matthew; Milsap, Griffin; Caceres, Carlos; Ratto, Christopher; Chevillet, Mark; Crone, Nathan E; Wolmetz, Michael

    2017-03-01

    Non-invasive neuroimaging studies have shown that semantic category and attribute information are encoded in neural population activity. Electrocorticography (ECoG) offers several advantages over non-invasive approaches, but the degree to which semantic attribute information is encoded in ECoG responses is not known. We recorded ECoG while patients named objects from 12 semantic categories and then trained high-dimensional encoding models to map semantic attributes to spectral-temporal features of the task-related neural responses. Using these semantic attribute encoding models, untrained objects were decoded with accuracies comparable to whole-brain functional Magnetic Resonance Imaging (fMRI), and we observed that high-gamma activity (70-110Hz) at basal occipitotemporal electrodes was associated with specific semantic dimensions (manmade-animate, canonically large-small, and places-tools). Individual patient results were in close agreement with reports from other imaging modalities on the time course and functional organization of semantic processing along the ventral visual pathway during object recognition. The semantic attribute encoding model approach is critical for decoding objects absent from a training set, as well as for studying complex semantic encodings without artificially restricting stimuli to a small number of semantic categories. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.
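
    A minimal sketch of the general encoding-model strategy described above, assuming synthetic data: fit a linear (ridge) map from semantic attribute vectors to neural features, then identify untrained objects by matching predicted to observed responses. Dimensions, the ridge penalty, and the identification rule are illustrative choices, not the authors' pipeline.

      import numpy as np

      rng = np.random.default_rng(1)

      # hypothetical data: each object has a semantic attribute vector (manmade,
      # animate, size, ...) and an observed neural feature vector (spectral-temporal
      # features flattened across electrodes and time bins)
      n_objects, n_attr, n_feat = 60, 12, 200
      S = rng.standard_normal((n_objects, n_attr))                     # semantic attributes
      W_true = rng.standard_normal((n_attr, n_feat))
      R = S @ W_true + 0.5 * rng.standard_normal((n_objects, n_feat))  # neural responses

      train, test = np.arange(50), np.arange(50, 60)

      # ridge-regression encoding model: attributes -> neural features
      lam = 1.0
      A = S[train]
      W = np.linalg.solve(A.T @ A + lam * np.eye(n_attr), A.T @ R[train])

      def corr(a, b):
          a, b = a - a.mean(), b - b.mean()
          return (a @ b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)

      # decode untrained objects: pick the candidate whose predicted response
      # correlates best with the observed response
      pred = S[test] @ W
      correct = 0
      for i, obs in enumerate(R[test]):
          scores = [corr(p, obs) for p in pred]
          correct += int(np.argmax(scores) == i)
      print("identification accuracy on untrained objects:", correct / len(test))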

  17. Hyperspectral imaging-based credit card verifier structure with adaptive learning.

    PubMed

    Sumriddetchkajorn, Sarun; Intaravanne, Yuttana

    2008-12-10

    We propose and experimentally demonstrate a hyperspectral imaging-based optical structure for verifying a credit card. Our key idea comes from the fact that the fine detail of the embossed hologram stamped on the credit card is hard to duplicate, and therefore its key color features can be used for distinguishing between the real and counterfeit ones. As the embossed hologram is a diffractive optical element, we shine a number of broadband light sources one at a time, each at a different incident angle, on the embossed hologram of the credit card in such a way that different color spectra per incident angle beam are diffracted and separated in space. In this way, the center of mass of the histogram on each color plane is investigated by using a feed-forward backpropagation neural-network configuration. Our experimental demonstration using two off-the-shelf broadband white light emitting diodes, one digital camera, and a three-layer neural network can effectively identify 38 genuine and 109 counterfeit credit cards with false rejection rates of 5.26% and 0.92%, respectively. Key features include low cost, simplicity, no moving parts, no need of an additional decoding key, and adaptive learning.
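
    The feature extraction and classifier described above can be sketched compactly: a histogram centre of mass per colour plane and illumination angle, fed to a small feedforward network. The toy images, network size, and split below are assumptions, and scikit-learn's MLPClassifier stands in for the three-layer backpropagation network.

      import numpy as np
      from sklearn.neural_network import MLPClassifier

      def centre_of_mass_features(image_stack):
          """image_stack: (n_angles, H, W, 3) uint8, one image per illumination angle.
          For each angle and colour plane, return the centre of mass of the
          256-bin intensity histogram (one scalar per plane)."""
          feats = []
          for img in image_stack:
              for c in range(3):
                  hist, _ = np.histogram(img[..., c], bins=256, range=(0, 256))
                  levels = np.arange(256)
                  feats.append((hist * levels).sum() / (hist.sum() + 1e-12))
          return np.array(feats)   # length = n_angles * 3

      # toy data standing in for hologram images under two LED angles
      rng = np.random.default_rng(2)
      def fake_card(genuine):
          base = 90 if genuine else 140   # pretend the holograms differ in brightness
          return rng.normal(base, 30, size=(2, 64, 64, 3)).clip(0, 255).astype(np.uint8)

      X = np.array([centre_of_mass_features(fake_card(g)) for g in [True] * 40 + [False] * 40])
      y = np.array([1] * 40 + [0] * 40)
      idx = rng.permutation(80)
      X, y = X[idx], y[idx]

      clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
      clf.fit(X[:60], y[:60])
      print("held-out accuracy:", clf.score(X[60:], y[60:]))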

  18. Neural representations of the concepts in simple sentences: Concept activation prediction and context effects.

    PubMed

    Just, Marcel Adam; Wang, Jing; Cherkassky, Vladimir L

    2017-08-15

    Although it has been possible to identify individual concepts from a concept's brain activation pattern, there have been significant obstacles to identifying a proposition from its fMRI signature. Here we demonstrate the ability to decode individual prototype sentences from readers' brain activation patterns, by using theory-driven regions of interest and semantic properties. It is possible to predict the fMRI brain activation patterns evoked by propositions and words which are entirely new to the model with reliably above-chance rank accuracy. The two core components implemented in the model that reflect the theory were the choice of intermediate semantic features and the brain regions associated with the neurosemantic dimensions. This approach also predicts the neural representation of object nouns across participants, studies, and sentence contexts. Moreover, we find that the neural representation of an agent-verb-object proto-sentence is more accurately characterized by the neural signatures of its components as they occur in a similar context than by the neural signatures of these components as they occur in isolation. Copyright © 2017 Elsevier Ltd. All rights reserved.
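
    A short sketch of the rank-accuracy evaluation referred to above: rank all predicted activation patterns by their correlation with an observed pattern and score how highly the matching prediction ranks. The toy data are synthetic, and the exact normalization the authors used may differ in detail.

      import numpy as np

      def normalized_rank_accuracy(predicted, observed):
          """predicted, observed: (n_items, n_voxels). For each item, rank all predicted
          patterns by correlation with the item's observed pattern; 1.0 means the
          matching prediction always ranks first, 0.5 is chance."""
          n = len(observed)
          accs = []
          for i in range(n):
              obs = observed[i] - observed[i].mean()
              sims = []
              for p in predicted:
                  pc = p - p.mean()
                  sims.append((obs @ pc) / (np.linalg.norm(obs) * np.linalg.norm(pc) + 1e-12))
              rank = np.argsort(np.argsort(sims)[::-1])[i]   # 0 = best match
              accs.append(1.0 - rank / (n - 1))
          return float(np.mean(accs))

      # toy check: predictions are noisy copies of the observed patterns
      rng = np.random.default_rng(3)
      obs = rng.standard_normal((20, 500))
      pred = obs + 0.8 * rng.standard_normal(obs.shape)
      print("rank accuracy:", normalized_rank_accuracy(pred, obs))   # well above 0.5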

  19. de Bruijn cycles for neural decoding.

    PubMed

    Aguirre, Geoffrey Karl; Mattar, Marcelo Gomes; Magis-Weinberg, Lucía

    2011-06-01

    Stimulus counterbalance is critical for studies of neural habituation, bias, anticipation, and (more generally) the effect of stimulus history and context. We introduce de Bruijn cycles, a class of combinatorial objects, as the ideal source of pseudo-random stimulus sequences with arbitrary levels of counterbalance. Neuro-vascular imaging studies (such as BOLD fMRI) have an additional requirement imposed by the filtering and noise properties of the method: only some temporal frequencies of neural modulation are detectable. Extant methods of generating counterbalanced stimulus sequences yield neural modulations that are weakly (or not at all) detected by BOLD fMRI. We solve this limitation using a novel "path-guided" approach for the generation of de Bruijn cycles. The algorithm encodes a hypothesized neural modulation of specific temporal frequency within the seemingly random order of events. By positioning the modulation between the signal and noise bands of the neuro-vascular imaging method, the resulting sequence markedly improves detection power. These sequences may be used to study stimulus context and history effects in a manner not previously possible. Copyright © 2011 Elsevier Inc. All rights reserved.
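
    For readers unfamiliar with de Bruijn cycles, the classic Lyndon-word (FKM) construction takes only a few lines. Note that this is the standard construction, not the paper's path-guided variant, which additionally shapes the temporal frequency content of the sequence.

      def de_bruijn(k, n):
          """Classic de Bruijn cycle B(k, n): a cyclic sequence over an alphabet of size k
          in which every length-n subsequence (with wraparound) appears exactly once."""
          a = [0] * (k * n)
          sequence = []
          def db(t, p):
              if t > n:
                  if n % p == 0:
                      sequence.extend(a[1:p + 1])
              else:
                  a[t] = a[t - p]
                  db(t + 1, p)
                  for j in range(a[t - p] + 1, k):
                      a[t] = j
                      db(t + 1, t)
          db(1, 1)
          return sequence

      cycle = de_bruijn(k=3, n=2)   # 3 stimulus types, counterbalanced for 1-back history
      print(cycle, len(cycle))      # length k**n = 9; every ordered pair occurs once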

  20. Decoding a neural circuit controlling global animal state in C. elegans

    PubMed Central

    Laurent, Patrick; Soltesz, Zoltan; Nelson, Geoffrey M; Chen, Changchun; Arellano-Carbajal, Fausto; Levy, Emmanuel; de Bono, Mario

    2015-01-01

    Brains organize behavior and physiology to optimize the response to threats or opportunities. We dissect how 21% O2, an indicator of surface exposure, reprograms C. elegans' global state, inducing sustained locomotory arousal and altering expression of neuropeptides, metabolic enzymes, and other non-neural genes. The URX O2-sensing neurons drive arousal at 21% O2 by tonically activating the RMG interneurons. Stimulating RMG is sufficient to switch behavioral state. Ablating the ASH, ADL, or ASK sensory neurons connected to RMG by gap junctions does not disrupt arousal. However, disrupting cation currents in these neurons curtails RMG neurosecretion and arousal. RMG signals high O2 by peptidergic secretion. Neuropeptide reporters reveal neural circuit state, as neurosecretion stimulates neuropeptide expression. Neural imaging in unrestrained animals shows that URX and RMG encode O2 concentration rather than behavior, while the activity of downstream interneurons such as AVB and AIY reflects both O2 levels and the behavior being executed. DOI: http://dx.doi.org/10.7554/eLife.04241.001 PMID:25760081

  1. Development of closed-loop neural interface technology in a rat model: combining motor cortex operant conditioning with visual cortex microstimulation.

    PubMed

    Marzullo, Timothy Charles; Lehmkuhle, Mark J; Gage, Gregory J; Kipke, Daryl R

    2010-04-01

    Closed-loop neural interface technology that combines neural ensemble decoding with simultaneous electrical microstimulation feedback is hypothesized to improve deep brain stimulation techniques, neuromotor prosthetic applications, and epilepsy treatment. Here we describe our iterative results in a rat model of a sensory and motor neurophysiological feedback control system. Three rats were chronically implanted with microelectrode arrays in both the motor and visual cortices. The rats were subsequently trained over a period of weeks to modulate their motor cortex ensemble unit activity upon delivery of intra-cortical microstimulation (ICMS) of the visual cortex in order to receive a food reward. Rats were given continuous feedback via visual cortex ICMS during the response periods that was representative of the motor cortex ensemble dynamics. Analysis revealed that the feedback provided the animals with indicators of the behavioral trials. At the hardware level, this preparation provides a tractable test model for improving the technology of closed-loop neural devices.

  2. Deep Neural Networks Reveal a Gradient in the Complexity of Neural Representations across the Ventral Stream.

    PubMed

    Güçlü, Umut; van Gerven, Marcel A J

    2015-07-08

    Converging evidence suggests that the primate ventral visual pathway encodes increasingly complex stimulus features in downstream areas. We quantitatively show that there indeed exists an explicit gradient for feature complexity in the ventral pathway of the human brain. This was achieved by mapping thousands of stimulus features of increasing complexity across the cortical sheet using a deep neural network. Our approach also revealed a fine-grained functional specialization of downstream areas of the ventral stream. Furthermore, it allowed decoding of representations from human brain activity at an unsurpassed degree of accuracy, confirming the quality of the developed approach. Stimulus features that successfully explained neural responses indicate that population receptive fields were explicitly tuned for object categorization. This provides strong support for the hypothesis that object categorization is a guiding principle in the functional organization of the primate ventral stream. Copyright © 2015 the authors 0270-6474/15/3510005-10$15.00/0.

  3. Loss tolerant speech decoder for telecommunications

    NASA Technical Reports Server (NTRS)

    Prieto, Jr., Jaime L. (Inventor)

    1999-01-01

    A method and device for extrapolating past signal-history data for insertion into missing data segments in order to conceal digital speech frame errors. The extrapolation method uses past-signal history that is stored in a buffer. The method is implemented with a device that utilizes a finite-impulse response (FIR) multi-layer feed-forward artificial neural network that is trained by back-propagation for one-step extrapolation of speech compression algorithm (SCA) parameters. Once a speech connection has been established, the speech compression algorithm device begins sending encoded speech frames. As the speech frames are received, they are decoded and converted back into speech signal voltages. During the normal decoding process, pre-processing of the required SCA parameters will occur and the results stored in the past-history buffer. If a speech frame is detected to be lost or in error, then extrapolation modules are executed and replacement SCA parameters are generated and sent as the parameters required by the SCA. In this way, the information transfer to the SCA is transparent, and the SCA processing continues as usual. The listener will not normally notice that a speech frame has been lost because of the smooth transition between the last-received, lost, and next-received speech frames.
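
    A minimal sketch of the concealment idea, under stated assumptions: keep a past-history buffer of codec parameters and extrapolate one frame ahead when a frame is lost. For brevity, a least-squares linear predictor stands in for the patent's FIR feedforward neural network trained by back-propagation; the lag count and the synthetic parameter stream are invented.

      import numpy as np

      rng = np.random.default_rng(4)

      # hypothetical stream of speech-codec parameter vectors, one per frame
      T, P, LAGS = 400, 6, 4
      t = np.arange(T)[:, None]
      frames = np.sin(0.07 * t + np.arange(P)) + 0.05 * rng.standard_normal((T, P))

      # fit a one-step-ahead predictor: the last LAGS frames (concatenated) -> next frame
      X = np.hstack([frames[i:T - LAGS + i] for i in range(LAGS)])
      Y = frames[LAGS:]
      W, *_ = np.linalg.lstsq(X, Y, rcond=None)

      def conceal(history):
          """Extrapolate the parameters of a lost frame from the past-history buffer."""
          return np.concatenate(history[-LAGS:]) @ W

      # simulate a lost frame and compare the extrapolated replacement with the truth
      lost = 350
      estimate = conceal(list(frames[:lost]))
      print("RMS error of concealed frame:", np.sqrt(np.mean((estimate - frames[lost]) ** 2)))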

  4. An online brain-machine interface using decoding of movement direction from the human electrocorticogram

    NASA Astrophysics Data System (ADS)

    Milekovic, Tomislav; Fischer, Jörg; Pistohl, Tobias; Ruescher, Johanna; Schulze-Bonhage, Andreas; Aertsen, Ad; Rickert, Jörn; Ball, Tonio; Mehring, Carsten

    2012-08-01

    A brain-machine interface (BMI) can be used to control movements of an artificial effector, e.g. movements of an arm prosthesis, by motor cortical signals that control the equivalent movements of the corresponding body part, e.g. arm movements. This approach has been successfully applied in monkeys and humans by accurately extracting parameters of movements from the spiking activity of multiple single neurons. We show that the same approach can be realized using brain activity measured directly from the surface of the human cortex using electrocorticography (ECoG). Five subjects, implanted with ECoG implants for the purpose of epilepsy assessment, took part in our study. Subjects used directionally dependent ECoG signals, recorded during active movements of a single arm, to control a computer cursor in one out of two directions. Significant BMI control was achieved in four out of five subjects with correct directional decoding in 69%-86% of the trials (75% on average). Our results demonstrate the feasibility of an online BMI using decoding of movement direction from human ECoG signals. Thus, to achieve such BMIs, ECoG signals might be used in conjunction with or as an alternative to intracortical neural signals.

  5. Decoding Reveals Plasticity in V3A as a Result of Motion Perceptual Learning

    PubMed Central

    Shibata, Kazuhisa; Chang, Li-Hung; Kim, Dongho; Náñez, José E.; Kamitani, Yukiyasu; Watanabe, Takeo; Sasaki, Yuka

    2012-01-01

    Visual perceptual learning (VPL) is defined as visual performance improvement after visual experiences. VPL is often highly specific for a visual feature presented during training. Such specificity is observed in behavioral tuning function changes with the highest improvement centered on the trained feature and was originally thought to be evidence for changes in the early visual system associated with VPL. However, results of neurophysiological studies have been highly controversial concerning whether the plasticity underlying VPL occurs within the visual cortex. The controversy may be partially due to the lack of observation of neural tuning function changes in multiple visual areas in association with VPL. Here using human subjects we systematically compared behavioral tuning function changes after global motion detection training with decoded tuning function changes for 8 visual areas using pattern classification analysis on functional magnetic resonance imaging (fMRI) signals. We found that the behavioral tuning function changes were extremely highly correlated to decoded tuning function changes only in V3A, which is known to be highly responsive to global motion with human subjects. We conclude that VPL of a global motion detection task involves plasticity in a specific visual cortical area. PMID:22952849

  6. Decoding rule search domain in the left inferior frontal gyrus

    PubMed Central

    Babcock, Laura; Vallesi, Antonino

    2018-01-01

    Traditionally, the left hemisphere has been thought to extract mainly verbal patterns of information, but recent evidence has shown that the left Inferior Frontal Gyrus (IFG) is active during inductive reasoning in both the verbal and spatial domains. We aimed to understand whether the left IFG supports inductive reasoning in a domain-specific or domain-general fashion. To do this we used Multi-Voxel Pattern Analysis to decode the representation of domain during a rule search task. Thirteen participants were asked to extract the rule underlying streams of letters presented in different spatial locations. Each rule was either verbal (letters forming words) or spatial (positions forming geometric figures). Our results show that domain was decodable in the left prefrontal cortex, suggesting that this region represents domain-specific information, rather than processes common to the two domains. A replication study with the same participants tested two years later confirmed these findings, though the individual representations changed, providing evidence for the flexible nature of representations. This study extends our knowledge on the neural basis of goal-directed behaviors and on how information relevant for rule extraction is flexibly mapped in the prefrontal cortex. PMID:29547623

  7. High-throughput GPU-based LDPC decoding

    NASA Astrophysics Data System (ADS)

    Chang, Yang-Lang; Chang, Cheng-Chun; Huang, Min-Yu; Huang, Bormin

    2010-08-01

    Low-density parity-check (LDPC) codes are linear block codes known to approach the Shannon limit via the iterative sum-product algorithm. LDPC codes have been adopted in most current communication systems such as DVB-S2, WiMAX, Wi-Fi and 10GBASE-T. The need for reliable and flexible communication links across a wide variety of communication standards and configurations has inspired demand for high-performance, flexible computing for LDPC decoding. Accordingly, finding a fast and reconfigurable development platform for designing high-throughput LDPC decoders has become important, especially for rapidly changing communication standards and configurations. In this paper, a new graphics-processing-unit (GPU) LDPC decoding platform with asynchronous data transfer is proposed to realize this practical implementation. Experimental results showed that the proposed GPU-based decoder achieved a 271x speedup compared to its CPU-based counterpart. It can serve as a high-throughput LDPC decoder.
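
    As context for the decoding problem being accelerated, a toy CPU sketch of iterative sum-product (belief-propagation) decoding on a small parity-check matrix follows; the paper's contribution is the GPU mapping with asynchronous transfers, which this sketch does not attempt.

      import numpy as np

      # a toy (7,4) Hamming parity-check matrix, used here only to show the
      # message-passing structure; real LDPC codes are much larger and sparser
      H = np.array([[1, 1, 0, 1, 1, 0, 0],
                    [1, 0, 1, 1, 0, 1, 0],
                    [0, 1, 1, 1, 0, 0, 1]])

      def sum_product_decode(llr, H, iters=20):
          """Sum-product decoding; llr are channel log-likelihood ratios (positive = bit 0)."""
          m, n = H.shape
          V = H * llr                          # variable-to-check messages, init to channel LLRs
          hard = (llr < 0).astype(int)
          for _ in range(iters):
              C = np.zeros_like(V, dtype=float)
              for c in range(m):               # check-to-variable update (tanh rule)
                  vs = np.where(H[c])[0]
                  t = np.tanh(V[c, vs] / 2.0)
                  for k, v in enumerate(vs):
                      prod = np.prod(np.delete(t, k))
                      C[c, v] = 2.0 * np.arctanh(np.clip(prod, -0.999999, 0.999999))
              total = llr + C.sum(axis=0)      # posterior LLRs
              V = H * (total - C)              # variable-to-check update (exclude own edge)
              hard = (total < 0).astype(int)
              if not np.any(H @ hard % 2):     # all parity checks satisfied
                  break
          return hard

      # all-zero codeword; one received LLR has the wrong sign (a channel error)
      llr = np.full(7, 4.0)
      llr[2] = -2.0
      print("decoded:", sum_product_decode(llr, H))   # expect all zeros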

  8. Trellises and Trellis-Based Decoding Algorithms for Linear Block Codes. Part 3; A Recursive Maximum Likelihood Decoding

    NASA Technical Reports Server (NTRS)

    Lin, Shu; Fossorier, Marc

    1998-01-01

    The Viterbi algorithm is indeed a very simple and efficient method of implementing maximum likelihood decoding. However, if we take advantage of the structural properties in a trellis section, other efficient trellis-based decoding algorithms can be devised. Recently, an efficient trellis-based recursive maximum likelihood decoding (RMLD) algorithm for linear block codes has been proposed. This algorithm is more efficient than the conventional Viterbi algorithm in both computation and hardware requirements. Most importantly, the implementation of this algorithm does not require the construction of the entire code trellis; only some special one-section trellises of relatively small state and branch complexities are needed for constructing path (or branch) metric tables recursively. At the end, there is only one table, which contains the most likely codeword and its metric for a given received sequence r = (r_1, r_2, ..., r_n). This algorithm basically uses a divide-and-conquer strategy. Furthermore, it allows parallel/pipeline processing of received sequences to speed up decoding.
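
    For comparison with the RMLD approach described above, here is a minimal hard-decision Viterbi decoder for a tiny rate-1/2 convolutional code (the conventional baseline; the code, metric, and message are illustrative choices, and the RMLD algorithm itself is not implemented).

      import numpy as np

      # rate-1/2 convolutional code, constraint length 3 (generators 7, 5 in octal);
      # deliberately small so the trellis is easy to follow
      G = [(1, 1, 1), (1, 0, 1)]

      def encode(bits, state=(0, 0)):
          out = []
          for b in bits:
              reg = (b,) + state
              out += [sum(r * g for r, g in zip(reg, gen)) % 2 for gen in G]
              state = reg[:2]
          return out

      def viterbi_decode(received, n_bits):
          """Hard-decision Viterbi decoding with a Hamming-distance branch metric."""
          states = [(a, b) for a in (0, 1) for b in (0, 1)]
          INF = 10 ** 9
          metric = {s: (0 if s == (0, 0) else INF) for s in states}
          paths = {s: [] for s in states}
          for t in range(n_bits):
              r = received[2 * t: 2 * t + 2]
              new_metric, new_paths = {}, {}
              for s in states:
                  best = (INF + INF, None, None)
                  for prev in states:
                      for b in (0, 1):
                          if ((b,) + prev)[:2] != s:
                              continue
                          reg = (b,) + prev
                          out = [sum(x * g for x, g in zip(reg, gen)) % 2 for gen in G]
                          m = metric[prev] + sum(o != ri for o, ri in zip(out, r))
                          if m < best[0]:
                              best = (m, prev, b)
                  new_metric[s], new_paths[s] = best[0], paths[best[1]] + [best[2]]
              metric, paths = new_metric, new_paths
          return paths[min(metric, key=metric.get)]

      msg = [1, 0, 1, 1, 0, 0, 1, 0]
      rx = encode(msg)
      rx[3] ^= 1                       # inject a single channel error
      print("sent:   ", msg)
      print("decoded:", viterbi_decode(rx, len(msg)))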

  9. Differences in the Predictors of Reading Comprehension in First Graders from Low Socio-Economic Status Families with Either Good or Poor Decoding Skills

    PubMed Central

    Gentaz, Edouard; Sprenger-Charolles, Liliane; Theurel, Anne

    2015-01-01

    Based on the assumption that good decoding skills constitute a bootstrapping mechanism for reading comprehension, the present study investigated the relative contribution of the former skill to the latter compared to that of three other predictors of reading comprehension (listening comprehension, vocabulary and phonemic awareness) in 392 French-speaking first graders from low SES families. This large sample was split into three groups according to their level of decoding skills assessed by pseudoword reading. Using a cutoff of 1 SD above or below the mean of the entire population, there were 63 good decoders, 267 average decoders and 62 poor decoders. 58% of the variance in reading comprehension was explained by our four predictors, with decoding skills proving to be the best predictor (12.1%, 7.3% for listening comprehension, 4.6% for vocabulary and 3.3% for phonemic awareness). Interaction between group versus decoding skills, listening comprehension and phonemic awareness accounted for significant additional variance (3.6%, 1.1% and 1.0%, respectively). The effects on reading comprehension of decoding skills and phonemic awareness were higher in poor and average decoders than in good decoders whereas listening comprehension accounted for more variance in good and average decoders than in poor decoders. Furthermore, the percentage of children with impaired reading comprehension skills was higher in the group of poor decoders (55%) than in the two other groups (average decoders: 7%; good decoders: 0%) and only 6 children (1.5%) had impaired reading comprehension skills with unimpaired decoding skills, listening comprehension or vocabulary. These results challenge the outcomes of studies on “poor comprehenders” by showing that, at least in first grade, poor reading comprehension is strongly linked to the level of decoding skills. PMID:25793519

  10. Differences in the predictors of reading comprehension in first graders from low socio-economic status families with either good or poor decoding skills.

    PubMed

    Gentaz, Edouard; Sprenger-Charolles, Liliane; Theurel, Anne

    2015-01-01

    Based on the assumption that good decoding skills constitute a bootstrapping mechanism for reading comprehension, the present study investigated the relative contribution of the former skill to the latter compared to that of three other predictors of reading comprehension (listening comprehension, vocabulary and phonemic awareness) in 392 French-speaking first graders from low SES families. This large sample was split into three groups according to their level of decoding skills assessed by pseudoword reading. Using a cutoff of 1 SD above or below the mean of the entire population, there were 63 good decoders, 267 average decoders and 62 poor decoders. 58% of the variance in reading comprehension was explained by our four predictors, with decoding skills proving to be the best predictor (12.1%, 7.3% for listening comprehension, 4.6% for vocabulary and 3.3% for phonemic awareness). Interaction between group versus decoding skills, listening comprehension and phonemic awareness accounted for significant additional variance (3.6%, 1.1% and 1.0%, respectively). The effects on reading comprehension of decoding skills and phonemic awareness were higher in poor and average decoders than in good decoders whereas listening comprehension accounted for more variance in good and average decoders than in poor decoders. Furthermore, the percentage of children with impaired reading comprehension skills was higher in the group of poor decoders (55%) than in the two other groups (average decoders: 7%; good decoders: 0%) and only 6 children (1.5%) had impaired reading comprehension skills with unimpaired decoding skills, listening comprehension or vocabulary. These results challenge the outcomes of studies on "poor comprehenders" by showing that, at least in first grade, poor reading comprehension is strongly linked to the level of decoding skills.
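
    The incremental-variance logic used in this study can be sketched with a hierarchical regression on synthetic data: fit the model with and without a predictor and difference the R-squared values. The coefficients and sample below are invented and do not reproduce the study's numbers.

      import numpy as np

      rng = np.random.default_rng(5)
      n = 392

      # hypothetical standardized predictors mirroring the study's four measures
      decoding = rng.standard_normal(n)
      listening = rng.standard_normal(n)
      vocabulary = rng.standard_normal(n)
      phonemic = rng.standard_normal(n)
      reading_comp = (0.45 * decoding + 0.30 * listening + 0.20 * vocabulary
                      + 0.15 * phonemic + 0.8 * rng.standard_normal(n))

      def r_squared(predictors, y):
          X = np.column_stack([np.ones(len(y))] + list(predictors))
          beta, *_ = np.linalg.lstsq(X, y, rcond=None)
          resid = y - X @ beta
          return 1.0 - resid.var() / y.var()

      full = r_squared([decoding, listening, vocabulary, phonemic], reading_comp)
      without_decoding = r_squared([listening, vocabulary, phonemic], reading_comp)
      print(f"total R^2 = {full:.3f}")
      print(f"unique contribution of decoding skills = {full - without_decoding:.3f}")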

  11. Causal network in a deafferented non-human primate brain.

    PubMed

    Balasubramanian, Karthikeyan; Takahashi, Kazutaka; Hatsopoulos, Nicholas G

    2015-01-01

    De-afferented/efferented neural ensembles can undergo causal changes when interfaced to neuroprosthetic devices. These changes occur via recruitment or isolation of neurons, alterations in functional connectivity within the ensemble and/or changes in the role of neurons, i.e., excitatory/inhibitory. In this work, the emergence of a causal network and changes in its dynamics are demonstrated for a deafferented brain region exposed to BMI (brain-machine interface) learning. The BMI controlled a robot for reach-and-grasp behavior; the motor cortical regions used for the BMI were deafferented due to chronic amputation, and ensembles of neurons were decoded for velocity control of the multi-DOF robot. A generalized linear model (GLM) framework-based Granger causality (GLM-GC) technique was used to estimate the ensemble connectivity. Model selection was based on the Akaike Information Criterion (AIC).
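
    A sketch of the Granger-causality-with-AIC logic on synthetic data: compare a model of one neuron's activity built from its own past against a model that also includes another neuron's past. A Gaussian least-squares model stands in here for the GLM framework used in the paper, and the lag order and coupling strengths are arbitrary.

      import numpy as np

      rng = np.random.default_rng(6)

      # two hypothetical neurons' binned activity; neuron B is driven by A's past
      T, L = 2000, 3                  # samples and model order (number of lags)
      A = rng.standard_normal(T)
      B = np.zeros(T)
      for t in range(1, T):
          B[t] = 0.4 * B[t - 1] + 0.5 * A[t - 1] + rng.standard_normal()

      def lagged(x, L):
          return np.column_stack([x[L - k - 1:len(x) - k - 1] for k in range(L)])

      def aic(y, X):
          """AIC of a least-squares model (a Gaussian stand-in for the GLM framework)."""
          beta, *_ = np.linalg.lstsq(X, y, rcond=None)
          rss = np.sum((y - X @ beta) ** 2)
          n, k = len(y), X.shape[1]
          return n * np.log(rss / n) + 2 * k

      y = B[L:]
      X_self = np.column_stack([np.ones(len(y)), lagged(B, L)])
      X_full = np.column_stack([X_self, lagged(A, L)])

      print("AIC, B's own past only:      ", round(aic(y, X_self), 1))
      print("AIC, adding A's past (A->B): ", round(aic(y, X_full), 1))
      # a lower AIC for the full model is evidence for a Granger-causal influence of A on B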

  12. Advances in fMRI Real-Time Neurofeedback.

    PubMed

    Watanabe, Takeo; Sasaki, Yuka; Shibata, Kazuhisa; Kawato, Mitsuo

    2017-12-01

    Functional magnetic resonance imaging (fMRI) neurofeedback is a type of biofeedback in which real-time online fMRI signals are used to self-regulate brain function. Since its advent in 2003 significant progress has been made in fMRI neurofeedback techniques. Specifically, the use of implicit protocols, external rewards, multivariate analysis, and connectivity analysis has allowed neuroscientists to explore a possible causal involvement of modified brain activity in modified behavior. These techniques have also been integrated into groundbreaking new neurofeedback technologies, specifically decoded neurofeedback (DecNef) and functional connectivity-based neurofeedback (FCNef). By modulating neural activity and behavior, DecNef and FCNef have substantially advanced both basic and clinical research. Copyright © 2017 The Authors. Published by Elsevier Ltd.. All rights reserved.

  13. Decoding the representation of learned social roles in the human brain.

    PubMed

    Eger, Evelyn; Moretti, Laura; Dehaene, Stanislas; Sirigu, Angela

    2013-10-01

    Humans as social beings are profoundly affected by exclusion. Short experiences with people differing in their degree of prosocial behaviour can induce reliable preferences for including partners, but the neural mechanisms of this learning remain unclear. Here, we asked participants to play a short social interaction game based on "cyber-ball" where one fictive partner included and another excluded the subject, thus defining social roles (includer - "good", excluder - "bad"). We then used multivariate pattern recognition on high-resolution functional magnetic resonance imaging (fMRI) data acquired before and after this game to test whether neural responses to the partners' and neutral control faces during a perceptual task reflect their learned social valence. Support vector classification scores revealed a learning-related increase in neural discrimination of social status in anterior insula and anterior cingulate regions, which was mainly driven by includer faces becoming distinguishable from excluder and control faces. Thus, face-evoked responses in anterior insula and anterior cingulate cortex contain fine-grained information shaped by prior social interactions that allow for categorisation of faces according to their learned social status. These lasting traces of social experience in cortical areas important for emotional and social processing could provide a substrate of how social inclusion shapes future behaviour and promotes cooperative interactions between individuals. Copyright © 2013 Elsevier Ltd. All rights reserved.

  14. Oscillatory phase dynamics in neural entrainment underpin illusory percepts of time.

    PubMed

    Herrmann, Björn; Henry, Molly J; Grigutsch, Maren; Obleser, Jonas

    2013-10-02

    Neural oscillatory dynamics are a candidate mechanism to steer perception of time and temporal rate change. While oscillator models of time perception are strongly supported by behavioral evidence, a direct link to neural oscillations and oscillatory entrainment has not yet been provided. In addition, it has thus far remained unaddressed how context-induced illusory percepts of time are coded for in oscillator models of time perception. To investigate these questions, we used magnetoencephalography and examined the neural oscillatory dynamics that underpin pitch-induced illusory percepts of temporal rate change. Human participants listened to frequency-modulated sounds that varied over time in both modulation rate and pitch, and judged the direction of rate change (decrease vs increase). Our results demonstrate distinct neural mechanisms of rate perception: Modulation rate changes directly affected listeners' rate percept as well as the exact frequency of the neural oscillation. However, pitch-induced illusory rate changes were unrelated to the exact frequency of the neural responses. The rate change illusion was instead linked to changes in neural phase patterns, which allowed for single-trial decoding of percepts. That is, illusory underestimations or overestimations of perceived rate change were tightly coupled to increased intertrial phase coherence and changes in cerebro-acoustic phase lag. The results provide insight on how illusory percepts of time are coded for by neural oscillatory dynamics.
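
    Intertrial phase coherence, the quantity linked to the illusory rate percepts above, can be computed in a few lines. The sketch below probes a single frequency with a complex exponential on synthetic trials, whereas the study used full time-frequency decompositions of MEG data.

      import numpy as np

      def intertrial_phase_coherence(trials, fs, freq):
          """ITC at one frequency. trials: (n_trials, n_samples). ITC is the magnitude of
          the mean unit phase vector across trials: 1 = aligned phases, ~0 = random."""
          t = np.arange(trials.shape[1]) / fs
          basis = np.exp(-2j * np.pi * freq * t)        # single-frequency probe
          coeffs = trials @ basis                        # complex amplitude per trial
          phases = coeffs / np.abs(coeffs)
          return np.abs(phases.mean())

      rng = np.random.default_rng(7)
      fs, f, n_tr, n_s = 250, 3.1, 60, 500
      t = np.arange(n_s) / fs

      locked = np.array([np.cos(2 * np.pi * f * t + 0.2) + rng.standard_normal(n_s)
                         for _ in range(n_tr)])          # phase-locked across trials
      jittered = np.array([np.cos(2 * np.pi * f * t + rng.uniform(0, 2 * np.pi))
                           + rng.standard_normal(n_s) for _ in range(n_tr)])

      print("ITC, phase-locked trials:", round(intertrial_phase_coherence(locked, fs, f), 2))
      print("ITC, jittered trials:    ", round(intertrial_phase_coherence(jittered, fs, f), 2))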

  15. Non-overlapping Neural Networks in Hydra vulgaris.

    PubMed

    Dupre, Christophe; Yuste, Rafael

    2017-04-24

    To understand the emergent properties of neural circuits, it would be ideal to record the activity of every neuron in a behaving animal and decode how it relates to behavior. We have achieved this with the cnidarian Hydra vulgaris, using calcium imaging of genetically engineered animals to measure the activity of essentially all of its neurons. Although the nervous system of Hydra is traditionally described as a simple nerve net, we surprisingly find instead a series of functional networks that are anatomically non-overlapping and are associated with specific behaviors. Three major functional networks extend through the entire animal and are activated selectively during longitudinal contractions, elongations in response to light, and radial contractions, whereas an additional network is located near the hypostome and is active during nodding. These results demonstrate the functional sophistication of apparently simple nerve nets, and the potential of Hydra and other basal metazoans as a model system for neural circuit studies. Published by Elsevier Ltd.

  16. Enhanced decoding for the Galileo S-band mission

    NASA Technical Reports Server (NTRS)

    Dolinar, S.; Belongie, M.

    1993-01-01

    A coding system under consideration for the Galileo S-band low-gain antenna mission is a concatenated system using a variable redundancy Reed-Solomon outer code and a (14,1/4) convolutional inner code. The 8-bit Reed-Solomon symbols are interleaved to depth 8, and the eight 255-symbol codewords in each interleaved block have redundancies 64, 20, 20, 20, 64, 20, 20, and 20, respectively (or equivalently, the codewords have 191, 235, 235, 235, 191, 235, 235, and 235 8-bit information symbols, respectively). This concatenated code is to be decoded by an enhanced decoder that utilizes a maximum likelihood (Viterbi) convolutional decoder; a Reed-Solomon decoder capable of processing erasures; an algorithm for declaring erasures in undecoded codewords based on known erroneous symbols in neighboring decodable words; a second Viterbi decoding operation (redecoding) constrained to follow only paths consistent with the known symbols from previously decodable Reed-Solomon codewords; and a second Reed-Solomon decoding operation using the output from the Viterbi redecoder and additional erasure declarations to the extent possible. It is estimated that this code and decoder can achieve a decoded bit error rate of 1 x 10^-7 at a concatenated code signal-to-noise ratio of 0.76 dB. By comparison, a threshold of 1.17 dB is required for a baseline coding system consisting of the same (14,1/4) convolutional code, a (255,223) Reed-Solomon code with constant redundancy 32 also interleaved to depth 8, a one-pass Viterbi decoder, and a Reed-Solomon decoder incapable of declaring or utilizing erasures. The relative gain of the enhanced system is thus 0.41 dB. It is predicted from analysis based on an assumption of infinite interleaving that the coding gain could be further improved by approximately 0.2 dB if four stages of Viterbi decoding and four levels of Reed-Solomon redundancy are permitted. Confirmation of this effect and specification of the optimum four-level redundancy profile for depth-8 interleaving are currently underway.
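
    The role of the variable redundancy profile can be illustrated with the standard bounded-distance rule for errors-and-erasures Reed-Solomon decoding: a codeword with r redundancy symbols decodes whenever 2*errors + erasures <= r. The per-codeword error and erasure counts below are hypothetical; only the redundancy profile is taken from the abstract.

      REDUNDANCY = [64, 20, 20, 20, 64, 20, 20, 20]   # Galileo S-band depth-8 profile

      def decodable(errors, erasures, r):
          # bounded-distance errors-and-erasures decoding condition
          return 2 * errors + erasures <= r

      # hypothetical per-codeword symbol error / erasure counts in one interleaved block
      errors   = [20, 12, 9, 11, 25, 8, 10, 13]
      erasures = [ 5,  0, 0,  1,  6, 0,  2,  0]

      for i, r in enumerate(REDUNDANCY):
          ok = decodable(errors[i], erasures[i], r)
          print(f"codeword {i}: redundancy {r:2d}, "
                f"2*{errors[i]}+{erasures[i]} = {2 * errors[i] + erasures[i]:2d} -> "
                f"{'decoded' if ok else 'not decodable on this pass'}")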

  17. Neural control of finger movement via intracortical brain-machine interface

    NASA Astrophysics Data System (ADS)

    Irwin, Z. T.; Schroeder, K. E.; Vu, P. P.; Bullard, A. J.; Tat, D. M.; Nu, C. S.; Vaskov, A.; Nason, S. R.; Thompson, D. E.; Bentley, J. N.; Patil, P. G.; Chestek, C. A.

    2017-12-01

    Objective. Intracortical brain-machine interfaces (BMIs) are a promising source of prosthesis control signals for individuals with severe motor disabilities. Previous BMI studies have primarily focused on predicting and controlling whole-arm movements; precise control of hand kinematics, however, has not been fully demonstrated. Here, we investigate the continuous decoding of precise finger movements in rhesus macaques. Approach. In order to elicit precise and repeatable finger movements, we have developed a novel behavioral task paradigm which requires the subject to acquire virtual fingertip position targets. In the physical control condition, four rhesus macaques performed this task by moving all four fingers together in order to acquire a single target. This movement was equivalent to controlling the aperture of a power grasp. During this task performance, we recorded neural spikes from intracortical electrode arrays in primary motor cortex. Main results. Using a standard Kalman filter, we could reconstruct continuous finger movement offline with an average correlation of ρ = 0.78 between actual and predicted position across four rhesus macaques. For two of the monkeys, this movement prediction was performed in real-time to enable direct brain control of the virtual hand. Compared to physical control, neural control performance was slightly degraded; however, the monkeys were still able to successfully perform the task with an average target acquisition rate of 83.1%. The monkeys’ ability to arbitrarily specify fingertip position was also quantified using an information throughput metric. During brain control task performance, the monkeys achieved an average 1.01 bits s^-1 throughput, similar to that achieved in previous studies which decoded upper-arm movements to control computer cursors using a standard Kalman filter. Significance. This is, to our knowledge, the first demonstration of brain control of finger-level fine motor skills. We believe that these results represent an important step towards full and dexterous control of neural prosthetic devices.
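
    A standard position-velocity Kalman filter decoder of the kind referenced above can be sketched on simulated data: fit the state-transition and observation models by least squares on a training segment, then filter held-out spike counts. All simulation parameters are invented; this is not the study's implementation.

      import numpy as np

      rng = np.random.default_rng(8)

      # simulate finger-aperture kinematics and linearly tuned firing rates (stand-ins)
      T, dt, n_units = 2000, 0.05, 30
      vel = np.zeros(T); pos = np.zeros(T)
      for t in range(1, T):
          vel[t] = 0.9 * vel[t - 1] + 0.1 * rng.standard_normal()
          pos[t] = pos[t - 1] + dt * vel[t]
      X = np.column_stack([pos, vel])                        # latent kinematic state
      H = rng.standard_normal((n_units, 2))                  # linear "tuning" per unit
      Z = X @ H.T + 0.5 * rng.standard_normal((T, n_units))  # binned firing rates

      train, test = slice(0, 1500), slice(1500, T)

      # fit the Kalman filter model from training data
      A = np.linalg.lstsq(X[train][:-1], X[train][1:], rcond=None)[0].T    # state transition
      W = np.cov((X[train][1:] - X[train][:-1] @ A.T).T)                   # state noise
      Hhat = np.linalg.lstsq(X[train], Z[train], rcond=None)[0].T          # observation model
      Q = np.cov((Z[train] - X[train] @ Hhat.T).T)                         # observation noise

      # decode the held-out segment (state initialized at the true starting value)
      x = X[test][0].copy(); P = np.eye(2)
      decoded = []
      for z in Z[test]:
          x, P = A @ x, A @ P @ A.T + W                      # predict
          S = Hhat @ P @ Hhat.T + Q
          K = P @ Hhat.T @ np.linalg.inv(S)                  # Kalman gain
          x = x + K @ (z - Hhat @ x)                         # update
          P = (np.eye(2) - K @ Hhat) @ P
          decoded.append(x.copy())
      decoded = np.array(decoded)

      cc = np.corrcoef(decoded[:, 0], X[test][:, 0])[0, 1]
      print(f"correlation between decoded and true position: {cc:.2f}")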

  18. Motor-commands decoding using peripheral nerve signals: a review

    NASA Astrophysics Data System (ADS)

    Hong, Keum-Shik; Aziz, Nida; Ghafoor, Usman

    2018-06-01

    During the last few decades, substantial scientific and technological efforts have been focused on the development of neuroprostheses. The major emphasis has been on techniques for connecting the human nervous system with a robotic prosthesis via natural-feeling interfaces. The peripheral nerves provide access to highly processed and segregated neural command signals from the brain that can in principle be used to determine user intent and control muscles. If these signals could be used, they might allow near-natural and intuitive control of prosthetic limbs with multiple degrees of freedom. This review summarizes the history of neuroprosthetic interfaces and their ability to record from and stimulate peripheral nerves. We also discuss the types of interfaces available and their applications, the kinds of peripheral nerve signals that are used, and the algorithms used to decode them. Finally, we explore the prospects for future development in this area.

  19. Generating Poetry Title Based on Semantic Relevance with Convolutional Neural Network

    NASA Astrophysics Data System (ADS)

    Li, Z.; Niu, K.; He, Z. Q.

    2017-09-01

    Several approaches have been proposed to automatically generate Chinese classical poetry (CCP) in the past few years, but automatically generating the title of CCP is still a difficult problem. The difficulties are mainly reflected in two aspects. First, the words used in CCP are very different from modern Chinese words and there are no valid word segmentation tools. Second, the semantic relevance of characters in CCP not only exists in one sentence but also exists between the same positions of adjacent sentences, which is hard for traditional text summarization models to grasp. In this paper, we propose an encoder-decoder model for generating the title of CCP. The model's encoder is a convolutional neural network (CNN) with two kinds of filters. To capture commonly used words within one sentence, one kind of filter covers two characters horizontally at each step. The other covers two characters vertically at each step and can grasp the semantic relevance of characters between adjacent sentences. Experimental results show that our model outperforms several related models and captures the semantic relevance of CCP more accurately.
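
    The two filter orientations described above can be sketched with a toy PyTorch encoder: (1 x 2) convolutions span adjacent characters within a sentence, and (2 x 1) convolutions span the same position across adjacent sentences. Vocabulary size, embedding width, and the pooling step are illustrative choices, and the decoder side is omitted.

      import torch
      import torch.nn as nn

      class PoemEncoder(nn.Module):
          """Toy CNN encoder with the two filter orientations described in the abstract."""
          def __init__(self, vocab=5000, emb=64, n_filters=32):
              super().__init__()
              self.embed = nn.Embedding(vocab, emb)
              self.horiz = nn.Conv2d(emb, n_filters, kernel_size=(1, 2))   # within a sentence
              self.vert = nn.Conv2d(emb, n_filters, kernel_size=(2, 1))    # across sentences

          def forward(self, poem):              # poem: (batch, n_sentences, n_chars) ids
              x = self.embed(poem)              # (batch, S, C, emb)
              x = x.permute(0, 3, 1, 2)         # (batch, emb, S, C) for Conv2d
              h = torch.relu(self.horiz(x)).amax(dim=(2, 3))   # global max pooling
              v = torch.relu(self.vert(x)).amax(dim=(2, 3))
              return torch.cat([h, v], dim=1)   # poem representation fed to the decoder

      poem = torch.randint(0, 5000, (2, 4, 7))  # 2 poems, 4 sentences of 7 characters
      print(PoemEncoder()(poem).shape)          # torch.Size([2, 64])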

  20. An Improved Unscented Kalman Filter Based Decoder for Cortical Brain-Machine Interfaces.

    PubMed

    Li, Simin; Li, Jie; Li, Zheng

    2016-01-01

    Brain-machine interfaces (BMIs) seek to connect brains with machines or computers directly, for application in areas such as prosthesis control. For this application, the accuracy of the decoding of movement intentions is crucial. We aim to improve accuracy by designing a better encoding model of primary motor cortical activity during hand movements and combining this with decoder engineering refinements, resulting in a new unscented Kalman filter based decoder, UKF2, which improves upon our previous unscented Kalman filter decoder, UKF1. The new encoding model includes novel acceleration magnitude, position-velocity interaction, and target-cursor-distance features (the decoder does not require target position as input, it is decoded). We add a novel probabilistic velocity threshold to better determine the user's intent to move. We combine these improvements with several other refinements suggested by others in the field. Data from two Rhesus monkeys indicate that the UKF2 generates offline reconstructions of hand movements (mean CC 0.851) significantly more accurately than the UKF1 (0.833) and the popular position-velocity Kalman filter (0.812). The encoding model of the UKF2 could predict the instantaneous firing rate of neurons (mean CC 0.210), given kinematic variables and past spiking, better than the encoding models of these two decoders (UKF1: 0.138, p-v Kalman: 0.098). In closed-loop experiments where each monkey controlled a computer cursor with each decoder in turn, the UKF2 facilitated faster task completion (mean 1.56 s vs. 2.05 s) and higher Fitts's Law bit rate (mean 0.738 bit/s vs. 0.584 bit/s) than the UKF1. These results suggest that the modeling and decoder engineering refinements of the UKF2 improve decoding performance. We believe they can be used to enhance other decoders as well.

  1. An Improved Unscented Kalman Filter Based Decoder for Cortical Brain-Machine Interfaces

    PubMed Central

    Li, Simin; Li, Jie; Li, Zheng

    2016-01-01

    Brain-machine interfaces (BMIs) seek to connect brains with machines or computers directly, for application in areas such as prosthesis control. For this application, the accuracy of the decoding of movement intentions is crucial. We aim to improve accuracy by designing a better encoding model of primary motor cortical activity during hand movements and combining this with decoder engineering refinements, resulting in a new unscented Kalman filter based decoder, UKF2, which improves upon our previous unscented Kalman filter decoder, UKF1. The new encoding model includes novel acceleration magnitude, position-velocity interaction, and target-cursor-distance features (the decoder does not require target position as input, it is decoded). We add a novel probabilistic velocity threshold to better determine the user's intent to move. We combine these improvements with several other refinements suggested by others in the field. Data from two Rhesus monkeys indicate that the UKF2 generates offline reconstructions of hand movements (mean CC 0.851) significantly more accurately than the UKF1 (0.833) and the popular position-velocity Kalman filter (0.812). The encoding model of the UKF2 could predict the instantaneous firing rate of neurons (mean CC 0.210), given kinematic variables and past spiking, better than the encoding models of these two decoders (UKF1: 0.138, p-v Kalman: 0.098). In closed-loop experiments where each monkey controlled a computer cursor with each decoder in turn, the UKF2 facilitated faster task completion (mean 1.56 s vs. 2.05 s) and higher Fitts's Law bit rate (mean 0.738 bit/s vs. 0.584 bit/s) than the UKF1. These results suggest that the modeling and decoder engineering refinements of the UKF2 improve decoding performance. We believe they can be used to enhance other decoders as well. PMID:28066170
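
    The kinematic features named in the abstract can be assembled as in the sketch below. The exact definitions used in UKF1/UKF2 are not reproduced here, so the interaction terms and the distance feature should be read as plausible stand-ins.

      import numpy as np

      def encoding_features(pos, vel, acc, target):
          """Kinematic features in the spirit of the UKF2 encoding model.
          pos, vel, acc: (2,) cursor position, velocity, acceleration; target: (2,).
          Definitions here are plausible stand-ins, not the paper's exact features."""
          accel_mag = np.linalg.norm(acc)
          pos_vel_interaction = np.outer(pos, vel).ravel()   # x*vx, x*vy, y*vx, y*vy
          target_dist = np.linalg.norm(target - pos)
          return np.concatenate([pos, vel, [accel_mag], pos_vel_interaction, [target_dist]])

      f = encoding_features(pos=np.array([0.1, -0.2]),
                            vel=np.array([0.5, 0.3]),
                            acc=np.array([0.0, -1.0]),
                            target=np.array([1.0, 1.0]))
      print(f.shape, f)   # 2 + 2 + 1 + 4 + 1 = 10 features per time step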

  2. Real-time control of hind limb functional electrical stimulation using feedback from dorsal root ganglia recordings

    NASA Astrophysics Data System (ADS)

    Bruns, Tim M.; Wagenaar, Joost B.; Bauman, Matthew J.; Gaunt, Robert A.; Weber, Douglas J.

    2013-04-01

    Objective. Functional electrical stimulation (FES) approaches often utilize an open-loop controller to drive state transitions. The addition of sensory feedback may allow for closed-loop control that can respond effectively to perturbations and muscle fatigue. Approach. We evaluated the use of natural sensory nerve signals obtained with penetrating microelectrode arrays in lumbar dorsal root ganglia (DRG) as real-time feedback for closed-loop control of FES-generated hind limb stepping in anesthetized cats. Main results. Leg position feedback was obtained in near real-time at 50 ms intervals by decoding the firing rates of more than 120 DRG neurons recorded simultaneously. Over 5 m of effective linear distance was traversed during closed-loop stepping trials in each of two cats. The controller compensated effectively for perturbations in the stepping path when DRG sensory feedback was provided. The presence of stimulation artifacts and the quality of DRG unit sorting did not significantly affect the accuracy of leg position feedback obtained from the linear decoding model as long as at least 20 DRG units were included in the model. Significance. This work demonstrates the feasibility and utility of closed-loop FES control based on natural neural sensors. Further work is needed to improve the controller and electrode technologies and to evaluate long-term viability.

  3. Real-time control of hind limb functional electrical stimulation using feedback from dorsal root ganglia recordings

    PubMed Central

    Bruns, Tim M; Wagenaar, Joost B; Bauman, Matthew J; Gaunt, Robert A; Weber, Douglas J

    2013-01-01

    Objective Functional electrical stimulation (FES) approaches often utilize an open-loop controller to drive state transitions. The addition of sensory feedback may allow for closed-loop control that can respond effectively to perturbations and muscle fatigue. Approach We evaluated the use of natural sensory nerve signals obtained with penetrating microelectrode arrays in lumbar dorsal root ganglia (DRG) as real-time feedback for closed-loop control of FES-generated hind limb stepping in anesthetized cats. Main results Leg position feedback was obtained in near real-time at 50 ms intervals by decoding the firing rates of more than 120 DRG neurons recorded simultaneously. Over 5 m of effective linear distance was traversed during closed-loop stepping trials in each of two cats. The controller compensated effectively for perturbations in the stepping path when DRG sensory feedback was provided. The presence of stimulation artifacts and the quality of DRG unit sorting did not significantly affect the accuracy of leg position feedback obtained from the linear decoding model as long as at least 20 DRG units were included in the model. Significance This work demonstrates the feasibility and utility of closed-loop FES control based on natural neural sensors. Further work is needed to improve the controller and electrode technologies and to evaluate long-term viability. PMID:23503062
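
    The linear decoding step (firing rates in 50 ms bins mapped to limb state) can be sketched with ridge regression on synthetic data. The unit count and bin width follow the abstract, but the tuning model, regularization, and signals are assumptions.

      import numpy as np

      rng = np.random.default_rng(9)

      # hypothetical data: 120 DRG units whose 50 ms binned firing rates depend
      # linearly on two limb-state variables (a crude stand-in for limb tuning)
      T, n_units, bin_s = 1200, 120, 0.05
      t = np.arange(T) * bin_s
      limb = np.column_stack([np.sin(2 * np.pi * 0.8 * t), np.cos(2 * np.pi * 0.8 * t)])
      tuning = rng.standard_normal((2, n_units))
      rates = limb @ tuning + 0.8 * rng.standard_normal((T, n_units))

      train, test = slice(0, 900), slice(900, T)

      # ridge regression from firing-rate vectors to limb state, one estimate per bin
      lam = 1.0
      Xtr = rates[train]
      W = np.linalg.solve(Xtr.T @ Xtr + lam * np.eye(n_units), Xtr.T @ limb[train])
      pred = rates[test] @ W

      for j, name in enumerate(["hip", "knee"]):
          cc = np.corrcoef(pred[:, j], limb[test][:, j])[0, 1]
          print(f"{name} angle decoding correlation: {cc:.2f}")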

  4. Visual coding with a population of direction-selective neurons.

    PubMed

    Fiscella, Michele; Franke, Felix; Farrow, Karl; Müller, Jan; Roska, Botond; da Silveira, Rava Azeredo; Hierlemann, Andreas

    2015-10-01

    The brain decodes the visual scene from the action potentials of ∼20 retinal ganglion cell types. Among the retinal ganglion cells, direction-selective ganglion cells (DSGCs) encode motion direction. Several studies have focused on the encoding or decoding of motion direction by recording multiunit activity, mainly in the visual cortex. In this study, we simultaneously recorded from all four types of ON-OFF DSGCs of the rabbit retina using a microelectronics-based high-density microelectrode array (HDMEA) and decoded their concerted activity using probabilistic and linear decoders. Furthermore, we investigated how the modification of stimulus parameters (velocity, size, angle of moving object) and the use of different tuning curve fits influenced decoding precision. Finally, we simulated ON-OFF DSGC activity, based on real data, in order to understand how tuning curve widths and the angular distribution of the cells' preferred directions influence decoding performance. We found that probabilistic decoding strategies outperformed, on average, linear methods and that decoding precision was robust to changes in stimulus parameters such as velocity. The removal of noise correlations among cells, by random shuffling trials, caused a drop in decoding precision. Moreover, we found that tuning curves are broad in order to minimize large errors at the expense of a higher average error, and that the retinal direction-selective system would not substantially benefit, on average, from having more than four types of ON-OFF DSGCs or from a perfect alignment of the cells' preferred directions. Copyright © 2015 the American Physiological Society.

  5. Visual coding with a population of direction-selective neurons

    PubMed Central

    Farrow, Karl; Müller, Jan; Roska, Botond; Azeredo da Silveira, Rava; Hierlemann, Andreas

    2015-01-01

    The brain decodes the visual scene from the action potentials of ∼20 retinal ganglion cell types. Among the retinal ganglion cells, direction-selective ganglion cells (DSGCs) encode motion direction. Several studies have focused on the encoding or decoding of motion direction by recording multiunit activity, mainly in the visual cortex. In this study, we simultaneously recorded from all four types of ON-OFF DSGCs of the rabbit retina using a microelectronics-based high-density microelectrode array (HDMEA) and decoded their concerted activity using probabilistic and linear decoders. Furthermore, we investigated how the modification of stimulus parameters (velocity, size, angle of moving object) and the use of different tuning curve fits influenced decoding precision. Finally, we simulated ON-OFF DSGC activity, based on real data, in order to understand how tuning curve widths and the angular distribution of the cells' preferred directions influence decoding performance. We found that probabilistic decoding strategies outperformed, on average, linear methods and that decoding precision was robust to changes in stimulus parameters such as velocity. The removal of noise correlations among cells, by random shuffling trials, caused a drop in decoding precision. Moreover, we found that tuning curves are broad in order to minimize large errors at the expense of a higher average error, and that the retinal direction-selective system would not substantially benefit, on average, from having more than four types of ON-OFF DSGCs or from a perfect alignment of the cells' preferred directions. PMID:26289471
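
    A minimal version of probabilistic decoding from four direction-selective cell types, assuming independent Poisson spiking and von Mises tuning, is sketched below; the maximum-likelihood direction is read off a one-degree grid. Tuning width and peak rate are arbitrary, not the study's fitted parameters.

      import numpy as np

      rng = np.random.default_rng(10)

      # four DSGC "types" with preferred directions 90 degrees apart, von Mises tuning
      prefs = np.deg2rad([0, 90, 180, 270])
      kappa, peak_rate = 1.5, 20.0           # tuning width and peak spike count per trial

      def tuning(theta):
          return peak_rate * np.exp(kappa * (np.cos(theta - prefs) - 1))

      def ml_decode(counts, grid=np.deg2rad(np.arange(0, 360, 1))):
          """Maximum-likelihood direction estimate under independent Poisson spiking."""
          loglik = [np.sum(counts * np.log(tuning(g) + 1e-12) - tuning(g)) for g in grid]
          return grid[int(np.argmax(loglik))]

      # simulate trials at random true directions and measure decoding error
      errs = []
      for _ in range(500):
          true = rng.uniform(0, 2 * np.pi)
          counts = rng.poisson(tuning(true))
          err = np.angle(np.exp(1j * (ml_decode(counts) - true)))   # wrapped error
          errs.append(np.degrees(np.abs(err)))
      print(f"median absolute direction error: {np.median(errs):.1f} degrees")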

  6. Decoding DNA labels by melting curve analysis using real-time PCR.

    PubMed

    Balog, József A; Fehér, Liliána Z; Puskás, László G

    2017-12-01

    Synthetic DNA has been used as an authentication code for a diverse number of applications. However, existing decoding approaches are based on either DNA sequencing or the determination of DNA length variations. Here, we present a simple alternative protocol for labeling different objects using a small number of short DNA sequences that differ in their melting points. Code amplification and decoding can be done in two steps using quantitative PCR (qPCR). To obtain a DNA barcode with high complexity, we defined 8 template groups, each having 4 different DNA templates, yielding 15^8 (>2.5 billion) combinations of different individual melting temperature (Tm) values and corresponding ID codes. The reproducibility and specificity of the decoding was confirmed by using the most complex template mixture, which had 32 different products in 8 groups with different Tm values. The industrial applicability of our protocol was also demonstrated by labeling a drone with an oil-based paint containing a predefined DNA code, which was then successfully decoded. The method presented here consists of a simple code system based on a small number of synthetic DNA sequences and a cost-effective, rapid decoding protocol using a few qPCR reactions, enabling a wide range of authentication applications.
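
    One way to arrive at the 15^8 figure, assuming each of the 8 groups contributes a non-empty subset of its 4 templates (2^4 - 1 = 15 distinguishable melting-curve patterns per group):

      groups, templates_per_group = 8, 4
      per_group = 2 ** templates_per_group - 1        # non-empty subsets of one group
      total_codes = per_group ** groups
      print(per_group, total_codes)                   # 15 and 2,562,890,625 (> 2.5 billion)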

  7. Neurally dissociable cognitive components of reading deficits in subacute stroke.

    PubMed

    Boukrina, Olga; Barrett, A M; Alexander, Edward J; Yao, Bing; Graves, William W

    2015-01-01

    According to cognitive models of reading, words are processed by interacting orthographic (spelling), phonological (sound), and semantic (meaning) information. Despite extensive study of the neural basis of reading in healthy participants, little group data exist on patients with reading deficits from focal brain damage pointing to critical neural systems for reading. Here, we report on one such study. We have performed neuropsychological testing and magnetic resonance imaging on 11 patients with left-hemisphere stroke (<=5 weeks post-stroke). Patients completed tasks assessing cognitive components of reading such as semantics (matching picture or word choices to a target based on meaning), phonology (matching word choices to a target based on rhyming), and orthography (a two-alternative forced choice of the most plausible non-word). They also read aloud pseudowords and words with high or low levels of usage frequency, imageability, and spelling-sound consistency. As predicted by the cognitive model, when averaged across patients, the influence of semantics was most salient for low-frequency, low-consistency words, when phonological decoding is especially difficult. Qualitative subtraction analyses revealed lesion sites specific to phonological processing. These areas were consistent with those shown previously to activate for phonology in healthy participants, including supramarginal, posterior superior temporal, middle temporal, inferior frontal gyri, and underlying white matter. Notable divergence between this analysis and previous functional imaging is the association of lesions in the mid-fusiform gyrus and anterior temporal lobe with phonological reading deficits. This study represents progress toward identifying brain lesion-deficit relationships in the cognitive components of reading. Such correspondences are expected to help not only better understand the neural mechanisms of reading, but may also help tailor reading therapy to individual neurocognitive deficit profiles.

  8. Achievable Information Rates for Coded Modulation With Hard Decision Decoding for Coherent Fiber-Optic Systems

    NASA Astrophysics Data System (ADS)

    Sheikh, Alireza; Amat, Alexandre Graell i.; Liva, Gianluigi

    2017-12-01

    We analyze the achievable information rates (AIRs) for coded modulation schemes with QAM constellations with both bit-wise and symbol-wise decoders, corresponding to the case where a binary code is used in combination with a higher-order modulation using the bit-interleaved coded modulation (BICM) paradigm and to the case where a nonbinary code over a field matched to the constellation size is used, respectively. In particular, we consider hard decision decoding, which is the preferable option for fiber-optic communication systems where decoding complexity is a concern. Recently, Liga et al. analyzed the AIRs for bit-wise and symbol-wise decoders considering what the authors called a hard decision decoder which, however, exploits soft information of the transition probabilities of the discrete-input discrete-output channel resulting from the hard detection. As such, the complexity of the decoder is essentially the same as the complexity of a soft decision decoder. In this paper, we analyze instead the AIRs for the standard hard decision decoder, commonly used in practice, where the decoding is based on the Hamming distance metric. We show that if standard hard decision decoding is used, bit-wise decoders yield significantly higher AIRs than symbol-wise decoders. As a result, contrary to the conclusion by Liga et al., binary decoders together with the BICM paradigm are preferable for spectrally-efficient fiber-optic systems. We also design binary and nonbinary staircase codes and show that, in agreement with the AIRs, binary codes yield better performance.

  9. Dissociation of Self-Motion and Object Motion by Linear Population Decoding That Approximates Marginalization.

    PubMed

    Sasaki, Ryo; Angelaki, Dora E; DeAngelis, Gregory C

    2017-11-15

    We use visual image motion to judge the movement of objects, as well as our own movements through the environment. Generally, image motion components caused by object motion and self-motion are confounded in the retinal image. Thus, to estimate heading, the brain would ideally marginalize out the effects of object motion (or vice versa), but little is known about how this is accomplished neurally. Behavioral studies suggest that vestibular signals play a role in dissociating object motion and self-motion, and recent computational work suggests that a linear decoder can approximate marginalization by taking advantage of diverse multisensory representations. By measuring responses of MSTd neurons in two male rhesus monkeys and by applying a recently-developed method to approximate marginalization by linear population decoding, we tested the hypothesis that vestibular signals help to dissociate self-motion and object motion. We show that vestibular signals stabilize tuning for heading in neurons with congruent visual and vestibular heading preferences, whereas they stabilize tuning for object motion in neurons with discrepant preferences. Thus, vestibular signals enhance the separability of joint tuning for object motion and self-motion. We further show that a linear decoder, designed to approximate marginalization, allows the population to represent either self-motion or object motion with good accuracy. Decoder weights are broadly consistent with a readout strategy, suggested by recent computational work, in which responses are decoded according to the vestibular preferences of multisensory neurons. These results demonstrate, at both single neuron and population levels, that vestibular signals help to dissociate self-motion and object motion. SIGNIFICANCE STATEMENT The brain often needs to estimate one property of a changing environment while ignoring others. This can be difficult because multiple properties of the environment may be confounded in sensory signals. The brain can solve this problem by marginalizing over irrelevant properties to estimate the property-of-interest. We explore this problem in the context of self-motion and object motion, which are inherently confounded in the retinal image. We examine how diversity in a population of multisensory neurons may be exploited to decode self-motion and object motion from the population activity of neurons in macaque area MSTd. Copyright © 2017 the authors 0270-6474/17/3711204-16$15.00/0.

  10. Relation of Neural Structure to Persistently Low Academic Achievement: A Longitudinal Study of Children with Differing Birth Weights

    PubMed Central

    Clark, Caron A. C.; Fang, Hua; Espy, Kimberly Andrews; Filipek, Pauline A.; Juranek, Jenifer; Bangert, Barbara; Hack, Maureen; Taylor, H. Gerry

    2013-01-01

    Objective Children with very low birth weight (VLBW; <1500 g) are at risk for academic underachievement, although less is known regarding the developmental course of these difficulties or their neural basis. This study examined whether cerebral tissue reductions related to VLBW are associated with poor patterns of growth in core academic skills. Method Children born <750 g, 750–1499 g or >2500 g completed measures of calculation, mathematical problem solving and word decoding at several time points spanning middle childhood and adolescence. Espy, Fang, Charak, Minich and Taylor (2009) used growth mixture modeling to identify two distinct growth trajectories (growth clusters) for each academic domain: an average achievement trajectory and a persistently low achievement trajectory. In this study, 97 of the same participants underwent MRI in late adolescence. MRI measures of cerebral tissue volume were used to predict the probability of low growth cluster membership for each domain. Results After adjusting for whole brain volume, each 1 cm³ reduction in caudate volume was associated with a 1.7- to 2.1-fold increase in the odds of low cluster membership for each academic domain. Each 1 mm² decrease in corpus callosum surface area increased these odds approximately 1.02-fold. Reductions in cerebellar white matter volume were associated specifically with low calculation and decoding growth while reduced cerebral white matter volume was associated with low calculation growth. Findings were similar when analyses were confined to the VLBW groups. Conclusions Volumetric reductions in neural regions involved in connectivity, executive attention and motor control may help to explain heterogeneous academic growth trajectories amongst children with VLBW. PMID:23688218

  11. Emotion Decoding and Incidental Processing Fluency as Antecedents of Attitude Certainty.

    PubMed

    Petrocelli, John V; Whitmire, Melanie B

    2017-07-01

    Previous research demonstrates that attitude certainty influences the degree to which an attitude changes in response to persuasive appeals. In the current research, decoding emotions from facial expressions and incidental processing fluency, during attitude formation, are examined as antecedents of both attitude certainty and attitude change. In Experiment 1, participants who decoded anger or happiness during attitude formation expressed greater attitude certainty and showed more resistance to persuasion than participants who decoded sadness. By manipulating the emotion decoded, the diagnosticity of processing fluency experienced during emotion decoding, and the gaze direction of the social targets, Experiment 2 suggests that the link between emotion decoding and attitude certainty results from incidental processing fluency. Experiment 3 demonstrated that fluency in processing irrelevant stimuli influences attitude certainty, which in turn influences resistance to persuasion. Implications for appraisal-based accounts of attitude formation and attitude change are discussed.

  12. Exploring multiple feature combination strategies with a recurrent neural network architecture for off-line handwriting recognition

    NASA Astrophysics Data System (ADS)

    Mioulet, L.; Bideault, G.; Chatelain, C.; Paquet, T.; Brunessaux, S.

    2015-01-01

    The BLSTM-CTC is a novel recurrent neural network architecture that has outperformed previous state-of-the-art algorithms in tasks such as speech recognition and handwriting recognition. It has the ability to process long-term dependencies in temporal signals in order to label unsegmented data. This paper describes different ways of combining features using a BLSTM-CTC architecture: we explore not only low-level combination (in feature space), but also mid-level combination (of internal system representations) and high-level combination (at decoding). The results are compared on the RIMES word database. Our results show that the low-level combination works best, thanks to the powerful data modeling of the LSTM neurons.
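
    As a concrete illustration of the "low-level" (feature-space) combination that performed best, the sketch below concatenates two per-frame feature streams before a bidirectional LSTM trained with CTC. It is written in PyTorch with placeholder dimensions and a toy batch; it is not the authors' implementation.

```python
# Hedged sketch of "low-level" feature combination for a BLSTM-CTC recognizer:
# two per-frame feature streams are concatenated before the recurrent layers.
# Dimensions, names and the toy batch are illustrative assumptions only.
import torch
import torch.nn as nn

class BLSTMCTC(nn.Module):
    def __init__(self, feat_a_dim, feat_b_dim, hidden, n_chars):
        super().__init__()
        self.blstm = nn.LSTM(feat_a_dim + feat_b_dim, hidden,
                             num_layers=2, bidirectional=True, batch_first=True)
        self.proj = nn.Linear(2 * hidden, n_chars + 1)   # +1 for the CTC blank

    def forward(self, feats_a, feats_b):
        x = torch.cat([feats_a, feats_b], dim=-1)        # feature-space combination
        y, _ = self.blstm(x)
        return self.proj(y).log_softmax(-1)              # (batch, time, classes)

model = BLSTMCTC(feat_a_dim=56, feat_b_dim=64, hidden=128, n_chars=80)
ctc = nn.CTCLoss(blank=80, zero_infinity=True)

# Toy batch: 4 word images, 150 frames each, label sequences of length 20.
feats_a = torch.randn(4, 150, 56)
feats_b = torch.randn(4, 150, 64)
targets = torch.randint(0, 80, (4, 20))
input_lengths = torch.full((4,), 150, dtype=torch.long)
target_lengths = torch.full((4,), 20, dtype=torch.long)

log_probs = model(feats_a, feats_b).permute(1, 0, 2)     # CTCLoss wants (T, N, C)
loss = ctc(log_probs, targets, input_lengths, target_lengths)
loss.backward()
print(float(loss))
```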

  13. Multivoxel neurofeedback selectively modulates confidence without changing perceptual performance

    PubMed Central

    Cortese, Aurelio; Amano, Kaoru; Koizumi, Ai; Kawato, Mitsuo; Lau, Hakwan

    2016-01-01

    A central controversy in metacognition studies concerns whether subjective confidence directly reflects the reliability of perceptual or cognitive processes, as suggested by normative models based on the assumption that neural computations are generally optimal. This view enjoys popularity in the computational and animal literatures, but it has also been suggested that confidence may depend on a late-stage estimation dissociable from perceptual processes. Yet, at least in humans, experimental tools have lacked the power to resolve these issues convincingly. Here, we overcome this difficulty by using the recently developed method of decoded neurofeedback (DecNef) to systematically manipulate multivoxel correlates of confidence in a frontoparietal network. We report that bidirectional changes in confidence do not affect perceptual accuracy. Further psychophysical analyses rule out accounts based on simple shifts in reporting strategy. Our results provide clear neuroscientific evidence for the systematic dissociation between confidence and perceptual performance, and thereby challenge current theoretical thinking. PMID:27976739

  14. Predictive Coding Strategies for Invariant Object Recognition and Volitional Motion Control in Neuromorphic Agents

    DTIC Science & Technology

    2015-09-02

    …human behavior. In this project, we hypothesized that visual memory of past motion trajectories may be used for selecting future behavior. In other… [fragmentary record; partial publication list:] "Decoding sequence of actions using fMRI", Society for Neuroscience Annual Meeting, San Diego, CA, USA, Nov 9-13, 2013 (abstract only); Hansol Choi, Dae-Shik Kim, "Planning as inference in a Hierarchical Predictive Memory", Proceedings of the International Conference on Neural Information Processing.

  15. FPGA Implementation of Reed-Solomon Decoder for IEEE 802.16 WiMAX Systems using Simulink-Sysgen Design Environment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bobrek, Miljko; Albright, Austin P

    This paper presents an FPGA implementation of the Reed-Solomon decoder for use in IEEE 802.16 WiMAX systems. The decoder is based on the RS(255,239) code, and is additionally shortened and punctured according to the WiMAX specifications. A Simulink model based on the Xilinx System Generator (Sysgen) block library was used for simulation and hardware implementation. Finally, simulation results and hardware implementation performance are presented.
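
    For readers who want to experiment with the underlying code in software, the sketch below exercises RS(255,239) (16 parity symbols, so up to 8 correctable symbol errors per block) using the third-party Python package reedsolo. This is only a behavioural illustration of the code, not the paper's Simulink/Sysgen FPGA design.

```python
# Hedged software sketch of RS(255,239) encode/decode using the third-party
# `reedsolo` package (pip install reedsolo); this is NOT the paper's FPGA design,
# just an illustration of the code's error-correcting behaviour (16 parity
# symbols => up to 8 correctable symbol errors per block).
import os
from reedsolo import RSCodec

rsc = RSCodec(16)                      # 255-byte codeword: 239 data + 16 parity

data = bytearray(os.urandom(239))      # one full-length information block
codeword = rsc.encode(data)            # 255 bytes

# Corrupt up to t = 8 random byte positions.
rx = bytearray(codeword)
for pos in [3, 50, 77, 120, 180, 200, 230, 254]:
    rx[pos] ^= 0xFF

decoded = rsc.decode(rx)
# Newer reedsolo versions return (message, full_codeword, errata_positions);
# older versions return just the message bytearray.
msg = decoded[0] if isinstance(decoded, tuple) else decoded
print(bytes(msg) == bytes(data))       # True if all errors were corrected
```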

  16. Robust tactile sensory responses in finger area of primate motor cortex relevant to prosthetic control

    NASA Astrophysics Data System (ADS)

    Schroeder, Karen E.; Irwin, Zachary T.; Bullard, Autumn J.; Thompson, David E.; Bentley, J. Nicole; Stacey, William C.; Patil, Parag G.; Chestek, Cynthia A.

    2017-08-01

    Objective. Challenges in improving the performance of dexterous upper-limb brain-machine interfaces (BMIs) have prompted renewed interest in quantifying the amount and type of sensory information naturally encoded in the primary motor cortex (M1). Previous single unit studies in monkeys showed M1 is responsive to tactile stimulation, as well as passive and active movement of the limbs. However, recent work in this area has focused primarily on proprioception. Here we examined instead how tactile somatosensation of the hand and fingers is represented in M1. Approach. We recorded multi- and single units and thresholded neural activity from macaque M1 while gently brushing individual finger pads at 2 Hz. We also recorded broadband neural activity from electrocorticogram (ECoG) grids placed on human motor cortex, while applying the same tactile stimulus. Main results. Units displaying significant differences in firing rates between individual fingers (p < 0.05) represented up to 76.7% of sorted multiunits across four monkeys. After normalizing by the number of channels with significant motor finger responses, the percentage of electrodes with significant tactile responses was 74.9% ± 24.7%. No somatotopic organization of finger preference was obvious across cortex, but many units exhibited cosine-like tuning across multiple digits. Sufficient sensory information was present in M1 to correctly decode stimulus position from multiunit activity above chance levels in all monkeys, and also from ECoG gamma power in two human subjects. Significance. These results provide some explanation for difficulties experienced by motor decoders in clinical trials of cortically controlled prosthetic hands, as well as the general problem of disentangling motor and sensory signals in primate motor cortex during dexterous tasks. Additionally, examination of unit tuning during tactile and proprioceptive inputs indicates cells are often tuned differently in different contexts, reinforcing the need for continued refinement of BMI training and decoding approaches for closed-loop BMI systems for dexterous grasping.

  17. Dissociable Decoding of Spatial Attention and Working Memory from EEG Oscillations and Sustained Potentials.

    PubMed

    Bae, Gi-Yeul; Luck, Steven J

    2018-01-10

    In human scalp EEG recordings, both sustained potentials and alpha-band oscillations are present during the delay period of working memory tasks and may therefore reflect the representation of information in working memory. However, these signals may instead reflect support mechanisms rather than the actual contents of memory. In particular, alpha-band oscillations have been tightly tied to spatial attention and may not reflect location-independent memory representations per se. To determine how sustained and oscillating EEG signals are related to attention and working memory, we attempted to decode which of 16 orientations was being held in working memory by human observers (both women and men). We found that sustained EEG activity could be used to decode the remembered orientation of a stimulus, even when the orientation of the stimulus varied independently of its location. Alpha-band oscillations also carried clear information about the location of the stimulus, but they provided little or no information about orientation independently of location. Thus, sustained potentials contain information about the object properties being maintained in working memory, consistent with previous evidence of a tight link between these potentials and working memory capacity. In contrast, alpha-band oscillations primarily carry location information, consistent with their link to spatial attention. SIGNIFICANCE STATEMENT Working memory plays a key role in cognition, and working memory is impaired in several neurological and psychiatric disorders. Previous research has suggested that human scalp EEG recordings contain signals that reflect the neural representation of information in working memory. However, to conclude that a neural signal actually represents the object being remembered, it is necessary to show that the signal contains fine-grained information about that object. Here, we show that sustained voltages in human EEG recordings contain fine-grained information about the orientation of an object being held in memory, consistent with a memory storage signal. Copyright © 2018 the authors 0270-6474/18/380409-14$15.00/0.

  18. Decoded fMRI neurofeedback can induce bidirectional confidence changes within single participants.

    PubMed

    Cortese, Aurelio; Amano, Kaoru; Koizumi, Ai; Lau, Hakwan; Kawato, Mitsuo

    2017-04-01

    Neurofeedback studies using real-time functional magnetic resonance imaging (rt-fMRI) have recently incorporated the multi-voxel pattern decoding approach, allowing for fMRI to serve as a tool to manipulate fine-grained neural activity embedded in voxel patterns. Because of its tremendous potential for clinical applications, certain questions regarding decoded neurofeedback (DecNef) must be addressed. Specifically, can the same participants learn to induce neural patterns in opposite directions in different sessions? If so, how does previous learning affect subsequent induction effectiveness? These questions are critical because neurofeedback effects can last for months, but the short- to mid-term dynamics of such effects are unknown. Here we employed a within-subjects design, where participants underwent two DecNef training sessions to induce behavioural changes of opposing directionality (up or down regulation of perceptual confidence in a visual discrimination task), with the order of training counterbalanced across participants. Behavioral results indicated that the manipulation was strongly influenced by the order and the directionality of neurofeedback training. We applied nonlinear mathematical modeling to parametrize four main consequences of DecNef: main effect of change in confidence, strength of down-regulation of confidence relative to up-regulation, maintenance of learning effects, and anterograde learning interference. Modeling results revealed that DecNef successfully induced bidirectional confidence changes in different sessions within single participants. Furthermore, the effect of up- compared to down-regulation was more prominent, and confidence changes (regardless of the direction) were largely preserved even after a week-long interval. Lastly, the effect of the second session was markedly diminished as compared to the effect of the first session, indicating strong anterograde learning interference. These results are interpreted in the framework of reinforcement learning and provide important implications for its application to basic neuroscience, to occupational and sports training, and to therapy. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.

  19. Decoded fMRI neurofeedback can induce bidirectional confidence changes within single participants

    PubMed Central

    Cortese, Aurelio; Amano, Kaoru; Koizumi, Ai; Lau, Hakwan; Kawato, Mitsuo

    2017-01-01

    Neurofeedback studies using real-time functional magnetic resonance imaging (rt-fMRI) have recently incorporated the multi-voxel pattern decoding approach, allowing for fMRI to serve as a tool to manipulate fine-grained neural activity embedded in voxel patterns. Because of its tremendous potential for clinical applications, certain questions regarding decoded neurofeedback (DecNef) must be addressed. Specifically, can the same participants learn to induce neural patterns in opposite directions in different sessions? If so, how does previous learning affect subsequent induction effectiveness? These questions are critical because neurofeedback effects can last for months, but the short- to mid-term dynamics of such effects are unknown. Here we employed a within-subjects design, where participants underwent two DecNef training sessions to induce behavioural changes of opposing directionality (up or down regulation of perceptual confidence in a visual discrimination task), with the order of training counterbalanced across participants. Behavioral results indicated that the manipulation was strongly influenced by the order and the directionality of neurofeedback training. We applied nonlinear mathematical modeling to parametrize four main consequences of DecNef: main effect of change in confidence, strength of down-regulation of confidence relative to up-regulation, maintenance of learning effects, and anterograde learning interference. Modeling results revealed that DecNef successfully induced bidirectional confidence changes in different sessions within single participants. Furthermore, the effect of up- compared to down-regulation was more prominent, and confidence changes (regardless of the direction) were largely preserved even after a week-long interval. Lastly, the effect of the second session was markedly diminished as compared to the effect of the first session, indicating strong anterograde learning interference. These results are interpreted in the framework of reinforcement learning and provide important implications for its application to basic neuroscience, to occupational and sports training, and to therapy. PMID:28163140

  20. Playing charades in the fMRI: are mirror and/or mentalizing areas involved in gestural communication?

    PubMed

    Schippers, Marleen B; Gazzola, Valeria; Goebel, Rainer; Keysers, Christian

    2009-08-27

    Communication is an important aspect of human life, allowing us to powerfully coordinate our behaviour with that of others. Boiled down to its mere essentials, communication entails transferring a mental content from one brain to another. Spoken language obviously plays an important role in communication between human individuals. Manual gestures however often aid the semantic interpretation of the spoken message, and gestures may have played a central role in the earlier evolution of communication. Here we used the social game of charades to investigate the neural basis of gestural communication by having participants produce and interpret meaningful gestures while their brain activity was measured using functional magnetic resonance imaging. While participants decoded observed gestures, the putative mirror neuron system (pMNS: premotor, parietal and posterior mid-temporal cortex), associated with motor simulation, and the temporo-parietal junction (TPJ), associated with mentalizing and agency attribution, were significantly recruited. Of these areas only the pMNS was recruited during the production of gestures. This suggests that gestural communication relies on a combination of simulation and, during decoding, mentalizing/agency attribution brain areas. Comparing the decoding of gestures with a condition in which participants viewed the same gestures with an instruction not to interpret the gestures showed that although parts of the pMNS responded more strongly during active decoding, most of the pMNS and the TPJ did not show such significant task effects. This suggests that the mere observation of gestures recruits most of the system involved in voluntary interpretation.

  1. Comparison of incoming dental school patients with and without disabilities.

    PubMed

    Stiefel, D J; Truelove, E L; Martin, M D; Mandel, L S

    1997-01-01

    A survey of incoming dental school patients compared 64 adult patients (DECOD) and 73 patients without disability (ND), regarding past dental experience, current needs, and basis for selecting the school's clinics. The responses indicated that, for DECOD patients, clinic selection was based largely on Medicaid acceptance, staff experience, and inability of other dentists to manage their disability; for ND patients, selection was based on lower fee structure. Both groups expressed high treatment need, but the rate was lower for DECOD than for ND patients. More DECOD patients reported severe dental anxiety and adverse effects of dental problems on general health. Chart records revealed that clinical findings exceeded perceived need for both DECOD and ND patients. While both groups had high periodontal disease rates (91%), DECOD patients had significantly poorer oral hygiene and less restorative need than ND patients. The findings suggest differences between persons with disabilities and other patient groups in difficulty of access to dental services in the community, reasons for entering the dental school system, and in presenting treatment need and/or treatment planning.

  2. Deconstructing multivariate decoding for the study of brain function.

    PubMed

    Hebart, Martin N; Baker, Chris I

    2017-08-04

    Multivariate decoding methods were developed originally as tools to enable accurate predictions in real-world applications. The realization that these methods can also be employed to study brain function has led to their widespread adoption in the neurosciences. However, prior to the rise of multivariate decoding, the study of brain function was firmly embedded in a statistical philosophy grounded on univariate methods of data analysis. In this way, multivariate decoding for brain interpretation grew out of two established frameworks: multivariate decoding for predictions in real-world applications, and classical univariate analysis based on the study and interpretation of brain activation. We argue that this led to two confusions, one reflecting a mixture of multivariate decoding for prediction or interpretation, and the other a mixture of the conceptual and statistical philosophies underlying multivariate decoding and classical univariate analysis. Here we attempt to systematically disambiguate multivariate decoding for the study of brain function from the frameworks it grew out of. After elaborating these confusions and their consequences, we describe six, often unappreciated, differences between classical univariate analysis and multivariate decoding. We then focus on how the common interpretation of what is signal and noise changes in multivariate decoding. Finally, we use four examples to illustrate where these confusions may impact the interpretation of neuroimaging data. We conclude with a discussion of potential strategies to help resolve these confusions in interpreting multivariate decoding results, including the potential departure from multivariate decoding methods for the study of brain function. Copyright © 2017. Published by Elsevier Inc.

  3. Real-time SHVC software decoding with multi-threaded parallel processing

    NASA Astrophysics Data System (ADS)

    Gudumasu, Srinivas; He, Yuwen; Ye, Yan; He, Yong; Ryu, Eun-Seok; Dong, Jie; Xiu, Xiaoyu

    2014-09-01

    This paper proposes a parallel decoding framework for scalable HEVC (SHVC). Various optimization technologies are implemented on the basis of SHVC reference software SHM-2.0 to achieve real-time decoding speed for the two layer spatial scalability configuration. SHVC decoder complexity is analyzed with profiling information. The decoding process at each layer and the up-sampling process are designed in parallel and scheduled by a high level application task manager. Within each layer, multi-threaded decoding is applied to accelerate the layer decoding speed. Entropy decoding, reconstruction, and in-loop processing are pipeline designed with multiple threads based on groups of coding tree units (CTU). A group of CTUs is treated as a processing unit in each pipeline stage to achieve a better trade-off between parallelism and synchronization. Motion compensation, inverse quantization, and inverse transform modules are further optimized with SSE4 SIMD instructions. Simulations on a desktop with an Intel Core i7-2600 processor running at 3.4 GHz show that the parallel SHVC software decoder is able to decode 1080p spatial 2x at up to 60 fps (frames per second) and 1080p spatial 1.5x at up to 50 fps for those bitstreams generated with SHVC common test conditions in the JCT-VC standardization group. The decoding performance at various bitrates with different optimization technologies and different numbers of threads is compared in terms of decoding speed and resource usage, including processor and memory.

  4. Speech Intelligibility Predicted from Neural Entrainment of the Speech Envelope.

    PubMed

    Vanthornhout, Jonas; Decruy, Lien; Wouters, Jan; Simon, Jonathan Z; Francart, Tom

    2018-04-01

    Speech intelligibility is currently measured by scoring how well a person can identify a speech signal. The results of such behavioral measures reflect neural processing of the speech signal, but are also influenced by language processing, motivation, and memory. Very often, electrophysiological measures of hearing give insight into the neural processing of sound. However, in most methods, non-speech stimuli are used, making it hard to relate the results to behavioral measures of speech intelligibility. The use of natural running speech as a stimulus in electrophysiological measures of hearing is a paradigm shift that allows us to bridge the gap between behavioral and electrophysiological measures. Here, by decoding the speech envelope from the electroencephalogram, and correlating it with the stimulus envelope, we demonstrate an electrophysiological measure of neural processing of running speech. We show that behaviorally measured speech intelligibility is strongly correlated with our electrophysiological measure. Our results pave the way towards an objective and automatic way of assessing neural processing of speech presented through auditory prostheses, reducing confounds such as attention and cognitive capabilities. We anticipate that our electrophysiological measure will allow better differential diagnosis of the auditory system, and will allow the development of closed-loop auditory prostheses that automatically adapt to individual users.
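
    A common way to implement this kind of envelope decoding is a linear "backward model": regress the stimulus envelope from time-lagged EEG with ridge regression, then report the Pearson correlation between the reconstructed and actual envelopes. The sketch below follows that recipe on synthetic data; the sampling rate, lag window and regularization are placeholder assumptions rather than the authors' exact pipeline.

```python
# Hedged sketch of envelope reconstruction ("backward model"): ridge-regress the
# speech envelope from time-lagged EEG, then correlate the reconstruction with
# the true envelope. Channel count, lag window and the synthetic data are
# placeholders, not the authors' pipeline.
import numpy as np
from scipy.stats import pearsonr
from sklearn.linear_model import Ridge

fs = 64                                   # Hz, typical after down-sampling
n_ch, n_s = 64, fs * 120                  # 64 channels, 2 minutes
rng = np.random.default_rng(1)

# Placeholder "speech envelope" (smoothed positive signal) and toy EEG that
# contains the envelope plus channel noise.
envelope = np.abs(np.convolve(rng.standard_normal(n_s), np.ones(16) / 16, mode="same"))
eeg = 0.5 * envelope[:, None] + rng.standard_normal((n_s, n_ch))

def lag_matrix(x, max_lag):
    """Stack copies of the EEG shifted by 0..max_lag samples so the decoder
    can integrate over a short temporal window."""
    cols = [np.roll(x, lag, axis=0) for lag in range(max_lag + 1)]
    return np.concatenate(cols, axis=1)

X = lag_matrix(eeg, max_lag=int(0.25 * fs))      # ~0-250 ms window
split = n_s // 2
model = Ridge(alpha=1e3).fit(X[:split], envelope[:split])
recon = model.predict(X[split:])

r, _ = pearsonr(recon, envelope[split:])
print(f"reconstruction accuracy (Pearson r): {r:.2f}")
```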

  5. Increasingly complex representations of natural movies across the dorsal stream are shared between subjects.

    PubMed

    Güçlü, Umut; van Gerven, Marcel A J

    2017-01-15

    Recently, deep neural networks (DNNs) have been shown to provide accurate predictions of neural responses across the ventral visual pathway. We here explore whether they also provide accurate predictions of neural responses across the dorsal visual pathway, which is thought to be devoted to motion processing and action recognition. This is achieved by training deep neural networks to recognize actions in videos and subsequently using them to predict neural responses while subjects are watching natural movies. Moreover, we explore whether dorsal stream representations are shared between subjects. In order to address this question, we examine if individual subject predictions can be made in a common representational space estimated via hyperalignment. Results show that a DNN trained for action recognition can be used to accurately predict how the dorsal stream responds to natural movies, revealing a correspondence in representations of DNN layers and dorsal stream areas. It is also demonstrated that models operating in a common representational space can generalize to responses of multiple or even unseen individual subjects to novel spatio-temporal stimuli in both encoding and decoding settings, suggesting that a common representational space underlies dorsal stream responses across multiple subjects. Copyright © 2015 Elsevier Inc. All rights reserved.

  6. Direct migration motion estimation and mode decision to decoder for a low-complexity decoder Wyner-Ziv video coding

    NASA Astrophysics Data System (ADS)

    Lei, Ted Chih-Wei; Tseng, Fan-Shuo

    2017-07-01

    This paper addresses the problem of the high computational complexity of decoding in traditional Wyner-Ziv video coding (WZVC). The key focus is the migration to the decoder of two traditionally computationally complex encoder algorithms, namely motion estimation and mode decision. In order to reduce the computational burden in this process, the proposed architecture adopts the partial boundary matching algorithm and four flexible types of block mode decision at the decoder. This approach does away with the need for motion estimation and mode decision at the encoder. The experimental results show that the proposed padding block-based WZVC not only decreases decoder complexity to approximately one hundredth that of state-of-the-art DISCOVER decoding but also outperforms the DISCOVER codec by up to 3-4 dB.

  7. Representations of Invariant Musical Categories Are Decodable by Pattern Analysis of Locally Distributed BOLD Responses in Superior Temporal and Intraparietal Sulci

    PubMed Central

    Klein, Mike E.; Zatorre, Robert J.

    2015-01-01

    In categorical perception (CP), continuous physical signals are mapped to discrete perceptual bins: mental categories not found in the physical world. CP has been demonstrated across multiple sensory modalities and, in audition, for certain over-learned speech and musical sounds. The neural basis of auditory CP, however, remains ambiguous, including its robustness in nonspeech processes and the relative roles of left/right hemispheres; primary/nonprimary cortices; and ventral/dorsal perceptual processing streams. Here, highly trained musicians listened to 2-tone musical intervals, which they perceive categorically while undergoing functional magnetic resonance imaging. Multivariate pattern analyses were performed after grouping sounds by interval quality (determined by frequency ratio between tones) or pitch height (perceived noncategorically, frequency ratios remain constant). Distributed activity patterns in spheres of voxels were used to determine sound sample identities. For intervals, significant decoding accuracy was observed in the right superior temporal and left intraparietal sulci, with smaller peaks observed homologously in contralateral hemispheres. For pitch height, no significant decoding accuracy was observed, consistent with the non-CP of this dimension. These results suggest that similar mechanisms are operative for nonspeech categories as for speech; espouse roles for 2 segregated processing streams; and support hierarchical processing models for CP. PMID:24488957

  8. A Brain-Machine-Muscle Interface for Restoring Hindlimb Locomotion after Complete Spinal Transection in Rats

    PubMed Central

    Alam, Monzurul; Chen, Xi; Zhang, Zicong; Li, Yan; He, Jufang

    2014-01-01

    A brain-machine interface (BMI) is a neuroprosthetic device that can restore motor function of individuals with paralysis. Although the feasibility of BMI control of upper-limb neuroprostheses has been demonstrated, a BMI for the restoration of lower-limb motor functions has not yet been developed. The objective of this study was to determine if gait-related information can be captured from neural activity recorded from the primary motor cortex of rats, and if this neural information can be used to stimulate paralysed hindlimb muscles after complete spinal cord transection. Neural activity was recorded from the hindlimb area of the primary motor cortex of six female Sprague Dawley rats during treadmill locomotion before and after mid-thoracic transection. Before spinal transection there was a strong association between neural activity and the step cycle. This association decreased after spinal transection. However, the locomotive state (standing vs. walking) could still be successfully decoded from neural recordings made after spinal transection. A novel BMI device was developed that processed this neural information in real-time and used it to control electrical stimulation of paralysed hindlimb muscles. This system was able to elicit hindlimb muscle contractions that mimicked forelimb stepping. We propose this lower-limb BMI as a future neuroprosthesis for human paraplegics. PMID:25084446

  9. Emotion identification and aging: Behavioral and neural age-related changes.

    PubMed

    Gonçalves, Ana R; Fernandes, Carina; Pasion, Rita; Ferreira-Santos, Fernando; Barbosa, Fernando; Marques-Teixeira, João

    2018-05-01

    Aging is known to alter the processing of facial expressions of emotion (FEE); however, the impact of this alteration is less clear. Additionally, there is little information about the temporal dynamics of the neural processing of facial affect. We examined behavioral and neural age-related changes in the identification of FEE using event-related potentials. Furthermore, we analyzed the relationship between behavioral/neural responses and neuropsychological functioning. To this end, 30 younger adults, 29 middle-aged adults and 26 older adults identified FEE. The behavioral results showed a similar performance between groups. The neural results showed no significant differences between groups for the P100 component and an increased N170 amplitude in the older group. Furthermore, a pattern of asymmetric activation was evident in the N170 component. Results also suggest deficits in facial feature decoding abilities, reflected by a reduced N250 amplitude in older adults. Neuropsychological functioning predicts P100 modulation, but does not seem to influence emotion identification ability. The findings suggest the existence of a compensatory function that would explain the age-equivalent performance in emotion identification. The study may help future research addressing behavioral and neural processes involved in the processing of FEE in neurodegenerative conditions. Copyright © 2018 International Federation of Clinical Neurophysiology. Published by Elsevier B.V. All rights reserved.

  10. The Population Tracking Model: A Simple, Scalable Statistical Model for Neural Population Data

    PubMed Central

    O'Donnell, Cian; Gonçalves, J. Tiago; Whiteley, Nick; Portera-Cailliau, Carlos; Sejnowski, Terrence J.

    2017-01-01

    Our understanding of neural population coding has been limited by a lack of analysis methods to characterize spiking data from large populations. The biggest challenge comes from the fact that the number of possible network activity patterns scales exponentially with the number of neurons recorded (∼2^N for N neurons). Here we introduce a new statistical method for characterizing neural population activity that requires semi-independent fitting of only as many parameters as the square of the number of neurons, so that it needs drastically smaller data sets and minimal computation time. The model works by matching the population rate (the number of neurons synchronously active) and the probability that each individual neuron fires given the population rate. We found that this model can accurately fit synthetic data from up to 1000 neurons. We also found that the model could rapidly decode visual stimuli from neural population data from macaque primary visual cortex about 65 ms after stimulus onset. Finally, we used the model to estimate the entropy of neural population activity in developing mouse somatosensory cortex and, surprisingly, found that it first increases, and then decreases during development. This statistical model opens new options for interrogating neural population data and can bolster the use of modern large-scale in vivo Ca2+ and voltage imaging tools. PMID:27870612
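
    The two statistics the model matches are straightforward to estimate by counting, as the sketch below shows on a synthetic raster: the distribution of the population rate p(K) and each neuron's firing probability given K. The within-K normalization and priors used in the actual model are omitted; this is only a minimal illustration of the idea.

```python
# Hedged sketch of the two statistics the population tracking model matches:
# the population-rate distribution p(K) and each neuron's firing probability
# given K. The binary raster is synthetic, and the paper's within-K
# normalization and priors are omitted here.
import numpy as np

rng = np.random.default_rng(2)
n_neurons, n_bins = 50, 20000
raster = (rng.random((n_bins, n_neurons)) < rng.uniform(0.01, 0.2, n_neurons)).astype(int)

K = raster.sum(axis=1)                              # population rate per time bin

# p(K): probability of observing exactly K active neurons in a bin.
p_K = np.bincount(K, minlength=n_neurons + 1) / n_bins

# p(spike_i | K): probability neuron i fires given the population rate,
# estimated by counting (add-one smoothing avoids empty-K divisions).
counts_K = np.bincount(K, minlength=n_neurons + 1).astype(float)
p_i_given_K = np.zeros((n_neurons + 1, n_neurons))
for k in range(n_neurons + 1):
    mask = K == k
    p_i_given_K[k] = (raster[mask].sum(axis=0) + 1) / (counts_K[k] + 2)

# Toy log-probability of a binary pattern x (treating neurons as independent
# given K, which is a simplification relative to the published model).
def pattern_logprob(x):
    k = int(x.sum())
    p = p_i_given_K[k]
    return np.log(p_K[k] + 1e-12) + np.sum(x * np.log(p) + (1 - x) * np.log(1 - p))

print(pattern_logprob(raster[0]))
```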

  11. Unscented Kalman Filter for Brain-Machine Interfaces

    PubMed Central

    Li, Zheng; O'Doherty, Joseph E.; Hanson, Timothy L.; Lebedev, Mikhail A.; Henriquez, Craig S.; Nicolelis, Miguel A. L.

    2009-01-01

    Brain machine interfaces (BMIs) are devices that convert neural signals into commands to directly control artificial actuators, such as limb prostheses. Previous real-time methods applied to decoding behavioral commands from the activity of populations of neurons have generally relied upon linear models of neural tuning and were limited in the way they used the abundant statistical information contained in the movement profiles of motor tasks. Here, we propose an n-th order unscented Kalman filter which implements two key features: (1) use of a non-linear (quadratic) model of neural tuning which describes neural activity significantly better than commonly-used linear tuning models, and (2) augmentation of the movement state variables with a history of n-1 recent states, which improves prediction of the desired command even before incorporating neural activity information and allows the tuning model to capture relationships between neural activity and movement at multiple time offsets simultaneously. This new filter was tested in BMI experiments in which rhesus monkeys used their cortical activity, recorded through chronically implanted multielectrode arrays, to directly control computer cursors. The 10th order unscented Kalman filter outperformed the standard Kalman filter and the Wiener filter in both off-line reconstruction of movement trajectories and real-time, closed-loop BMI operation. PMID:19603074
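
    The two key features above can be sketched with the filterpy library's unscented Kalman filter: an augmented state that carries one past kinematic sample, and a quadratic tuning (observation) model mapping the state to firing rates. The tuning coefficients, dimensions and noise levels below are placeholder assumptions, not values from the study.

```python
# Hedged sketch of the two ingredients described above, using filterpy's UKF:
# (a) a state augmented with one past kinematic sample and (b) a quadratic
# neural tuning (observation) model. Coefficients, dimensions and noise levels
# are placeholder assumptions, not the published filter.
import numpy as np
from filterpy.kalman import UnscentedKalmanFilter, MerweScaledSigmaPoints

rng = np.random.default_rng(3)
n_units, dt = 30, 0.1
dim_kin = 4                                     # x, y position and velocity
dim_x = 2 * dim_kin                             # current + one past kinematic state

def fx(x, dt):
    """Shift the history: the current block evolves with constant velocity,
    and the old current block becomes the 'past' block."""
    cur = x[:dim_kin]
    new = cur.copy()
    new[:2] += dt * cur[2:]                     # position += velocity * dt
    return np.concatenate([new, cur])

# Random quadratic tuning: rate = b0 + B1 @ [x, x**2].
B1 = rng.normal(0, 0.3, (n_units, 2 * dim_x))
b0 = rng.uniform(5, 15, n_units)

def hx(x):
    feats = np.concatenate([x, x ** 2])         # quadratic tuning features
    return b0 + B1 @ feats

points = MerweScaledSigmaPoints(n=dim_x, alpha=1e-3, beta=2.0, kappa=0.0)
ukf = UnscentedKalmanFilter(dim_x=dim_x, dim_z=n_units, dt=dt,
                            fx=fx, hx=hx, points=points)
ukf.x = np.zeros(dim_x)
ukf.Q *= 0.01
ukf.R = np.eye(n_units) * 4.0

# Simulate a true constant-velocity trajectory and its noisy firing rates.
true = np.array([0.0, 0.0, 1.0, 0.5, 0.0, 0.0, 0.0, 0.0])
for _ in range(50):
    true = fx(true, dt)
    z = hx(true) + rng.normal(0, 2.0, n_units)
    ukf.predict()
    ukf.update(z)
print("true position:", true[:2], " decoded:", ukf.x[:2])
```

    With filterpy, predict() and update() handle the sigma-point propagation; only fx and hx encode the modeling choices (state augmentation and quadratic tuning) highlighted in the abstract.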

  12. A brain-machine-muscle interface for restoring hindlimb locomotion after complete spinal transection in rats.

    PubMed

    Alam, Monzurul; Chen, Xi; Zhang, Zicong; Li, Yan; He, Jufang

    2014-01-01

    A brain-machine interface (BMI) is a neuroprosthetic device that can restore motor function of individuals with paralysis. Although the feasibility of BMI control of upper-limb neuroprostheses has been demonstrated, a BMI for the restoration of lower-limb motor functions has not yet been developed. The objective of this study was to determine if gait-related information can be captured from neural activity recorded from the primary motor cortex of rats, and if this neural information can be used to stimulate paralysed hindlimb muscles after complete spinal cord transection. Neural activity was recorded from the hindlimb area of the primary motor cortex of six female Sprague Dawley rats during treadmill locomotion before and after mid-thoracic transection. Before spinal transection there was a strong association between neural activity and the step cycle. This association decreased after spinal transection. However, the locomotive state (standing vs. walking) could still be successfully decoded from neural recordings made after spinal transection. A novel BMI device was developed that processed this neural information in real-time and used it to control electrical stimulation of paralysed hindlimb muscles. This system was able to elicit hindlimb muscle contractions that mimicked forelimb stepping. We propose this lower-limb BMI as a future neuroprosthesis for human paraplegics.

  13. Sum of the Magnitude for Hard Decision Decoding Algorithm Based on Loop Update Detection.

    PubMed

    Meng, Jiahui; Zhao, Danfeng; Tian, Hai; Zhang, Liang

    2018-01-15

    In order to improve the performance of hard-decision decoding for non-binary low-density parity-check (LDPC) codes and to reduce decoding complexity, a sum-of-the-magnitude hard-decision decoding algorithm based on loop update detection is proposed. This is also intended to help ensure the reliability, stability and high transmission rate required for 5G mobile communication. The algorithm is based on the hard-decision decoding algorithm (HDA) and uses the soft information from the channel to calculate reliability, while the sum of the variable nodes' (VN) magnitudes is excluded when computing the reliability of the parity checks. At the same time, the reliability information of the variable node is considered and a loop update detection algorithm is introduced. Bits of the erroneous code word are flipped multiple times, with candidates searched in order of most likely error probability, to finally find the correct code word. Simulation results show that the performance of one of the improved schemes is better than that of the weighted symbol flipping (WSF) algorithm under different hexadecimal numbers, by about 2.2 dB and 2.35 dB respectively, at a bit error rate (BER) of 10^-5 over an additive white Gaussian noise (AWGN) channel. Furthermore, the average number of decoding iterations is significantly reduced.
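
    For orientation, the sketch below shows the simplest form of hard-decision decoding by bit flipping: a binary, Gallager-style flip of the bits involved in the most unsatisfied parity checks. It is not the paper's non-binary sum-of-the-magnitude algorithm and omits the channel-reliability weighting and loop update detection; it only illustrates the general flip-and-recheck loop.

```python
# Hedged illustration of hard-decision decoding by bit flipping (binary,
# Gallager-style). This is NOT the paper's non-binary sum-of-the-magnitude
# algorithm; it omits soft reliability and loop update detection.
import numpy as np

# Parity-check matrix of a tiny (7,4) code: rows = checks, columns = bits.
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]], dtype=int)

def bit_flip_decode(y, H, max_iter=20):
    y = y.copy()
    for _ in range(max_iter):
        syndrome = H @ y % 2
        if not syndrome.any():
            return y, True                      # all parity checks satisfied
        # Count, for each bit, how many unsatisfied checks it participates in.
        unsat = syndrome @ H
        worst = unsat == unsat.max()
        y[worst] ^= 1                           # flip the most suspect bit(s)
    return y, False

codeword = np.zeros(7, dtype=int)               # the all-zero word is a codeword
rx = codeword.copy()
rx[2] ^= 1                                      # single bit error on the channel
decoded, ok = bit_flip_decode(rx, H)
print(decoded, ok)
```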

  14. Closed-Loop Estimation of Retinal Network Sensitivity by Local Empirical Linearization

    PubMed Central

    2018-01-01

    Understanding how sensory systems process information depends crucially on identifying which features of the stimulus drive the response of sensory neurons, and which ones leave their response invariant. This task is made difficult by the many nonlinearities that shape sensory processing. Here, we present a novel perturbative approach to understand information processing by sensory neurons, where we linearize their collective response locally in stimulus space. We added small perturbations to reference stimuli and tested if they triggered visible changes in the responses, adapting their amplitude according to the previous responses with closed-loop experiments. We developed a local linear model that accurately predicts the sensitivity of the neural responses to these perturbations. Applying this approach to the rat retina, we estimated the optimal performance of a neural decoder and showed that the nonlinear sensitivity of the retina is consistent with an efficient encoding of stimulus information. Our approach can be used to characterize experimentally the sensitivity of neural systems to external stimuli locally, quantify experimentally the capacity of neural networks to encode sensory information, and relate their activity to behavior. PMID:29379871

  15. Decoding attended information in short-term memory: an EEG study.

    PubMed

    LaRocque, Joshua J; Lewis-Peacock, Jarrod A; Drysdale, Andrew T; Oberauer, Klaus; Postle, Bradley R

    2013-01-01

    For decades it has been assumed that sustained, elevated neural activity--the so-called active trace--is the neural correlate of the short-term retention of information. However, a recent fMRI study has suggested that this activity may be more related to attention than to retention. Specifically, a multivariate pattern analysis failed to find evidence that information that was outside the focus of attention, but nonetheless in STM, was retained in an active state. Here, we replicate and extend this finding by querying the neural signatures of attended versus unattended information within STM with electroencephalography (EEG), a method sensitive to oscillatory neural activity to which the previous fMRI study was insensitive. We demonstrate that in the delay-period EEG activity, there is information only about memory items that are also in the focus of attention. Information about items outside the focus of attention is not detectable. This result converges with the fMRI findings to suggest that, contrary to conventional wisdom, an active memory trace may be unnecessary for the short-term retention of information.

  16. A Novel Experimental and Analytical Approach to the Multimodal Neural Decoding of Intent During Social Interaction in Freely-behaving Human Infants.

    PubMed

    Cruz-Garza, Jesus G; Hernandez, Zachery R; Tse, Teresa; Caducoy, Eunice; Abibullaev, Berdakh; Contreras-Vidal, Jose L

    2015-10-04

    Understanding typical and atypical development remains one of the fundamental questions in developmental human neuroscience. Traditionally, experimental paradigms and analysis tools have been limited to constrained laboratory tasks and contexts due to technical limitations imposed by the available set of measuring and analysis techniques and the age of the subjects. These constraints severely limit the study of developmental neural dynamics and associated neural networks engaged in cognition, perception and action in infants performing "in action and in context". This protocol presents a novel approach to study infants and young children as they freely organize their own behavior, and its consequences in a complex, partly unpredictable and highly dynamic environment. The proposed methodology integrates synchronized high-density active scalp electroencephalography (EEG), inertial measurement units (IMUs), video recording and behavioral analysis to capture brain activity and movement non-invasively in freely-behaving infants. This setup allows for the study of neural network dynamics in the developing brain, in action and context, as these networks are recruited during goal-oriented, exploration and social interaction tasks.

  17. Securing information display by use of visual cryptography.

    PubMed

    Yamamoto, Hirotsugu; Hayasaki, Yoshio; Nishida, Nobuo

    2003-09-01

    We propose a secure display technique based on visual cryptography. The proposed technique ensures the security of visual information. The display employs a decoding mask based on visual cryptography. Without the decoding mask, the displayed information cannot be viewed. The viewing zone is limited by the decoding mask so that only one person can view the information. We have developed a set of encryption codes to maintain the designed viewing zone and have demonstrated a display that provides a limited viewing zone.
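
    The classic (2,2) visual cryptography construction behind such a decoding mask is easy to sketch: each secret pixel is expanded into a 2x2 subpixel pattern in one share and either the same or the complementary pattern in the other, so that physically stacking the shares reveals the secret as a contrast difference. The code below is a generic illustration of that construction, not the authors' display hardware or encryption codes.

```python
# Hedged sketch of classic (2,2) visual cryptography: the secret image is split
# into a random share and a decoding-mask share; stacking them (logical OR of
# black subpixels) reveals the secret by contrast. The display hardware in the
# paper is not modeled here.
import numpy as np

rng = np.random.default_rng(4)

# Two complementary 2x2 subpixel patterns (1 = black subpixel).
P0 = np.array([[1, 0], [0, 1]])
P1 = np.array([[0, 1], [1, 0]])

def make_shares(secret):
    """secret: 2-D array of 0 (white) / 1 (black) pixels."""
    h, w = secret.shape
    share1 = np.zeros((2 * h, 2 * w), dtype=int)
    share2 = np.zeros_like(share1)
    for i in range(h):
        for j in range(w):
            pat = P0 if rng.random() < 0.5 else P1          # random pattern per pixel
            share1[2*i:2*i+2, 2*j:2*j+2] = pat
            # White pixel: same pattern (stack shows 2 black subpixels).
            # Black pixel: complementary pattern (stack shows 4 black subpixels).
            other = pat if secret[i, j] == 0 else 1 - pat
            share2[2*i:2*i+2, 2*j:2*j+2] = other
    return share1, share2

secret = np.array([[1, 0, 1],
                   [0, 1, 0]])
s1, s2 = make_shares(secret)
stacked = s1 | s2                                           # physical overlay
# Each secret pixel is recovered from the black-subpixel count of its 2x2 block.
recovered = stacked.reshape(2, 2, 3, 2).sum(axis=(1, 3)) == 4
print(recovered.astype(int))
```

    White pixels stack to two black subpixels out of four and black pixels to four, which is the contrast that the eye (or the threshold in the last line) exploits.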

  18. DeepSAT's CloudCNN: A Deep Neural Network for Rapid Cloud Detection from Geostationary Satellites

    NASA Astrophysics Data System (ADS)

    Kalia, S.; Li, S.; Ganguly, S.; Nemani, R. R.

    2017-12-01

    Cloud and cloud shadow detection has important applications in weather and climate studies. It is even more crucial when we introduce geostationary satellites into the field of terrestrial remote sensing. Given the challenges associated with data acquired at very high frequency (10-15 min per scan), the ability to derive an accurate cloud/shadow mask from geostationary satellite data is critical. The key to success for most of the existing algorithms depends on spatially and temporally varying thresholds, which better capture local atmospheric and surface effects. However, the selection of a proper threshold is difficult and may lead to erroneous results. In this work, we propose a deep neural network based approach called CloudCNN to classify cloud/shadow from Himawari-8 AHI and GOES-16 ABI multispectral data. DeepSAT's CloudCNN consists of an encoder-decoder based architecture for binary-class pixel-wise segmentation. We train CloudCNN on a multi-GPU NVIDIA DevBox cluster, and deploy the prediction pipeline on the NASA Earth Exchange (NEX) Pleiades supercomputer. We achieved an overall accuracy of 93.29% on test samples. Since the predictions take only a few seconds to segment a full multispectral GOES-16 or Himawari-8 Full Disk image, the developed framework can be used for real-time cloud detection, cyclone detection, or extreme weather event prediction.
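
    A minimal encoder-decoder network for binary per-pixel segmentation, in the spirit of the architecture described above, can be sketched in a few lines of PyTorch. Layer sizes, the 16-band input and the toy batch are assumptions for illustration, not the authors' CloudCNN configuration.

```python
# Hedged sketch of an encoder-decoder CNN for binary (cloud / no-cloud)
# pixel-wise segmentation. Layer sizes, band count and the toy batch are
# illustrative assumptions, not the authors' CloudCNN.
import torch
import torch.nn as nn

class TinyEncoderDecoder(nn.Module):
    def __init__(self, in_bands=16):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(in_bands, 32, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                                 # /2
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                                 # /4
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 2, stride=2), nn.ReLU(),   # x2
            nn.ConvTranspose2d(32, 16, 2, stride=2), nn.ReLU(),   # x4
            nn.Conv2d(16, 1, 1),                             # per-pixel cloud logit
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = TinyEncoderDecoder()
imgs = torch.randn(2, 16, 128, 128)          # toy multispectral tiles
mask = (torch.rand(2, 1, 128, 128) > 0.5).float()
loss = nn.BCEWithLogitsLoss()(model(imgs), mask)
loss.backward()
print(float(loss))
```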

  19. Single-trial analysis of the neural correlates of speech quality perception.

    PubMed

    Porbadnigk, Anne K; Treder, Matthias S; Blankertz, Benjamin; Antons, Jan-Niklas; Schleicher, Robert; Möller, Sebastian; Curio, Gabriel; Müller, Klaus-Robert

    2013-10-01

    Assessing speech quality perception is a challenge typically addressed in behavioral and opinion-seeking experiments. Only recently, neuroimaging methods were introduced, which were used to study the neural processing of quality at group level. However, our electroencephalography (EEG) studies show that the neural correlates of quality perception are highly individual. Therefore, it became necessary to establish dedicated machine learning methods for decoding subject-specific effects. The effectiveness of our methods is shown by the data of an EEG study that investigates how the quality of spoken vowels is processed neurally. Participants were asked to indicate whether they had perceived a degradation of quality (signal-correlated noise) in vowels, presented in an oddball paradigm. We find that the P3 amplitude is attenuated with increasing noise. Single-trial analysis allows one to show that this is partly due to an increasing jitter of the P3 component. A novel classification approach helps to detect trials with presumably non-conscious processing at the threshold of perception. We show that this approach uncovers a non-trivial confounder between neural hits and neural misses. The combined use of EEG signals and machine learning methods results in a significant 'neural' gain in sensitivity (in processing quality loss) when compared to standard behavioral evaluation; averaged over 11 subjects, this amounts to a relative improvement in sensitivity of 35%.

  20. A novel coherent optical en/decoder for optical label processing of OCDM-based optical packets switching networks

    NASA Astrophysics Data System (ADS)

    Zhang, Chongfu; Qiu, Kun

    2007-11-01

    A coherent optical en/decoder based on photonic crystal (PhC) technology for optical code-division multiplexing (OCDM)-based optical label (OCDM-OL) optical packet switching (OPS) networks is proposed in this paper. In this scheme, the optical pulse phase and time delay can be flexibly controlled by the photonic crystal phase shifter and delay element through appropriate fabrication design. In this design, the combined calculation of the impurity and normal period layers is applied, according to the PhC transmission matrix theorem. The work focuses mainly on the design and theoretical analysis of the PhC-based coherent optical en/decoder. In addition, the performance of the PhC-based optical en/decoders is analyzed in detail. The reflection, transmission, delay characteristics and the optical spectrum of the en/decoded pulses are studied for waves tuned in the photonic band gap by numerical calculation, considering a one-dimensional (1D) PhC. Theoretical analysis and numerical results show that proper phase modulation and time delay of the optical pulse are achieved by the proposed scheme, that the OCDM-based optical label is successfully rewritten with a new code for OCDM-based OPS (OCDM-OPS), and that a ratio of auto- to cross-correlation of over 8.5 dB is obtained, which demonstrates the applicability of true pulse phase modulation in a number of applications.

  1. A test of the role of the medial temporal lobe in single-word decoding.

    PubMed

    Osipowicz, Karol; Rickards, Tyler; Shah, Atif; Sharan, Ashwini; Sperling, Michael; Kahn, Waseem; Tracy, Joseph

    2011-01-15

    The degree to which the MTL system contributes to effective language skills is not well delineated. We sought to determine if the MTL plays a role in single-word decoding in healthy, normal skilled readers. The experiment follows from the implications of the dual-process model of single-word decoding, which provides distinct predictions about the nature of MTL involvement. The paradigm utilized word (regular and irregularly spelled words) and pseudoword (phonetically regular) stimuli that differed in their demand for non-lexical as opposed to lexical decoding. The data clearly showed that the MTL system was not involved in single word decoding in skilled, native English readers. Neither the hippocampus nor the MTL system as a whole showed significant activation during lexical or non-lexical based decoding. The results provide evidence that lexical and non-lexical decoding are implemented by distinct but overlapping neuroanatomical networks. Non-lexical decoding appeared most uniquely associated with cuneus and fusiform gyrus activation biased toward the left hemisphere. In contrast, lexical decoding appeared associated with right middle frontal and supramarginal, and bilateral cerebellar activation. Both these decoding operations appeared in the context of a shared widespread network of activations including bilateral occipital cortex and superior frontal regions. These activations suggest that the absence of MTL involvement in either lexical or non-lexical decoding appears likely a function of the skilled reading ability of our sample such that whole-word recognition and retrieval processes do not utilize the declarative memory system, in the case of lexical decoding, and require only minimal analysis and recombination of the phonetic elements of a word, in the case of non-lexical decoding. Copyright © 2010 Elsevier Inc. All rights reserved.

  2. A Test of the Role of the Medial Temporal Lobe in Single-Word Decoding

    PubMed Central

    Osipowicz, Karol; Rickards, Tyler; Shah, Atif; Sharan, Ashwini; Sperling, Michael; Kahn, Waseem; Tracy, Joseph

    2012-01-01

    The degree to which the MTL system contributes to effective language skills is not well delineated. We sought to determine if the MTL plays a role in single-word decoding in healthy, normal skilled readers. The experiment follows from the implications of the dual-process model of single-word decoding, which provides distinct predictions about the nature of MTL involvement. The paradigm utilized word (regular and irregularly spelled words) and pseudoword (phonetically regular) stimuli that differed in their demand for non-lexical as opposed to lexical decoding. The data clearly showed that the MTL system was not involved in single word decoding in skilled, native English readers. Neither the hippocampus nor the MTL system as a whole showed significant activation during lexical or non-lexical based decoding. The results provide evidence that lexical and non-lexical decoding are implemented by distinct but overlapping neuroanatomical networks. Non-lexical decoding appeared most uniquely associated with cuneus and fusiform gyrus activation biased toward the left hemisphere. In contrast, lexical decoding appeared associated with right middle frontal and supramarginal, and bilateral cerebellar activation. Both these decoding operations appeared in the context of a shared widespread network of activations including bilateral occipital cortex and superior frontal regions. These activations suggest that the absence of MTL involvement in either lexical or non-lexical decoding appears likely a function of the skilled reading ability of our sample such that whole-word recognition and retrieval processes do not utilize the declarative memory system, in the case of lexical decoding, and require only minimal analysis and recombination of the phonetic elements of a word, in the case of non-lexical decoding. PMID:20884357

  3. Approximate maximum likelihood decoding of block codes

    NASA Technical Reports Server (NTRS)

    Greenberger, H. J.

    1979-01-01

    Approximate maximum likelihood decoding algorithms, based upon selecting a small set of candidate code words with the aid of the estimated probability of error of each received symbol, can give performance close to optimum with a reasonable amount of computation. By combining the best features of various algorithms and taking care to perform each step as efficiently as possible, a decoding scheme was developed that can decode codes with better performance than those presently in use, yet does not require an unreasonable amount of computation. The discussion of the details and tradeoffs of presently known efficient optimum and near-optimum decoding algorithms leads naturally to the one that embodies the best features of all of them.
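
    The candidate-selection idea can be illustrated with a Chase-like toy decoder: flip combinations of the least reliable received symbols, hard-decode each trial word, and keep the candidate codeword closest to the soft received values. The (7,4) Hamming code and parameters below are illustrative of the general technique, not the specific scheme developed in the report.

```python
# Hedged sketch of the candidate-list ("Chase-like") idea behind approximate
# maximum-likelihood decoding: flip combinations of the least reliable received
# symbols, hard-decode each candidate, keep the codeword closest to the soft
# values. Toy (7,4) Hamming code; not the report's specific algorithm.
import itertools
import numpy as np

H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]], dtype=int)
G = np.array([[1, 0, 0, 0, 1, 1, 0],
              [0, 1, 0, 0, 1, 0, 1],
              [0, 0, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]], dtype=int)

def hard_decode(y):
    """Single-error syndrome decoding for the Hamming(7,4) code."""
    s = H @ y % 2
    if s.any():
        err_pos = int(np.where((H.T == s).all(axis=1))[0][0])
        y = y.copy()
        y[err_pos] ^= 1
    return y

rng = np.random.default_rng(5)
msg = rng.integers(0, 2, 4)
cw = msg @ G % 2
soft = (1 - 2 * cw) + rng.normal(0, 0.8, 7)      # BPSK (+1/-1) plus noise
hard = (soft < 0).astype(int)

# Least reliable positions = smallest |soft value|.
L = 3
weak = np.argsort(np.abs(soft))[:L]

best, best_metric = None, np.inf
for flips in itertools.product([0, 1], repeat=L):   # 2**L test patterns
    trial = hard.copy()
    trial[weak] ^= np.array(flips)
    cand = hard_decode(trial)
    metric = np.sum((soft - (1 - 2 * cand)) ** 2)   # distance to soft values
    if metric < best_metric:
        best, best_metric = cand, metric

print("transmitted:", cw, " decoded:", best, " correct:", bool((best == cw).all()))
```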

  4. Mathematics is differentially related to reading comprehension and word decoding: Evidence from a genetically-sensitive design

    PubMed Central

    Harlaar, Nicole; Kovas, Yulia; Dale, Philip S.; Petrill, Stephen A.; Plomin, Robert

    2013-01-01

    Although evidence suggests that individual differences in reading and mathematics skills are correlated, this relationship has typically only been studied in relation to word decoding or global measures of reading. It is unclear whether mathematics is differentially related to word decoding and reading comprehension. The current study examined these relationships at both a phenotypic and etiological level in a population-based cohort of 5162 twin pairs at age 12. Multivariate genetic analyses of latent phenotypic factors of mathematics, word decoding and reading comprehension revealed substantial genetic and shared environmental correlations among all three domains. However, the phenotypic and genetic correlations between mathematics and reading comprehension were significantly greater than between mathematics and word decoding. Independent of mathematics, there was also evidence for genetic and nonshared environmental links between word decoding and reading comprehension. These findings indicate that word decoding and reading comprehension have partly distinct relationships with mathematics in the middle school years. PMID:24319294

  5. Decoding brain activity using a large-scale probabilistic functional-anatomical atlas of human cognition

    PubMed Central

    Jones, Michael N.

    2017-01-01

    A central goal of cognitive neuroscience is to decode human brain activity—that is, to infer mental processes from observed patterns of whole-brain activation. Previous decoding efforts have focused on classifying brain activity into a small set of discrete cognitive states. To attain maximal utility, a decoding framework must be open-ended, systematic, and context-sensitive—that is, capable of interpreting numerous brain states, presented in arbitrary combinations, in light of prior information. Here we take steps towards this objective by introducing a probabilistic decoding framework based on a novel topic model—Generalized Correspondence Latent Dirichlet Allocation—that learns latent topics from a database of over 11,000 published fMRI studies. The model produces highly interpretable, spatially-circumscribed topics that enable flexible decoding of whole-brain images. Importantly, the Bayesian nature of the model allows one to “seed” decoder priors with arbitrary images and text—enabling researchers, for the first time, to generate quantitative, context-sensitive interpretations of whole-brain patterns of brain activity. PMID:29059185

  6. Mathematics is differentially related to reading comprehension and word decoding: Evidence from a genetically-sensitive design.

    PubMed

    Harlaar, Nicole; Kovas, Yulia; Dale, Philip S; Petrill, Stephen A; Plomin, Robert

    2012-08-01

    Although evidence suggests that individual differences in reading and mathematics skills are correlated, this relationship has typically only been studied in relation to word decoding or global measures of reading. It is unclear whether mathematics is differentially related to word decoding and reading comprehension. The current study examined these relationships at both a phenotypic and etiological level in a population-based cohort of 5162 twin pairs at age 12. Multivariate genetic analyses of latent phenotypic factors of mathematics, word decoding and reading comprehension revealed substantial genetic and shared environmental correlations among all three domains. However, the phenotypic and genetic correlations between mathematics and reading comprehension were significantly greater than between mathematics and word decoding. Independent of mathematics, there was also evidence for genetic and nonshared environmental links between word decoding and reading comprehension. These findings indicate that word decoding and reading comprehension have partly distinct relationships with mathematics in the middle school years.

  7. An Animal Model for Collective Behavior in Humans: The Impact of Manipulated Trust and Aggression

    DTIC Science & Technology

    2014-04-30

    …grouped with males compared with females grouped with females. This raises the issue of the reciprocal impact of gender on the response to owl attack… [fragmentary record; partial reference list:] …337–342; Eckstein MP, Das K, Pham BT, Peterson MF, Abbey CK, Sy JL, Giesbrecht B (2012) Neural decoding of collective wisdom with multi-brain …; …spiny mice flee in alternating patterns. Behav Brain Res 155:207–216; Eilam D (2003) Open-field behavior withstands drastic changes in arena size. Behav …

  8. Information encoder/decoder using chaotic systems

    DOEpatents

    Miller, Samuel Lee; Miller, William Michael; McWhorter, Paul Jackson

    1997-01-01

    The present invention discloses a chaotic system-based information encoder and decoder that operates according to a relationship defining a chaotic system. Encoder input signals modify the dynamics of the chaotic system comprising the encoder. The modifications result in chaotic, encoder output signals that contain the encoder input signals encoded within them. The encoder output signals are then capable of secure transmissions using conventional transmission techniques. A decoder receives the encoder output signals (i.e., decoder input signals) and inverts the dynamics of the encoding system to directly reconstruct the original encoder input signals.
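
    As a rough intuition for chaos-based encoding (not the patented dynamics-modulation scheme itself), the sketch below uses simple chaotic masking: a logistic-map carrier generated from a shared seed hides a small-amplitude message, and the receiver regenerates the same carrier and subtracts it. All parameters are illustrative.

        import numpy as np

        def logistic_stream(x0, n, r=3.99):
            """Chaotic carrier from the logistic map x[k+1] = r*x[k]*(1 - x[k])."""
            x = np.empty(n)
            x[0] = x0
            for k in range(1, n):
                x[k] = r * x[k - 1] * (1.0 - x[k - 1])
            return x

        rng = np.random.default_rng(0)
        message = 0.01 * rng.uniform(-1, 1, 200)             # small-amplitude input signal
        carrier = logistic_stream(0.37, 200)                  # shared secret seed
        transmitted = carrier + message                       # encoder output looks chaotic
        recovered = transmitted - logistic_stream(0.37, 200)  # decoder regenerates the carrier
        print(np.allclose(recovered, message))                # True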

  9. Information encoder/decoder using chaotic systems

    DOEpatents

    Miller, S.L.; Miller, W.M.; McWhorter, P.J.

    1997-10-21

    The present invention discloses a chaotic system-based information encoder and decoder that operates according to a relationship defining a chaotic system. Encoder input signals modify the dynamics of the chaotic system comprising the encoder. The modifications result in chaotic, encoder output signals that contain the encoder input signals encoded within them. The encoder output signals are then capable of secure transmissions using conventional transmission techniques. A decoder receives the encoder output signals (i.e., decoder input signals) and inverts the dynamics of the encoding system to directly reconstruct the original encoder input signals. 32 figs.

  10. Evidence for similar patterns of neural activity elicited by picture- and word-based representations of natural scenes.

    PubMed

    Kumar, Manoj; Federmeier, Kara D; Fei-Fei, Li; Beck, Diane M

    2017-07-15

    A long-standing core question in cognitive science is whether different modalities and representation types (pictures, words, sounds, etc.) access a common store of semantic information. Although different input types have been shown to activate a shared network of brain regions, this does not necessitate that there is a common representation, as the neurons in these regions could still differentially process the different modalities. However, multi-voxel pattern analysis can be used to assess whether, e.g., pictures and words evoke a similar pattern of activity, such that the patterns that separate categories in one modality transfer to the other. Prior work using this method has found support for a common code, but has two limitations: prior studies have either examined only disparate categories (e.g. animals vs. tools) that are known to activate different brain regions, raising the possibility that the pattern separation and inferred similarity reflect only large-scale differences between the categories, or they have been limited to individual object representations. By using natural scene categories, we not only extend the current literature on cross-modal representations beyond objects, but also, because natural scene categories activate a common set of brain regions, we identify a more fine-grained (i.e. higher spatial resolution) common representation. Specifically, we studied picture- and word-based representations of natural scene stimuli from four different categories: beaches, cities, highways, and mountains. Participants passively viewed blocks of either phrases (e.g. "sandy beach") describing scenes or photographs from those same scene categories. To determine whether the phrases and pictures evoke a common code, we asked whether a classifier trained on one stimulus type (e.g. phrase stimuli) would transfer (i.e. cross-decode) to the other stimulus type (e.g. picture stimuli). The analysis revealed cross-decoding in the occipitotemporal, posterior parietal and frontal cortices. This similarity of neural activity patterns across the two input types, for categories that co-activate local brain regions, provides strong evidence of a common semantic code for pictures and words in the brain. Copyright © 2017 Elsevier Inc. All rights reserved.
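
    The cross-decoding logic (train a classifier on one stimulus type, test on the other) can be sketched on synthetic data as below. The voxel patterns are simulated from shared category templates, so the numbers mean nothing empirically; the point is only the train-on-words/test-on-pictures structure.

        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(0)
        n_vox, n_blocks = 50, 40
        templates = rng.normal(size=(4, n_vox))      # one pattern per scene category

        def simulate_modality(noise=1.0):
            X = np.vstack([t + noise * rng.normal(size=(n_blocks, n_vox)) for t in templates])
            y = np.repeat(np.arange(4), n_blocks)    # beach, city, highway, mountain
            return X, y

        X_words, y_words = simulate_modality()       # stand-in for phrase-evoked patterns
        X_pics, y_pics = simulate_modality()         # stand-in for picture-evoked patterns

        clf = LogisticRegression(max_iter=2000).fit(X_words, y_words)
        print("cross-decoding accuracy:", clf.score(X_pics, y_pics))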

  11. Good Trellises for IC Implementation of Viterbi Decoders for Linear Block Codes

    NASA Technical Reports Server (NTRS)

    Moorthy, Hari T.; Lin, Shu; Uehara, Gregory T.

    1997-01-01

    This paper investigates trellis structures of linear block codes for the integrated circuit (IC) implementation of Viterbi decoders capable of achieving high decoding speed while satisfying a constraint on the structural complexity of the trellis in terms of the maximum number of states at any particular depth. Only uniform sectionalizations of the code trellis diagram are considered. An upper-bound on the number of parallel and structurally identical (or isomorphic) subtrellises in a proper trellis for a code without exceeding the maximum state complexity of the minimal trellis of the code is first derived. Parallel structures of trellises with various section lengths for binary BCH and Reed-Muller (RM) codes of lengths 32 and 64 are analyzed. Next, the complexity of IC implementation of a Viterbi decoder based on an L-section trellis diagram for a code is investigated. A structural property of a Viterbi decoder called add-compare-select (ACS)-connectivity which is related to state connectivity is introduced. This parameter affects the complexity of wire-routing (interconnections within the IC). The effect of five parameters namely: (1) effective computational complexity; (2) complexity of the ACS-circuit; (3) traceback complexity; (4) ACS-connectivity; and (5) branch complexity of a trellis diagram on the very large scale integration (VLSI) complexity of a Viterbi decoder is investigated. It is shown that an IC implementation of a Viterbi decoder based on a nonminimal trellis requires less area and is capable of operation at higher speed than one based on the minimal trellis when the commonly used ACS-array architecture is considered.
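
    The core of such a decoder is the add-compare-select (ACS) recursion over a sectionalized trellis. The sketch below runs hard-decision ACS over a trellis supplied as explicit branch lists; the tiny example trellis is for the (3,2) single-parity-check code and is ours, not one of the BCH/RM trellises analyzed in the paper.

        def viterbi(trellis, received):
            """Hard-decision Viterbi over a sectionalized trellis.
            trellis: list of sections; each branch is (prev_state, next_state, label, info).
            received: one tuple of hard-decision bits per section."""
            survivors = {0: (0, [])}                     # state -> (path metric, info bits)
            for section, r in zip(trellis, received):
                new = {}
                for prev, nxt, label, info in section:
                    if prev not in survivors:
                        continue
                    metric = survivors[prev][0] + sum(a != b for a, b in zip(label, r))  # add
                    if nxt not in new or metric < new[nxt][0]:                           # compare-select
                        new[nxt] = (metric, survivors[prev][1] + list(info))
                survivors = new
            return min(survivors.values(), key=lambda v: v[0])

        # Bit-level trellis of the (3,2) single-parity-check code (state = running parity).
        trellis = [
            [(0, 0, (0,), (0,)), (0, 1, (1,), (1,))],
            [(0, 0, (0,), (0,)), (0, 1, (1,), (1,)), (1, 1, (0,), (0,)), (1, 0, (1,), (1,))],
            [(0, 0, (0,), ()), (1, 0, (1,), ())],        # parity section carries no info bit
        ]
        metric, info = viterbi(trellis, [(1,), (0,), (1,)])
        print(info, metric)                              # [1, 0] with metric 0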

  12. Good trellises for IC implementation of viterbi decoders for linear block codes

    NASA Technical Reports Server (NTRS)

    Lin, Shu; Moorthy, Hari T.; Uehara, Gregory T.

    1996-01-01

    This paper investigates trellis structures of linear block codes for the IC (integrated circuit) implementation of Viterbi decoders capable of achieving high decoding speed while satisfying a constraint on the structural complexity of the trellis in terms of the maximum number of states at any particular depth. Only uniform sectionalizations of the code trellis diagram are considered. An upper bound on the number of parallel and structurally identical (or isomorphic) subtrellises in a proper trellis for a code without exceeding the maximum state complexity of the minimal trellis of the code is first derived. Parallel structures of trellises with various section lengths for binary BCH and Reed-Muller (RM) codes of lengths 32 and 64 are analyzed. Next, the complexity of IC implementation of a Viterbi decoder based on an L-section trellis diagram for a code is investigated. A structural property of a Viterbi decoder called ACS-connectivity which is related to state connectivity is introduced. This parameter affects the complexity of wire-routing (interconnections within the IC). The effect of five parameters namely: (1) effective computational complexity; (2) complexity of the ACS-circuit; (3) traceback complexity; (4) ACS-connectivity; and (5) branch complexity of a trellis diagram on the VLSI complexity of a Viterbi decoder is investigated. It is shown that an IC implementation of a Viterbi decoder based on a non-minimal trellis requires less area and is capable of operation at higher speed than one based on the minimal trellis when the commonly used ACS-array architecture is considered.

  13. Listening to Brain Microcircuits for Interfacing With External World—Progress in Wireless Implantable Microelectronic Neuroengineering Devices

    PubMed Central

    Nurmikko, Arto V.; Donoghue, John P.; Hochberg, Leigh R.; Patterson, William R.; Song, Yoon-Kyu; Bull, Christopher W.; Borton, David A.; Laiwalla, Farah; Park, Sunmee; Ming, Yin; Aceros, Juan

    2011-01-01

    Acquiring neural signals at high spatial and temporal resolution directly from brain microcircuits and decoding their activity to interpret commands and/or prior planning activity, such as motion of an arm or a leg, is a prime goal of modern neurotechnology. Its practical aims include assistive devices for subjects whose normal neural information pathways are not functioning due to physical damage or disease. On the fundamental side, researchers are striving to decipher the code of multiple neural microcircuits which collectively make up nature’s amazing computing machine, the brain. By implanting biocompatible neural sensor probes directly into the brain, in the form of microelectrode arrays, it is now possible to extract information from interacting populations of neural cells with spatial and temporal resolution at the single cell level. With parallel advances in the application of statistical and mathematical tools for deciphering the neural code extracted from populations of correlated neurons, significant understanding has been achieved of those brain commands that control, e.g., the motion of an arm in a primate (monkey or a human subject). These developments are accelerating the work on neural prosthetics where brain derived signals may be employed to bypass, e.g., an injured spinal cord. One key element in achieving the goals for practical and versatile neural prostheses is the development of fully implantable wireless microelectronic “brain-interfaces” within the body, a point of special emphasis of this paper. PMID:21654935

  14. A low-cost, scalable, current-sensing digital headstage for high channel count μECoG.

    PubMed

    Trumpis, Michael; Insanally, Michele; Zou, Jialin; Elsharif, Ashraf; Ghomashchi, Ali; Sertac Artan, N; Froemke, Robert C; Viventi, Jonathan

    2017-04-01

    High channel count electrode arrays allow for the monitoring of large-scale neural activity at high spatial resolution. Implantable arrays featuring many recording sites require compact, high bandwidth front-end electronics. In the present study, we investigated the use of a small, light weight, and low cost digital current-sensing integrated circuit for acquiring cortical surface signals from a 61-channel micro-electrocorticographic (μECoG) array. We recorded both acute and chronic μECoG signal from rat auditory cortex using our novel digital current-sensing headstage. For direct comparison, separate recordings were made in the same anesthetized preparations using an analog voltage headstage. A model of electrode impedance explained the transformation between current- and voltage-sensed signals, and was used to reconstruct cortical potential. We evaluated the digital headstage using several metrics of the baseline and response signals. The digital current headstage recorded neural signal with similar spatiotemporal statistics and auditory frequency tuning compared to the voltage signal. The signal-to-noise ratio of auditory evoked responses (AERs) was significantly stronger in the current signal. Stimulus decoding based on true and reconstructed voltage signals were not significantly different. Recordings from an implanted system showed AERs that were detectable and decodable for 52 d. The reconstruction filter mitigated the thermal current noise of the electrode impedance and enhanced overall SNR. We developed and validated a novel approach to headstage acquisition that used current-input circuits to independently digitize 61 channels of μECoG measurements of the cortical field. These low-cost circuits, intended to measure photo-currents in digital imaging, not only provided a signal representing the local cortical field with virtually the same sensitivity and specificity as a traditional voltage headstage but also resulted in a small, light headstage that can easily be scaled to record from hundreds of channels.
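
    The voltage-reconstruction step rests on the relation V(f) = Z(f)·I(f) for the electrode impedance Z. Below is a minimal frequency-domain sketch with an assumed series R-C impedance model and synthetic current data; the component values and the filter form are illustrative, not the paper's fitted model.

        import numpy as np

        fs = 1000.0
        t = np.arange(0, 1.0, 1 / fs)
        i_meas = 1e-9 * np.sin(2 * np.pi * 10 * t)      # synthetic measured current (A)

        R, C = 50e3, 10e-9                              # assumed series R-C electrode model
        f = np.fft.rfftfreq(t.size, 1 / fs)
        w = 2 * np.pi * f
        Z = np.full(f.shape, R, dtype=complex)
        Z[1:] += 1.0 / (1j * w[1:] * C)                 # capacitive term (undefined at DC)

        V = np.fft.rfft(i_meas) * Z                     # V(f) = Z(f) * I(f)
        V[0] = 0.0                                      # drop the ill-defined DC component
        v_rec = np.fft.irfft(V, n=t.size)               # reconstructed cortical potential (V)
        print(v_rec[:5])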

  15. A low-cost, scalable, current-sensing digital headstage for high channel count μECoG

    NASA Astrophysics Data System (ADS)

    Trumpis, Michael; Insanally, Michele; Zou, Jialin; Elsharif, Ashraf; Ghomashchi, Ali; Sertac Artan, N.; Froemke, Robert C.; Viventi, Jonathan

    2017-04-01

    Objective. High channel count electrode arrays allow for the monitoring of large-scale neural activity at high spatial resolution. Implantable arrays featuring many recording sites require compact, high bandwidth front-end electronics. In the present study, we investigated the use of a small, light weight, and low cost digital current-sensing integrated circuit for acquiring cortical surface signals from a 61-channel micro-electrocorticographic (μECoG) array. Approach. We recorded both acute and chronic μECoG signal from rat auditory cortex using our novel digital current-sensing headstage. For direct comparison, separate recordings were made in the same anesthetized preparations using an analog voltage headstage. A model of electrode impedance explained the transformation between current- and voltage-sensed signals, and was used to reconstruct cortical potential. We evaluated the digital headstage using several metrics of the baseline and response signals. Main results. The digital current headstage recorded neural signal with similar spatiotemporal statistics and auditory frequency tuning compared to the voltage signal. The signal-to-noise ratio of auditory evoked responses (AERs) was significantly stronger in the current signal. Stimulus decoding based on true and reconstructed voltage signals were not significantly different. Recordings from an implanted system showed AERs that were detectable and decodable for 52 d. The reconstruction filter mitigated the thermal current noise of the electrode impedance and enhanced overall SNR. Significance. We developed and validated a novel approach to headstage acquisition that used current-input circuits to independently digitize 61 channels of μECoG measurements of the cortical field. These low-cost circuits, intended to measure photo-currents in digital imaging, not only provided a signal representing the local cortical field with virtually the same sensitivity and specificity as a traditional voltage headstage but also resulted in a small, light headstage that can easily be scaled to record from hundreds of channels.

  16. A low-cost, scalable, current-sensing digital headstage for high channel count μECoG

    PubMed Central

    Trumpis, Michael; Insanally, Michele; Zou, Jialin; Elsharif, Ashraf; Ghomashchi, Ali; Artan, N. Sertac; Froemke, Robert C.; Viventi, Jonathan

    2017-01-01

    Objective High channel count electrode arrays allow for the monitoring of large-scale neural activity at high spatial resolution. Implantable arrays featuring many recording sites require compact, high bandwidth front-end electronics. In the present study, we investigated the use of a small, light weight, and low cost digital current-sensing integrated circuit for acquiring cortical surface signals from a 61-channel micro-electrocorticographic (μECoG) array. Approach We recorded both acute and chronic μECoG signal from rat auditory cortex using our novel digital current-sensing headstage. For direct comparison, separate recordings were made in the same anesthetized preparations using an analog voltage headstage. A model of electrode impedance explained the transformation between current- and voltage-sensed signals, and was used to reconstruct cortical potential. We evaluated the digital headstage using several metrics of the baseline and response signals. Main results The digital current headstage recorded neural signal with similar spatiotemporal statistics and auditory frequency tuning compared to the voltage signal. The signal-to-noise ratio of auditory evoked responses (AERs) was significantly stronger in the current signal. Stimulus decoding based on true and reconstructed voltage signals were not significantly different. Recordings from an implanted system showed AERs that were detectable and decodable for 52 days. The reconstruction filter mitigated the thermal current noise of the electrode impedance and enhanced overall SNR. Significance We developed and validated a novel approach to headstage acquisition that used current-input circuits to independently digitize 61 channels of μECoG measurements of the cortical field. These low-cost circuits, intended to measure photo-currents in digital imaging, not only provided a signal representing the local cortical field with virtually the same sensitivity and specificity as a traditional voltage headstage but also resulted in a small, light headstage that can easily be scaled to record from hundreds of channels. PMID:28102827

  17. Decoding Skills Acquired by Low Readers Taught in Regular Classrooms Using Clinical Techniques. Research Report No. 35.

    ERIC Educational Resources Information Center

    Gallistel, Elizabeth; Fischer, Phyllis

    This study evaluated the decoding skills acquired by low readers in an experimental project that taught low readers in regular class through the use of clinical procedures based on a synthetic phonic, multisensory approach. An evaluation instrument which permitted the tabulation of specific decoding skills was administered as a pretest and…

  18. Method and system for efficient video compression with low-complexity encoder

    NASA Technical Reports Server (NTRS)

    Chen, Jun (Inventor); He, Dake (Inventor); Sheinin, Vadim (Inventor); Jagmohan, Ashish (Inventor); Lu, Ligang (Inventor)

    2012-01-01

    Disclosed are a method and system for video compression, wherein the video encoder has low computational complexity and high compression efficiency. The disclosed system comprises a video encoder and a video decoder, wherein the method for encoding includes the steps of converting a source frame into a space-frequency representation; estimating conditional statistics of at least one vector of space-frequency coefficients; estimating encoding rates based on the said conditional statistics; and applying Slepian-Wolf codes with the said computed encoding rates. The preferred method for decoding includes the steps of: generating a side-information vector of frequency coefficients based on previously decoded source data, encoder statistics, and previous reconstructions of the source frequency vector; and performing Slepian-Wolf decoding of at least one source frequency vector based on the generated side-information, the Slepian-Wolf code bits and the encoder statistics.
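
    The rate-estimation step relies on the Slepian-Wolf bound: coefficients X can be compressed down to roughly H(X|Y) bits per sample when the decoder already holds correlated side information Y. A minimal empirical estimate of that quantity, on synthetic correlated data rather than real transform coefficients, is sketched below.

        import numpy as np

        def conditional_entropy_bits(x, y, bins=16):
            """Empirical H(X|Y) = H(X,Y) - H(Y) in bits per sample."""
            joint, _, _ = np.histogram2d(x, y, bins=bins)
            p_xy = joint / joint.sum()
            p_y = p_xy.sum(axis=0)
            h_xy = -np.sum(p_xy[p_xy > 0] * np.log2(p_xy[p_xy > 0]))
            h_y = -np.sum(p_y[p_y > 0] * np.log2(p_y[p_y > 0]))
            return h_xy - h_y

        rng = np.random.default_rng(0)
        y = rng.normal(size=100_000)             # side information (e.g., decoder's prediction)
        x = y + 0.3 * rng.normal(size=100_000)   # correlated source coefficients
        print(f"Slepian-Wolf rate estimate: {conditional_entropy_bits(x, y):.2f} bits/sample")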

  19. High-Throughput Bit-Serial LDPC Decoder LSI Based on Multiple-Valued Asynchronous Interleaving

    NASA Astrophysics Data System (ADS)

    Onizawa, Naoya; Hanyu, Takahiro; Gaudet, Vincent C.

    This paper presents a high-throughput bit-serial low-density parity-check (LDPC) decoder that uses an asynchronous interleaver. Since consecutive log-likelihood message values on the interleaver are similar, node computations are continuously performed by using the most recently arrived messages without significantly affecting bit-error rate (BER) performance. In the asynchronous interleaver, each message's arrival rate is based on the delay due to the wire length, so that the decoding throughput is not restricted by the worst-case latency, which results in a higher average rate of computation. Moreover, the use of a multiple-valued data representation makes it possible to multiplex control signals and data from mutual nodes, thus minimizing the number of handshaking steps in the asynchronous interleaver and eliminating the clock signal entirely. As a result, the decoding throughput becomes 1.3 times faster than that of a bit-serial synchronous decoder under a 90nm CMOS technology, at a comparable BER.

  20. A common neural code for similar conscious experiences in different individuals

    PubMed Central

    Naci, Lorina; Cusack, Rhodri; Anello, Mimma; Owen, Adrian M.

    2014-01-01

    The interpretation of human consciousness from brain activity, without recourse to speech or action, is one of the most provoking and challenging frontiers of modern neuroscience. We asked whether there is a common neural code that underpins similar conscious experiences, which could be used to decode these experiences in the absence of behavior. To this end, we used richly evocative stimulation (an engaging movie) portraying real-world events to elicit a similar conscious experience in different people. Common neural correlates of conscious experience were quantified and related to measurable, quantitative and qualitative, executive components of the movie through two additional behavioral investigations. The movie’s executive demands drove synchronized brain activity across healthy participants’ frontal and parietal cortices in regions known to support executive function. Moreover, the timing of activity in these regions was predicted by participants’ highly similar qualitative experience of the movie’s moment-to-moment executive demands, suggesting that synchronization of activity across participants underpinned their similar experience. Thus we demonstrate, for the first time to our knowledge, that a neural index based on executive function reliably predicted every healthy individual’s similar conscious experience in response to real-world events unfolding over time. This approach provided strong evidence for the conscious experience of a brain-injured patient, who had remained entirely behaviorally nonresponsive for 16 y. The patient’s executive engagement and moment-to-moment perception of the movie content were highly similar to that of every healthy participant. These findings shed light on the common basis of human consciousness and enable the interpretation of conscious experience in the absence of behavior. PMID:25225384

  1. Sum of the Magnitude for Hard Decision Decoding Algorithm Based on Loop Update Detection

    PubMed Central

    Meng, Jiahui; Zhao, Danfeng; Tian, Hai; Zhang, Liang

    2018-01-01

    In order to improve the performance of the hard-decision decoding algorithm for non-binary low-density parity-check (LDPC) codes and to reduce decoding complexity, a sum of the magnitude for hard decision decoding algorithm based on loop update detection is proposed. This will also help ensure the reliability, stability and high transmission rate of 5G mobile communication. The algorithm is based on the hard decision decoding algorithm (HDA) and uses the soft information from the channel to calculate the reliability, while the sum of the variable nodes’ (VN) magnitude is excluded when computing the reliability of the parity checks. At the same time, the reliability information of the variable node is considered and a loop update detection algorithm is introduced. The bits of the candidate code word are flipped multiple times, searched in order of the most likely error probability, until the correct code word is found. Simulation results show that the performance of one of the improved schemes is better than that of the weighted symbol flipping (WSF) algorithm under different hexadecimal numbers by about 2.2 dB and 2.35 dB at a bit error rate (BER) of 10^-5 over an additive white Gaussian noise (AWGN) channel, respectively. Furthermore, the average number of decoding iterations is significantly reduced. PMID:29342963
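
    For orientation, the family of hard-decision decoders the paper builds on can be illustrated with the classic binary bit-flipping rule: recompute the syndrome, flip the bit participating in the most unsatisfied checks, repeat. The sketch below is that binary baseline on a toy parity-check matrix, not the proposed non-binary sum-of-the-magnitude algorithm.

        import numpy as np

        def bit_flip_decode(H, r, max_iter=50):
            """Hard-decision bit-flipping decoding. H: m x n parity-check matrix (0/1),
            r: received hard bits (length n)."""
            x = r.copy()
            for _ in range(max_iter):
                syndrome = H.dot(x) % 2
                if not syndrome.any():
                    return x, True                     # all parity checks satisfied
                votes = H.T.dot(syndrome)              # unsatisfied checks per bit
                x[int(np.argmax(votes))] ^= 1          # flip the most suspicious bit
            return x, False

        # Toy stand-in for a sparse parity-check matrix: the (7,4) Hamming code.
        H = np.array([[1, 1, 0, 1, 1, 0, 0],
                      [1, 0, 1, 1, 0, 1, 0],
                      [0, 1, 1, 1, 0, 0, 1]])
        received = np.zeros(7, dtype=int)
        received[2] ^= 1                               # single bit error on the all-zero codeword
        print(bit_flip_decode(H, received))            # recovers the all-zero codeword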

  2. Biological and bionic hands: natural neural coding and artificial perception.

    PubMed

    Bensmaia, Sliman J

    2015-09-19

    The first decade and a half of the twenty-first century brought about two major innovations in neuroprosthetics: the development of anthropomorphic robotic limbs that replicate much of the function of a native human arm and the refinement of algorithms that decode intended movements from brain activity. However, skilled manipulation of objects requires somatosensory feedback, for which vision is a poor substitute. For upper-limb neuroprostheses to be clinically viable, they must therefore provide for the restoration of touch and proprioception. In this review, I discuss efforts to elicit meaningful tactile sensations through stimulation of neurons in somatosensory cortex. I focus on biomimetic approaches to sensory restoration, which leverage our current understanding about how information about grasped objects is encoded in the brain of intact individuals. I argue that not only can sensory neuroscience inform the development of sensory neuroprostheses, but also that the converse is true: stimulating the brain offers an exceptional opportunity to causally interrogate neural circuits and test hypotheses about natural neural coding.

  3. Comparison of memory thresholds for planar qudit geometries

    NASA Astrophysics Data System (ADS)

    Marks, Jacob; Jochym-O'Connor, Tomas; Gheorghiu, Vlad

    2017-11-01

    We introduce and analyze a new type of decoding algorithm called general color clustering, based on renormalization group methods, to be used in qudit color codes. The performance of this decoder is analyzed under a generalized bit-flip error model, and is used to obtain the first memory threshold estimates for qudit 6-6-6 color codes. The proposed decoder is compared with similar decoding schemes for qudit surface codes as well as the current leading qubit decoders for both sets of codes. We find that, as with surface codes, clustering performs sub-optimally for qubit color codes, giving a threshold of 5.6% compared to the 8.0% obtained through surface projection decoding methods. However, the threshold rate increases by up to 112% for large qudit dimensions, plateauing around 11.9%. All the analysis is performed using QTop, a new open-source software for simulating and visualizing topological quantum error correcting codes.

  4. A high data rate universal lattice decoder on FPGA

    NASA Astrophysics Data System (ADS)

    Ma, Jing; Huang, Xinming; Kura, Swapna

    2005-06-01

    This paper presents the architecture design of a high data rate universal lattice decoder for MIMO channels on an FPGA platform. A Pohst-strategy-based lattice decoding algorithm is modified in this paper to reduce the complexity of the closest lattice point search. The data dependency of the improved algorithm is examined and a parallel and pipeline architecture is developed with the iterative decoding function on FPGA and the division-intensive channel matrix preprocessing on DSP. Simulation results demonstrate that the improved lattice decoding algorithm provides a better bit error rate and fewer iterations compared with the original algorithm. The system prototype of the decoder shows that it supports data rates up to 7 Mbit/s on a Virtex2-1000 FPGA, which is about 8 times faster than the original algorithm on the FPGA platform and two orders of magnitude better than its implementation on a DSP platform.
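
    The underlying problem is the closest-lattice-point search: given a channel matrix H and received vector y, find the integer symbol vector x minimizing ||y - Hx||. The brute-force baseline below makes that explicit on a small synthetic MIMO example; Pohst/sphere-style decoders compute the same answer while pruning most candidates, which is what the FPGA design accelerates.

        import numpy as np
        from itertools import product

        def closest_lattice_point(H, y, symbols=(-1, 0, 1)):
            """Exhaustive search over all candidate symbol vectors (baseline only)."""
            best_x, best_d = None, np.inf
            for cand in product(symbols, repeat=H.shape[1]):
                x = np.array(cand)
                d = np.linalg.norm(y - H.dot(x))
                if d < best_d:
                    best_x, best_d = x, d
            return best_x, best_d

        rng = np.random.default_rng(1)
        H = rng.normal(size=(4, 4))                  # hypothetical 4x4 MIMO channel
        x_true = rng.integers(-1, 2, size=4)         # transmitted symbols in {-1, 0, 1}
        y = H.dot(x_true) + 0.05 * rng.normal(size=4)
        x_hat, _ = closest_lattice_point(H, y)
        print("decoded:", x_hat, "transmitted:", x_true)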

  5. An architecture of entropy decoder, inverse quantiser and predictor for multi-standard video decoding

    NASA Astrophysics Data System (ADS)

    Liu, Leibo; Chen, Yingjie; Yin, Shouyi; Lei, Hao; He, Guanghui; Wei, Shaojun

    2014-07-01

    A VLSI architecture for entropy decoder, inverse quantiser and predictor is proposed in this article. This architecture is used for decoding video streams of three standards on a single chip, i.e. H.264/AVC, AVS (China National Audio Video coding Standard) and MPEG2. The proposed scheme is called MPMP (Macro-block-Parallel based Multilevel Pipeline), which is intended to improve the decoding performance to satisfy the real-time requirements while maintaining a reasonable area and power consumption. Several techniques, such as slice level pipeline, MB (Macro-Block) level pipeline, MB level parallel, etc., are adopted. Input and output buffers for the inverse quantiser and predictor are shared by the decoding engines for H.264, AVS and MPEG2, therefore effectively reducing the implementation overhead. Simulation shows that decoding process consumes 512, 435 and 438 clock cycles per MB in H.264, AVS and MPEG2, respectively. Owing to the proposed techniques, the video decoder can support H.264 HP (High Profile) 1920 × 1088@30fps (frame per second) streams, AVS JP (Jizhun Profile) 1920 × 1088@41fps streams and MPEG2 MP (Main Profile) 1920 × 1088@39fps streams when exploiting a 200 MHz working frequency.

  6. Nonlinear BCJR equalizer for suppression of intrachannel nonlinearities in 40 Gb/s optical communications systems.

    PubMed

    Djordjevic, Ivan B; Vasic, Bane

    2006-05-29

    A maximum a posteriori probability (MAP) symbol decoding supplemented with iterative decoding is proposed as an effective means for suppression of intrachannel nonlinearities. The MAP detector, based on the Bahl-Cocke-Jelinek-Raviv algorithm, operates on the channel trellis, a dynamical model of intersymbol interference, and provides soft-decision outputs processed further in an iterative decoder. A dramatic performance improvement is demonstrated. The main reason is that the conventional maximum-likelihood sequence detector based on the Viterbi algorithm provides hard-decision outputs only, hence preventing soft iterative decoding. The proposed scheme operates very well in the presence of strong intrachannel intersymbol interference, when other advanced forward error correction schemes fail, and it is also suitable for a 40 Gb/s upgrade over existing 10 Gb/s infrastructure.
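
    For reference, a BCJR-type MAP detector computes, on the channel trellis with states s, the standard forward-backward quantities below (notation ours); the soft outputs P(x_k | y) are what feed the iterative decoder.

        \gamma_k(s', s) = P(x_k)\, p(y_k \mid s' \to s), \qquad
        \alpha_k(s) = \sum_{s'} \alpha_{k-1}(s')\, \gamma_k(s', s), \qquad
        \beta_{k-1}(s') = \sum_{s} \beta_k(s)\, \gamma_k(s', s),

        P(x_k = x \mid \mathbf{y}) \;\propto\; \sum_{(s', s)\,:\, x_k = x} \alpha_{k-1}(s')\, \gamma_k(s', s)\, \beta_k(s).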

  7. Emerging Object Representations in the Visual System Predict Reaction Times for Categorization

    PubMed Central

    Ritchie, J. Brendan; Tovar, David A.; Carlson, Thomas A.

    2015-01-01

    Recognizing an object takes just a fraction of a second, less than the blink of an eye. Applying multivariate pattern analysis, or “brain decoding”, methods to magnetoencephalography (MEG) data has allowed researchers to characterize, in high temporal resolution, the emerging representation of object categories that underlie our capacity for rapid recognition. Shortly after stimulus onset, object exemplars cluster by category in a high-dimensional activation space in the brain. In this emerging activation space, the decodability of exemplar category varies over time, reflecting the brain’s transformation of visual inputs into coherent category representations. How do these emerging representations relate to categorization behavior? Recently it has been proposed that the distance of an exemplar representation from a categorical boundary in an activation space is critical for perceptual decision-making, and that reaction times should therefore correlate with distance from the boundary. The predictions of this distance hypothesis have been borne out in human inferior temporal cortex (IT), an area of the brain crucial for the representation of object categories. When viewed in the context of a time varying neural signal, the optimal time to “read out” category information is when category representations in the brain are most decodable. Here, we show that the distance from a decision boundary through activation space, as measured using MEG decoding methods, correlates with reaction times for visual categorization during the period of peak decodability. Our results suggest that the brain begins to read out information about exemplar category at the optimal time for use in choice behaviour, and support the hypothesis that the structure of the representation for objects in the visual system is partially constitutive of the decision process in recognition. PMID:26107634
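
    The distance-to-boundary analysis can be mimicked end to end on simulated data: fit a linear classifier to trial patterns, take each trial's absolute decision-function value as its distance from the boundary, and correlate that with reaction times. Everything below is synthetic (the RTs are constructed to shrink with distance), so it only illustrates the analysis pipeline, not the MEG result.

        import numpy as np
        from scipy.stats import spearmanr
        from sklearn.svm import LinearSVC

        rng = np.random.default_rng(0)
        n_trials, n_features = 200, 30
        category = rng.integers(0, 2, n_trials)
        patterns = rng.normal(size=(n_trials, n_features)) + category[:, None]  # toy "MEG" data

        clf = LinearSVC(C=1.0, max_iter=10_000).fit(patterns, category)
        distance = np.abs(clf.decision_function(patterns))   # distance from the decision boundary

        # Hypothetical reaction times, simulated to decrease with distance plus noise.
        rt = 600 - 40 * distance + 20 * rng.normal(size=n_trials)
        rho, p = spearmanr(distance, rt)
        print(f"Spearman rho = {rho:.2f}, p = {p:.3g}")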

  8. Design of optical seven-segment decoder using Pockel's effect inside lithium niobate-based waveguide

    NASA Astrophysics Data System (ADS)

    Pal, Amrindra; Kumar, Santosh; Sharma, Sandeep

    2017-01-01

    A seven-segment decoder is a device that routes digital information from many inputs to many outputs; here it is implemented optically using 11 Mach-Zehnder interferometers (MZIs). The layout of the circuit adapts the electrical design to an optical logic circuit analyzed with the beam propagation method (BPM). The seven-segment decoder is proposed using the electro-optic (Pockels) effect inside lithium niobate-based MZIs, whose structures can switch an optical signal to a desired output port. A mathematical description of the proposed device is given, and the BPM is used to analyze the design.
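
    The logical function being routed optically is the ordinary BCD-to-seven-segment truth table, reproduced below as a plain software lookup (segments ordered a-g). This shows only the decoder's input-output map; the optical switching through the MZI network is what the paper actually designs.

        # Segment order: a, b, c, d, e, f, g
        SEVEN_SEGMENT = {
            0: "1111110", 1: "0110000", 2: "1101101", 3: "1111001", 4: "0110011",
            5: "1011011", 6: "1011111", 7: "1110000", 8: "1111111", 9: "1111011",
        }

        def decode_digit(bcd_bits):
            """Map a 4-bit BCD input (MSB first) to the seven segment-enable outputs."""
            value = int("".join(str(b) for b in bcd_bits), 2)
            return SEVEN_SEGMENT[value]

        print(decode_digit([0, 1, 0, 1]))   # digit 5 -> segments a, c, d, f, g driven on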

  9. Using convolutional decoding to improve time delay and phase estimation in digital communications

    DOEpatents

    Ormesher, Richard C [Albuquerque, NM; Mason, John J [Albuquerque, NM

    2010-01-26

    The time delay and/or phase of a communication signal received by a digital communication receiver can be estimated based on a convolutional decoding operation that the communication receiver performs on the received communication signal. If the original transmitted communication signal has been spread according to a spreading operation, a corresponding despreading operation can be integrated into the convolutional decoding operation.

  10. LDPC decoder with a limited-precision FPGA-based floating-point multiplication coprocessor

    NASA Astrophysics Data System (ADS)

    Moberly, Raymond; O'Sullivan, Michael; Waheed, Khurram

    2007-09-01

    Implementing the sum-product algorithm, in an FPGA with an embedded processor, invites us to consider a tradeoff between computational precision and computational speed. The algorithm, known outside of the signal processing community as Pearl's belief propagation, is used for iterative soft-decision decoding of LDPC codes. We determined the feasibility of a coprocessor that will perform product computations. Our FPGA-based coprocessor (design) performs computer algebra with significantly less precision than the standard (e.g. integer, floating-point) operations of general purpose processors. Using synthesis, targeting a 3,168 LUT Xilinx FPGA, we show that key components of a decoder are feasible and that the full single-precision decoder could be constructed using a larger part. Soft-decision decoding by the iterative belief propagation algorithm is impacted both positively and negatively by a reduction in the precision of the computation. Reducing precision reduces the coding gain, but the limited-precision computation can operate faster. A proposed solution offers custom logic to perform computations with less precision, yet uses the floating-point format to interface with the software. Simulation results show the achievable coding gain. Synthesis results help theorize the full capacity and performance of an FPGA-based coprocessor.

  11. D-DSC: Decoding Delay-based Distributed Source Coding for Internet of Sensing Things

    PubMed Central

    Akan, Ozgur B.

    2018-01-01

    Spatial correlation between densely deployed sensor nodes in a wireless sensor network (WSN) can be exploited to reduce the power consumption through a proper source coding mechanism such as distributed source coding (DSC). In this paper, we propose the Decoding Delay-based Distributed Source Coding (D-DSC) to improve the energy efficiency of the classical DSC by employing the decoding delay concept, which enables the use of the maximum correlated portion of sensor samples during the event estimation. In D-DSC, the network is partitioned into clusters, where the clusterheads communicate their uncompressed samples carrying the side information, and the cluster members send their compressed samples. The sink performs joint decoding of the compressed and uncompressed samples and then reconstructs the event signal using the decoded sensor readings. Based on the observed degree of correlation among sensor samples, the sink dynamically updates and broadcasts the varying compression rates back to the sensor nodes. Simulation results for the performance evaluation reveal that D-DSC can achieve reliable and energy-efficient event communication and estimation for practical signal detection/estimation applications with a massive number of sensors, towards the realization of the Internet of Sensing Things (IoST). PMID:29538405

  12. D-DSC: Decoding Delay-based Distributed Source Coding for Internet of Sensing Things.

    PubMed

    Aktas, Metin; Kuscu, Murat; Dinc, Ergin; Akan, Ozgur B

    2018-01-01

    Spatial correlation between densely deployed sensor nodes in a wireless sensor network (WSN) can be exploited to reduce the power consumption through a proper source coding mechanism such as distributed source coding (DSC). In this paper, we propose the Decoding Delay-based Distributed Source Coding (D-DSC) to improve the energy efficiency of the classical DSC by employing the decoding delay concept, which enables the use of the maximum correlated portion of sensor samples during the event estimation. In D-DSC, the network is partitioned into clusters, where the clusterheads communicate their uncompressed samples carrying the side information, and the cluster members send their compressed samples. The sink performs joint decoding of the compressed and uncompressed samples and then reconstructs the event signal using the decoded sensor readings. Based on the observed degree of correlation among sensor samples, the sink dynamically updates and broadcasts the varying compression rates back to the sensor nodes. Simulation results for the performance evaluation reveal that D-DSC can achieve reliable and energy-efficient event communication and estimation for practical signal detection/estimation applications with a massive number of sensors, towards the realization of the Internet of Sensing Things (IoST).

  13. Neuronal ensemble control of prosthetic devices by a human with tetraplegia

    NASA Astrophysics Data System (ADS)

    Hochberg, Leigh R.; Serruya, Mijail D.; Friehs, Gerhard M.; Mukand, Jon A.; Saleh, Maryam; Caplan, Abraham H.; Branner, Almut; Chen, David; Penn, Richard D.; Donoghue, John P.

    2006-07-01

    Neuromotor prostheses (NMPs) aim to replace or restore lost motor functions in paralysed humans by routeing movement-related signals from the brain, around damaged parts of the nervous system, to external effectors. To translate preclinical results from intact animals to a clinically useful NMP, movement signals must persist in cortex after spinal cord injury and be engaged by movement intent when sensory inputs and limb movement are long absent. Furthermore, NMPs would require that intention-driven neuronal activity be converted into a control signal that enables useful tasks. Here we show initial results for a tetraplegic human (MN) using a pilot NMP. Neuronal ensemble activity recorded through a 96-microelectrode array implanted in primary motor cortex demonstrated that intended hand motion modulates cortical spiking patterns three years after spinal cord injury. Decoders were created, providing a `neural cursor' with which MN opened simulated e-mail and operated devices such as a television, even while conversing. Furthermore, MN used neural control to open and close a prosthetic hand, and perform rudimentary actions with a multi-jointed robotic arm. These early results suggest that NMPs based upon intracortical neuronal ensemble spiking activity could provide a valuable new neurotechnology to restore independence for humans with paralysis.
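
    The step from ensemble spiking to a control signal is, at its simplest, a regression from binned spike counts to intended kinematics. The sketch below trains a ridge-regression velocity decoder on simulated 96-channel counts with made-up linear tuning; it stands in for, but is not, the decoder used in the study.

        import numpy as np
        from sklearn.linear_model import Ridge

        rng = np.random.default_rng(0)
        n_bins, n_units = 2000, 96
        phase = np.linspace(0, 20, n_bins)
        velocity = np.column_stack([np.sin(phase), np.cos(phase)])     # intended 2-D velocity

        tuning = rng.normal(size=(2, n_units))                         # hypothetical linear tuning
        rates = np.clip(5 + velocity.dot(tuning), 0, None)             # firing rates (Hz), nonnegative
        counts = rng.poisson(rates)                                    # binned spike counts

        train, test = slice(0, 1500), slice(1500, None)
        decoder = Ridge(alpha=1.0).fit(counts[train], velocity[train])
        print("held-out decoding R^2:", decoder.score(counts[test], velocity[test]))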

  14. Neural Activation during Anticipation of Near Pain-Threshold Stimulation among the Pain-Fearful.

    PubMed

    Yang, Zhou; Jackson, Todd; Huang, Chengzhi

    2016-01-01

    Fear of pain (FOP) can increase risk for chronic pain and disability but little is known about corresponding neural responses in anticipation of potential pain. In this study, more (10 women, 6 men) and less (7 women, 6 men) pain-fearful groups underwent whole-brain functional magnetic resonance imaging (fMRI) during anticipation of near pain-threshold stimulation. Groups did not differ in the proportion of stimuli judged to be painful but pain-fearful participants reported significantly more state fear prior to stimulus exposure. Within the entire sample, stronger activation was found in several pain perception regions (e.g., bilateral insula, midcingulate cortex (MCC), thalamus, superior frontal gyrus) and visual areas linked to decoding stimulus valences (inferior orbital cortex) during anticipation of "painful" stimuli. Between-group and correlation analyses indicated pain-fearful participants experienced comparatively more activity in regions implicated in evaluating potential threats and processing negative emotions during anticipation (i.e., MCC, mid occipital cortex, superior temporal pole), though group differences were not apparent in most so-called "pain matrix" regions. In sum, trait- and task-based FOP is associated with enhanced responsiveness in regions involved in threat processing and negative affect during anticipation of potentially painful stimulation.

  15. New Syndrome Decoding Techniques for the (n, K) Convolutional Codes

    NASA Technical Reports Server (NTRS)

    Reed, I. S.; Truong, T. K.

    1983-01-01

    This paper presents a new syndrome decoding algorithm for the (n,k) convolutional codes (CC) which differs completely from an earlier syndrome decoding algorithm of Schalkwijk and Vinck. The new algorithm is based on the general solution of the syndrome equation, a linear Diophantine equation for the error polynomial vector E(D). The set of Diophantine solutions is a coset of the CC. In this error coset a recursive, Viterbi-like algorithm is developed to find the minimum weight error vector Ê(D). An example, illustrating the new decoding algorithm, is given for the binary nonsystematic (3,1) CC.
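
    In polynomial notation (ours, following the usual convention), any received word R(D) = C(D) + E(D) gives a syndrome that depends only on the error pattern, and the full solution set of the resulting Diophantine equation is the coset of the code shifted by any particular solution, which is exactly the set searched by the Viterbi-like step:

        S(D) = R(D)\, H^{T}(D) = \big(C(D) + E(D)\big)\, H^{T}(D) = E(D)\, H^{T}(D),

        E(D) \in \big\{\, E_{0}(D) + M(D)\, G(D) \;:\; M(D) \text{ arbitrary} \,\big\},

    where G(D) is a generator matrix of the code, H(D) its parity-check matrix, and E_0(D) any particular solution; the decoder selects the minimum-weight element Ê(D) of this coset.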

  16. An embedded controller for a 7-degree of freedom prosthetic arm.

    PubMed

    Tenore, Francesco; Armiger, Robert S; Vogelstein, R Jacob; Wenstrand, Douglas S; Harshbarger, Stuart D; Englehart, Kevin

    2008-01-01

    We present results from an embedded real-time hardware system capable of decoding surface myoelectric signals (sMES) to control a seven degree of freedom upper limb prosthesis. This is one of the first hardware implementations of sMES decoding algorithms and the most advanced controller to date. We compare decoding results from the device to simulation results from a real-time PC-based operating system. Performance of both systems is shown to be similar, with decoding accuracy greater than 90% for the floating point software simulation and 80% for fixed point hardware and software implementations.

  17. Testing interconnected VLSI circuits in the Big Viterbi Decoder

    NASA Technical Reports Server (NTRS)

    Onyszchuk, I. M.

    1991-01-01

    The Big Viterbi Decoder (BVD) is a powerful error-correcting hardware device for the Deep Space Network (DSN), in support of the Galileo and Comet Rendezvous Asteroid Flyby (CRAF)/Cassini Missions. Recently, a prototype was completed and run successfully at 400,000 or more decoded bits per second. This prototype is a complex digital system whose core arithmetic unit consists of 256 identical very large scale integration (VLSI) gate-array chips, 16 on each of 16 identical boards which are connected through a 28-layer, printed-circuit backplane using 4416 wires. Special techniques were developed for debugging, testing, and locating faults inside individual chips, on boards, and within the entire decoder. The methods are based upon hierarchical structure in the decoder, and require that chips or boards be wired themselves as Viterbi decoders. The basic procedure consists of sending a small set of known, very noisy channel symbols through a decoder, and matching observables against values computed by a software simulation. Also, tests were devised for finding open and short-circuited wires which connect VLSI chips on the boards and through the backplane.

  18. Improving zero-training brain-computer interfaces by mixing model estimators

    NASA Astrophysics Data System (ADS)

    Verhoeven, T.; Hübner, D.; Tangermann, M.; Müller, K. R.; Dambre, J.; Kindermans, P. J.

    2017-06-01

    Objective. Brain-computer interfaces (BCI) based on event-related potentials (ERP) incorporate a decoder to classify recorded brain signals and subsequently select a control signal that drives a computer application. Standard supervised BCI decoders require a tedious calibration procedure prior to every session. Several unsupervised classification methods have been proposed that tune the decoder during actual use and as such omit this calibration. Each of these methods has its own strengths and weaknesses. Our aim is to improve overall accuracy of ERP-based BCIs without calibration. Approach. We consider two approaches for unsupervised classification of ERP signals. Learning from label proportions (LLP) was recently shown to be guaranteed to converge to a supervised decoder when enough data is available. In contrast, the formerly proposed expectation maximization (EM) based decoding for ERP-BCI does not have this guarantee. However, while this decoder has high variance due to random initialization of its parameters, it obtains a higher accuracy faster than LLP when the initialization is good. We introduce a method to optimally combine these two unsupervised decoding methods, letting one method’s strengths compensate for the weaknesses of the other and vice versa. The new method is compared to the aforementioned methods in a resimulation of an experiment with a visual speller. Main results. Analysis of the experimental results shows that the new method exceeds the performance of the previous unsupervised classification approaches in terms of ERP classification accuracy and symbol selection accuracy during the spelling experiment. Furthermore, the method shows less dependency on random initialization of model parameters and is consequently more reliable. Significance. Improving the accuracy and subsequent reliability of calibrationless BCIs makes these systems more appealing for frequent use.
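
    One generic way to let two estimators compensate for each other, used here purely as an illustration rather than as the paper's exact combination rule, is inverse-variance weighting: the estimate with lower variance gets the larger weight.

        import numpy as np

        def combine(mean_a, var_a, mean_b, var_b):
            """Inverse-variance weighted combination of two estimates of the same quantity."""
            w_a, w_b = 1.0 / var_a, 1.0 / var_b
            mean = (w_a * mean_a + w_b * mean_b) / (w_a + w_b)
            var = 1.0 / (w_a + w_b)
            return mean, var

        # Toy example: two noisy estimates of the same class-mean ERP amplitude,
        # e.g., one from an EM-style decoder and one from an LLP-style decoder.
        m, v = combine(mean_a=2.1, var_a=0.4, mean_b=1.8, var_b=0.1)
        print(m, v)   # combined estimate is pulled towards the lower-variance input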

  19. The role of ECoG magnitude and phase in decoding position, velocity, and acceleration during continuous motor behavior

    PubMed Central

    Hammer, Jiri; Fischer, Jörg; Ruescher, Johanna; Schulze-Bonhage, Andreas; Aertsen, Ad; Ball, Tonio

    2013-01-01

    In neuronal population signals, including the electroencephalogram (EEG) and electrocorticogram (ECoG), the low-frequency component (LFC) is particularly informative about motor behavior and can be used for decoding movement parameters for brain-machine interface (BMI) applications. An idea previously expressed, but as of yet not quantitatively tested, is that it is the LFC phase that is the main source of decodable information. To test this issue, we analyzed human ECoG recorded during a game-like, one-dimensional, continuous motor task with a novel decoding method suitable for unfolding magnitude and phase explicitly into a complex-valued, time-frequency signal representation, enabling quantification of the decodable information within the temporal, spatial and frequency domains and allowing disambiguation of the phase contribution from that of the spectral magnitude. The decoding accuracy based only on phase information was substantially (at least 2 fold) and significantly higher than that based only on magnitudes for position, velocity and acceleration. The frequency profile of movement-related information in the ECoG data matched well with the frequency profile expected when assuming a close time-domain correlate of movement velocity in the ECoG, e.g., a (noisy) “copy” of hand velocity. No such match was observed with the frequency profiles expected when assuming a copy of either hand position or acceleration. There was also no indication of additional magnitude-based mechanisms encoding movement information in the LFC range. Thus, our study contributes to elucidating the nature of the informative LFC of motor cortical population activity and may hence contribute to improve decoding strategies and BMI performance. PMID:24198757
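
    The magnitude/phase split can be reproduced on a toy signal: low-pass the trace, take the analytic signal, and feed either the phase features (cosine and sine of the angle) or the magnitude alone into a regression against the movement parameter. The data below are synthetic and constructed so that the low-frequency component tracks velocity, so the comparison only demonstrates the feature construction, not the empirical finding.

        import numpy as np
        from scipy.signal import butter, filtfilt, hilbert
        from sklearn.linear_model import Ridge

        rng = np.random.default_rng(0)
        fs = 250
        t = np.arange(0, 40, 1 / fs)
        velocity = np.sin(2 * np.pi * 0.4 * t)                 # slow synthetic hand velocity
        ecog = velocity + 0.8 * rng.normal(size=t.size)        # "ECoG" = velocity copy + noise

        b, a = butter(3, 4 / (fs / 2), btype="low")            # isolate the low-frequency component
        analytic = hilbert(filtfilt(b, a, ecog))
        phase_feats = np.column_stack([np.cos(np.angle(analytic)), np.sin(np.angle(analytic))])
        mag_feats = np.abs(analytic)[:, None]

        half = t.size // 2                                     # train on first half, test on second
        for name, X in [("phase", phase_feats), ("magnitude", mag_feats)]:
            r2 = Ridge(alpha=1.0).fit(X[:half], velocity[:half]).score(X[half:], velocity[half:])
            print(f"{name}-only decoding R^2: {r2:.2f}")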

  20. Neural Population Coding of Multiple Stimuli

    PubMed Central

    Ma, Wei Ji

    2015-01-01

    In natural scenes, objects generally appear together with other objects. Yet, theoretical studies of neural population coding typically focus on the encoding of single objects in isolation. Experimental studies suggest that neural responses to multiple objects are well described by linear or nonlinear combinations of the responses to constituent objects, a phenomenon we call stimulus mixing. Here, we present a theoretical analysis of the consequences of common forms of stimulus mixing observed in cortical responses. We show that some of these mixing rules can severely compromise the brain's ability to decode the individual objects. This cost is usually greater than the cost incurred by even large reductions in the gain or large increases in neural variability, explaining why the benefits of attention can be understood primarily in terms of a stimulus selection, or demixing, mechanism rather than purely as a gain increase or noise reduction mechanism. The cost of stimulus mixing becomes even higher when the number of encoded objects increases, suggesting a novel mechanism that might contribute to set size effects observed in myriad psychophysical tasks. We further show that a specific form of neural correlation and heterogeneity in stimulus mixing among the neurons can partially alleviate the harmful effects of stimulus mixing. Finally, we derive simple conditions that must be satisfied for unharmful mixing of stimuli. PMID:25740513
