-
Continuous decoding of human grasp kinematics using epidural and subdural signals
PubMed Central
Flint, Robert D.; Rosenow, Joshua M.; Tate, Matthew C.; Slutzky, Marc W.
2017-01-01
Objective Restoring or replacing function in paralyzed individuals will one day be achieved through the use of brain-machine interfaces (BMIs). Regaining hand function is a major goal for paralyzed patients. Two competing prerequisites for the widespread adoption of any hand neuroprosthesis are: accurate control over the fine details of movement, and minimized invasiveness. Here, we explore the interplay between these two goals by comparing our ability to decode hand movements with subdural and epidural field potentials. Approach We measured the accuracy of decoding continuous hand and finger kinematics during naturalistic grasping motions in five human subjects. We recorded subdural surface potentials (electrocorticography; ECoG) as well as epidural field potentials (EFPs), with both standard- and high-resolution electrode arrays. Main results In all five subjects, decoding of continuous kinematics significantly exceeded chance, using either ECoG or EFPs. ECoG decoding accuracy compared favorably with prior investigations of grasp kinematics (mean ± SD grasp aperture variance accounted for was 0.54 ± 0.05 across all subjects, 0.75 ± 0.09 for the best subject). In general, EFP decoding performed comparably to ECoG decoding. The 7–20 Hz and 70–115 Hz spectral bands contained the most information about grasp kinematics, with the 70–115 Hz band containing greater information about more subtle movements. Higher-resolution recording arrays provided clearly superior performance compared to standard-resolution arrays. Significance To approach the fine motor control achieved by an intact brain-body system, it will be necessary to execute motor intent on a continuous basis with high accuracy. The current results demonstrate that this level of accuracy might be achievable not just with ECoG, but with EFPs as well. Epidural placement of electrodes is less invasive, and therefore may incur less risk of encephalitis or stroke than subdural placement of electrodes.
Accurately decoding motor commands at the epidural level may be an important step towards a clinically viable brain-machine interface. PMID:27900947
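The grasp-aperture accuracy above is reported as variance accounted for (VAF). As a minimal sketch of how such a figure is computed, the following example fits an ordinary least-squares decoder to synthetic band-power features and scores it with VAF; all data and dimensions here are invented stand-ins, not the study's recordings.

```python
import numpy as np

def vaf(actual, predicted):
    """Variance accounted for: 1 - Var(actual - predicted) / Var(actual)."""
    return 1.0 - np.var(actual - predicted) / np.var(actual)

rng = np.random.default_rng(0)
# Synthetic stand-ins: 200 time samples of 16 spectral-band features
# linearly related to a grasp-aperture trace, plus noise.
X = rng.standard_normal((200, 16))
w_true = rng.standard_normal(16)
aperture = X @ w_true + 0.5 * rng.standard_normal(200)

# Fit an ordinary least-squares decoder on the first half, score on the second.
X_tr, X_te = X[:100], X[100:]
y_tr, y_te = aperture[:100], aperture[100:]
w = np.linalg.lstsq(X_tr, y_tr, rcond=None)[0]
score = vaf(y_te, X_te @ w)
print(round(score, 3))
```

A perfect prediction gives VAF = 1, and a decoder no better than predicting the mean gives VAF near 0.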
-
Parietal neural prosthetic control of a computer cursor in a graphical-user-interface task
NASA Astrophysics Data System (ADS)
Revechkis, Boris; Aflalo, Tyson NS; Kellis, Spencer; Pouratian, Nader; Andersen, Richard A.
2014-12-01
Objective. To date, the majority of Brain-Machine Interfaces have been used to perform simple tasks with sequences of individual targets in otherwise blank environments. In this study we developed a more practical and clinically relevant task that approximated modern computers and graphical user interfaces (GUIs). This task could be problematic given the known sensitivity of areas typically used for BMIs to visual stimuli, eye movements, decision-making, and attentional control. Consequently, we sought to assess the effect of a complex, GUI-like task on the quality of neural decoding. Approach. A male rhesus macaque monkey was implanted with two 96-channel electrode arrays in area 5d of the superior parietal lobule. The animal was trained to perform a GUI-like ‘Face in a Crowd’ task on a computer screen that required selecting one cued, icon-like, face image from a group of alternatives (the ‘Crowd’) using a neurally controlled cursor. We assessed whether the crowd affected decodes of intended cursor movements by comparing it to a ‘Crowd Off’ condition in which only the matching target appeared without alternatives. We also examined if training a neural decoder with the Crowd On rather than Off had any effect on subsequent decode quality. Main results. Despite the additional demands of working with the Crowd On, the animal was able to robustly perform the task under Brain Control. The presence of the crowd did not itself affect decode quality. Training the decoder with the Crowd On relative to Off had no negative influence on subsequent decoding performance. Additionally, the subject was able to gaze around freely without influencing cursor position. Significance. Our results demonstrate that area 5d recordings can be used for decoding in a complex, GUI-like task with free gaze. Thus, this area is a promising source of signals for neural prosthetics that utilize computing devices with GUI interfaces, e.g. personal computers, mobile devices, and tablet computers.
-
Parietal neural prosthetic control of a computer cursor in a graphical-user-interface task.
PubMed
Revechkis, Boris; Aflalo, Tyson N S; Kellis, Spencer; Pouratian, Nader; Andersen, Richard A
2014-12-01
To date, the majority of Brain-Machine Interfaces have been used to perform simple tasks with sequences of individual targets in otherwise blank environments. In this study we developed a more practical and clinically relevant task that approximated modern computers and graphical user interfaces (GUIs). This task could be problematic given the known sensitivity of areas typically used for BMIs to visual stimuli, eye movements, decision-making, and attentional control. Consequently, we sought to assess the effect of a complex, GUI-like task on the quality of neural decoding. A male rhesus macaque monkey was implanted with two 96-channel electrode arrays in area 5d of the superior parietal lobule. The animal was trained to perform a GUI-like 'Face in a Crowd' task on a computer screen that required selecting one cued, icon-like, face image from a group of alternatives (the 'Crowd') using a neurally controlled cursor. We assessed whether the crowd affected decodes of intended cursor movements by comparing it to a 'Crowd Off' condition in which only the matching target appeared without alternatives. We also examined if training a neural decoder with the Crowd On rather than Off had any effect on subsequent decode quality. Despite the additional demands of working with the Crowd On, the animal was able to robustly perform the task under Brain Control. The presence of the crowd did not itself affect decode quality. Training the decoder with the Crowd On relative to Off had no negative influence on subsequent decoding performance. Additionally, the subject was able to gaze around freely without influencing cursor position. Our results demonstrate that area 5d recordings can be used for decoding in a complex, GUI-like task with free gaze. Thus, this area is a promising source of signals for neural prosthetics that utilize computing devices with GUI interfaces, e.g. personal computers, mobile devices, and tablet computers.
-
Temporal Response Properties of Accessory Olfactory Bulb Neurons: Limitations and Opportunities for Decoding.
PubMed
Yoles-Frenkel, Michal; Kahan, Anat; Ben-Shaul, Yoram
2018-05-23
The vomeronasal system (VNS) is a major vertebrate chemosensory system that functions in parallel to the main olfactory system (MOS). Despite many similarities, the two systems dramatically differ in the temporal domain. While MOS responses are governed by breathing and follow a subsecond temporal scale, VNS responses are uncoupled from breathing and evolve over seconds. This suggests that the contribution of response dynamics to stimulus information will differ between these systems. While temporal dynamics in the MOS are widely investigated, similar analyses in the accessory olfactory bulb (AOB) are lacking. Here, we have addressed this issue using controlled stimulus delivery to the vomeronasal organ of male and female mice. We first analyzed the temporal properties of AOB projection neurons and demonstrated that neurons display prolonged, variable, and neuron-specific characteristics. We then analyzed various decoding schemes using AOB population responses. We showed that compared with the simplest scheme (i.e., integration of spike counts over the entire response period), the division of this period into smaller temporal bins actually yields poorer decoding accuracy. However, optimal classification accuracy can be achieved well before the end of the response period by integrating spike counts within temporally defined windows. Since VNS stimulus uptake is variable, we analyzed decoding using limited information about stimulus uptake time, and showed that with enough neurons, such time-invariant decoding is feasible. Finally, we conducted simulations that demonstrated that, unlike the main olfactory bulb, the temporal features of AOB neurons disfavor decoding with high temporal accuracy, and, rather, support decoding without precise knowledge of stimulus uptake time. SIGNIFICANCE STATEMENT A key goal in sensory system research is to identify which metrics of neuronal activity are relevant for decoding stimulus features. 
Here, we describe the first systematic analysis of temporal coding in the vomeronasal system (VNS), a chemosensory system devoted to socially relevant cues. Compared with the main olfactory system, timescales of VNS function are inherently slower and more variable. Using various analyses of real and simulated data, we show that the consideration of response times relative to stimulus uptake can aid the decoding of stimulus information from neuronal activity. However, response properties of accessory olfactory bulb neurons favor decoding schemes that do not rely on the precise timing of stimulus uptake. Such schemes are consistent with the variable nature of VNS stimulus uptake. Copyright © 2018 the authors.
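The decoding scheme described above, integrating spike counts over a response window rather than dividing it into fine temporal bins, can be illustrated with a toy simulation. This sketch is not the authors' analysis: the firing rates, window length, and the leave-one-out nearest-centroid classifier are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
n_trials, n_neurons = 60, 20
stimuli = rng.integers(0, 2, n_trials)        # two stimulus classes
# Simulated per-neuron firing rates (Hz) that differ between classes.
rates = np.where(stimuli[:, None] == 0, 5.0, 8.0) \
        + rng.normal(0, 0.5, (n_trials, n_neurons))
window = 4.0                                   # seconds of integration
counts = rng.poisson(rates * window)           # spike counts in the window

# Leave-one-out nearest-centroid decoding on the integrated counts.
correct = 0
for i in range(n_trials):
    mask = np.arange(n_trials) != i
    c0 = counts[mask][stimuli[mask] == 0].mean(axis=0)
    c1 = counts[mask][stimuli[mask] == 1].mean(axis=0)
    pred = 0 if np.linalg.norm(counts[i] - c0) < np.linalg.norm(counts[i] - c1) else 1
    correct += pred == stimuli[i]
accuracy = correct / n_trials
print(accuracy)
```

Shrinking `window` or splitting it into many short bins adds Poisson variability per bin, which is one intuition for why finer binning can hurt accuracy in this regime.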
-
Computational analysis of TRAPPC9: candidate gene for autosomal recessive non-syndromic mental retardation.
PubMed
Khattak, Naureen Aslam; Mir, Asif
2014-01-01
Mental retardation (MR)/intellectual disability (ID) is a neuro-developmental disorder characterized by a low intellectual quotient (IQ) and deficits in adaptive behavior related to everyday life tasks, such as delayed language acquisition, social skills, or self-help skills, with onset before age 18. To date, a few genes (PRSS12, CRBN, CC2D1A, GRIK2, TUSC3, TRAPPC9, TECR, ST3GAL3, MED23, MAN1B1, NSUN1) for autosomal-recessive forms of non-syndromic MR (NS-ARMR) have been identified and established in various families with ID. The recently reported candidate gene TRAPPC9 was selected for computational analysis to explore its potentially important role in pathology, as it is the only gene for ID reported in more than five different familial cases worldwide. YASARA (12.4.1) was utilized to generate three-dimensional structures of the candidate gene TRAPPC9. Hybrid structure prediction was employed. The crystal structure of a conserved metalloprotein from Bacillus cereus (3D19-C) was selected as the most suitable template using position-specific iterated BLAST. Template (3D19-C) parameters were based on E-value, Z-score, resolution, and quality score (0.32, -1.152, 2.30 Å, and 0.684, respectively). Model reliability showed 93.1% of residues placed in the most favored region, with a quality factor of 96.684 and an overall G-factor of 0.20 (dihedrals 0.06 and covalent 0.39, respectively). Protein-protein docking analysis demonstrated that TRAPPC9 showed strong interactions of the amino acid residues S(253), S(251), Y(256), G(243), D(131) with R(105), Q(425), W(226), N(255), S(233) of its functional partner IKBKB. Protein-protein interacting residues could facilitate the exploration of structural and functional outcomes of wild-type and mutated TRAPPC9 protein. Actively involved residues can be used to elucidate the binding properties of the protein, and to develop drug therapy for NS-ARMR patients.
-
Concatenated coding systems employing a unit-memory convolutional code and a byte-oriented decoding algorithm
NASA Technical Reports Server (NTRS)
Lee, L.-N.
1977-01-01
Concatenated coding systems utilizing a convolutional code as the inner code and a Reed-Solomon code as the outer code are considered. In order to obtain very reliable communications over a very noisy channel with relatively modest coding complexity, it is proposed to concatenate a byte-oriented unit-memory convolutional code with an RS outer code whose symbol size is one byte. It is further proposed to utilize a real-time minimal-byte-error probability decoding algorithm, together with feedback from the outer decoder, in the decoder for the inner convolutional code. The performance of the proposed concatenated coding system is studied, and the improvement over conventional concatenated systems due to each additional feature is isolated.
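The inner code in such a concatenated system can be illustrated with a generic rate-1/2 convolutional encoder. This is a standard textbook code (constraint length 3, generator polynomials 7 and 5 octal), not the unit-memory code proposed in the paper, and the Reed-Solomon outer code and feedback decoder are omitted.

```python
def conv_encode(bits, g1=0b111, g2=0b101, k=3):
    """Rate-1/2 convolutional encoder: two output bits per input bit,
    each a parity of the bits currently in the k-bit shift register.

    A generic textbook code, not the paper's unit-memory code."""
    state = 0
    out = []
    for b in bits:
        state = ((state << 1) | b) & ((1 << k) - 1)  # shift the new bit in
        out.append(bin(state & g1).count("1") % 2)   # parity under generator g1
        out.append(bin(state & g2).count("1") % 2)   # parity under generator g2
    return out

msg = [1, 0, 1, 1]
code = conv_encode(msg)
print(code)  # eight output bits for four input bits
```

In a concatenated system, `msg` would itself be the byte stream produced by the Reed-Solomon outer encoder, and a Viterbi-style inner decoder would hand byte estimates back to the outer decoder.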
-
Concatenated coding systems employing a unit-memory convolutional code and a byte-oriented decoding algorithm
NASA Technical Reports Server (NTRS)
Lee, L. N.
1976-01-01
Concatenated coding systems utilizing a convolutional code as the inner code and a Reed-Solomon code as the outer code are considered. In order to obtain very reliable communications over a very noisy channel with relatively small coding complexity, it is proposed to concatenate a byte oriented unit memory convolutional code with an RS outer code whose symbol size is one byte. It is further proposed to utilize a real time minimal byte error probability decoding algorithm, together with feedback from the outer decoder, in the decoder for the inner convolutional code. The performance of the proposed concatenated coding system is studied, and the improvement over conventional concatenated systems due to each additional feature is isolated.
-
Evaluation of various deformable image registrations for point and volume variations
NASA Astrophysics Data System (ADS)
Han, Su Chul; Lee, Soon Sung; Kim, Mi-Sook; Ji, Young Hoon; Kim, Kum Bae; Choi, Sang Hyun; Park, Seungwoo; Jung, Haijo; Yoo, Hyung Jun; Yi, Chul Young
2015-07-01
The accuracy of deformable image registration (DIR) has a significant dosimetric impact in radiation treatment planning. Many groups have studied the accuracy of DIR. In this study, we evaluated the accuracy of various DIR algorithms by using variations of the deformation point and volume. The reference image (I_ref) and volume (V_ref) were first generated by using virtual deformation QA software (ImSimQA, Oncology System Limited, UK). We deformed I_ref with axial movement of the deformation point, and V_ref depending on the type of deformation (relaxation and contraction), in the ImSimQA software. The deformed image (I_def) and volume (V_def) acquired by using the ImSimQA software were inversely deformed relative to I_ref and V_ref by using DIR algorithms. As a result, we acquired a deformed image (I_id) from I_def and a volume (V_id) from V_def. Four intensity-based algorithms were tested: Horn-Schunck optical flow (HS), iterative optical flow (IOF), modified demons (MD), and fast demons (FD), using the Deformable Image Registration and Adaptive Radiotherapy Toolkit (DIRART) of MATLAB. The image similarity between I_ref and I_id was calculated to evaluate the accuracy of the DIR algorithms using the Normalized Mutual Information (NMI) and Normalized Cross Correlation (NCC) metrics. When the deformation point was moved 4 mm, the NMI was above 1.81 and the NCC was above 0.99 for all DIR algorithms. As the degree of deformation increased, the image similarity decreased. The deformation was classified into two types: deformation 1 increased V_ref (relaxation) and deformation 2 decreased V_ref (contraction). When V_ref was increased or decreased by about 12%, the difference between V_ref and V_id was within ±5% regardless of the type of deformation. The Dice Similarity Coefficient (DSC) was above 0.95 in deformation 1 for all algorithms except MD, and above 0.95 in deformation 2 for all DIR algorithms. I_def and V_def were not completely restored to I_ref and V_ref, and the accuracy of the DIR algorithms differed depending on the degree of deformation. Hence, the performance of DIR algorithms should be verified for the desired applications.
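The similarity metrics used above (NMI, NCC, and DSC) are straightforward to compute. The following sketch implements them in plain NumPy on synthetic images and masks; it is a generic illustration of the metrics, not the DIRART implementation.

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation between two images."""
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    return float(np.mean(a * b))

def nmi(a, b, bins=32):
    """Normalized mutual information, (H(A) + H(B)) / H(A, B)."""
    hist, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = hist / hist.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    hx = -np.sum(px[px > 0] * np.log(px[px > 0]))
    hy = -np.sum(py[py > 0] * np.log(py[py > 0]))
    hxy = -np.sum(pxy[pxy > 0] * np.log(pxy[pxy > 0]))
    return float((hx + hy) / hxy)

def dsc(a_mask, b_mask):
    """Dice similarity coefficient between two binary masks."""
    return 2.0 * np.logical_and(a_mask, b_mask).sum() / (a_mask.sum() + b_mask.sum())

rng = np.random.default_rng(2)
img = rng.random((64, 64))
noisy = img + 0.05 * rng.standard_normal((64, 64))
print(round(ncc(img, img), 3), round(ncc(img, noisy), 3))
```

Identical images give NCC = 1 and NMI = 2, which is why the values above 1.81 reported in the study indicate close but imperfect restoration.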
-
Error-Rate Bounds for Coded PPM on a Poisson Channel
NASA Technical Reports Server (NTRS)
Moision, Bruce; Hamkins, Jon
2009-01-01
Equations for computing tight bounds on error rates for coded pulse-position modulation (PPM) on a Poisson channel at high signal-to-noise ratio have been derived. These equations and elements of the underlying theory are expected to be especially useful in designing codes for PPM optical communication systems. The equations and the underlying theory apply, more specifically, to a case in which a) At the transmitter, a linear outer code is concatenated with an inner code that includes an accumulator and a bit-to-PPM-symbol mapping (see figure) [this concatenation is known in the art as "accumulate-PPM" (abbreviated "APPM")]; b) The transmitted signal propagates on a memoryless binary-input Poisson channel; and c) At the receiver, near-maximum-likelihood (ML) decoding is effected through an iterative process. Such a coding/modulation/decoding scheme is a variation on the concept of turbo codes, which have complex structures, such that an exact analytical expression for the performance of a particular code is intractable. However, techniques for accurately estimating the performances of turbo codes have been developed. The performance of a typical turbo code includes (1) a "waterfall" region consisting of a steep decrease of error rate with increasing signal-to-noise ratio (SNR) at low to moderate SNR, and (2) an "error floor" region with a less steep decrease of error rate with increasing SNR at moderate to high SNR. The techniques used heretofore for estimating performance in the waterfall region have differed from those used for estimating performance in the error-floor region. For coded PPM, prior to the present derivations, equations for accurate prediction of the performance of coded PPM at high SNR did not exist, so that it was necessary to resort to time-consuming simulations in order to make such predictions. The present derivation makes it unnecessary to perform such time-consuming simulations.
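The channel model above is easy to simulate directly: for uncoded M-ary PPM on a memoryless Poisson channel, maximum-likelihood detection reduces to picking the slot with the largest photon count. The sketch below estimates the symbol-error rate by Monte Carlo; the PPM order and photon counts are illustrative, and no error-correcting code is applied, so this is the uncoded baseline rather than the paper's coded bounds.

```python
import numpy as np

rng = np.random.default_rng(3)
M = 16                # PPM order: one pulsed slot out of M per symbol
n_s, n_b = 10.0, 0.2  # mean signal and background photon counts per slot
trials = 20000

symbols = rng.integers(0, M, trials)
counts = rng.poisson(n_b, (trials, M))              # background in every slot
counts[np.arange(trials), symbols] += rng.poisson(n_s, trials)  # add the pulse

# ML detection on a Poisson channel: choose the slot with the largest count.
detected = counts.argmax(axis=1)
ser = float(np.mean(detected != symbols))
print(ser)
```

The high-SNR bounds in the article replace this kind of time-consuming simulation for the coded (accumulate-PPM) case, where the error floor would otherwise require prohibitively many trials to measure.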
-
Feature Selection Methods for Robust Decoding of Finger Movements in a Non-human Primate
PubMed Central
Padmanaban, Subash; Baker, Justin; Greger, Bradley
2018-01-01
Objective: The performance of machine learning algorithms used for neural decoding of dexterous tasks may be impeded due to problems arising when dealing with high-dimensional data. The objective of feature selection algorithms is to choose a near-optimal subset of features from the original feature space to improve the performance of the decoding algorithm. The aim of our study was to compare the effects of four feature selection techniques, Wilcoxon signed-rank test, Relative Importance, Principal Component Analysis (PCA), and Mutual Information Maximization, on SVM classification performance for a dexterous decoding task. Approach: A nonhuman primate (NHP) was trained to perform small coordinated movements, similar to typing. An array of microelectrodes was implanted in the hand area of the motor cortex of the NHP and used to record action potentials (AP) during finger movements. A Support Vector Machine (SVM) was used to classify which finger movement the NHP was making based upon AP firing rates. We used the SVM classification to examine the functional parameters of (i) robustness to simulated failure and (ii) longevity of classification. We also compared the effect of using isolated-neuron and multi-unit firing rates as the feature vector supplied to the SVM. Main results: The average decoding accuracy for multi-unit features and single-unit features using Mutual Information Maximization (MIM) across 47 sessions was 96.74 ± 3.5% and 97.65 ± 3.36%, respectively. The reduction in decoding accuracy between using 100% of the features and 10% of features based on MIM was 45.56% (from 93.7 to 51.09%) and 4.75% (from 95.32 to 90.79%) for multi-unit and single-unit features, respectively. MIM had the best performance of the four feature selection methods. Significance: These results suggest improved decoding performance can be achieved by using optimally selected features.
The results based on clinically relevant performance metrics also suggest that the decoding algorithm can be made robust by using optimal features and feature selection algorithms. We believe that even a few percent increase in performance is important and improves the decoding accuracy of the machine learning algorithm potentially increasing the ease of use of a brain machine interface. PMID:29467602
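Mutual Information Maximization scores each feature by its mutual information with the class labels and keeps the top-ranked subset. A minimal sketch, using a plug-in MI estimate on discretized features and invented data rather than the study's recordings:

```python
import numpy as np

def mi_score(feature, labels, bins=8):
    """Plug-in estimate of mutual information I(X;Y) between a
    discretized feature and class labels, in nats."""
    edges = np.histogram_bin_edges(feature, bins)
    x = np.digitize(feature, edges[1:-1])   # bin index 0..bins-1
    mi = 0.0
    for xv in np.unique(x):
        for yv in np.unique(labels):
            pxy = np.mean((x == xv) & (labels == yv))
            px, py = np.mean(x == xv), np.mean(labels == yv)
            if pxy > 0:
                mi += pxy * np.log(pxy / (px * py))
    return mi

rng = np.random.default_rng(4)
labels = rng.integers(0, 2, 400)
informative = labels + 0.3 * rng.standard_normal(400)   # rate tied to the class
noise = rng.standard_normal(400)                        # unrelated unit
scores = [mi_score(f, labels) for f in (informative, noise)]
ranked = np.argsort(scores)[::-1]   # MIM keeps the top-scoring features
print(ranked)
```

The informative feature scores near ln 2 nats (the label entropy) while the noise feature scores near zero, so MIM would retain the first and drop the second.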
-
A Bidirectional Brain-Machine Interface Featuring a Neuromorphic Hardware Decoder.
PubMed
Boi, Fabio; Moraitis, Timoleon; De Feo, Vito; Diotalevi, Francesco; Bartolozzi, Chiara; Indiveri, Giacomo; Vato, Alessandro
2016-01-01
Bidirectional brain-machine interfaces (BMIs) establish a two-way direct communication link between the brain and the external world. A decoder translates recorded neural activity into motor commands and an encoder delivers sensory information collected from the environment directly to the brain creating a closed-loop system. These two modules are typically integrated in bulky external devices. However, the clinical support of patients with severe motor and sensory deficits requires compact, low-power, and fully implantable systems that can decode neural signals to control external devices. As a first step toward this goal, we developed a modular bidirectional BMI setup that uses a compact neuromorphic processor as a decoder. On this chip we implemented a network of spiking neurons built using its ultra-low-power mixed-signal analog/digital circuits. On-chip on-line spike-timing-dependent plasticity synapse circuits enabled the network to learn to decode neural signals recorded from the brain into motor outputs controlling the movements of an external device. The modularity of the BMI allowed us to tune the individual components of the setup without modifying the whole system. In this paper, we present the features of this modular BMI and describe how we configured the network of spiking neuron circuits to implement the decoder and to coordinate it with the encoder in an experimental BMI paradigm that connects bidirectionally the brain of an anesthetized rat with an external object. We show that the chip learned the decoding task correctly, allowing the interfaced brain to control the object's trajectories robustly. Based on our demonstration, we propose that neuromorphic technology is mature enough for the development of BMI modules that are sufficiently low-power and compact, while being highly computationally powerful and adaptive.
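The on-chip learning described above relies on spike-timing-dependent plasticity (STDP). A common software model of the pair-based rule, with illustrative amplitudes and time constant rather than the chip's actual circuit parameters, looks like this:

```python
import numpy as np

def stdp_dw(dt, a_plus=0.01, a_minus=0.012, tau=20.0):
    """Pair-based STDP weight change: potentiate when the presynaptic
    spike precedes the postsynaptic one (dt > 0), otherwise depress.

    dt = t_post - t_pre in ms; tau is the plasticity time constant.
    Amplitudes and tau are illustrative values, not the chip's."""
    return np.where(dt > 0,
                    a_plus * np.exp(-dt / tau),
                    -a_minus * np.exp(dt / tau))

# Weight changes for a few pre/post spike-time differences (ms).
dts = np.array([-40.0, -5.0, 5.0, 40.0])
print(stdp_dw(dts))
```

The exponential decay means only near-coincident spike pairs change the synapse appreciably, which is what lets such a network learn input-output associations online.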
-
A Bidirectional Brain-Machine Interface Featuring a Neuromorphic Hardware Decoder
PubMed Central
Boi, Fabio; Moraitis, Timoleon; De Feo, Vito; Diotalevi, Francesco; Bartolozzi, Chiara; Indiveri, Giacomo; Vato, Alessandro
2016-01-01
Bidirectional brain-machine interfaces (BMIs) establish a two-way direct communication link between the brain and the external world. A decoder translates recorded neural activity into motor commands and an encoder delivers sensory information collected from the environment directly to the brain creating a closed-loop system. These two modules are typically integrated in bulky external devices. However, the clinical support of patients with severe motor and sensory deficits requires compact, low-power, and fully implantable systems that can decode neural signals to control external devices. As a first step toward this goal, we developed a modular bidirectional BMI setup that uses a compact neuromorphic processor as a decoder. On this chip we implemented a network of spiking neurons built using its ultra-low-power mixed-signal analog/digital circuits. On-chip on-line spike-timing-dependent plasticity synapse circuits enabled the network to learn to decode neural signals recorded from the brain into motor outputs controlling the movements of an external device. The modularity of the BMI allowed us to tune the individual components of the setup without modifying the whole system. In this paper, we present the features of this modular BMI and describe how we configured the network of spiking neuron circuits to implement the decoder and to coordinate it with the encoder in an experimental BMI paradigm that connects bidirectionally the brain of an anesthetized rat with an external object. We show that the chip learned the decoding task correctly, allowing the interfaced brain to control the object's trajectories robustly. Based on our demonstration, we propose that neuromorphic technology is mature enough for the development of BMI modules that are sufficiently low-power and compact, while being highly computationally powerful and adaptive. PMID:28018162
-
Deep learning with convolutional neural networks for EEG decoding and visualization
PubMed Central
Schirrmeister, Robin Tibor; Springenberg, Jost Tobias; Fiederer, Lukas Dominique Josef; Glasstetter, Martin; Eggensperger, Katharina; Tangermann, Michael; Hutter, Frank; Burgard, Wolfram; Ball, Tonio
2017-01-01
Deep learning with convolutional neural networks (deep ConvNets) has revolutionized computer vision through end-to-end learning, that is, learning from the raw data. There is increasing interest in using deep ConvNets for end-to-end EEG analysis, but a better understanding of how to design and train ConvNets for end-to-end EEG decoding and how to visualize the informative EEG features the ConvNets learn is still needed. Here, we studied deep ConvNets with a range of different architectures, designed for decoding imagined or executed tasks from raw EEG. Our results show that recent advances from the machine learning field, including batch normalization and exponential linear units, together with a cropped training strategy, boosted the deep ConvNets decoding performance, reaching at least as good performance as the widely used filter bank common spatial patterns (FBCSP) algorithm (mean decoding accuracies 82.1% FBCSP, 84.0% deep ConvNets). While FBCSP is designed to use spectral power modulations, the features used by ConvNets are not fixed a priori. Our novel methods for visualizing the learned features demonstrated that ConvNets indeed learned to use spectral power modulations in the alpha, beta, and high gamma frequencies, and proved useful for spatially mapping the learned features by revealing the topography of the causal contributions of features in different frequency bands to the decoding decision. Our study thus shows how to design and train ConvNets to decode task-related information from the raw EEG without handcrafted features and highlights the potential of deep ConvNets combined with advanced visualization techniques for EEG-based brain mapping. Hum Brain Mapp 38:5391-5420, 2017. © 2017 Wiley Periodicals, Inc. PMID:28782865
-
Deep learning with convolutional neural networks for EEG decoding and visualization.
PubMed
Schirrmeister, Robin Tibor; Springenberg, Jost Tobias; Fiederer, Lukas Dominique Josef; Glasstetter, Martin; Eggensperger, Katharina; Tangermann, Michael; Hutter, Frank; Burgard, Wolfram; Ball, Tonio
2017-11-01
Deep learning with convolutional neural networks (deep ConvNets) has revolutionized computer vision through end-to-end learning, that is, learning from the raw data. There is increasing interest in using deep ConvNets for end-to-end EEG analysis, but a better understanding of how to design and train ConvNets for end-to-end EEG decoding and how to visualize the informative EEG features the ConvNets learn is still needed. Here, we studied deep ConvNets with a range of different architectures, designed for decoding imagined or executed tasks from raw EEG. Our results show that recent advances from the machine learning field, including batch normalization and exponential linear units, together with a cropped training strategy, boosted the deep ConvNets decoding performance, reaching at least as good performance as the widely used filter bank common spatial patterns (FBCSP) algorithm (mean decoding accuracies 82.1% FBCSP, 84.0% deep ConvNets). While FBCSP is designed to use spectral power modulations, the features used by ConvNets are not fixed a priori. Our novel methods for visualizing the learned features demonstrated that ConvNets indeed learned to use spectral power modulations in the alpha, beta, and high gamma frequencies, and proved useful for spatially mapping the learned features by revealing the topography of the causal contributions of features in different frequency bands to the decoding decision. Our study thus shows how to design and train ConvNets to decode task-related information from the raw EEG without handcrafted features and highlights the potential of deep ConvNets combined with advanced visualization techniques for EEG-based brain mapping. Hum Brain Mapp 38:5391-5420, 2017. © 2017 Wiley Periodicals, Inc. © 2017 The Authors Human Brain Mapping Published by Wiley Periodicals, Inc.
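The cropped training strategy mentioned above turns each trial into many overlapping time windows that share the trial's label, multiplying the number of training examples the network sees. A minimal sketch of the cropping step; the channel count, crop length, and stride are illustrative, not the paper's settings:

```python
import numpy as np

def make_crops(trial, crop_len, stride):
    """Slice one EEG trial (channels x time) into overlapping training crops.

    Each crop inherits the trial's label during training."""
    n_ch, n_t = trial.shape
    starts = range(0, n_t - crop_len + 1, stride)
    return np.stack([trial[:, s:s + crop_len] for s in starts])

rng = np.random.default_rng(5)
trial = rng.standard_normal((22, 1000))      # e.g. 22 channels, 1000 samples
crops = make_crops(trial, crop_len=500, stride=50)
print(crops.shape)                           # (n_crops, channels, crop_len)
```

At test time, predictions over all crops of a trial are typically averaged to produce a single trial-level decision.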
-
Global cortical activity predicts shape of hand during grasping
PubMed Central
Agashe, Harshavardhan A.; Paek, Andrew Y.; Zhang, Yuhang; Contreras-Vidal, José L.
2015-01-01
Recent studies show that the amplitude of cortical field potentials is modulated in the time domain by grasping kinematics. However, it is unknown if these low frequency modulations persist and contain enough information to decode grasp kinematics in macro-scale activity measured at the scalp via electroencephalography (EEG). Further, it is unclear as to whether joint angle velocities or movement synergies are the optimal kinematics spaces to decode. In this offline decoding study, we infer from human EEG, hand joint angular velocities as well as synergistic trajectories as subjects perform natural reach-to-grasp movements. Decoding accuracy, measured as the correlation coefficient (r) between the predicted and actual movement kinematics, was r = 0.49 ± 0.02 across 15 hand joints. Across the first three kinematic synergies, decoding accuracies were r = 0.59 ± 0.04, 0.47 ± 0.06, and 0.32 ± 0.05. The spatial-temporal pattern of EEG channel recruitment showed early involvement of contralateral frontal-central scalp areas followed by later activation of central electrodes over primary sensorimotor cortical areas. Information content in EEG about the grasp type peaked at 250 ms after movement onset. The high decoding accuracies in this study are significant not only as evidence for time-domain modulation in macro-scale brain activity, but for the field of brain-machine interfaces as well. Our decoding strategy, which harnesses the neural “symphony” as opposed to local members of the neural ensemble (as in intracranial approaches), may provide a means of extracting information about motor intent for grasping without the need for penetrating electrodes and suggests that it may be soon possible to develop non-invasive neural interfaces for the control of prosthetic limbs. PMID:25914616
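Decoding accuracy here is the Pearson correlation coefficient r between predicted and actual kinematics. A small self-contained example on a synthetic joint-velocity trace; the signal and noise level are invented, not the study's data:

```python
import numpy as np

def pearson_r(a, b):
    """Correlation coefficient between predicted and actual kinematics."""
    a = a - a.mean()
    b = b - b.mean()
    return float(np.sum(a * b) / np.sqrt(np.sum(a * a) * np.sum(b * b)))

rng = np.random.default_rng(6)
actual = np.sin(np.linspace(0, 8 * np.pi, 500))        # one joint-velocity trace
predicted = actual + 0.8 * rng.standard_normal(500)    # noisy decoder output
r = pearson_r(actual, predicted)
print(round(r, 2))
```

In the study this correlation was computed separately for each of the 15 hand joints (and for each kinematic synergy) and then averaged.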
-
Comparing Treatments for Children with ADHD and Word Reading Difficulties: A Randomized Clinical Trial
PubMed Central
Tamm, Leanne; Denton, Carolyn A.; Epstein, Jeffery N.; Schatschneider, Christopher; Taylor, Heather; Arnold, L. Eugene; Bukstein, Oscar; Anixt, Julia; Koshy, Anson; Newman, Nicholas C.; Maltinsky, Jan; Brinson, Patricia; Loren, Richard; Prasad, Mary R.; Ewing-Cobbs, Linda; Vaughn, Aaron
2017-01-01
Objective This randomized clinical trial compared Attention-Deficit/Hyperactivity Disorder (ADHD) treatment alone, intensive reading intervention alone, and their combination for children with ADHD and word reading difficulties and disabilities (RD). Method Children (n=216; predominantly African American males) in grades 2–5 with ADHD and word reading/decoding deficits were randomized to ADHD treatment (carefully-managed medication+parent training), reading treatment (intensive reading instruction), or combined ADHD+reading treatment. Outcomes were parent and teacher ADHD ratings and measures of word reading/decoding. Analyses utilized a mixed models covariate-adjusted gain score approach with post-test regressed onto pretest and other predictors. Results Inattention and hyperactivity/impulsivity outcomes were significantly better in the ADHD (parent Hedges g=.87/.75; teacher g=.67/.50) and combined (parent g=1.06/.95; teacher g=.36/.41) treatment groups than reading treatment alone; the ADHD and Combined groups did not differ significantly (parent g=.19/.20; teacher g=.31/.09). Word reading and decoding outcomes were significantly better in the reading (word reading g=.23; decoding g=.39) and combined (word reading g=.32; decoding g=.39) treatment groups than ADHD treatment alone; reading and combined groups did not differ (word reading g=.09; decoding g=.00). Significant group differences were maintained at the three- to five-month follow-up on all outcomes except word reading. Conclusions Children with ADHD and RD benefit from specific treatment of each disorder. ADHD treatment is associated with more improvement in ADHD symptoms than RD treatment, and reading instruction is associated with better word reading and decoding outcomes than ADHD treatment. The additive value of combining treatments was not significant within disorder, but the combination allows treating both disorders simultaneously. PMID:28333510
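The effect sizes above are Hedges's g: the standardized mean difference between groups with a small-sample bias correction. A minimal sketch on simulated gain scores; the group data are invented, not the trial's:

```python
import numpy as np

def hedges_g(x, y):
    """Hedges's g: Cohen's d with the small-sample bias correction."""
    nx, ny = len(x), len(y)
    # Pooled standard deviation across the two groups.
    sp = np.sqrt(((nx - 1) * np.var(x, ddof=1) + (ny - 1) * np.var(y, ddof=1))
                 / (nx + ny - 2))
    d = (np.mean(x) - np.mean(y)) / sp
    correction = 1.0 - 3.0 / (4.0 * (nx + ny) - 9.0)  # bias correction factor
    return d * correction

rng = np.random.default_rng(7)
treated = rng.normal(1.0, 1.0, 100)   # simulated gain scores, treatment group
control = rng.normal(0.0, 1.0, 100)   # simulated gain scores, comparison group
print(round(hedges_g(treated, control), 2))
```

By the usual conventions, g near 0.2 is a small effect, 0.5 medium, and 0.8 large, which puts the trial's parent-rated ADHD effects (g = .87 to 1.06) in the large range.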
-
Comparing treatments for children with ADHD and word reading difficulties: A randomized clinical trial.
PubMed
Tamm, Leanne; Denton, Carolyn A; Epstein, Jeffery N; Schatschneider, Christopher; Taylor, Heather; Arnold, L Eugene; Bukstein, Oscar; Anixt, Julia; Koshy, Anson; Newman, Nicholas C; Maltinsky, Jan; Brinson, Patricia; Loren, Richard E A; Prasad, Mary R; Ewing-Cobbs, Linda; Vaughn, Aaron
2017-05-01
This trial compared attention-deficit/hyperactivity disorder (ADHD) treatment alone, intensive reading intervention alone, and their combination for children with ADHD and word reading difficulties and disabilities (RD). Children (n = 216; predominantly African American males) in Grades 2-5 with ADHD and word reading/decoding deficits were randomized to ADHD treatment (medication + parent training), reading treatment (reading instruction), or combined ADHD + reading treatment. Outcomes were parent and teacher ADHD ratings and measures of word reading/decoding. Analyses utilized a mixed models covariate-adjusted gain score approach with posttest regressed onto pretest. Inattention and hyperactivity/impulsivity outcomes were significantly better in the ADHD (parent Hedges's g = .87/.75; teacher g = .67/.50) and combined (parent g = 1.06/.95; teacher g = .36/.41) treatment groups than reading treatment alone; the ADHD and Combined groups did not differ significantly (parent g = .19/.20; teacher g = .31/.09). Word reading and decoding outcomes were significantly better in the reading (word reading g = .23; decoding g = .39) and combined (word reading g = .32; decoding g = .39) treatment groups than ADHD treatment alone; reading and combined groups did not differ (word reading g = .09; decoding g = .00). Significant group differences were maintained at the 3- to 5-month follow-up on all outcomes except word reading. Children with ADHD and RD benefit from specific treatment of each disorder. ADHD treatment is associated with more improvement in ADHD symptoms than RD treatment, and reading instruction is associated with better word reading and decoding outcomes than ADHD treatment. The additive value of combining treatments was not significant within disorder, but the combination allows treating both disorders simultaneously. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
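Editorial note: the Hedges's g values reported above are bias-corrected standardized mean differences. A minimal sketch of the standard formula (pooled SD with the small-sample correction factor J; the numbers below are illustrative, not the study's data):

```python
import math

def hedges_g(m1, s1, n1, m2, s2, n2):
    """Hedges' g: bias-corrected standardized mean difference between two groups.

    m1/m2: group means; s1/s2: group SDs; n1/n2: group sizes.
    Uses the pooled SD and the small-sample correction J = 1 - 3/(4N - 9).
    """
    s_pooled = math.sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2))
    j = 1.0 - 3.0 / (4.0 * (n1 + n2) - 9.0)  # small-sample bias correction
    return j * (m1 - m2) / s_pooled

# with equal SDs and equal ns, g is just the corrected mean difference in SD units
g = hedges_g(10.0, 2.0, 72, 8.0, 2.0, 72)
```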
-
Improving zero-training brain-computer interfaces by mixing model estimators
NASA Astrophysics Data System (ADS)
Verhoeven, T.; Hübner, D.; Tangermann, M.; Müller, K. R.; Dambre, J.; Kindermans, P. J.
2017-06-01
Objective. Brain-computer interfaces (BCI) based on event-related potentials (ERP) incorporate a decoder to classify recorded brain signals and subsequently select a control signal that drives a computer application. Standard supervised BCI decoders require a tedious calibration procedure prior to every session. Several unsupervised classification methods have been proposed that tune the decoder during actual use and as such omit this calibration. Each of these methods has its own strengths and weaknesses. Our aim is to improve overall accuracy of ERP-based BCIs without calibration. Approach. We consider two approaches for unsupervised classification of ERP signals. Learning from label proportions (LLP) was recently shown to be guaranteed to converge to a supervised decoder when enough data is available. In contrast, the formerly proposed expectation maximization (EM) based decoding for ERP-BCI does not have this guarantee. However, while this decoder has high variance due to random initialization of its parameters, it obtains a higher accuracy faster than LLP when the initialization is good. We introduce a method to optimally combine these two unsupervised decoding methods, letting one method’s strengths compensate for the weaknesses of the other and vice versa. The new method is compared to the aforementioned methods in a resimulation of an experiment with a visual speller. Main results. Analysis of the experimental results shows that the new method exceeds the performance of the previous unsupervised classification approaches in terms of ERP classification accuracy and symbol selection accuracy during the spelling experiment. Furthermore, the method shows less dependency on random initialization of model parameters and is consequently more reliable. Significance. Improving the accuracy and subsequent reliability of calibrationless BCIs makes these systems more appealing for frequent use.
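The combination idea in this abstract can be sketched as an inverse-variance mixture of two decoder weight estimates. Everything below is illustrative: the function names, the scalar variance estimates, and the mixing rule are assumptions made for the sketch, not the formulation used in the paper.

```python
import numpy as np

def mix_estimators(w_em, var_em, w_llp, var_llp):
    """Combine two decoder weight estimates by inverse-variance weighting.

    w_em:  weights from the high-variance EM-based decoder.
    w_llp: weights from the guaranteed-convergence LLP decoder.
    The lower-variance estimate receives the larger mixing weight.
    """
    gamma = var_llp / (var_em + var_llp)  # weight assigned to the EM estimate
    return gamma * w_em + (1.0 - gamma) * w_llp

# toy example: two noisy estimates of the same underlying classifier weights
rng = np.random.default_rng(0)
w_true = rng.normal(size=8)
w_em = w_true + rng.normal(scale=0.5, size=8)   # noisier EM-style estimate
w_llp = w_true + rng.normal(scale=0.2, size=8)  # steadier LLP-style estimate
w_mix = mix_estimators(w_em, 0.25, w_llp, 0.04)
```

With equal variances the mixture reduces to a plain average, matching the intuition that neither estimator is preferred when both are equally uncertain.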
-
Reading Comprehension in a Large Cohort of French First Graders from Low Socio-Economic Status Families: A 7-Month Longitudinal Study
PubMed Central
Gentaz, Edouard; Sprenger-Charolles, Liliane; Theurel, Anne; Colé, Pascale
2013-01-01
Background The literature suggests that a complex relationship exists between the three main skills involved in reading comprehension (decoding, listening comprehension and vocabulary) and that this relationship depends on at least three other factors: orthographic transparency, children’s grade level and socioeconomic status (SES). This study investigated the relative contribution of the predictors of reading comprehension in a longitudinal design (from beginning to end of the first grade) in 394 French children from low SES families. Methodology/Principal findings Reading comprehension was measured at the end of the first grade using two tasks: one with short utterances and one with a medium-length narrative text. Accuracy in listening comprehension and vocabulary, and fluency of decoding skills, were measured at the beginning and end of the first grade. Accuracy in decoding skills was measured only at the beginning. Regression analyses showed that listening comprehension and decoding skills (accuracy and fluency) always significantly predicted reading comprehension. The contribution of decoding was greater when reading comprehension was assessed via the task using short utterances. Between the two assessments, the contribution of vocabulary, and of decoding skills especially, increased, while that of listening comprehension remained unchanged. Conclusion/Significance These results challenge the ‘simple view of reading’. They also have educational implications, since they show that it is possible to assess decoding and reading comprehension very early on in an orthography (i.e., French) that is less deep than English, even in low SES children. These assessments, combined with those of listening comprehension and vocabulary, may allow early identification of children at risk for reading difficulty and early provision of remedial training, which is most effective for them. PMID:24250802
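Editorial note: the "relative contribution of the predictors" analysis above is the kind of question standardized regression coefficients answer. A minimal sketch on synthetic data (the variable names, effect sizes, and noise level below are invented for illustration, not the study's data):

```python
import numpy as np

def standardized_betas(X, y):
    """Standardized OLS coefficients (beta weights).

    Standardizing predictors and outcome puts the coefficients on a common
    scale, so their magnitudes can be compared as relative contributions.
    """
    Xz = (X - X.mean(axis=0)) / X.std(axis=0)
    yz = (y - y.mean()) / y.std()
    return np.linalg.lstsq(Xz, yz, rcond=None)[0]

rng = np.random.default_rng(2)
n = 394  # cohort size from the abstract
decoding = rng.normal(size=n)
listening = rng.normal(size=n)
vocab = rng.normal(size=n)
# synthetic outcome: decoding contributes most, vocabulary least
reading = 0.5 * decoding + 0.4 * listening + 0.2 * vocab + rng.normal(scale=0.5, size=n)
betas = standardized_betas(np.column_stack([decoding, listening, vocab]), reading)
```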
-
Neuroprosthetic Decoder Training as Imitation Learning
PubMed Central
Merel, Josh; Paninski, Liam; Cunningham, John P.
2016-01-01
Neuroprosthetic brain-computer interfaces function via an algorithm that decodes neural activity of the user into movements of an end effector, such as a cursor or robotic arm. In practice, the decoder is often learned by updating its parameters while the user performs a task. When the user’s intention is not directly observable, recent methods have demonstrated value in training the decoder against a surrogate for the user’s intended movement. Here we show that training a decoder in this way is a novel variant of an imitation learning problem, where an oracle or expert is employed for supervised training in lieu of direct observations, which are not available. Specifically, we describe how a generic imitation learning meta-algorithm, dataset aggregation (DAgger), can be adapted to train a generic brain-computer interface. By deriving existing learning algorithms for brain-computer interfaces in this framework, we provide a novel analysis of regret (an important metric of learning efficacy) for brain-computer interfaces. This analysis allows us to characterize the space of algorithmic variants and bounds on their regret rates. Existing approaches for decoder learning have been performed in the cursor control setting, but the available design principles for these decoders are such that it has been impossible to scale them to naturalistic settings. Leveraging our findings, we then offer an algorithm that combines imitation learning with optimal control, which should allow training of arbitrary effectors for which optimal control can generate goal-oriented control. We demonstrate this novel and general BCI algorithm with simulated neuroprosthetic control of a 26 degree-of-freedom model of an arm, a sophisticated and realistic end effector. PMID:27191387
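The DAgger adaptation described above can be sketched as a short loop: run the current decoder, have an oracle (a surrogate for the user's intended movement) label the visited states, aggregate all labeled data, and refit. The function names, signatures, and the toy linear instantiation below are assumptions for the sketch, not the paper's API; in a real BCI the rollout distribution would depend on the decoder in the loop.

```python
import numpy as np

def dagger_train_decoder(rollout, oracle, fit, init_decoder, n_rounds=5):
    """DAgger-style decoder training loop (illustrative sketch).

    rollout(decoder) -> neural features observed under the current decoder.
    oracle(features) -> surrogate intended movements for those features.
    fit(X, Y)        -> new decoder trained on all aggregated pairs.
    """
    X_all, Y_all = [], []
    decoder = init_decoder
    for _ in range(n_rounds):
        X = rollout(decoder)   # features collected under the current decoder
        Y = oracle(X)          # expert labels: surrogate for intended movement
        X_all.append(X)
        Y_all.append(Y)
        decoder = fit(np.vstack(X_all), np.vstack(Y_all))  # refit on aggregate
    return decoder

# toy instantiation: linear decoder, noiseless linear "intent" oracle
rng = np.random.default_rng(1)
W_true = rng.normal(size=(4, 2))               # true features->kinematics map
rollout = lambda dec: rng.normal(size=(50, 4))
oracle = lambda X: X @ W_true
fit = lambda X, Y: np.linalg.lstsq(X, Y, rcond=None)[0]
W_hat = dagger_train_decoder(rollout, oracle, fit, init_decoder=np.zeros((4, 2)))
```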