Toward more versatile and intuitive cortical brain-machine interfaces.
Andersen, Richard A; Kellis, Spencer; Klaes, Christian; Aflalo, Tyson
2014-09-22
Brain-machine interfaces have great potential for the development of neuroprosthetic applications to assist patients suffering from brain injury or neurodegenerative disease. One type of brain-machine interface is a cortical motor prosthetic, which is used to assist paralyzed subjects. Motor prosthetics to date have typically used the motor cortex as a source of neural signals for controlling external devices. The review will focus on several new topics in the arena of cortical prosthetics. These include using: recordings from cortical areas outside motor cortex; local field potentials as a source of recorded signals; somatosensory feedback for more dexterous control of robotics; and new decoding methods that work in concert to form an ecology of decode algorithms. These new advances promise to greatly accelerate the applicability and ease of operation of motor prosthetics. Copyright © 2014 Elsevier Ltd. All rights reserved.
Tonet, Oliver; Marinelli, Martina; Citi, Luca; Rossini, Paolo Maria; Rossini, Luca; Megali, Giuseppe; Dario, Paolo
2008-01-15
Interaction with machines is mediated by human-machine interfaces (HMIs). Brain-machine interfaces (BMIs) are a particular class of HMIs and have so far been studied as a communication means for people who have little or no voluntary control of muscle activity. In this context, low-performing interfaces can be considered as prosthetic applications. On the other hand, for able-bodied users, a BMI would only be practical if conceived as an augmenting interface. In this paper, a method is introduced for pointing out effective combinations of interfaces and devices for creating real-world applications. First, devices for domotics, rehabilitation and assistive robotics, and their requirements, in terms of throughput and latency, are described. Second, HMIs are classified and their performance described, still in terms of throughput and latency. Then device requirements are matched with performance of available interfaces. Simple rehabilitation and domotics devices can be easily controlled by means of BMI technology. Prosthetic hands and wheelchairs are suitable applications but do not attain optimal interactivity. Regarding humanoid robotics, the head and the trunk can be controlled by means of BMIs, while other parts require too much throughput. Robotic arms, which have been controlled by means of cortical invasive interfaces in animal studies, could be the next frontier for non-invasive BMIs. Combining smart controllers with BMIs could improve interactivity and boost BMI applications.
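The matching procedure described in the abstract above reduces to a simple feasibility check: an interface/device pair is an effective combination when the interface's throughput meets the device's requirement and its latency does not exceed the device's tolerance. Below is a minimal sketch of that idea; all class names and numbers are illustrative placeholders, not values from the paper.

```python
# Minimal sketch of the throughput/latency matching idea described above.
# All numbers are illustrative placeholders, not values from the paper.
from dataclasses import dataclass

@dataclass
class Interface:
    name: str
    throughput_bps: float   # information transfer rate, bits/s
    latency_s: float        # command latency, seconds

@dataclass
class Device:
    name: str
    min_throughput_bps: float
    max_latency_s: float

def effective_pairs(interfaces, devices):
    """Return (interface, device) pairs whose performance regions overlap."""
    return [
        (i.name, d.name)
        for i in interfaces
        for d in devices
        if i.throughput_bps >= d.min_throughput_bps and i.latency_s <= d.max_latency_s
    ]

if __name__ == "__main__":
    interfaces = [Interface("P300 BCI", 0.4, 4.0), Interface("motor-imagery BCI", 0.8, 1.0)]
    devices = [Device("domotic switch", 0.2, 5.0), Device("prosthetic hand", 2.0, 0.5)]
    print(effective_pairs(interfaces, devices))
    # -> [('P300 BCI', 'domotic switch'), ('motor-imagery BCI', 'domotic switch')]
```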
Nishimoto, Atsuko; Kawakami, Michiyuki; Fujiwara, Toshiyuki; Hiramoto, Miho; Honaga, Kaoru; Abe, Kaoru; Mizuno, Katsuhiro; Ushiba, Junichi; Liu, Meigen
2018-01-10
Brain-machine interface training was developed for upper-extremity rehabilitation for patients with severe hemiparesis. Its clinical application, however, has been limited because of its lack of feasibility in real-world rehabilitation settings. We developed a new compact task-specific brain-machine interface system that enables task-specific training, including reach-and-grasp tasks, and studied its clinical feasibility and effectiveness for upper-extremity motor paralysis in patients with stroke. Prospective before-after study. Twenty-six patients with severe chronic hemiparetic stroke. Participants were trained with the brain-machine interface system to pick up and release pegs during 40-min sessions and 40 min of standard occupational therapy per day for 10 days. Fugl-Meyer upper-extremity motor (FMA) and Motor Activity Log-14 amount of use (MAL-AOU) scores were assessed before and after the intervention. To test its feasibility, 4 occupational therapists who operated the system for the first time assessed it with the Quebec User Evaluation of Satisfaction with assistive Technology (QUEST) 2.0. FMA and MAL-AOU scores improved significantly after brain-machine interface training, with the effect sizes being medium and large, respectively (p<0.01, d=0.55; p<0.01, d=0.88). QUEST effectiveness and safety scores showed feasibility and satisfaction in the clinical setting. Our newly developed compact brain-machine interface system is feasible for use in real-world clinical settings.
Detecting Mental States by Machine Learning Techniques: The Berlin Brain-Computer Interface
NASA Astrophysics Data System (ADS)
Blankertz, Benjamin; Tangermann, Michael; Vidaurre, Carmen; Dickhaus, Thorsten; Sannelli, Claudia; Popescu, Florin; Fazli, Siamac; Danóczy, Márton; Curio, Gabriel; Müller, Klaus-Robert
The Berlin Brain-Computer Interface (BBCI) uses a machine learning approach to extract user-specific patterns from high-dimensional EEG-features optimized for revealing the user's mental state. Classical BCI applications are brain actuated tools for patients such as prostheses (see Section 4.1) or mental text entry systems ([1] and see [2-5] for an overview on BCI). In these applications, the BBCI uses natural motor skills of the users and specifically tailored pattern recognition algorithms for detecting the user's intent. But beyond rehabilitation, there is a wide range of possible applications in which BCI technology is used to monitor other mental states, often even covert ones (see also [6] in the fMRI realm). While this field is still largely unexplored, two examples from our studies are presented in Sections 4.3 and 4.4.
Future developments in brain-machine interface research.
Lebedev, Mikhail A; Tate, Andrew J; Hanson, Timothy L; Li, Zheng; O'Doherty, Joseph E; Winans, Jesse A; Ifft, Peter J; Zhuang, Katie Z; Fitzsimmons, Nathan A; Schwarz, David A; Fuller, Andrew M; An, Je Hi; Nicolelis, Miguel A L
2011-01-01
Neuroprosthetic devices based on brain-machine interface technology hold promise for the restoration of body mobility in patients suffering from devastating motor deficits caused by brain injury, neurologic diseases and limb loss. During the last decade, considerable progress has been achieved in this multidisciplinary research, mainly in the brain-machine interface that enacts upper-limb functionality. However, a considerable number of problems need to be resolved before fully functional limb neuroprostheses can be built. To move towards developing neuroprosthetic devices for humans, brain-machine interface research has to address a number of issues related to improving the quality of neuronal recordings, achieving stable, long-term performance, and extending the brain-machine interface approach to a broad range of motor and sensory functions. Here, we review the future steps that are part of the strategic plan of the Duke University Center for Neuroengineering, and its partners, the Brazilian National Institute of Brain-Machine Interfaces and the École Polytechnique Fédérale de Lausanne (EPFL) Center for Neuroprosthetics, to bring this new technology to clinical fruition.
Workshops of the Fifth International Brain-Computer Interface Meeting: Defining the Future.
Huggins, Jane E; Guger, Christoph; Allison, Brendan; Anderson, Charles W; Batista, Aaron; Brouwer, Anne-Marie A-M; Brunner, Clemens; Chavarriaga, Ricardo; Fried-Oken, Melanie; Gunduz, Aysegul; Gupta, Disha; Kübler, Andrea; Leeb, Robert; Lotte, Fabien; Miller, Lee E; Müller-Putz, Gernot; Rutkowski, Tomasz; Tangermann, Michael; Thompson, David Edward
2014-01-01
The Fifth International Brain-Computer Interface (BCI) Meeting met June 3-7th, 2013 at the Asilomar Conference Grounds, Pacific Grove, California. The conference included 19 workshops covering topics in brain-computer interface and brain-machine interface research. Topics included translation of BCIs into clinical use, standardization and certification, types of brain activity to use for BCI, recording methods, the effects of plasticity, special interest topics in BCI applications, and future BCI directions. BCI research is well established and transitioning to practical use to benefit people with physical impairments. At the same time, new applications are being explored, both for people with physical impairments and beyond. Here we provide summaries of each workshop, illustrating the breadth and depth of BCI research and highlighting important issues for future research and development.
Matching brain-machine interface performance to space applications.
Citi, Luca; Tonet, Oliver; Marinelli, Martina
2009-01-01
A brain-machine interface (BMI) is a particular class of human-machine interface (HMI). BMIs have so far been studied mostly as a communication means for people who have little or no voluntary control of muscle activity. For able-bodied users, such as astronauts, a BMI would only be practical if conceived as an augmenting interface. A method is presented for pointing out effective combinations of HMIs and applications of robotics and automation to space. Latency and throughput are selected as performance measures for a hybrid bionic system (HBS), that is, the combination of a user, a device, and an HMI. We classify and briefly describe HMIs and space applications and then compare the performance of classes of interfaces with the requirements of classes of applications, both in terms of latency and throughput. Regions of overlap correspond to effective combinations. Devices requiring simpler control, such as a rover, a robotic camera, or environmental controls, are suitable to be driven by means of BMI technology. Free flyers and other devices with six degrees of freedom can be controlled, but only at low-interactivity levels. More demanding applications require conventional interfaces, although they could be controlled by BMIs once the same levels of performance as currently recorded in animal experiments are attained. Robotic arms and manipulators could be the next frontier for noninvasive BMIs. Integrating smart controllers in HBSs could improve interactivity and boost the use of BMI technology in space applications.
Toward more versatile and intuitive cortical brain machine interfaces
Andersen, Richard A.; Kellis, Spencer; Klaes, Christian; Aflalo, Tyson
2015-01-01
Brain machine interfaces have great potential in neuroprosthetic applications to assist patients with brain injury and neurodegenerative diseases. One type of BMI is a cortical motor prosthetic which is used to assist paralyzed subjects. Motor prosthetics to date have typically used the motor cortex as a source of neural signals for controlling external devices. The review will focus on several new topics in the arena of cortical prosthetics. These include using 1) recordings from cortical areas outside motor cortex; 2) local field potentials (LFPs) as a source of recorded signals; 3) somatosensory feedback for more dexterous control of robotics; and 4) new decoding methods that work in concert to form an ecology of decode algorithms. These new advances hold promise in greatly accelerating the applicability and ease of operation of motor prosthetics. PMID:25247368
Hand-in-hand advances in biomedical engineering and sensorimotor restoration.
Pisotta, Iolanda; Perruchoud, David; Ionta, Silvio
2015-05-15
Living in a multisensory world entails the continuous sensory processing of environmental information in order to enact appropriate motor routines. The interaction between our body and our brain is the crucial factor for achieving such sensorimotor integration ability. Several clinical conditions dramatically affect the constant body-brain exchange, but the latest developments in biomedical engineering provide promising solutions for overcoming this communication breakdown. The most recent technological developments have succeeded in transforming neuronal electrical activity into computational input for robotic devices, giving birth to the era of so-called brain-machine interfaces. By combining rehabilitation robotics and experimental neuroscience, the introduction of brain-machine interfaces into clinical protocols has provided a technological solution for bypassing the neural disconnection and restoring sensorimotor function. Based on these advances, the recovery of sensorimotor functionality is progressively becoming a concrete reality. However, despite the success of several recent techniques, some open issues still need to be addressed. Typical interventions for sensorimotor deficits include pharmaceutical treatments and manual/robotic assistance in passive movements. These procedures achieve symptom relief, but their applicability to more severe disconnection pathologies is limited (e.g. spinal cord injury or amputation). Here we review how state-of-the-art solutions in biomedical engineering are continuously raising expectations in sensorimotor rehabilitation, as well as the current challenges, especially with regard to the translation of signals from brain-machine interfaces into sensory feedback and the incorporation of brain-machine interfaces into daily activities. Copyright © 2015 Elsevier B.V. All rights reserved.
Huggins, Jane E; Guger, Christoph; Ziat, Mounia; Zander, Thorsten O; Taylor, Denise; Tangermann, Michael; Soria-Frisch, Aureli; Simeral, John; Scherer, Reinhold; Rupp, Rüdiger; Ruffini, Giulio; Robinson, Douglas K R; Ramsey, Nick F; Nijholt, Anton; Müller-Putz, Gernot; McFarland, Dennis J; Mattia, Donatella; Lance, Brent J; Kindermans, Pieter-Jan; Iturrate, Iñaki; Herff, Christian; Gupta, Disha; Do, An H; Collinger, Jennifer L; Chavarriaga, Ricardo; Chase, Steven M; Bleichner, Martin G; Batista, Aaron; Anderson, Charles W; Aarnoutse, Erik J
2017-01-01
The Sixth International Brain-Computer Interface (BCI) Meeting was held 30 May-3 June 2016 at the Asilomar Conference Grounds, Pacific Grove, California, USA. The conference included 28 workshops covering topics in BCI and brain-machine interface research. Topics included BCI for specific populations or applications, advancing BCI research through use of specific signals or technological advances, and translational and commercial issues to bring both implanted and non-invasive BCIs to market. BCI research is growing and expanding in the breadth of its applications, the depth of knowledge it can produce, and the practical benefit it can provide both for those with physical impairments and the general public. Here we provide summaries of each workshop, illustrating the breadth and depth of BCI research and highlighting important issues and calls for action to support future research and development.
On the applicability of brain reading for predictive human-machine interfaces in robotics.
Kirchner, Elsa Andrea; Kim, Su Kyoung; Straube, Sirko; Seeland, Anett; Wöhrle, Hendrik; Krell, Mario Michael; Tabie, Marc; Fahle, Manfred
2013-01-01
The ability of today's robots to autonomously support humans in their daily activities is still limited. To improve this, predictive human-machine interfaces (HMIs) can be applied to better support future interaction between human and machine. To infer upcoming context-based behavior, relevant brain states of the human have to be detected. This is achieved by brain reading (BR), a passive approach for single trial EEG analysis that makes use of supervised machine learning (ML) methods. In this work we propose that BR is able to detect concrete states of the interacting human. To support this, we show that BR detects patterns in the electroencephalogram (EEG) that can be related to event-related activity in the EEG like the P300, which are indicators of concrete states or brain processes like target recognition processes. Further, we improve the robustness and applicability of BR in application-oriented scenarios by identifying and combining the most relevant training data for single trial classification and by applying classifier transfer. We show that training and testing, i.e., application of the classifier, can be carried out on different classes, if the samples of both classes miss a relevant pattern. Classifier transfer is important for the usage of BR in application scenarios, where only small amounts of training examples are available. Finally, we demonstrate a dual BR application in an experimental setup that requires similar behavior as performed during the teleoperation of a robotic arm. Here, target recognition processes and movement preparation processes are detected simultaneously. In summary, our findings contribute to the development of robust and stable predictive HMIs that enable the simultaneous support of different interaction behaviors.
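As a rough illustration of the kind of supervised single-trial pipeline referred to above, the sketch below reduces event-locked EEG epochs to windowed mean amplitudes and trains a linear classifier, with the trained model then re-applied to later trials ("classifier transfer"). The shapes, window choices, the simulated P300-like bump, and the use of scikit-learn's LDA are illustrative assumptions, not details from the study.

```python
# Minimal sketch of single-trial ERP classification with windowed-mean features.
# All data are simulated; windows and classifier choice are assumptions.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def windowed_means(epochs, fs, windows=((0.25, 0.35), (0.35, 0.45), (0.45, 0.55))):
    """epochs: (n_trials, n_channels, n_samples), time-locked to the event.
    Returns features of shape (n_trials, n_channels * n_windows)."""
    feats = []
    for start, stop in windows:
        a, b = int(start * fs), int(stop * fs)
        feats.append(epochs[:, :, a:b].mean(axis=2))
    return np.concatenate(feats, axis=1)

# Hypothetical data: 200 trials, 64 channels, 1-s epochs at 100 Hz.
rng = np.random.default_rng(0)
epochs = rng.standard_normal((200, 64, 100))
labels = rng.integers(0, 2, 200)                 # 1 = target trial, 0 = non-target
epochs[labels == 1, :, 30:50] += 0.5             # simulated P300-like positivity

X = windowed_means(epochs, fs=100)
clf = LinearDiscriminantAnalysis().fit(X[:150], labels[:150])

# "Classifier transfer": re-use the trained classifier on later trials.
print(clf.score(X[150:], labels[150:]))          # well above chance on this toy data
```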
Errare machinale est: the use of error-related potentials in brain-machine interfaces
Chavarriaga, Ricardo; Sobolewski, Aleksander; Millán, José del R.
2014-01-01
The ability to recognize errors is crucial for efficient behavior. Numerous studies have identified electrophysiological correlates of error recognition in the human brain (error-related potentials, ErrPs). Consequently, it has been proposed to use these signals to improve human-computer interaction (HCI) or brain-machine interfacing (BMI). Here, we present a review of over a decade of developments toward this goal. This body of work provides consistent evidence that ErrPs can be successfully detected on a single-trial basis, and that they can be effectively used in both HCI and BMI applications. We first describe the ErrP phenomenon and follow up with an analysis of different strategies to increase the robustness of a system by incorporating single-trial ErrP recognition, either by correcting the machine's actions or by providing means for its error-based adaptation. These approaches can be applied both when the user employs traditional HCI input devices or in combination with another BMI channel. Finally, we discuss the current challenges that have to be overcome in order to fully integrate ErrPs into practical applications. This includes, in particular, the characterization of such signals during real(istic) applications, as well as the possibility of extracting richer information from them, going beyond the time-locked decoding that dominates current approaches. PMID:25100937
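One of the integration strategies reviewed above is to let a single-trial ErrP detector veto the machine's last action. The toy loop below illustrates that idea only; the detector, the "EEG" dictionary, and the robot class are hypothetical stand-ins, not components from any of the reviewed systems.

```python
# Toy sketch of error-based correction: if an ErrP is detected after an action,
# the action is undone. All components are hypothetical placeholders.
import random

class ToyRobot:
    def __init__(self):
        self.position = 0
    def execute(self, step):
        self.position += step
    def undo(self, step):
        self.position -= step

def detect_errp(post_action_eeg):
    # Stand-in for a trained single-trial ErrP classifier.
    return post_action_eeg["error_amplitude"] > 0.5

robot = ToyRobot()
for intended in (+1, +1, -1):
    decoded = intended if random.random() > 0.2 else -intended   # imperfect decoder
    robot.execute(decoded)
    simulated_eeg = {"error_amplitude": 0.9 if decoded != intended else 0.1}
    if detect_errp(simulated_eeg):
        robot.undo(decoded)                                      # correct the error
print(robot.position)
```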
Quadcopter control using a BCI
NASA Astrophysics Data System (ADS)
Rosca, S.; Leba, M.; Ionica, A.; Gamulescu, O.
2018-01-01
The paper presents how there can be interconnected two ubiquitous elements nowadays. On one hand, the drones, which are increasingly present and integrated into more and more fields of activity, beyond the military applications they come from, moving towards entertainment, real-estate, delivery and so on. On the other hand, unconventional man-machine interfaces, which are generous topics to explore now and in the future. Of these, we chose brain computer interface (BCI), which allows human-machine interaction without requiring any moving elements. The research consists of mathematical modeling and numerical simulation of a drone and a BCI. Then there is presented an application using a Parrot mini-drone and an Emotiv Insight BCI.
Wireless communication links for brain-machine interface applications
NASA Astrophysics Data System (ADS)
Larson, L.
2016-05-01
Recent technological developments have given neuroscientists direct access to neural signals in real time, with the accompanying ability to decode the resulting information and control various prosthetic devices and gain insight into deeper aspects of cognition. These developments - along with deep brain stimulation for Parkinson's disease and the possible use of electro-stimulation for other maladies - lead to the conclusion that the widespread use of electronic brain interface technology is a long-term possibility. This talk will summarize the various technical challenges and approaches that have been developed to wirelessly communicate with the brain, including technology constraints, DC power limits, compression and data rate issues.
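To make the data-rate issue mentioned above concrete, the back-of-the-envelope sketch below compares a raw broadband uplink with an event-coded one. The channel count, sampling rate, bit depth, and spike-rate budget are illustrative assumptions, not figures from the talk.

```python
# Back-of-the-envelope data-rate estimate for a wireless neural uplink.
# All parameters are illustrative assumptions.
channels = 96              # e.g. a Utah-style microelectrode array
sample_rate_hz = 30_000    # broadband sampling to capture spike waveforms
bits_per_sample = 12

raw_rate_bps = channels * sample_rate_hz * bits_per_sample
print(f"raw uplink: {raw_rate_bps / 1e6:.1f} Mbit/s")        # ~34.6 Mbit/s

# On-implant spike detection/compression reduces this drastically:
# assume <=100 spike events per second per channel, 32 bits per event.
event_rate_bps = channels * 100 * 32
print(f"event-coded uplink: {event_rate_bps / 1e3:.1f} kbit/s")  # ~307 kbit/s
```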
My thoughts through a robot's eyes: an augmented reality-brain-machine interface.
Kansaku, Kenji; Hata, Naoki; Takano, Kouji
2010-02-01
A brain-machine interface (BMI) uses neurophysiological signals from the brain to control external devices, such as robot arms or computer cursors. Combining augmented reality with a BMI, we show that the user's brain signals successfully controlled an agent robot and operated devices in the robot's environment. The user's thoughts became reality through the robot's eyes, enabling the augmentation of real environments outside the anatomy of the human body.
Active tactile exploration using a brain-machine-brain interface.
O'Doherty, Joseph E; Lebedev, Mikhail A; Ifft, Peter J; Zhuang, Katie Z; Shokur, Solaiman; Bleuler, Hannes; Nicolelis, Miguel A L
2011-10-05
Brain-machine interfaces use neuronal activity recorded from the brain to establish direct communication with external actuators, such as prosthetic arms. It is hoped that brain-machine interfaces can be used to restore the normal sensorimotor functions of the limbs, but so far they have lacked tactile sensation. Here we report the operation of a brain-machine-brain interface (BMBI) that both controls the exploratory reaching movements of an actuator and allows signalling of artificial tactile feedback through intracortical microstimulation (ICMS) of the primary somatosensory cortex. Monkeys performed an active exploration task in which an actuator (a computer cursor or a virtual-reality arm) was moved using a BMBI that derived motor commands from neuronal ensemble activity recorded in the primary motor cortex. ICMS feedback occurred whenever the actuator touched virtual objects. Temporal patterns of ICMS encoded the artificial tactile properties of each object. Neuronal recordings and ICMS epochs were temporally multiplexed to avoid interference. Two monkeys operated this BMBI to search for and distinguish one of three visually identical objects, using the virtual-reality arm to identify the unique artificial texture associated with each. These results suggest that clinical motor neuroprostheses might benefit from the addition of ICMS feedback to generate artificial somatic perceptions associated with mechanical, robotic or even virtual prostheses.
A Wireless 32-Channel Implantable Bidirectional Brain Machine Interface
Su, Yi; Routhu, Sudhamayee; Moon, Kee S.; Lee, Sung Q.; Youm, WooSub; Ozturk, Yusuf
2016-01-01
All neural information systems (NIS) rely on sensing neural activity to supply commands and control signals for computers, machines and a variety of prosthetic devices. Invasive systems achieve a high signal-to-noise ratio (SNR) by eliminating the volume conduction problems caused by tissue and bone. An implantable brain machine interface (BMI) using intracortical electrodes provides excellent detection of a broad range of frequency oscillatory activities through the placement of a sensor in direct contact with cortex. This paper introduces a compact-sized implantable wireless 32-channel bidirectional brain machine interface (BBMI) to be used with freely-moving primates. The system is designed to monitor brain sensorimotor rhythms and present current stimuli with a configurable duration, frequency and amplitude in real time to the brain based on the brain activity report. The battery is charged via a novel ultrasonic wireless power delivery module developed for efficient delivery of power into a deeply-implanted system. The system was successfully tested through bench tests and in vivo tests on a behaving primate to record the local field potential (LFP) oscillation and stimulate the target area at the same time. PMID:27669264
Robotic devices and brain-machine interfaces for hand rehabilitation post-stroke.
McConnell, Alistair C; Moioli, Renan C; Brasil, Fabricio L; Vallejo, Marta; Corne, David W; Vargas, Patricia A; Stokes, Adam A
2017-06-28
To review the state of the art of robotic-aided hand physiotherapy for post-stroke rehabilitation, including the use of brain-machine interfaces. Each patient has a unique clinical history and, in response to personalized treatment needs, research into individualized and at-home treatment options has expanded rapidly in recent years. This has resulted in the development of many devices and design strategies for use in stroke rehabilitation. The development progression of robotic-aided hand physiotherapy devices and brain-machine interface systems is outlined, focusing on those with mechanisms and control strategies designed to improve recovery outcomes of the hand post-stroke. A total of 110 commercial and non-commercial hand and wrist devices, spanning the two major core designs, end-effector and exoskeleton, are reviewed. The growing body of evidence on the efficacy and relevance of incorporating brain-machine interfaces in stroke rehabilitation is summarized. The challenges involved in integrating robotic rehabilitation into the healthcare system are discussed. This review provides novel insights into the use of robotics in physiotherapy practice, and may help system designers to develop new devices.
Soft brain-machine interfaces for assistive robotics: A novel control approach.
Schiatti, Lucia; Tessadori, Jacopo; Barresi, Giacinto; Mattos, Leonardo S; Ajoudani, Arash
2017-07-01
Robotic systems offer the possibility of improving the life quality of people with severe motor disabilities, enhancing the individual's degree of independence and interaction with the external environment. In this direction, the operator's residual functions must be exploited for the control of the robot movements and the underlying dynamic interaction through intuitive and effective human-robot interfaces. Towards this end, this work aims at exploring the potential of a novel Soft Brain-Machine Interface (BMI), suitable for dynamic execution of remote manipulation tasks for a wide range of patients. The interface is composed of an eye-tracking system, for an intuitive and reliable control of a robotic arm system's trajectories, and a Brain-Computer Interface (BCI) unit, for the control of the robot Cartesian stiffness, which determines the interaction forces between the robot and environment. The latter control is achieved by estimating in real-time a unidimensional index from user's electroencephalographic (EEG) signals, which provides the probability of a neutral or active state. This estimated state is then translated into a stiffness value for the robotic arm, allowing a reliable modulation of the robot's impedance. A preliminary evaluation of this hybrid interface concept provided evidence on the effective execution of tasks with dynamic uncertainties, demonstrating the great potential of this control method in BMI applications for self-service and clinical care.
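The stiffness-control idea described above boils down to translating a unidimensional EEG-derived index (probability of an "active" mental state) into a Cartesian stiffness command. The sketch below shows one such mapping; the stiffness bounds and the linear interpolation are illustrative assumptions, not parameters from the paper.

```python
# Minimal sketch: map an EEG-derived probability index in [0, 1] to a robot
# Cartesian stiffness value. Bounds and the linear mapping are assumptions.

def stiffness_from_eeg_index(p_active, k_min=100.0, k_max=1000.0):
    """Map p_active in [0, 1] to a stiffness in N/m by linear interpolation."""
    p = min(max(p_active, 0.0), 1.0)
    return k_min + p * (k_max - k_min)

# Eye tracking would command the end-effector trajectory; the EEG index would
# modulate how rigidly that trajectory is tracked on contact with the environment.
for p in (0.0, 0.5, 0.9):
    print(p, stiffness_from_eeg_index(p))
```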
Applications of Brain–Machine Interface Systems in Stroke Recovery and Rehabilitation
Francisco, Gerard E.; Contreras-Vidal, Jose L.
2014-01-01
Stroke is a leading cause of disability, significantly impacting the quality of life (QOL) in survivors, and rehabilitation remains the mainstay of treatment in these patients. Recent engineering and technological advances such as brain-machine interfaces (BMI) and robotic rehabilitative devices are promising to enhance stroke neurorehabilitation, to accelerate functional recovery and improve QOL. This review discusses the recent applications of BMI and robotic-assisted rehabilitation in stroke patients. We present the framework for integrated BMI and robotic-assisted therapies, and discuss their potential therapeutic, assistive and diagnostic functions in stroke rehabilitation. Finally, we conclude with an outlook on the potential challenges and future directions of these neurotechnologies, and their impact on clinical rehabilitation. PMID:25110624
Zhen Qin; Bin Zhang; Ning Hu; Ping Wang
2015-01-01
The mammalian gustatory system is acknowledged as one of the most valid chemosensing systems. The sense of taste particularly provides critical information about ingestion of toxic and noxious chemicals. Thus, the potential of utilizing the rat gustatory system for detecting sapid substances is investigated. By recording electrical activities of neurons in gustatory cortex, a novel bioelectronic tongue system is developed in combination with brain-machine interface technology. Features are extracted from both spikes and local field potentials. By visualizing these features, classification is performed and the responses to different tastants can be prominently separated from each other. The results suggest that this in vivo bioelectronic tongue is capable of detecting tastants and will provide a promising platform for potential applications in evaluating the palatability of food and beverages.
European public deliberation on brain machine interface technology: five convergence seminars.
Jebari, Karim; Hansson, Sven-Ove
2013-09-01
We present a novel procedure to engage the public in ethical deliberations on the potential impacts of brain machine interface technology. We call this procedure a convergence seminar, a form of scenario-based group discussion that is founded on the idea of hypothetical retrospection. The theoretical background of this procedure and the results of five seminars are presented.
Hybrid EEG-EOG brain-computer interface system for practical machine control.
Punsawad, Yunyong; Wongsawat, Yodchanan; Parnichkun, Manukid
2010-01-01
Practical issues such as accuracy with various subjects, number of sensors, and time for training are important problems of existing brain-computer interface (BCI) systems. In this paper, we propose a hybrid framework for the BCI system that can make machine control more practical. The electrooculogram (EOG) is employed to control the machine in the left and right directions while the electroencephalogram (EEG) is employed to control the forward, no action, and complete stop motions of the machine. By using only 2-channel biosignals, the average classification accuracy of more than 95% can be achieved.
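The hybrid scheme above splits the decision: EOG deflections steer left/right, while EEG decides among forward, no action, and stop. The sketch below illustrates that decision logic in a toy form; the thresholds, the use of alpha-band power as the EEG feature, and the channel layout are illustrative assumptions, not the paper's actual classifier.

```python
# Toy sketch of hybrid EOG/EEG command decoding. Thresholds and the alpha-band
# feature are illustrative assumptions, not the published method.
import numpy as np

def decode_command(eog, eeg, fs, eog_thresh=150e-6, low=8.0, high=13.0):
    """eog: 1-D horizontal EOG window (volts); eeg: 1-D EEG window (volts)."""
    # Horizontal eye movements produce large, signed EOG deflections.
    if eog.max() > eog_thresh:
        return "right"
    if eog.min() < -eog_thresh:
        return "left"
    # Otherwise classify the EEG window by relative alpha-band power
    # (e.g. a deliberate eyes-closed alpha burst as "stop" in this toy scheme).
    spectrum = np.abs(np.fft.rfft(eeg)) ** 2
    freqs = np.fft.rfftfreq(eeg.size, d=1.0 / fs)
    alpha = spectrum[(freqs >= low) & (freqs <= high)].mean()
    baseline = spectrum[(freqs >= 4) & (freqs <= 30)].mean()
    if alpha > 3 * baseline:
        return "stop"
    if alpha > 1.5 * baseline:
        return "forward"
    return "no action"

fs = 250
t = np.arange(fs) / fs
# Prints "stop" for this strong 10 Hz alpha test signal with flat EOG.
print(decode_command(np.zeros(fs), 20e-6 * np.sin(2 * np.pi * 10 * t), fs))
```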
Zander, Thorsten O; Kothe, Christian
2011-04-01
Cognitive monitoring is an approach utilizing real-time brain signal decoding (RBSD) for gaining information on the ongoing cognitive user state. In recent decades this approach has brought valuable insight into the cognition of an interacting human. Automated RBSD can be used to set up a brain-computer interface (BCI) providing a novel input modality for technical systems solely based on brain activity. In BCIs the user usually sends voluntary and directed commands to control the connected computer system or to communicate through it. In this paper we propose an extension of this approach by fusing BCI technology with cognitive monitoring, providing valuable information about the users' intentions, situational interpretations and emotional states to the technical system. We call this approach passive BCI. In the following we give an overview of studies which utilize passive BCI, as well as other novel types of applications resulting from BCI technology. We especially focus on applications for healthy users, and the specific requirements and demands of this user group. Since the presented approach of combining cognitive monitoring with BCI technology is very similar to the concept of BCIs itself we propose a unifying categorization of BCI-based applications, including the novel approach of passive BCI.
Feasibility study for future implantable neural-silicon interface devices.
Al-Armaghany, Allann; Yu, Bo; Mak, Terrence; Tong, Kin-Fai; Sun, Yihe
2011-01-01
The emerging neural-silicon interface devices bridge nerve systems with artificial systems and play a key role in neuro-prostheses and neuro-rehabilitation applications. Integrating neural signal collection, processing and transmission on a single device will make clinical applications more practical and feasible. This paper focuses on the wireless antenna part and real-time neural signal analysis part of implantable brain-machine interface (BMI) devices. We propose to use millimeter-wave for wireless connections between different areas of a brain. Various antennas, including microstrip patch, monopole, and substrate-integrated waveguide antennas, are considered for the intra-cortical proximity communication. A Hebbian eigenfilter-based method is proposed for multi-channel neuronal spike sorting. Folding and parallel design techniques are employed to explore various structures and make a trade-off between area and power consumption. Field-programmable gate arrays (FPGAs) are used to evaluate various structures.
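A Hebbian eigenfilter, in its common textbook form, is a weight vector trained with a Hebbian rule (e.g. Oja's rule) so that it converges to a leading principal component of the input, giving a low-dimensional feature for spike sorting. The sketch below shows that generic realization on simulated spike waveforms; it is not necessarily the authors' exact algorithm, and all sizes and learning rates are illustrative.

```python
# Generic Hebbian eigenfilter sketch: Oja's rule drives a weight vector toward
# the first principal component of simulated spike waveforms. Illustrative only.
import numpy as np

rng = np.random.default_rng(0)
n_spikes, n_samples = 500, 32
template = np.sin(np.linspace(0, np.pi, n_samples))          # toy spike shape
spikes = np.outer(1 + 0.3 * rng.standard_normal(n_spikes), template)
spikes += 0.1 * rng.standard_normal((n_spikes, n_samples))
spikes -= spikes.mean(axis=0)                                 # center the data

w = rng.standard_normal(n_samples)
w /= np.linalg.norm(w)
eta = 0.01
for x in spikes:                                              # Oja's learning rule
    y = w @ x
    w += eta * y * (x - y * w)

# w should align (up to sign) with the true first principal component.
u = np.linalg.svd(spikes, full_matrices=False)[2][0]
print(abs(w @ u) / np.linalg.norm(w))                         # should be close to 1.0
```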
Sakurai, Yoshio; Song, Kichan; Tachibana, Shota; Takahashi, Susumu
2014-01-01
In this review, we focus on neuronal operant conditioning in which increments in neuronal activities are directly rewarded without behaviors. We discuss the potential of this approach to elucidate neuronal plasticity for enhancing specific brain functions and its interaction with the progress in neurorehabilitation and brain-machine interfaces. The key to-be-conditioned activities that this paper emphasizes are synchronous and oscillatory firings of multiple neurons that reflect activities of cell assemblies. First, we introduce certain well-known studies on neuronal operant conditioning in which conditioned enhancements of neuronal firing were reported in animals and humans. These studies demonstrated the feasibility of volitional control over neuronal activity. Second, we refer to the recent studies on operant conditioning of synchrony and oscillation of neuronal activities. In particular, we introduce a recent study showing volitional enhancement of oscillatory activity in monkey motor cortex and our study showing selective enhancement of firing synchrony of neighboring neurons in rat hippocampus. Third, we discuss the reasons for emphasizing firing synchrony and oscillation in neuronal operant conditioning, the main reason being that they reflect the activities of cell assemblies, which have been suggested to be basic neuronal codes representing information in the brain. Finally, we discuss the interaction of neuronal operant conditioning with neurorehabilitation and brain-machine interface (BMI). We argue that synchrony and oscillation of neuronal firing are the key activities required for developing both reliable neurorehabilitation and high-performance BMI. Further, we conclude that research of neuronal operant conditioning, neurorehabilitation, BMI, and system neuroscience will produce findings applicable to these interrelated fields, and neuronal synchrony and oscillation can be a common important bridge among all of them. PMID:24567704
A Wearable Channel Selection-Based Brain-Computer Interface for Motor Imagery Detection.
Lo, Chi-Chun; Chien, Tsung-Yi; Chen, Yu-Chun; Tsai, Shang-Ho; Fang, Wai-Chi; Lin, Bor-Shyh
2016-02-06
Motor imagery-based brain-computer interface (BCI) is a communication interface between an external machine and the brain. Many kinds of spatial filters are used in BCIs to enhance the electroencephalography (EEG) features related to motor imagery. The approach of channel selection, developed to reserve meaningful EEG channels, is also an important technique for the development of BCIs. However, current BCI systems require a conventional EEG machine and EEG electrodes with conductive gel to acquire multi-channel EEG signals and then transmit these EEG signals to the back-end computer to perform the approach of channel selection. This reduces the convenience of use in daily life and increases the limitations of BCI applications. In order to improve the above issues, a novel wearable channel selection-based brain-computer interface is proposed. Here, retractable comb-shaped active dry electrodes are designed to measure the EEG signals on a hairy site, without conductive gel. Through the design of analog CAR spatial filters and the firmware of the EEG acquisition module, the function of spatial filters could be performed without any calculation, and channel selection could be performed in the front-end device to improve the practicability of detecting motor imagery directly in the wearable EEG device or in commercial mobile phones or tablets, which may have relatively low system specifications. Finally, the performance of the proposed BCI is investigated, and the experimental results show that the proposed system is a good wearable BCI system prototype.
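For reference, the common average reference (CAR) spatial filter that the system above realizes in analog hardware/firmware is, in its digital form, simply the subtraction of the instantaneous mean across channels from every channel. The sketch below shows that digital equivalent; the array shapes are illustrative.

```python
# Digital equivalent of a common average reference (CAR) spatial filter.
import numpy as np

def car_filter(eeg):
    """eeg: array (n_channels, n_samples). Returns CAR-filtered signals."""
    return eeg - eeg.mean(axis=0, keepdims=True)

rng = np.random.default_rng(0)
raw = rng.standard_normal((8, 1000)) + 5.0   # 8 channels sharing a common offset
clean = car_filter(raw)
print(clean.mean(axis=0)[:3])                # ~0 at every sample after CAR
```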
Liao, Yuxi; Li, Hongbao; Zhang, Qiaosheng; Fan, Gong; Wang, Yiwen; Zheng, Xiaoxiang
2014-01-01
Decoding algorithms in motor Brain Machine Interfaces translate neural signals into movement parameters. They usually assume the connection between neural firings and movements to be stationary, which is not true according to recent studies that observe time-varying neuron tuning properties. This non-stationarity results from neural plasticity, motor learning, etc., and leads to degeneration of the decoding performance when the model is fixed. To track the non-stationary neuron tuning during decoding, we propose a dual model approach based on Monte Carlo point process filtering that also enables estimation of the dynamic tuning parameters. When applied to both simulated neural signals and in vivo BMI data, the proposed adaptive method performs better than the one with static tuning parameters, suggesting a promising way to design a long-term-performing model for Brain Machine Interface decoders.
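The dual-estimation idea above can be illustrated, in drastically simplified form, with a Monte Carlo (particle) point-process filter that jointly tracks a one-dimensional movement state and a slowly drifting tuning gain from a single neuron's spike counts. The generative model, noise levels, and particle counts below are illustrative assumptions, not the authors' actual decoder.

```python
# Simplified dual-estimation sketch: a particle point-process filter jointly
# tracks a 1-D velocity and a drifting tuning gain. Illustrative only.
import numpy as np

rng = np.random.default_rng(1)
T, dt, n_particles = 400, 0.02, 2000

# Simulate ground truth: smooth 1-D velocity and a non-stationary tuning gain.
vel = np.cumsum(0.05 * rng.standard_normal(T))
vel -= vel.mean()
gain = 1.0 + np.linspace(0.0, 1.0, T)
spikes = rng.poisson(np.exp(1.0 + gain * vel) * dt)      # Poisson counts per bin

# Particles hold (velocity, gain); both follow random walks.
p_vel = rng.standard_normal(n_particles)
p_gain = 1.0 + 0.5 * rng.standard_normal(n_particles)
est_vel = np.zeros(T)
est_gain = np.zeros(T)

for t in range(T):
    p_vel += 0.05 * rng.standard_normal(n_particles)     # state transition noise
    p_gain += 0.02 * rng.standard_normal(n_particles)    # slow tuning drift
    lam = np.exp(1.0 + p_gain * p_vel) * dt
    w = np.exp(spikes[t] * np.log(lam) - lam)            # Poisson likelihood (up to a constant)
    w /= w.sum()
    est_vel[t] = w @ p_vel
    est_gain[t] = w @ p_gain
    idx = rng.choice(n_particles, size=n_particles, p=w)  # resample
    p_vel, p_gain = p_vel[idx], p_gain[idx]

# The estimated gain should drift upward, following the simulated tuning change.
print(np.corrcoef(est_vel, vel)[0, 1], est_gain[-1])
```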
Lapborisuth, Pawan; Zhang, Xian; Noah, Adam; Hirsch, Joy
2017-04-01
Neurofeedback is a method for using neural activity displayed on a computer to regulate one's own brain function and has been shown to be a promising technique for training individuals to interact with brain-machine interface applications such as neuroprosthetic limbs. The goal of this study was to develop a user-friendly functional near-infrared spectroscopy (fNIRS)-based neurofeedback system to upregulate neural activity associated with motor imagery, which is frequently used in neuroprosthetic applications. We hypothesized that fNIRS neurofeedback would enhance activity in motor cortex during a motor imagery task. Twenty-two participants performed active and imaginary right-handed squeezing movements using an elastic ball while wearing a 98-channel fNIRS device. Neurofeedback traces representing localized cortical hemodynamic responses were graphically presented to participants in real time. Participants were instructed to observe this graphical representation and use the information to increase signal amplitude. Neural activity was compared during active and imaginary squeezing with and without neurofeedback. Active squeezing resulted in activity localized to the left premotor and supplementary motor cortex, and activity in the motor cortex was found to be modulated by neurofeedback. Activity in the motor cortex was also shown in the imaginary squeezing condition only in the presence of neurofeedback. These findings demonstrate that real-time fNIRS neurofeedback is a viable platform for brain-machine interface applications.
NASA Astrophysics Data System (ADS)
Stieglitz, Thomas
2009-05-01
Implantable medical devices to interface with muscles, peripheral nerves, and the brain have been developed for many applications over the last decades. They have been applied in fundamental neuroscientific studies as well as in diagnosis, therapy and rehabilitation in clinical practice. Success stories of these implants have been written with the help of precision mechanics manufacturing techniques. The latest cutting edge research approaches to restore vision in blind persons and to develop an interface with the human brain as a motor control interface, however, need more complex systems, larger scales of integration and higher degrees of miniaturization. Microsystems engineering offers adequate tools, methods, and materials but so far, no MEMS-based active medical device has been transferred into clinical practice. Silicone rubber, polyimide, parylene as flexible materials and silicon and alumina (aluminum oxide ceramic) as substrates and insulation or packaging materials, respectively, and precious metals as electrodes have to be combined into systems that do not harm the biological target structure and have to work reliably in a wet environment with ions and proteins. Here, different design, manufacturing and packaging paradigms will be presented and strengths and drawbacks will be discussed in close relation to the envisioned biological and medical applications.
Decoding position, velocity, or goal: does it matter for brain-machine interfaces?
Marathe, A R; Taylor, D M
2011-04-01
Arm end-point position, end-point velocity, and the intended final location or 'goal' of a reach have all been decoded from cortical signals for use in brain-machine interface (BMI) applications. These different aspects of arm movement can be decoded from the brain and used directly to control the position, velocity, or movement goal of a device. However, these decoded parameters can also be remapped to control different aspects of movement, such as using the decoded position of the hand to control the velocity of a device. People easily learn to use the position of a joystick to control the velocity of an object in a videogame. Similarly, in BMI systems, the position, velocity, or goal of a movement could be decoded from the brain and remapped to control some other aspect of device movement. This study evaluates how easily people make transformations between position, velocity, and reach goal in BMI systems. It also evaluates how different amounts of decoding error impact on device control with and without these transformations. Results suggest some remapping options can significantly improve BMI control. This study provides guidance on what remapping options to use when various amounts of decoding error are present.
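One of the remappings discussed above is the joystick analogy: a decoded hand position is used to drive cursor velocity, so that the offset of the decoded position from a neutral point becomes a velocity command. The sketch below shows that transformation; the gain, time step, and dead-zone are illustrative assumptions, not values from the study.

```python
# Minimal sketch of position-to-velocity remapping (joystick style).
# Gains, time step, and dead-zone are illustrative assumptions.
import numpy as np

def position_to_velocity_step(cursor, decoded_pos, neutral,
                              gain=1.5, dead_zone=0.05, dt=0.05):
    """One control update: decoded_pos and neutral are 2-D arrays (metres)."""
    offset = decoded_pos - neutral
    if np.linalg.norm(offset) < dead_zone:        # ignore small decoding jitter
        return cursor
    return cursor + gain * offset * dt            # velocity-style integration

cursor = np.zeros(2)
neutral = np.zeros(2)
for decoded in ([0.1, 0.0], [0.1, 0.05], [0.0, 0.02]):   # noisy decoded positions
    cursor = position_to_velocity_step(cursor, np.asarray(decoded), neutral)
print(cursor)
```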
Developments in brain-machine interfaces from the perspective of robotics.
Kim, Hyun K; Park, Shinsuk; Srinivasan, Mandayam A
2009-04-01
Many patients suffer from the loss of motor skills, resulting from traumatic brain and spinal cord injuries, stroke, and many other disabling conditions. Thanks to technological advances in measuring and decoding the electrical activity of cortical neurons, brain-machine interfaces (BMI) have become a promising technology that can aid paralyzed individuals. In recent studies on BMI, robotic manipulators have demonstrated their potential as neuroprostheses. Restoring motor skills through robot manipulators controlled by brain signals may improve the quality of life of people with disability. This article reviews current robotic technologies that are relevant to BMI and suggests strategies that could improve the effectiveness of a brain-operated neuroprosthesis through robotics.
Minati, Ludovico; Nigri, Anna; Rosazza, Cristina; Bruzzone, Maria Grazia
2012-06-01
Previous studies have demonstrated the possibility of using functional MRI to control a robot arm through a brain-machine interface by directly coupling haemodynamic activity in the sensory-motor cortex to the position of two axes. Here, we extend this work by implementing interaction at a more abstract level, whereby imagined actions deliver structured commands to a robot arm guided by a machine vision system. Rather than extracting signals from a small number of pre-selected regions, the proposed system adaptively determines at individual level how to map representative brain areas to the input nodes of a classifier network. In this initial study, a median action recognition accuracy of 90% was attained on five volunteers performing a game consisting of collecting randomly positioned coloured pawns and placing them into cups. The "pawn" and "cup" instructions were imparted through four mental imagery tasks, linked to robot arm actions by a state machine. With the current implementation in the MATLAB language, the median action recognition time was 24.3 s and the robot execution time was 17.7 s. We demonstrate the notion of combining haemodynamic brain-machine interfacing with computer vision to implement interaction at the level of high-level commands rather than individual movements, which may find application in future fMRI approaches relevant to brain-lesioned patients, and provide source code supporting further work on larger command sets and real-time processing. Copyright © 2012 IPEM. Published by Elsevier Ltd. All rights reserved.
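The command-level state machine described above can be illustrated as follows: each classified mental-imagery label advances a pick-and-place protocol, and only a complete "pawn then cup" selection is dispatched to the robot/vision system. The labels, states, and robot call below are hypothetical placeholders, not the paper's actual interface.

```python
# Toy state machine mapping classified imagery labels to robot commands.
# Labels, states, and the robot call are hypothetical placeholders.

def step(state, label, robot):
    """state: ('WAIT_PAWN', None) or ('WAIT_CUP', pawn). Returns the new state."""
    phase, pawn = state
    if phase == "WAIT_PAWN" and label.startswith("pawn_"):
        return ("WAIT_CUP", label)                 # pawn colour selected
    if phase == "WAIT_CUP" and label.startswith("cup_"):
        robot.pick_and_place(pawn, label)          # machine vision locates both
        return ("WAIT_PAWN", None)
    return state                                   # ignore out-of-sequence labels

class FakeRobot:
    def pick_and_place(self, pawn, cup):
        print(f"move {pawn} into {cup}")

state, robot = ("WAIT_PAWN", None), FakeRobot()
for label in ["pawn_red", "cup_left", "pawn_blue", "pawn_green", "cup_right"]:
    state = step(state, label, robot)
```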
ERIC Educational Resources Information Center
Diep, Lucy; Wolbring, Gregor
2013-01-01
Some new and envisioned technologies such as brain machine interfaces (BMI) that are being developed initially for people with disabilities, but whose use can also be expanded to the general public have the potential to change body ability expectations of disabled and non-disabled people beyond the species-typical. The ways in which this dynamic…
Neurosurgery and the dawning age of Brain-Machine Interfaces
Rowland, Nathan C.; Breshears, Jonathan; Chang, Edward F.
2013-01-01
Brain–machine interfaces (BMIs) are on the horizon for clinical neurosurgery. Electrocorticography-based platforms are less invasive than implanted microelectrodes; however, the latter are unmatched in their ability to achieve fine motor control of a robotic prosthesis capable of natural human behaviors. These technologies will be crucial to restoring neural function to a large population of patients with severe neurologic impairment – including those with spinal cord injury, stroke, limb amputation, and disabling neuromuscular disorders such as amyotrophic lateral sclerosis. On the opposite end of the spectrum are neural enhancement technologies for specialized applications such as combat. An ongoing ethical dialogue is imminent as we prepare for BMI platforms to enter the neurosurgical realm of clinical management. PMID:23653884
The Mind and the Machine. On the Conceptual and Moral Implications of Brain-Machine Interaction.
Schermer, Maartje
2009-12-01
Brain-machine interfaces are a growing field of research and application. The increasing possibilities to connect the human brain to electronic devices and computer software can be put to use in medicine, the military, and entertainment. Concrete technologies include cochlear implants, Deep Brain Stimulation, neurofeedback and neuroprostheses. The expectations for the near and further future are high, though it is difficult to separate hope from hype. The focus in this paper is on the effects that these new technologies may have on our 'symbolic order'-on the ways in which popular categories and concepts may change or be reinterpreted. First, the blurring distinction between man and machine and the idea of the cyborg are discussed. It is argued that the morally relevant difference is that between persons and non-persons, which does not necessarily coincide with the distinction between man and machine. The concept of the person remains useful. It may, however, become more difficult to assess the limits of the human body. Next, the distinction between body and mind is discussed. The mind is increasingly seen as a function of the brain, and thus understood in bodily and mechanical terms. This raises questions concerning concepts of free will and moral responsibility that may have far-reaching consequences in the field of law, where some have argued for a revision of our criminal justice system, from retributivist to consequentialist. Even without such an (unlikely and unwarranted) revision occurring, brain-machine interactions raise many interesting questions regarding distribution and attribution of responsibility.
An implantable integrated low-power amplifier-microelectrode array for Brain-Machine Interfaces.
Patrick, Erin; Sankar, Viswanath; Rowe, William; Sanchez, Justin C; Nishida, Toshikazu
2010-01-01
One of the important challenges in designing Brain-Machine Interfaces (BMI) is to build implantable systems that have the ability to reliably process the activity of large ensembles of cortical neurons. In this paper, we report the design, fabrication, and testing of a polyimide-based microelectrode array integrated with a low-power amplifier as part of the Florida Wireless Integrated Recording Electrode (FWIRE) project at the University of Florida developing a fully implantable neural recording system for BMI applications. The electrode array was fabricated using planar micromachining MEMS processes and hybrid packaged with the amplifier die using a flip-chip bonding technique. The system was tested both on bench and in-vivo. Acute and chronic neural recordings were obtained from a rodent for a period of 42 days. The electrode-amplifier performance was analyzed over the chronic recording period with the observation of a noise floor of 4.5 microVrms, and an average signal-to-noise ratio of 3.8.
Spatial Brain Control Interface using Optical and Electrophysiological Measures
2013-08-27
The Linear Support Vector Machine (LSVM) was found to be the most appropriate for implementing a reliable brain-computer interface (BCI). The LSVM method was applied to the imaging data... Local field potentials proved to be fast and strongly tuned for the spatial parameters of the task. Thus, a reliable BCI that can predict upcoming...
Interfacing insect brain for space applications.
Di Pino, Giovanni; Seidl, Tobias; Benvenuto, Antonella; Sergi, Fabrizio; Campolo, Domenico; Accoto, Dino; Maria Rossini, Paolo; Guglielmelli, Eugenio
2009-01-01
Insects exhibit remarkable navigation capabilities that current control architectures are still far from successfully mimicking and reproducing. In this chapter, we present the results of a study on conceptualizing insect/machine hybrid controllers for improving the autonomy of exploratory vehicles. First, the different principally possible levels of interfacing between insect and machine are examined, followed by a review of current approaches towards hybridity and enabling technologies. Based on the insights of this activity, we propose a double hybrid control architecture which hinges around the concept of "insect-in-a-cockpit." It integrates both biological/artificial (insect/robot) modules and deliberative/reactive behavior. The basic assumption is that "low-level" tasks are managed by the robot, while the "insect intelligence" is exploited whenever high-level problem solving and decision making is required. Both neural and natural interfacing have been considered to achieve robustness and redundancy of exchanged information.
Lee, Brian; Liu, Charles Y; Apuzzo, Michael L J
2013-01-01
Conventionally, the practice of neurosurgery has been characterized by the removal of pathology, congenital or acquired. The emerging complement to the removal of pathology is surgery for the specific purpose of restoration of function. Advents in neuroscience, technology, and the understanding of neural circuitry are creating opportunities to intervene in disease processes in a reparative manner, thereby advancing toward the long-sought-after concept of neurorestoration. Approaching the issue of neurorestoration from a biomedical engineering perspective is the rapidly growing arena of implantable devices. Implantable devices are becoming more common in medicine and are making significant advancements to improve a patient's functional outcome. Devices such as deep brain stimulators, vagus nerve stimulators, and spinal cord stimulators are now becoming more commonplace in neurosurgery as we utilize our understanding of the nervous system to interpret neural activity and restore function. One of the most exciting prospects in neurosurgery is the technologically driven field of brain-machine interface, also known as brain-computer interface, or neuroprosthetics. The successful development of this technology will have far-reaching implications for patients suffering from a great number of diseases, including but not limited to spinal cord injury, paralysis, stroke, or loss of limb. This article provides an overview of the issues related to neurorestoration using implantable devices with a specific focus on brain-machine interface technology. Copyright © 2013 Elsevier Inc. All rights reserved.
Passive BCI in Operational Environments: Insights, Recent Advances, and Future Trends.
Arico, Pietro; Borghini, Gianluca; Di Flumeri, Gianluca; Sciaraffa, Nicolina; Colosimo, Alfredo; Babiloni, Fabio
2017-07-01
This minireview aims to highlight recent important aspects to consider and evaluate when passive brain-computer interface (pBCI) systems are developed and used in operational environments, and outlines future directions for their applications. Electroencephalography (EEG)-based pBCI has become an important tool for real-time analysis of brain activity since it can potentially provide information about the operator's cognitive state covertly (without distracting the user from the main task) and objectively (unaffected by the subjective judgment of an observer or of the user). Different examples of pBCI applications in operational environments and new adaptive interface solutions are presented and described. In addition, a general overview regarding the correct use of machine learning techniques (e.g., which algorithm to use, common pitfalls to avoid, etc.) in the pBCI field is provided. Despite recent innovations in algorithms and neurotechnology, pBCI systems are not completely ready to enter the market yet, mainly because of the limitations of EEG electrode technology and the reliability and capability of algorithms in real settings. High-complexity and safety-critical systems (e.g., airplanes, ATM interfaces) should adapt their behavior and functionality according to the user's actual mental state. Thus, technologies (i.e., pBCIs) able to measure the user's mental state in real time would be very useful in such "high-risk" environments to enhance human-machine interaction and so increase overall safety.
Miniaturized neural interfaces and implants
NASA Astrophysics Data System (ADS)
Stieglitz, Thomas; Boretius, Tim; Ordonez, Juan; Hassler, Christina; Henle, Christian; Meier, Wolfgang; Plachta, Dennis T. T.; Schuettler, Martin
2012-03-01
Neural prostheses are technical systems that interface with nerves to treat the symptoms of neurological diseases and to restore sensory or motor functions of the body. Success stories have been written with the cochlear implant to restore hearing, with spinal cord stimulators to treat chronic pain as well as urge incontinence, and with deep brain stimulators in patients suffering from Parkinson's disease. Highly complex neural implants for novel medical applications can be miniaturized either by means of precision mechanics technologies, using known and established materials for electrodes, cables, and hermetic packages, or by applying microsystems technologies. Examples of both approaches will be introduced and discussed. Electrode arrays for recording electrocorticograms during presurgical epilepsy diagnosis have been manufactured using approved materials and a marking laser to achieve an integration density that is adequate in the context of brain-machine interfaces, e.g. on the motor cortex. Microtechnologies have to be used for further miniaturization to develop polymer-based, flexible, and lightweight electrode arrays to interface the peripheral and central nervous system. Polyimide as a substrate and insulation material will be discussed, as well as several application examples for nerve interfaces such as cuffs, filament-like electrodes, and large arrays for subdural implantation.
Minati, Ludovico; Cercignani, Mara; Chan, Dennis
2013-10-01
Graph theory-based analyses of brain network topology can be used to model the spatiotemporal correlations in neural activity detected through fMRI, and such approaches have wide-ranging potential, from detection of alterations in preclinical Alzheimer's disease through to command identification in brain-machine interfaces. However, due to prohibitive computational costs, graph-based analyses to date have principally focused on measuring connection density rather than mapping the topological architecture in full by exhaustive shortest-path determination. This paper outlines a solution to this problem through parallel implementation of Dijkstra's algorithm in programmable logic. The processor design is optimized for large, sparse graphs and provided in full as synthesizable VHDL code. An acceleration factor between 15 and 18 is obtained on a representative resting-state fMRI dataset, and maps of Euclidean path length reveal the anticipated heterogeneous cortical involvement in long-range integrative processing. These results enable high-resolution geodesic connectivity mapping for resting-state fMRI in patient populations and real-time geodesic mapping to support identification of imagined actions for fMRI-based brain-machine interfaces. Copyright © 2013 IPEM. Published by Elsevier Ltd. All rights reserved.
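The exhaustive shortest-path computation that the FPGA design above accelerates can be sketched in software. The snippet below is a minimal Python reference using SciPy's Dijkstra routine on a synthetic sparse connectivity matrix; the graph size, edge threshold, and edge-length convention are illustrative assumptions, and the paper's VHDL processor itself is not reproduced.

```python
# Minimal software sketch of the computation accelerated in hardware above:
# exhaustive shortest-path (geodesic) mapping over a sparse brain graph.
# The connectivity matrix here is random and purely illustrative.
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import dijkstra

n_nodes = 200                                   # e.g. fMRI parcels/voxels
rng = np.random.default_rng(0)
corr = rng.random((n_nodes, n_nodes))
corr = (corr + corr.T) / 2                      # symmetric "correlation" matrix
np.fill_diagonal(corr, 0)
corr[corr < 0.9] = 0                            # keep only strong edges -> sparse graph

# Convert correlation strength to edge length so that stronger links are "shorter".
with np.errstate(divide="ignore"):
    lengths = np.where(corr > 0, 1.0 / corr, 0.0)
graph = csr_matrix(lengths)

# Exhaustive all-pairs shortest paths (the step the FPGA design parallelizes).
dist = dijkstra(graph, directed=False)
char_path_length = np.mean(dist[np.isfinite(dist) & (dist > 0)])
print(f"characteristic path length: {char_path_length:.3f}")
```

A software reference like this is useful mainly for validating a hardware implementation on small graphs; the acceleration reported in the abstract comes from parallelizing the all-pairs loop in programmable logic.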
Parsing learning in networks using brain-machine interfaces.
Orsborn, Amy L; Pesaran, Bijan
2017-10-01
Brain-machine interfaces (BMIs) define new ways to interact with our environment and hold great promise for clinical therapies. Motor BMIs, for instance, re-route neural activity to control movements of a new effector and could restore movement to people with paralysis. Increasing experience shows that interfacing with the brain inevitably changes the brain. BMIs engage and depend on a wide array of innate learning mechanisms to produce meaningful behavior. BMIs precisely define the information streams into and out of the brain, but engage wide-spread learning. We take a network perspective and review existing observations of learning in motor BMIs to show that BMIs engage multiple learning mechanisms distributed across neural networks. Recent studies demonstrate the advantages of BMI for parsing this learning and its underlying neural mechanisms. BMIs therefore provide a powerful tool for studying the neural mechanisms of learning that highlights the critical role of learning in engineered neural therapies. Copyright © 2017 Elsevier Ltd. All rights reserved.
Future Cyborgs: Human-Machine Interface for Virtual Reality Applications
2007-04-01
Powell, Robert R., Major, USAF (Blue Horizons paper, April 2007).
Evolution of brain-computer interfaces: going beyond classic motor physiology
Leuthardt, Eric C.; Schalk, Gerwin; Roland, Jarod; Rouse, Adam; Moran, Daniel W.
2010-01-01
The notion that a computer can decode brain signals to infer the intentions of a human and then enact those intentions directly through a machine is becoming a realistic technical possibility. These types of devices are known as brain-computer interfaces (BCIs). The evolution of these neuroprosthetic technologies could have significant implications for patients with motor disabilities by enhancing their ability to interact and communicate with their environment. The cortical physiology most investigated and used for device control has been brain signals from the primary motor cortex. To date, this classic motor physiology has been an effective substrate for demonstrating the potential efficacy of BCI-based control. However, emerging research now stands to further enhance our understanding of the cortical physiology underpinning human intent and provide further signals for more complex brain-derived control. In this review, the authors report the current status of BCIs and detail the emerging research trends that stand to augment clinical applications in the future. PMID:19569892
Wearable ear EEG for brain interfacing
NASA Astrophysics Data System (ADS)
Schroeder, Eric D.; Walker, Nicholas; Danko, Amanda S.
2017-02-01
Brain-computer interfaces (BCIs) measuring electrical activity via electroencephalogram (EEG) have evolved beyond clinical applications to become wireless consumer products. Typically marketed for meditation and neurotherapy, these devices are limited in scope and currently too obtrusive to be a ubiquitous wearable. Stemming from recent advancements made in hearing aid technology, wearables have been shrinking to the point that the necessary sensors, circuitry, and batteries can be fit into a small in-ear wearable device. In this work, an ear-EEG device is created with a novel system for artifact removal and signal interpretation. The small, compact, cost-effective, and discreet device is demonstrated against existing consumer electronics in this space for its signal quality, comfort, and usability. A custom mobile application is developed to process raw EEG from each device and display interpreted data to the user. Artifact removal and signal classification is accomplished via a combination of support matrix machines (SMMs) and soft thresholding of relevant statistical properties.
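The soft-thresholding step mentioned above can be illustrated with a generic shrinkage operator; the snippet below is only a sketch on made-up feature values, and the support-matrix-machine classifier and the device-specific pipeline are not reproduced.

```python
# Illustrative soft-thresholding of per-epoch EEG feature statistics:
# values with magnitude below the threshold are zeroed, the rest are shrunk.
import numpy as np

def soft_threshold(x, lam):
    """Shrink values toward zero: sign(x) * max(|x| - lam, 0)."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

features = np.array([0.05, -0.8, 1.3, -0.02, 0.4])   # hypothetical epoch statistics
print(soft_threshold(features, lam=0.1))
```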
Rattanatamrong, Prapaporn; Matsunaga, Andrea; Raiturkar, Pooja; Mesa, Diego; Zhao, Ming; Mahmoudi, Babak; Digiovanna, Jack; Principe, Jose; Figueiredo, Renato; Sanchez, Justin; Fortes, Jose
2010-01-01
The CyberWorkstation (CW) is an advanced cyber-infrastructure for Brain-Machine Interface (BMI) research. It allows the development, configuration and execution of BMI computational models using high-performance computing resources. The CW's concept is implemented using a software structure in which an "experiment engine" is used to coordinate all software modules needed to capture, communicate and process brain signals and motor-control commands. A generic BMI-model template, which specifies a common interface to the CW's experiment engine, and a common communication protocol enable easy addition, removal or replacement of models without disrupting system operation. This paper reviews the essential components of the CW and shows how templates can facilitate the processes of BMI model development, testing and incorporation into the CW. It also discusses the ongoing work towards making this process infrastructure independent.
Larsson, Karin C; Kjäll, Peter; Richter-Dahlfors, Agneta
2013-09-01
A major challenge when creating interfaces for the nervous system is to translate between the signal carriers of the nervous system (ions and neurotransmitters) and those of conventional electronics (electrons). Organic conjugated polymers represent a unique class of materials that utilizes both electrons and ions as charge carriers. Based on these materials, we have established a series of novel communication interfaces between electronic components and biological systems. The organic electronic ion pump (OEIP) presented in this review is made of the polymer-polyelectrolyte system poly(3,4-ethylenedioxythiophene):poly(styrenesulfonate) (PEDOT:PSS). The OEIP translates electronic signals into electrophoretic migration of ions and neurotransmitters. We demonstrate how spatio-temporally controlled delivery of ions and neurotransmitters can be used to modulate intracellular Ca(2+) signaling in neuronal cells in the absence of convective disturbances. The electronic control of delivery enables strict control of dynamic parameters, such as amplitude and frequency of Ca(2+) responses, and can be used to generate temporal patterns mimicking naturally occurring Ca(2+) oscillations. To enable further control of the ionic signals we developed the electrophoretic chemical transistor, an analog of the traditional transistor used to amplify and/or switch electronic signals. Finally, we demonstrate the use of the OEIP in a new "machine-to-brain" interface by modulating brainstem responses in vivo. This review highlights the potential of communication interfaces based on conjugated polymers in generating complex, high-resolution, signal patterns to control cell physiology. We foresee widespread applications for these devices in biomedical research and in future medical devices within multiple therapeutic areas. This article is part of a Special Issue entitled Organic Bioelectronics-Novel Applications in Biomedicine. Copyright © 2012 Elsevier B.V. All rights reserved.
Toward FRP-Based Brain-Machine Interfaces—Single-Trial Classification of Fixation-Related Potentials
Finke, Andrea; Essig, Kai; Marchioro, Giuseppe; Ritter, Helge
2016-01-01
The co-registration of eye tracking and electroencephalography provides a holistic measure of ongoing cognitive processes. Recently, fixation-related potentials have been introduced to quantify the neural activity in such bi-modal recordings. Fixation-related potentials are time-locked to fixation onsets, just like event-related potentials are locked to stimulus onsets. Compared to existing electroencephalography-based brain-machine interfaces that depend on visual stimuli, fixation-related potentials have the advantages that they can be used in free, unconstrained viewing conditions and can also be classified on a single-trial level. Thus, fixation-related potentials have the potential to allow for conceptually different brain-machine interfaces that directly interpret cortical activity related to the visual processing of specific objects. However, existing research has investigated fixation-related potentials only with very restricted and highly unnatural stimuli in simple search tasks while participants' body movements were restricted. We present a study where we relieved many of these restrictions while retaining some control by using a gaze-contingent visual search task. In our study, participants had to find a target object out of 12 complex and everyday objects presented on a screen while the electrical activity of the brain and eye movements were recorded simultaneously. Our results show that our proposed method for the classification of fixation-related potentials can clearly discriminate between fixations on relevant, non-relevant and background areas. Furthermore, we show that our classification approach generalizes not only to different test sets from the same participant, but also across participants. These results promise to open novel avenues for exploiting fixation-related potentials in electroencephalography-based brain-machine interfaces and thus providing a novel means for intuitive human-machine interaction. PMID:26812487
Long Chen; Zhongpeng Wang; Feng He; Jiajia Yang; Hongzhi Qi; Peng Zhou; Baikun Wan; Dong Ming
2015-08-01
The hybrid brain-computer interface (hBCI) can provide a higher information transfer rate than classical BCIs. It includes more than one brain-computer or human-machine interaction paradigm, such as the combination of the P300 and SSVEP paradigms. We first constructed independent subsystems for three different paradigms and tested each of them in online experiments. We then constructed a serial hybrid BCI system that combined these paradigms to achieve the functions of typing letters, moving and clicking a cursor, and switching among them for the purpose of browsing webpages. Five subjects were involved in this study, and all of them successfully realized these functions in the online tests. The subjects achieved an accuracy above 90% after training, which met the requirement for operating the system efficiently. The results demonstrated an efficient and robust system, providing an approach for clinical application.
Lührs, Michael; Goebel, Rainer
2017-10-01
Turbo-Satori is a neurofeedback and brain-computer interface (BCI) toolbox for real-time functional near-infrared spectroscopy (fNIRS). It incorporates multiple pipelines from real-time preprocessing and analysis to neurofeedback and BCI applications. The toolbox is designed with a focus on usability, enabling fast setup and execution of real-time experiments. Turbo-Satori uses an incremental recursive least-squares procedure for real-time general linear model calculation and support vector machine classifiers for advanced BCI applications. It communicates directly with common NIRx fNIRS hardware and was tested extensively, ensuring that the calculations can be performed in real time without a significant change in calculation times for all sampling intervals during ongoing experiments of up to 6 h of recording. Enabling immediate access to advanced processing features also allows the use of this toolbox by students and nonexperts in the field of fNIRS data acquisition and processing. Flexible network interfaces allow third-party stimulus applications to access the processed data and calculated statistics in real time so that this information can be easily incorporated in neurofeedback or BCI presentations.
Wang, Yiwen; Wang, Fang; Xu, Kai; Zhang, Qiaosheng; Zhang, Shaomin; Zheng, Xiaoxiang
2015-05-01
Reinforcement learning (RL)-based brain machine interfaces (BMIs) enable the user to learn from the environment through interactions and complete the task without desired signals, which is promising for clinical applications. Previous studies exploited Q-learning techniques to discriminate neural states into simple directional actions, with the trial's initial timing provided. However, the movements in BMI applications can be quite complicated, and the action timing explicitly shows the intention of when to move. The rich actions and the corresponding neural states form a large state-action space, imposing generalization difficulty on Q-learning. In this paper, we propose to adopt attention-gated reinforcement learning (AGREL) as a new learning scheme for BMIs to adaptively decode high-dimensional neural activities into seven distinct movements (directional moves, holdings and resting), owing to its efficient weight updating. We apply AGREL to neural data recorded from M1 of a monkey to directly predict a seven-action set in a time sequence to reconstruct the trajectory of a center-out task. Compared to Q-learning techniques, AGREL improved the target acquisition rate to 90.16% on average, with faster convergence and more stability in following neural activity over multiple days, indicating the potential to achieve better online decoding performance for more complicated BMI tasks.
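The reward-modulated, attention-gated learning idea above can be conveyed with a deliberately simplified single-layer sketch: a softmax decoder over a small action set whose weights are updated only for the selected action, using a reward prediction error. The data below are synthetic, and the published AGREL network and update rule differ in detail from this illustration.

```python
# Simplified, single-layer sketch in the spirit of attention-gated
# reward-modulated learning applied to neural decoding (synthetic data only).
import numpy as np

rng = np.random.default_rng(1)
n_units, n_actions = 64, 7                     # neural channels, movement classes
W = np.zeros((n_actions, n_units))

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Synthetic "neural states": each action has its own mean firing pattern.
prototypes = rng.normal(size=(n_actions, n_units))

lr = 0.05
for trial in range(5000):
    target = rng.integers(n_actions)
    x = prototypes[target] + 0.5 * rng.normal(size=n_units)   # noisy neural state
    p = softmax(W @ x)
    action = rng.choice(n_actions, p=p)        # stochastic action selection
    reward = 1.0 if action == target else 0.0
    delta = reward - p[action]                 # reward prediction error
    W[action] += lr * delta * x                # update gated to the selected action

# Rough accuracy check on fresh synthetic trials
correct = sum(int(np.argmax(W @ (prototypes[t] + 0.5 * rng.normal(size=n_units))) == t)
              for t in rng.integers(n_actions, size=500))
print("accuracy:", correct / 500)
```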
On robust parameter estimation in brain-computer interfacing
NASA Astrophysics Data System (ADS)
Samek, Wojciech; Nakajima, Shinichi; Kawanabe, Motoaki; Müller, Klaus-Robert
2017-12-01
Objective. The reliable estimation of parameters such as mean or covariance matrix from noisy and high-dimensional observations is a prerequisite for successful application of signal processing and machine learning algorithms in brain-computer interfacing (BCI). This challenging task becomes significantly more difficult if the data set contains outliers, e.g. due to subject movements, eye blinks or loose electrodes, as they may heavily bias the estimation and the subsequent statistical analysis. Although various robust estimators have been developed to tackle the outlier problem, they ignore important structural information in the data and thus may not be optimal. Typical structural elements in BCI data are the trials consisting of a few hundred EEG samples and indicating the start and end of a task. Approach. This work discusses the parameter estimation problem in BCI and introduces a novel hierarchical view on robustness which naturally comprises different types of outlierness occurring in structured data. Furthermore, the class of minimum divergence estimators is reviewed and a robust mean and covariance estimator for structured data is derived and evaluated with simulations and on a benchmark data set. Main results. The results show that state-of-the-art BCI algorithms benefit from robustly estimated parameters. Significance. Since parameter estimation is an integral part of various machine learning algorithms, the presented techniques are applicable to many problems beyond BCI.
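A simple, trial-aware flavor of robust estimation can be sketched as follows: per-trial covariances are computed first, and trials whose covariance lies far from the median are discarded before averaging. The paper derives principled minimum-divergence estimators, so this trimming rule and the synthetic data are only an illustration of why exploiting trial structure helps against outliers.

```python
# Sketch: trial-level trimming before covariance averaging (illustrative only).
import numpy as np

rng = np.random.default_rng(2)
n_trials, n_samples, n_channels = 60, 300, 16

trials = rng.normal(size=(n_trials, n_samples, n_channels))
trials[:5] += 20 * rng.normal(size=(5, n_samples, n_channels))   # 5 outlier trials

trial_covs = np.array([np.cov(t, rowvar=False) for t in trials])
median_cov = np.median(trial_covs, axis=0)
dists = np.linalg.norm(trial_covs - median_cov, ord="fro", axis=(1, 2))

keep = dists <= np.percentile(dists, 90)        # trim the most deviant 10% of trials
robust_cov = trial_covs[keep].mean(axis=0)
robust_mean = trials[keep].reshape(-1, n_channels).mean(axis=0)
print("trials kept:", keep.sum(), "of", n_trials)
```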
Wissel, Tobias; Pfeiffer, Tim; Frysch, Robert; Knight, Robert T.; Chang, Edward F.; Hinrichs, Hermann; Rieger, Jochem W.; Rose, Georg
2013-01-01
Objective. Support Vector Machines (SVM) have developed into a gold standard for accurate classification in Brain-Computer Interfaces (BCI). The choice of the most appropriate classifier for a particular application depends on several characteristics in addition to decoding accuracy. Here we investigate the implementation of Hidden Markov Models (HMM) for online BCIs and discuss strategies to improve their performance. Approach. We compare the SVM, serving as a reference, and HMMs for classifying discrete finger movements obtained from the electrocorticograms of four subjects performing a finger tapping experiment. The classifier decisions are based on a subset of low-frequency time domain and high gamma oscillation features. Main results. We show that decoding optimization in the two approaches is due to the way features are extracted and selected, and is less dependent on the classifier. An additional gain in HMM performance of up to 6% was obtained by introducing model constraints. Comparable accuracies of up to 90% were achieved with both SVM and HMM, with the high gamma cortical response providing the most important decoding information for both techniques. Significance. We discuss technical HMM characteristics and adaptations in the context of the presented data as well as for general BCI applications. Our findings suggest that HMMs and their characteristics are promising for efficient online brain-computer interfaces. PMID:24045504
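The two classifier families compared above can be sketched side by side on synthetic feature sequences: an SVM on flattened per-trial features versus one Gaussian HMM per class scored by log-likelihood. The feature extraction from ECoG is not modeled here, the data are synthetic, and the hmmlearn package is assumed to be available.

```python
# Sketch: SVM on flattened trials vs. per-class Gaussian HMMs (synthetic data).
import numpy as np
from sklearn.svm import SVC
from hmmlearn.hmm import GaussianHMM

rng = np.random.default_rng(3)
n_classes, n_trials, seq_len, n_feat = 4, 40, 20, 6
prototypes = rng.normal(size=(n_classes, n_feat))

def make_trials(cls, n):
    """Synthetic per-trial feature sequences for one movement class."""
    return prototypes[cls] + 0.7 * rng.normal(size=(n, seq_len, n_feat))

train = [make_trials(c, n_trials) for c in range(n_classes)]
test = [make_trials(c, n_trials) for c in range(n_classes)]

# SVM on flattened per-trial feature sequences
X_tr = np.vstack([t.reshape(n_trials, -1) for t in train])
y_tr = np.repeat(np.arange(n_classes), n_trials)
svm = SVC(kernel="linear").fit(X_tr, y_tr)

# One Gaussian HMM per class, trained on concatenated sequences of that class
hmms = [GaussianHMM(n_components=3, n_iter=50).fit(
            train[c].reshape(-1, n_feat), lengths=[seq_len] * n_trials)
        for c in range(n_classes)]

def hmm_predict(trial):
    """Assign the class whose HMM gives the highest log-likelihood."""
    return int(np.argmax([m.score(trial) for m in hmms]))

svm_acc = np.mean([svm.predict(t.reshape(n_trials, -1)) == c
                   for c, t in enumerate(test)])
hmm_acc = np.mean([[hmm_predict(tr) == c for tr in t]
                   for c, t in enumerate(test)])
print(f"SVM accuracy: {svm_acc:.2f}   HMM accuracy: {hmm_acc:.2f}")
```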
The chemistry of cyborgs--interfacing technical devices with organisms.
Giselbrecht, Stefan; Rapp, Bastian E; Niemeyer, Christof M
2013-12-23
The term "cyborg" refers to a cybernetic organism, which characterizes the chimera of a living organism and a machine. Owing to the widespread application of intracorporeal medical devices, cyborgs are no longer exclusively a subject of science fiction novels, but technically they already exist in our society. In this review, we briefly summarize the development of modern prosthetics and the evolution of brain-machine interfaces, and discuss the latest technical developments of implantable devices, in particular, biocompatible integrated electronics and microfluidics used for communication and control of living organisms. Recent examples of animal cyborgs and their relevance to fundamental and applied biomedical research and bioethics in this novel and exciting field at the crossroads of chemistry, biomedicine, and the engineering sciences are presented. Copyright © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
EXiO-A Brain-Controlled Lower Limb Exoskeleton for Rhesus Macaques.
Vouga, Tristan; Zhuang, Katie Z; Olivier, Jeremy; Lebedev, Mikhail A; Nicolelis, Miguel A L; Bouri, Mohamed; Bleuler, Hannes
2017-02-01
Recent advances in the field of brain-machine interfaces (BMIs) have demonstrated enormous potential to shape the future of rehabilitation and prosthetic devices. Here, a lower-limb exoskeleton controlled by the intracortical activity of an awake behaving rhesus macaque is presented as a proof-of-concept for a locomotor BMI. A detailed description of the mechanical device, including its innovative features and first experimental results, is provided. During operation, BMI-decoded position and velocity are directly mapped onto the bipedal exoskeleton's motions, which then move the monkey's legs as the monkey remains physically passive. To meet the unique requirements of such an application, the exoskeleton's features include: high output torque with backdrivable actuation, size adjustability, and a safe user-robot interface. In addition, a novel rope transmission is introduced and implemented. To test the performance of the exoskeleton, a mechanical assessment was conducted, which yielded quantifiable results for transparency, efficiency, stiffness, and tracking performance. Usage under both brain control and automated actuation demonstrates the device's capability to fulfill the demanding needs of this application. These results lay the groundwork for further advancement in BMI-controlled devices for primates, including humans.
Wireless brain-machine interface using EEG and EOG: brain wave classification and robot control
NASA Astrophysics Data System (ADS)
Oh, Sechang; Kumar, Prashanth S.; Kwon, Hyeokjun; Varadan, Vijay K.
2012-04-01
A brain-machine interface (BMI) links a user's brain activity directly to an external device, enabling a person to control devices using only thought. Hence, it has gained significant interest in the design of assistive devices and systems for people with disabilities. In addition, BMI has also been proposed to replace humans with robots in the performance of dangerous tasks like explosives handling/defusing, hazardous materials handling, fire fighting, etc. There are mainly two types of BMI based on the measurement method of brain activity: invasive and non-invasive. Invasive BMI can provide pristine signals, but it is expensive and surgery may lead to undesirable side effects. Recent advances in non-invasive BMI have opened the possibility of generating robust control signals from noisy brain activity signals like EEG and EOG. A practical implementation of a non-invasive BMI such as robot control requires: acquisition of brain signals with a robust wearable unit, noise filtering and signal processing, identification and extraction of relevant brain wave features, and finally, an algorithm to determine control signals based on the wave features. In this work, we developed a wireless brain-machine interface with a small platform and established a BMI that can be used to control the movement of a robot by using the extracted features of the EEG and EOG signals. The system records and classifies EEG as alpha, beta, delta, and theta waves. The classified brain waves are then used to define the level of attention. The acceleration, deceleration, or stopping of the robot is controlled based on the attention level of the wearer. In addition, the left and right movements of the eyeballs control the direction of the robot.
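The band classification and attention-level step described above can be sketched in a few lines: per-band power is estimated from an EEG segment with Welch's method and a crude attention index is formed from the beta/alpha power ratio. The signal, sampling rate, and threshold below are illustrative assumptions, and the paper's robot-control and EOG logic is not reproduced.

```python
# Sketch: EEG band-power estimation and a toy attention index (synthetic signal).
import numpy as np
from scipy.signal import welch

fs = 256                                         # assumed sampling rate (Hz)
t = np.arange(0, 4, 1 / fs)
eeg = (np.sin(2 * np.pi * 10 * t)                # alpha component
       + 0.5 * np.sin(2 * np.pi * 20 * t)        # beta component
       + 0.2 * np.random.default_rng(4).normal(size=t.size))

bands = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}
freqs, psd = welch(eeg, fs=fs, nperseg=fs * 2)

power = {name: psd[(freqs >= lo) & (freqs < hi)].sum()
         for name, (lo, hi) in bands.items()}

attention = power["beta"] / (power["alpha"] + 1e-12)
command = "accelerate" if attention > 0.5 else "stop"   # illustrative threshold
print(power, command)
```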
Sefcik, Roberta K; Opie, Nicholas L; John, Sam E; Kellner, Christopher P; Mocco, J; Oxley, Thomas J
2016-05-01
Current standard practice requires an invasive approach to the recording of electroencephalography (EEG) for epilepsy surgery, deep brain stimulation (DBS), and brain-machine interfaces (BMIs). The development of endovascular techniques offers a minimally invasive route to recording EEG from deep brain structures. This historical perspective aims to describe the technical progress in endovascular EEG by reviewing the first endovascular recordings made using a wire electrode, which was followed by the development of nanowire and catheter recordings and, finally, the most recent progress in stent-electrode recordings. The technical progress in device technology over time and the development of the ability to record chronic intravenous EEG from electrode arrays is described. Future applications for the use of endovascular EEG in the preoperative and operative management of epilepsy surgery are then discussed, followed by the possibility of the technique's future application in minimally invasive operative approaches to DBS and BMI.
In vivo recordings of brain activity using organic transistors
Khodagholy, Dion; Doublet, Thomas; Quilichini, Pascale; Gurfinkel, Moshe; Leleux, Pierre; Ghestem, Antoine; Ismailova, Esma; Hervé, Thierry; Sanaur, Sébastien; Bernard, Christophe; Malliaras, George G.
2013-01-01
In vivo electrophysiological recordings of neuronal circuits are necessary for diagnostic purposes and for brain-machine interfaces. Organic electronic devices constitute a promising candidate because of their mechanical flexibility and biocompatibility. Here we demonstrate the engineering of an organic electrochemical transistor embedded in an ultrathin organic film designed to record electrophysiological signals on the surface of the brain. The device, tested in vivo on epileptiform discharges, displayed superior signal-to-noise ratio due to local amplification compared with surface electrodes. The organic transistor was able to record on the surface low-amplitude brain activities, which were poorly resolved with surface electrodes. This study introduces a new class of biocompatible, highly flexible devices for recording brain activity with superior signal-to-noise ratio that hold great promise for medical applications. PMID:23481383
Cortical and subcortical mechanisms of brain-machine interfaces.
Marchesotti, Silvia; Martuzzi, Roberto; Schurger, Aaron; Blefari, Maria Laura; Del Millán, José R; Bleuler, Hannes; Blanke, Olaf
2017-06-01
Technical advances in the field of Brain-Machine Interfaces (BMIs) enable users to control a variety of external devices such as robotic arms, wheelchairs, virtual entities and communication systems through the decoding of brain signals in real time. Most BMI systems sample activity from restricted brain regions, typically the motor and premotor cortex, with limited spatial resolution. Despite the growing number of applications, the cortical and subcortical systems involved in BMI control are currently unknown at the whole-brain level. Here, we provide a comprehensive and detailed report of the areas active during on-line BMI control. We recorded functional magnetic resonance imaging (fMRI) data while participants controlled an EEG-based BMI inside the scanner. We identified the regions activated during BMI control and how they overlap with those involved in motor imagery (without any BMI control). In addition, we investigated which regions reflect the subjective sense of controlling a BMI, the sense of agency for BMI-actions. Our data revealed an extended cortical-subcortical network involved in operating a motor-imagery BMI. This includes not only sensorimotor regions but also the posterior parietal cortex, the insula and the lateral occipital cortex. Interestingly, the basal ganglia and the anterior cingulate cortex were involved in the subjective sense of controlling the BMI. These results inform basic neuroscience by showing that the mechanisms of BMI control extend beyond sensorimotor cortices. This knowledge may be useful for the development of BMIs that offer a more natural and embodied feeling of control for the user. Hum Brain Mapp 38:2971-2989, 2017. © 2017 Wiley Periodicals, Inc.
Marzullo, T C; Dudley, J R; Miller, C R; Trejo, L; Kipke, D R
2005-01-01
Brain machine interface development typically falls into two arenas: invasive extracellular recording and non-invasive electroencephalogram recording methods. The relationship between action potentials and field potentials is not well understood, and investigation of their interrelationships may improve the design of neuroprosthetic control systems. Rats were trained on a motor learning task whereby they had to insert their noses into an aperture while simultaneously pressing down on levers with their forepaws; spikes, local field potentials (LFPs), and electrocorticograms (ECoGs) over the motor cortex were recorded and characterized. Preliminary results suggest that the LFP activity in lower cortical layers oscillates with the ECoG.
Contreras-Vidal, Jose L.; Grossman, Robert G.
2013-01-01
In this communication, a translational clinical brain-machine interface (BMI) roadmap for an EEG-based BMI to a robotic exoskeleton (NeuroRex) is presented. This multi-faceted project addresses important engineering and clinical challenges: it addresses the validation of an intelligent, self-balancing, robotic lower-body and trunk exoskeleton (Rex) augmented with EEG-based BMI capabilities to interpret user intent and assist a mobility-impaired person in walking independently. The goal is to improve the quality of life and health status of wheelchair-bound persons by enabling standing and sitting, walking and backing, turning, ascending and descending stairs/curbs, and navigating sloping surfaces in a variety of conditions without the need for additional support or crutches. PMID:24110003
Neuromechanism Study of Insect–Machine Interface: Flight Control by Neural Electrical Stimulation
Zhao, Huixia; Zheng, Nenggan; Ribi, Willi A.; Zheng, Huoqing; Xue, Lei; Gong, Fan; Zheng, Xiaoxiang; Hu, Fuliang
2014-01-01
The insect–machine interface (IMI) is a novel approach developed for man-made air vehicles, which directly controls insect flight by either neuromuscular or neural stimulation. In our previous study of IMI, we induced flight initiation and cessation reproducibly in restrained honeybees (Apis mellifera L.) via electrical stimulation of the bilateral optic lobes. To explore the neuromechanism underlying IMI, we applied electrical stimulation to seven subregions of the honeybee brain with the aid of a new method for localizing brain regions. Results showed that the success rate for initiating honeybee flight decreased in the order: α-lobe (or β-lobe), ellipsoid body, lobula, medulla and antennal lobe. Based on a comparison with other neurobiological studies in honeybees, we propose that there is a cluster of descending neurons in the honeybee brain that transmits neural excitation from stimulated brain areas to the thoracic ganglia, leading to flight behavior. This neural circuit may involve the higher-order integration center, the primary visual processing center and the suboesophageal ganglion, which is also associated with a possible learning and memory pathway. By pharmacologically manipulating the electrically stimulated honeybee brain, we have shown that octopamine, rather than dopamine, serotonin and acetylcholine, plays a part in the circuit underlying electrically elicited honeybee flight. Our study presents a new brain stimulation protocol for the honeybee–machine interface and has solved one of the questions with regard to understanding which functional divisions of the insect brain participate in flight control. It will support further studies to uncover the involved neurons inside specific brain areas and to test the hypothesized involvement of a visual learning and memory pathway in IMI flight control. PMID:25409523
Brain-machine interfacing control of whole-body humanoid motion
Bouyarmane, Karim; Vaillant, Joris; Sugimoto, Norikazu; Keith, François; Furukawa, Jun-ichiro; Morimoto, Jun
2014-01-01
We propose to tackle in this paper the problem of controlling whole-body humanoid robot behavior through non-invasive brain-machine interfacing (BMI), motivated by the perspective of mapping human motor control strategies to a human-like mechanical avatar. Our solution is based on an adequate reduction of the controllable dimensionality of high-DOF humanoid motion, in line with the state-of-the-art possibilities of non-invasive BMI technologies, leaving the complementary subspace of the motion to be planned and executed by an autonomous humanoid whole-body motion planning and control framework. The results are shown in a full physics-based simulation of a 36-degree-of-freedom humanoid motion controlled by a user through EEG-extracted brain signals generated with a motor imagery task. PMID:25140134
Diverse applications of advanced man-telerobot interfaces
NASA Technical Reports Server (NTRS)
Mcaffee, Douglas A.
1991-01-01
Advancements in man-machine interfaces and control technologies used in space telerobotics and teleoperators have potential application wherever human operators need to manipulate multi-dimensional spatial relationships. Bilateral six degree-of-freedom position and force cues exchanged between the user and a complex system can broaden and improve the effectiveness of several diverse man-machine interfaces.
Physiological properties of brain-machine interface input signals.
Slutzky, Marc W; Flint, Robert D
2017-08-01
Brain-machine interfaces (BMIs), also called brain-computer interfaces (BCIs), decode neural signals and use them to control some type of external device. Despite many experimental successes and terrific demonstrations in animals and humans, a high-performance, clinically viable device has not yet been developed for widespread usage. There are many factors that impact clinical viability and BMI performance. Arguably, the first of these is the selection of brain signals used to control BMIs. In this review, we summarize the physiological characteristics and performance-including movement-related information, longevity, and stability-of multiple types of input signals that have been used in invasive BMIs to date. These include intracortical spikes as well as field potentials obtained inside the cortex, at the surface of the cortex (electrocorticography), and at the surface of the dura mater (epidural signals). We also discuss the potential for future enhancements in input signal performance, both by improving hardware and by leveraging the knowledge of the physiological characteristics of these signals to improve decoding and stability. Copyright © 2017 the American Physiological Society.
A Semisupervised Support Vector Machines Algorithm for BCI Systems
Qin, Jianzhao; Li, Yuanqing; Sun, Wei
2007-01-01
As an emerging technology, brain-computer interfaces (BCIs) bring us new communication interfaces which translate brain activities into control signals for devices like computers, robots, and so forth. In this study, we propose a semisupervised support vector machine (SVM) algorithm for brain-computer interface (BCI) systems, aiming at reducing the time-consuming training process. In this algorithm, we apply a semisupervised SVM for translating the features extracted from the electrical recordings of brain into control signals. This SVM classifier is built from a small labeled data set and a large unlabeled data set. Meanwhile, to reduce the time for training semisupervised SVM, we propose a batch-mode incremental learning method, which can also be easily applied to the online BCI systems. Additionally, it is suggested in many studies that common spatial pattern (CSP) is very effective in discriminating two different brain states. However, CSP needs a sufficient labeled data set. In order to overcome the drawback of CSP, we suggest a two-stage feature extraction method for the semisupervised learning algorithm. We apply our algorithm to two BCI experimental data sets. The offline data analysis results demonstrate the effectiveness of our algorithm. PMID:18368141
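The semisupervised idea above (a classifier built from a small labeled set plus a large unlabeled set, updated in batches) can be conveyed with a generic self-training loop around an SVM. The published algorithm and its CSP-based two-stage feature extraction are not reproduced; the data, thresholds, and batch rule below are illustrative assumptions.

```python
# Generic self-training sketch: confident predictions on unlabeled trials are
# added as pseudo-labels and the SVM is retrained in batches (synthetic data).
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(5)
n_feat = 10
def make(n, cls):
    return rng.normal(loc=cls * 1.5, size=(n, n_feat))

X_lab = np.vstack([make(10, 0), make(10, 1)])
y_lab = np.array([0] * 10 + [1] * 10)
X_unlab = np.vstack([make(200, 0), make(200, 1)])     # labels unknown to the learner

clf = SVC(kernel="linear", probability=True).fit(X_lab, y_lab)
for _ in range(5):                                    # batch-mode self-training rounds
    proba = clf.predict_proba(X_unlab)
    confident = proba.max(axis=1) > 0.95
    if not confident.any():
        break
    X_lab = np.vstack([X_lab, X_unlab[confident]])
    y_lab = np.concatenate([y_lab, proba[confident].argmax(axis=1)])
    X_unlab = X_unlab[~confident]
    clf = SVC(kernel="linear", probability=True).fit(X_lab, y_lab)

print("labeled set after self-training:", len(y_lab))
```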
sw-SVM: sensor weighting support vector machines for EEG-based brain-computer interfaces.
Jrad, N; Congedo, M; Phlypo, R; Rousseau, S; Flamary, R; Yger, F; Rakotomamonjy, A
2011-10-01
In many machine learning applications, like brain-computer interfaces (BCI), high-dimensional sensor array data are available. Sensor measurements are often highly correlated and signal-to-noise ratio is not homogeneously spread across sensors. Thus, collected data are highly variable and discrimination tasks are challenging. In this work, we focus on sensor weighting as an efficient tool to improve the classification procedure. We present an approach integrating sensor weighting in the classification framework. Sensor weights are considered as hyper-parameters to be learned by a support vector machine (SVM). The resulting sensor weighting SVM (sw-SVM) is designed to satisfy a margin criterion, that is, the generalization error. Experimental studies on two data sets are presented, a P300 data set and an error-related potential (ErrP) data set. For the P300 data set (BCI competition III), for which a large number of trials is available, the sw-SVM proves to perform equivalently with respect to the ensemble SVM strategy that won the competition. For the ErrP data set, for which a small number of trials are available, the sw-SVM shows superior performances as compared to three state-of-the art approaches. Results suggest that the sw-SVM promises to be useful in event-related potentials classification, even with a small number of training trials.
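The sensor-weighting idea above can be illustrated with a toy loop in which each channel is scaled by a weight before an SVM is trained and the weights are tuned one at a time by cross-validated search. The published sw-SVM learns the weights jointly under a margin criterion, so this greedy procedure on synthetic data is only a sketch of the concept.

```python
# Toy sensor-weighting sketch: scale channels, pick weights by cross-validation.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(6)
n_trials, n_sensors = 120, 8
y = rng.integers(0, 2, n_trials)
X = rng.normal(size=(n_trials, n_sensors))
X[:, 0] += 1.5 * y                                            # only sensor 0 is informative
X[:, 1:] += rng.normal(scale=3.0, size=(n_trials, n_sensors - 1))  # noisy sensors

weights = np.ones(n_sensors)
for s in range(n_sensors):
    best_w, best_score = 1.0, -np.inf
    for w in [0.0, 0.5, 1.0, 2.0]:
        trial_weights = weights.copy()
        trial_weights[s] = w
        score = cross_val_score(SVC(kernel="linear"), X * trial_weights, y, cv=5).mean()
        if score > best_score:
            best_w, best_score = w, score
    weights[s] = best_w

print("learned sensor weights:", weights)
```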
Schuettler, Martin; Kohler, Fabian; Ordonez, Juan S; Stieglitz, Thomas
2012-01-01
Future brain-computer interfaces (BCIs) for severely impaired patients are implanted to electrically contact the brain tissue. Avoiding percutaneous cables requires the amplifier and telemetry electronics to be implanted too. We developed a hermetic package that protects the electronic circuitry of a BCI from body moisture while permitting infrared communication through the package wall made from alumina ceramic. The ceramic package is cast in medical-grade silicone adhesive, for which we identified MED2-4013 as a promising candidate.
Gonzalez-Vargas, Jose; Dosen, Strahinja; Amsuess, Sebastian; Yu, Wenwei; Farina, Dario
2015-01-01
Modern assistive devices are very sophisticated systems with multiple degrees of freedom. However, an effective and user-friendly control of these systems is still an open problem since conventional human-machine interfaces (HMI) cannot easily accommodate the system’s complexity. In HMIs, the user is responsible for generating unique patterns of command signals directly triggering the device functions. This approach can be difficult to implement when there are many functions (necessitating many command patterns) and/or the user has a considerable impairment (limited number of available signal sources). In this study, we propose a novel concept for a general-purpose HMI where the controller and the user communicate bidirectionally to select the desired function. The system first presents possible choices to the user via electro-tactile stimulation; the user then acknowledges the desired choice by generating a single command signal. Therefore, the proposed approach simplifies the user communication interface (one signal to generate), decoding (one signal to recognize), and allows selecting from a number of options. To demonstrate the new concept the method was used in one particular application, namely, to implement the control of all the relevant functions in a state of the art commercial prosthetic hand without using any myoelectric channels. We performed experiments in healthy subjects and with one amputee to test the feasibility of the novel approach. The results showed that the performance of the novel HMI concept was comparable or, for some outcome measures, better than the classic myoelectric interfaces. The presented approach has a general applicability and the obtained results point out that it could be used to operate various assistive systems (e.g., prosthesis vs. wheelchair), or it could be integrated into other control schemes (e.g., myoelectric control, brain-machine interfaces) in order to improve the usability of existing low-bandwidth HMIs. PMID:26069961
Gupta, Rahul; Ashe, James
2009-06-01
Brain-machine interfaces (BMIs) hold a lot of promise for restoring some level of motor function to patients with neuronal disease or injury. Current BMI approaches fall into two broad categories: those that decode discrete properties of limb movement (such as movement direction and movement intent) and those that decode continuous variables (such as position and velocity). However, to enable prosthetic devices to be useful for common everyday tasks, precise control of the forces applied by the end-point of the prosthesis (e.g., the hand) is also essential. Here, we used linear regression and Kalman filter methods to show that neural activity recorded from the motor cortex of the monkey during movements in a force field can be used to decode the end-point forces applied by the subject successfully and with high fidelity. Furthermore, the models exhibit some generalization to novel task conditions. We also demonstrate how the simultaneous prediction of kinematics and kinetics can be easily achieved using the same framework, without any degradation in decoding quality. Our results represent a useful extension of the current BMI technology, making dynamic control of a prosthetic device a distinct possibility in the near future.
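A Kalman-filter force decoder of the kind described above can be sketched as follows: the state is the 2-D end-point force, the observations are binned firing rates, and the model matrices are fit by least squares on training data. All data below are synthetic, and the recordings, tuning model, and task details of the study are not modeled.

```python
# Minimal Kalman-filter decoder sketch: state = 2-D force, observations = rates.
import numpy as np

rng = np.random.default_rng(7)
T, n_neurons = 2000, 30

# Synthetic ground-truth force trajectory and linearly tuned firing rates
force = np.cumsum(0.05 * rng.normal(size=(T, 2)), axis=0)
H_true = rng.normal(size=(n_neurons, 2))
rates = force @ H_true.T + 0.5 * rng.normal(size=(T, n_neurons))

# Fit model matrices by least squares on the training half
tr = slice(0, T // 2)
A = np.linalg.lstsq(force[tr][:-1], force[tr][1:], rcond=None)[0].T   # state transition
H = np.linalg.lstsq(force[tr], rates[tr], rcond=None)[0].T            # observation model
Q = np.cov((force[tr][1:] - force[tr][:-1] @ A.T).T)                  # process noise
R = np.cov((rates[tr] - force[tr] @ H.T).T)                           # observation noise

# Run the filter on the held-out half
x, P = np.zeros(2), np.eye(2)
decoded = []
for y in rates[T // 2:]:
    x, P = A @ x, A @ P @ A.T + Q                        # predict
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.solve(S, np.eye(n_neurons))  # Kalman gain
    x = x + K @ (y - H @ x)                              # update
    P = (np.eye(2) - K @ H) @ P
    decoded.append(x.copy())

decoded = np.array(decoded)
corr = [np.corrcoef(decoded[:, i], force[T // 2:, i])[0, 1] for i in range(2)]
print("decoded-force correlation per axis:", np.round(corr, 2))
```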
Brain-Machine Interface control of a robot arm using actor-critic reinforcement learning.
Pohlmeyer, Eric A; Mahmoudi, Babak; Geng, Shijia; Prins, Noeline; Sanchez, Justin C
2012-01-01
Here we demonstrate how a marmoset monkey can use a reinforcement learning (RL) Brain-Machine Interface (BMI) to effectively control the movements of a robot arm for a reaching task. In this work, an actor-critic RL algorithm used neural ensemble activity in the monkey's motor cortex to control the robot movements during a two-target decision task. This novel approach to decoding offers unique advantages for BMI control applications. Compared to supervised learning decoding methods, the actor-critic RL algorithm does not require an explicit set of training data to create a static control model, but rather it incrementally adapts the model parameters according to its current performance, in this case requiring only a very basic feedback signal. We show how this algorithm achieved high performance (94%) in mapping the monkey's neural states to robot actions, and needed to experience only a few trials before obtaining accurate real-time control of the robot arm. Since RL methods responsively adapt and adjust their parameters, they can provide a method to create BMIs that are robust against perturbations caused by changes in either the neural input space or the output actions they generate under different task requirements or goals.
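The closed-loop decoding idea above can be conveyed with a simplified actor-critic sketch: a synthetic "neural state" is mapped by a softmax actor onto one of two robot actions, and a scalar critic learns the expected reward that drives both updates from a binary feedback signal. The marmoset data and the published algorithm's specifics are not modeled here.

```python
# Simplified actor-critic decoder sketch for a two-target task (synthetic data).
import numpy as np

rng = np.random.default_rng(8)
n_units, n_actions = 32, 2
W_actor = np.zeros((n_actions, n_units))
w_critic = np.zeros(n_units)
prototypes = rng.normal(size=(n_actions, n_units))     # one neural pattern per target
lr = 0.05

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

for trial in range(3000):
    target = rng.integers(n_actions)
    x = prototypes[target] + 0.7 * rng.normal(size=n_units)
    p = softmax(W_actor @ x)
    action = rng.choice(n_actions, p=p)
    reward = 1.0 if action == target else 0.0           # basic feedback signal
    delta = reward - w_critic @ x                       # critic's prediction error
    w_critic += lr * delta * x                          # critic update
    grad = -p; grad[action] += 1.0                      # d log p(action) / d logits
    W_actor += lr * delta * np.outer(grad, x)           # actor update

# Check the final mapping on fresh synthetic trials
hits = sum(int(np.argmax(W_actor @ (prototypes[t] + 0.7 * rng.normal(size=n_units))) == t)
           for t in rng.integers(n_actions, size=500))
print("hit rate:", hits / 500)
```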
Neuron-Type-Specific Utility in a Brain-Machine Interface: a Pilot Study.
Garcia-Garcia, Martha G; Bergquist, Austin J; Vargas-Perez, Hector; Nagai, Mary K; Zariffa, Jose; Marquez-Chin, Cesar; Popovic, Milos R
2017-11-01
Firing rates of single cortical neurons can be volitionally modulated through biofeedback (i.e. operant conditioning), and this information can be transformed to control external devices (i.e. brain-machine interfaces; BMIs). However, not all neurons respond to operant conditioning in BMI implementation. Establishing criteria that predict neuron utility will assist translation of BMI research to clinical applications. Single cortical neurons (n=7) were recorded extracellularly from primary motor cortex of a Long-Evans rat. Recordings were incorporated into a BMI involving up-regulation of firing rate to control the brightness of a light-emitting-diode and subsequent reward. Neurons were classified as 'fast-spiking', 'bursting' or 'regular-spiking' according to waveform-width and intrinsic firing patterns. Fast-spiking and bursting neurons were found to up-regulate firing rate by a factor of 2.43±1.16, demonstrating high utility, while regular-spiking neurons decreased firing rates on average by a factor of 0.73±0.23, demonstrating low utility. The ability to select neurons with high utility will be important to minimize training times and maximize information yield in future clinical BMI applications. The highly contrasting utility observed between fast-spiking and bursting neurons versus regular-spiking neurons allows for the hypothesis to be advanced that intrinsic electrophysiological properties may be useful criteria that predict neuron utility in BMI implementation.
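The waveform/firing-pattern classification used above can be sketched with simple rules: a unit is labeled fast-spiking if its waveform is narrow, bursting if a large fraction of its inter-spike intervals are very short, and regular-spiking otherwise. The numeric thresholds below are placeholders, not the study's values, and the spike trains are synthetic.

```python
# Illustrative neuron-type labeling from waveform width and inter-spike intervals.
import numpy as np

def classify_unit(waveform_width_ms, spike_times_s,
                  narrow_ms=0.35, burst_isi_s=0.01, burst_fraction=0.2):
    if waveform_width_ms < narrow_ms:
        return "fast-spiking"
    isis = np.diff(np.sort(spike_times_s))
    if isis.size and np.mean(isis < burst_isi_s) > burst_fraction:
        return "bursting"
    return "regular-spiking"

rng = np.random.default_rng(9)
print(classify_unit(0.25, np.cumsum(rng.exponential(0.05, 200))))    # narrow waveform
print(classify_unit(0.50, np.cumsum(rng.choice([0.005, 0.2], 200)))) # many short ISIs
print(classify_unit(0.50, np.cumsum(rng.exponential(0.1, 200))))     # regular firing
```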
A Brain-Machine-Brain Interface for Rewiring of Cortical Circuitry after Traumatic Brain Injury
2012-09-01
Report of oral presentations (Dr. Nudo): invited talks on neuroprosthetic tools for repair of the injured brain at the American Society for Neurorehabilitation, at the Neurobiology of Disease Course (University of Texas Health Science Center, Houston, Texas), and at the World Congress of NeuroRehabilitation (Melbourne, Australia, May 17, 2012).
Control of a 2 DoF robot using a brain-machine interface.
Hortal, Enrique; Ubeda, Andrés; Iáñez, Eduardo; Azorín, José M
2014-09-01
In this paper, a non-invasive spontaneous Brain-Machine Interface (BMI) is used to control the movement of a planar robot. To that end, two mental tasks are used to manage the visual interface that controls the robot. The robot used is a PupArm, a force-controlled planar robot designed by the nBio research group at the Miguel Hernández University of Elche (Spain). Two control strategies are compared: hierarchical and directional control. The experimental test (performed by four users) consists of reaching four targets. The errors and the time taken during the performance of the tests are compared for both control strategies (hierarchical and directional control). The advantages and disadvantages of each method are shown after the analysis of the results. The hierarchical control allows an accurate approach to the goals but is slower than the directional control which, on the contrary, is less precise. The results show both strategies are useful to control this planar robot. In the future, by adding an extra device like a gripper, this BMI could be used in assistive applications such as grasping daily objects in a realistic environment. In order to compare the behavior of the system taking into account the opinion of the users, a NASA Task Load Index (TLX) questionnaire is filled out after the two sessions are completed. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
Overview Electrotactile Feedback for Enhancing Human Computer Interface
NASA Astrophysics Data System (ADS)
Pamungkas, Daniel S.; Caesarendra, Wahyu
2018-04-01
To achieve effective interaction between a human and a computing device or machine, adequate feedback from the computing device or machine is required. Recently, haptic feedback has increasingly been utilised to improve the interactivity of the Human Computer Interface (HCI). Most existing haptic feedback enhancements aim at producing forces or vibrations to enrich the user’s interactive experience. However, these force- and/or vibration-actuated haptic feedback systems can be bulky and uncomfortable to wear and are only capable of delivering a limited amount of information to the user, which can limit both their effectiveness and the range of applications they can serve. To address this deficiency, electrotactile feedback is used. This involves delivering haptic sensations to the user by electrically stimulating nerves in the skin via electrodes placed on the surface of the skin. This paper presents a review and explores the capability of electrotactile feedback for HCI applications. In addition, the sensory receptors within the skin for sensing tactile stimuli and electric currents are described, and several factors that influence the transmission of the electrical signal to the brain via human skin are explained.
Sakurai, Yoshio
2014-01-01
This perspective emphasizes that brain-machine interface (BMI) research has the potential to clarify major mysteries of the brain and that such clarification of the mysteries by neuroscience is needed to develop BMIs. I enumerate five principal mysteries. The first is "how is information encoded in the brain?" This is the fundamental question for understanding what our minds are and is related to the verification of Hebb's cell assembly theory. The second is "how is information distributed in the brain?" This is also a reconsideration of the functional localization of the brain. The third is "what is the function of the ongoing activity of the brain?" This is the problem of how the brain is active during no-task periods and what meaning such spontaneous activity has. The fourth is "how does bodily behavior affect brain function?" This is the problem of brain-body interaction, and obtaining a new "body" by a BMI leads to a possibility of changes in the owner's brain. The last is "to what extent can the brain induce plasticity?" Most BMIs require changes in the brain's neuronal activity to realize higher performance, and the neuronal operant conditioning inherent in BMIs further enhances changes in that activity.
Decoder calibration with ultra small current sample set for intracortical brain-machine interface
NASA Astrophysics Data System (ADS)
Zhang, Peng; Ma, Xuan; Chen, Luyao; Zhou, Jin; Wang, Changyong; Li, Wei; He, Jiping
2018-04-01
Objective. Intracortical brain-machine interfaces (iBMIs) aim to restore efficient communication and movement ability for paralyzed patients. However, frequent recalibration is required for consistency and reliability, and each recalibration requires a relatively large current sample set. The aim of this study is to develop an effective decoder calibration method that can achieve good performance while minimizing recalibration time. Approach. Two rhesus macaques implanted with intracortical microelectrode arrays were trained separately on movement and sensory paradigms. Neural signals were recorded to decode reaching positions or grasping postures. A novel principal component analysis-based domain adaptation (PDA) method was proposed to recalibrate the decoder with only an ultra-small current sample set by taking advantage of large historical data, and the decoding performance was compared with that of three other calibration methods for evaluation. Main results. The PDA method closed the gap between historical and current data effectively, making it possible to exploit large historical data for decoder recalibration in current data decoding. Using only an ultra-small current sample set (five trials of each category), the decoder calibrated with the PDA method achieved much better and more robust performance in all sessions than the three other calibration methods, in both monkeys. Significance. (1) This study brings transfer learning theory into iBMI decoder calibration for the first time. (2) Unlike most transfer learning studies, the target data in this study were an ultra-small sample set and were transferred to the source data. (3) By taking advantage of historical data, the PDA method was demonstrated to be effective in reducing recalibration time for both the movement paradigm and the sensory paradigm, indicating viable generalization. By reducing the demand for large current training data, this new method may facilitate the application of intracortical brain-machine interfaces in clinical practice.
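The abstract does not spell out the PDA algorithm itself; as a hedged illustration of the general idea, the Python sketch below aligns the PCA subspace of a large historical data set to that of a tiny current sample set (a generic subspace-alignment scheme, not necessarily the authors' exact method). With very few current trials, n_components must stay at or below the current sample count.

    import numpy as np
    from sklearn.decomposition import PCA

    def pca_subspace_align(hist_X, cur_X, n_components=10):
        """Project both data sets onto their own principal components, then map the
        historical (source) subspace onto the current (target) subspace."""
        pca_h = PCA(n_components).fit(hist_X)
        pca_c = PCA(n_components).fit(cur_X)
        M = pca_h.components_ @ pca_c.components_.T        # source-to-target alignment
        hist_aligned = (hist_X - hist_X.mean(0)) @ pca_h.components_.T @ M
        cur_proj = (cur_X - cur_X.mean(0)) @ pca_c.components_.T
        return hist_aligned, cur_proj

    # The aligned historical trials and the few current trials can now be pooled
    # to retrain a classifier (e.g. LDA or a linear SVM) over the projected features.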
An Improved Unscented Kalman Filter Based Decoder for Cortical Brain-Machine Interfaces.
Li, Simin; Li, Jie; Li, Zheng
2016-01-01
Brain-machine interfaces (BMIs) seek to connect brains with machines or computers directly, for application in areas such as prosthesis control. For this application, the accuracy of the decoding of movement intentions is crucial. We aim to improve accuracy by designing a better encoding model of primary motor cortical activity during hand movements and combining this with decoder engineering refinements, resulting in a new unscented Kalman filter based decoder, UKF2, which improves upon our previous unscented Kalman filter decoder, UKF1. The new encoding model includes novel acceleration magnitude, position-velocity interaction, and target-cursor-distance features (the decoder does not require target position as input, it is decoded). We add a novel probabilistic velocity threshold to better determine the user's intent to move. We combine these improvements with several other refinements suggested by others in the field. Data from two Rhesus monkeys indicate that the UKF2 generates offline reconstructions of hand movements (mean CC 0.851) significantly more accurately than the UKF1 (0.833) and the popular position-velocity Kalman filter (0.812). The encoding model of the UKF2 could predict the instantaneous firing rate of neurons (mean CC 0.210), given kinematic variables and past spiking, better than the encoding models of these two decoders (UKF1: 0.138, p-v Kalman: 0.098). In closed-loop experiments where each monkey controlled a computer cursor with each decoder in turn, the UKF2 facilitated faster task completion (mean 1.56 s vs. 2.05 s) and higher Fitts's Law bit rate (mean 0.738 bit/s vs. 0.584 bit/s) than the UKF1. These results suggest that the modeling and decoder engineering refinements of the UKF2 improve decoding performance. We believe they can be used to enhance other decoders as well.
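To make the general decoder structure concrete, here is a minimal position-velocity unscented Kalman filter written with the FilterPy library; it is not UKF2 itself (none of the UKF2-specific features above are included), and the bin width, neuron count, linear tuning matrix, and noise levels are placeholder assumptions.

    import numpy as np
    from filterpy.kalman import UnscentedKalmanFilter, MerweScaledSigmaPoints

    dt, n_neurons = 0.05, 30                     # assumed bin width (s) and unit count
    H = np.random.randn(n_neurons, 4) * 0.1      # assumed linear tuning: rates ~ H @ state

    def fx(x, dt):
        """Constant-velocity kinematics for the state [px, py, vx, vy]."""
        F = np.array([[1, 0, dt, 0],
                      [0, 1, 0, dt],
                      [0, 0, 1, 0],
                      [0, 0, 0, 1]], float)
        return F @ x

    def hx(x):
        """Observation model: expected firing rates given the kinematic state."""
        return H @ x

    points = MerweScaledSigmaPoints(n=4, alpha=0.1, beta=2.0, kappa=0.0)
    ukf = UnscentedKalmanFilter(dim_x=4, dim_z=n_neurons, dt=dt,
                                fx=fx, hx=hx, points=points)
    ukf.Q = np.eye(4) * 1e-3                     # process noise (assumed)
    ukf.R = np.eye(n_neurons)                    # observation noise (assumed)

    for z in np.random.poisson(5, size=(100, n_neurons)):   # placeholder spike counts
        ukf.predict()
        ukf.update(z.astype(float))
        cursor_xy = ukf.x[:2]                    # decoded cursor position for this bin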
Interfacing with the brain using organic electronics (Presentation Recording)
NASA Astrophysics Data System (ADS)
Malliaras, George G.
2015-10-01
Implantable electrodes are being used for diagnostic purposes, for brain-machine interfaces, and for delivering electrical stimulation to alleviate the symptoms of diseases such as Parkinson's. The field of organic electronics has made available devices with a unique combination of attractive properties, including mixed ionic/electronic conduction, mechanical flexibility, enhanced biocompatibility, and capability for drug delivery. I will present examples of organic electrodes, transistors and other devices for recording and stimulation of brain activity and discuss how they can improve our understanding of brain physiology and pathology, and how they can be used to deliver new therapies.
Vassanelli, Stefano; Mahmud, Mufti
2016-01-01
Future technologies aiming at restoring and enhancing organ function will intimately rely on near-physiological and energy-efficient communication between living and artificial biomimetic systems. Interfacing brain-inspired devices with the real brain is at the forefront of this emerging field, with the term "neurobiohybrids" indicating all those systems where such interaction is established. We argue that achieving "high-level" communication and functional synergy between natural and artificial neuronal networks in vivo will allow the development of a heterogeneous world of neurobiohybrids, which will include "living robots" but will also embrace "intelligent" neuroprostheses for augmentation of brain function. The societal and economic impact of intelligent neuroprostheses is likely to be strong, as they will offer novel therapeutic perspectives for a number of diseases, going beyond classical pharmaceutical schemes. However, they will unavoidably raise fundamental ethical questions on the intermingling between man and machine and, more specifically, on how deeply brain processing should be allowed to be affected by implanted "intelligent" artificial systems. Following this perspective, we provide the reader with insights into ongoing developments and trends in the field of neurobiohybrids. We also address the topic from a "community building" perspective, showing through a quantitative bibliographic analysis how scientists working on the engineering of brain-inspired devices and brain-machine interfaces are increasing their interactions. We foresee that this trend is the prelude to a formidable technological and scientific revolution in brain-machine communication and to the opening of new avenues for restoring or even augmenting brain function for therapeutic purposes.
The PennBMBI: Design of a General Purpose Wireless Brain-Machine-Brain Interface System.
Liu, Xilin; Zhang, Milin; Subei, Basheer; Richardson, Andrew G; Lucas, Timothy H; Van der Spiegel, Jan
2015-04-01
In this paper, a general purpose wireless Brain-Machine-Brain Interface (BMBI) system is presented. The system integrates four battery-powered wireless devices for the implementation of a closed-loop sensorimotor neural interface, including a neural signal analyzer, a neural stimulator, a body-area sensor node and a graphic user interface implemented on the PC end. The neural signal analyzer features a four channel analog front-end with configurable bandpass filter, gain stage, digitization resolution, and sampling rate. The target frequency band is configurable from EEG to single unit activity. A noise floor of 4.69 μVrms is achieved over a bandwidth from 0.05 Hz to 6 kHz. Digital filtering, neural feature extraction, spike detection, sensing-stimulating modulation, and compressed sensing measurement are realized in a central processing unit integrated in the analyzer. A flash memory card is also integrated in the analyzer. A 2-channel neural stimulator with a compliance voltage up to ± 12 V is included. The stimulator is capable of delivering unipolar or bipolar, charge-balanced current pulses with programmable pulse shape, amplitude, width, pulse train frequency and latency. A multi-functional sensor node, including an accelerometer, a temperature sensor, a flexiforce sensor and a general sensor extension port has been designed. A computer interface is designed to monitor, control and configure all aforementioned devices via a wireless link, according to a custom designed communication protocol. Wireless closed-loop operation between the sensory devices, neural stimulator, and neural signal analyzer can be configured. The proposed system was designed to link two sites in the brain, bridging the brain and external hardware, as well as creating new sensory and motor pathways for clinical practice. Bench test and in vivo experiments are performed to verify the functions and performances of the system.
O'Shea, Daniel J; Trautmann, Eric; Chandrasekaran, Chandramouli; Stavisky, Sergey; Kao, Jonathan C; Sahani, Maneesh; Ryu, Stephen; Deisseroth, Karl; Shenoy, Krishna V
2017-01-01
A central goal of neuroscience is to understand how populations of neurons coordinate and cooperate in order to give rise to perception, cognition, and action. Nonhuman primates (NHPs) are an attractive model with which to understand these mechanisms in humans, primarily due to the strong homology of their brains and the cognitively sophisticated behaviors they can be trained to perform. Using electrode recordings, the activity of one to a few hundred individual neurons may be measured electrically, which has enabled many scientific findings and the development of brain-machine interfaces. Despite these successes, electrophysiology samples sparsely from neural populations and provides little information about the genetic identity and spatial micro-organization of recorded neurons. These limitations have spurred the development of all-optical methods for neural circuit interrogation. Fluorescent calcium signals serve as a reporter of neuronal responses, and when combined with post-mortem optical clearing techniques such as CLARITY, provide dense recordings of neuronal populations, spatially organized and annotated with genetic and anatomical information. Here, we advocate that this methodology, which has been of tremendous utility in smaller animal models, can and should be developed for use with NHPs. We review here several of the key opportunities and challenges for calcium-based optical imaging in NHPs. We focus on motor neuroscience and brain-machine interface design as representative domains of opportunity within the larger field of NHP neuroscience. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.
Antoniadou, Eleni V; Ahmad, Rezal K; Jackman, Richard B; Seifalian, Alexander M
2011-01-01
Composite materials based on the coupling of conductive organic polymers and carbon nanotubes have been shown to possess the properties of the individual components with a synergistic effect. Multi-wall carbon nanotube (MWCNT)/polymer composites are hybrid materials that combine numerous mechanical, electrical and chemical properties and thus constitute ideal biomaterials for a wide range of regenerative medicine applications. Although complete dispersion of CNTs in a polymer matrix has rarely been achieved, in this study we attained high dispersibility of CNTs in POSS-PCU and POSS-PCL, novel polymers based on polycaprolactone and polycarbonate polyurethane (PCU) and poly(caprolactone-urea)urethane, both incorporating polyhedral oligomeric silsesquioxane (POSS). We report the synthesis and characterization of a novel biomaterial that possesses the unique properties of being electrically conducting and thus of being capable of electronic interfacing with tissue. To this end, the POSS-PCU/MWCNT composite can be used as a biomaterial for the development of nerve guidance channels to promote nerve regeneration, and POSS-PCL/MWCNT as a substrate to increase electronic interfacing between neurons and micro-machined electrodes for potential applications in neural probes, prosthetic devices and brain implants.
Applications of Deep Learning and Reinforcement Learning to Biological Data.
Mahmud, Mufti; Kaiser, Mohammed Shamim; Hussain, Amir; Vassanelli, Stefano
2018-06-01
Rapid advances in hardware-based technologies during the past decades have opened up new possibilities for life scientists to gather multimodal data in various application domains, such as omics, bioimaging, medical imaging, and (brain/body)-machine interfaces. These have generated novel opportunities for development of dedicated data-intensive machine learning techniques. In particular, recent research in deep learning (DL), reinforcement learning (RL), and their combination (deep RL) promise to revolutionize the future of artificial intelligence. The growth in computational power accompanied by faster and increased data storage, and declining computing costs have already allowed scientists in various fields to apply these techniques on data sets that were previously intractable owing to their size and complexity. This paper provides a comprehensive survey on the application of DL, RL, and deep RL techniques in mining biological data. In addition, we compare the performances of DL techniques when applied to different data sets across various application domains. Finally, we outline open issues in this challenging research area and discuss future development perspectives.
Techniques and applications for binaural sound manipulation in human-machine interfaces
NASA Technical Reports Server (NTRS)
Begault, Durand R.; Wenzel, Elizabeth M.
1990-01-01
The implementation of binaural sound to speech and auditory sound cues (auditory icons) is addressed from both an applications and technical standpoint. Techniques overviewed include processing by means of filtering with head-related transfer functions. Application to advanced cockpit human interface systems is discussed, although the techniques are extendable to any human-machine interface. Research issues pertaining to three-dimensional sound displays under investigation at the Aerospace Human Factors Division at NASA Ames Research Center are described.
Techniques and applications for binaural sound manipulation in human-machine interfaces
NASA Technical Reports Server (NTRS)
Begault, Durand R.; Wenzel, Elizabeth M.
1992-01-01
The implementation of binaural sound to speech and auditory sound cues (auditory icons) is addressed from both an applications and technical standpoint. Techniques overviewed include processing by means of filtering with head-related transfer functions. Application to advanced cockpit human interface systems is discussed, although the techniques are extendable to any human-machine interface. Research issues pertaining to three-dimensional sound displays under investigation at the Aerospace Human Factors Division at NASA Ames Research Center are described.
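For readers unfamiliar with the head-related transfer function processing mentioned above, the short Python sketch below convolves a mono cue with a pair of head-related impulse responses (HRIRs) to produce a binaural signal; the HRIRs here are random placeholders standing in for measured responses.

    import numpy as np
    from scipy.signal import fftconvolve

    def spatialize(mono, hrir_left, hrir_right):
        """Render a mono cue at the direction encoded by a left/right HRIR pair."""
        return np.stack([fftconvolve(mono, hrir_left),
                         fftconvolve(mono, hrir_right)], axis=1)

    fs = 44100
    tone = np.sin(2 * np.pi * 1000 * np.arange(fs) / fs)    # 1 s, 1 kHz cue
    hrir_l = np.random.randn(128) * 0.01                    # stand-in for a measured left-ear HRIR
    hrir_r = np.random.randn(128) * 0.01                    # stand-in for a measured right-ear HRIR
    binaural = spatialize(tone, hrir_l, hrir_r)             # two-channel (binaural) output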
A chronic generalized bi-directional brain-machine interface.
Rouse, A G; Stanslaski, S R; Cong, P; Jensen, R M; Afshar, P; Ullestad, D; Gupta, R; Molnar, G F; Moran, D W; Denison, T J
2011-06-01
A bi-directional neural interface (NI) system was designed and prototyped by incorporating a novel neural recording and processing subsystem into a commercial neural stimulator architecture. The NI system prototype leverages the system infrastructure from an existing neurostimulator to ensure reliable operation in a chronic implantation environment. In addition to providing predicate therapy capabilities, the device adds key elements to facilitate chronic research, such as four channels of electrocorticogram/local field potential amplification and spectral analysis, a three-axis accelerometer, algorithm processing, event-based data logging, and wireless telemetry for data uploads and algorithm/configuration updates. The custom-integrated micropower sensor and interface circuits facilitate extended operation in a power-limited device. The prototype underwent significant verification testing to ensure reliability, and meets the requirements for a class CF instrument per IEC-60601 protocols. The ability of the device system to process and aid in classifying brain states was preclinically validated using an in vivo non-human primate model for brain control of a computer cursor (i.e. brain-machine interface or BMI). The primate BMI model was chosen for its ability to quantitatively measure signal decoding performance from brain activity that is similar in both amplitude and spectral content to other biomarkers used to detect disease states (e.g. Parkinson's disease). A key goal of this research prototype is to help broaden the clinical scope and acceptance of NI techniques, particularly real-time brain state detection. These techniques have the potential to be generalized beyond motor prosthesis, and are being explored for unmet needs in other neurological conditions such as movement disorders, stroke and epilepsy.
Prosthetic EMG control enhancement through the application of man-machine principles
NASA Technical Reports Server (NTRS)
Simcox, W. A.
1977-01-01
An area in medicine that appears suitable for man-machine principles is rehabilitation research, particularly when the motor aspects of the body are involved. If one considers the limb, whether functional or not, as the machine, the brain as the controller and the neuromuscular system as the man-machine interface, the human body is reduced to a man-machine system that can benefit from the principles behind such systems. The area of rehabilitation that this paper deals with is that of an arm amputee and his prosthetic device. Reducing this area to its man-machine basics, the problem becomes one of attaining natural multiaxis prosthetic control using electromyographic activity (EMG) as the means of communication between man and prosthesis. In order to use EMG as the communication channel it must be amplified and processed to yield a high-information signal suitable for control. The most common processing scheme employed is termed Mean Value Processing. This technique for extracting the useful EMG signal consists of a differential-to-single-ended conversion of the surface activity, followed by rectification and smoothing.
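As a rough illustration of the Mean Value Processing chain described above (amplified EMG, rectification, smoothing), here is a short Python sketch; the band edges, filter orders, and smoothing cut-off are assumptions, not values from the report.

    import numpy as np
    from scipy.signal import butter, filtfilt

    def mean_value_process(emg, fs=1000, band=(20.0, 450.0), smooth_hz=3.0):
        """Bandpass the (already differentially amplified) surface EMG, full-wave
        rectify it, and low-pass the result to obtain a smooth control envelope."""
        b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
        rectified = np.abs(filtfilt(b, a, emg))               # full-wave rectification
        b_lp, a_lp = butter(2, smooth_hz / (fs / 2))          # smoothing low-pass
        return filtfilt(b_lp, a_lp, rectified)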
Single-trial dynamics of motor cortex and their applications to brain-machine interfaces
Kao, Jonathan C.; Nuyujukian, Paul; Ryu, Stephen I.; Churchland, Mark M.; Cunningham, John P.; Shenoy, Krishna V.
2015-01-01
Increasing evidence suggests that neural population responses have their own internal drive, or dynamics, that describe how the neural population evolves through time. An important prediction of neural dynamical models is that previously observed neural activity is informative of noisy yet-to-be-observed activity on single-trials, and may thus have a denoising effect. To investigate this prediction, we built and characterized dynamical models of single-trial motor cortical activity. We find these models capture salient dynamical features of the neural population and are informative of future neural activity on single trials. To assess how neural dynamics may beneficially denoise single-trial neural activity, we incorporate neural dynamics into a brain–machine interface (BMI). In online experiments, we find that a neural dynamical BMI achieves substantially higher performance than its non-dynamical counterpart. These results provide evidence that neural dynamics beneficially inform the temporal evolution of neural activity on single trials and may directly impact the performance of BMIs. PMID:26220660
Research interface on a programmable ultrasound scanner.
Shamdasani, Vijay; Bae, Unmin; Sikdar, Siddhartha; Yoo, Yang Mo; Karadayi, Kerem; Managuli, Ravi; Kim, Yongmin
2008-07-01
Commercial ultrasound machines in the past did not provide the ultrasound researchers access to raw ultrasound data. Lack of this ability has impeded evaluation and clinical testing of novel ultrasound algorithms and applications. Recently, we developed a flexible ultrasound back-end where all the processing for the conventional ultrasound modes, such as B, M, color flow and spectral Doppler, was performed in software. The back-end has been incorporated into a commercial ultrasound machine, the Hitachi HiVision 5500. The goal of this work is to develop an ultrasound research interface on the back-end for acquiring raw ultrasound data from the machine. The research interface has been designed as a software module on the ultrasound back-end. To increase the amount of raw ultrasound data that can be spooled in the limited memory available on the back-end, we have developed a method that can losslessly compress the ultrasound data in real time. The raw ultrasound data could be obtained in any conventional ultrasound mode, including duplex and triplex modes. Furthermore, use of the research interface does not decrease the frame rate or otherwise affect the clinical usability of the machine. The lossless compression of the ultrasound data in real time can increase the amount of data spooled by approximately 2.3 times, thus allowing more than 6s of raw ultrasound data to be acquired in all the modes. The interface has been used not only for early testing of new ideas with in vitro data from phantoms, but also for acquiring in vivo data for fine-tuning ultrasound applications and conducting clinical studies. We present several examples of how newer ultrasound applications, such as elastography, vibration imaging and 3D imaging, have benefited from this research interface. Since the research interface is entirely implemented in software, it can be deployed on existing HiVision 5500 ultrasound machines and may be easily upgraded in the future. The developed research interface can aid researchers in the rapid testing and clinical evaluation of new ultrasound algorithms and applications. Additionally, we believe that our approach would be applicable to designing research interfaces on other ultrasound machines.
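The paper's in-house real-time compression algorithm is not described here; as a generic illustration of how lossless spooling of raw ultrasound frames can work, the Python sketch below delta-encodes each frame along the axial axis and deflates the residuals, which is exactly reversible.

    import numpy as np
    import zlib

    def compress_frame(frame):
        """Delta-encode a 2-D frame of raw samples along axis 0 and deflate the residuals."""
        f = frame.astype(np.int32)
        deltas = np.diff(f, axis=0, prepend=np.zeros((1, f.shape[1]), np.int32))
        return zlib.compress(deltas.tobytes(), level=1)       # fast setting for real-time use

    def decompress_frame(blob, shape):
        deltas = np.frombuffer(zlib.decompress(blob), dtype=np.int32).reshape(shape)
        return np.cumsum(deltas, axis=0)                      # exactly reverses the delta encoding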
Assisted navigation based on shared-control, using discrete and sparse human-machine interfaces.
Lopes, Ana C; Nunes, Urbano; Vaz, Luís
2010-01-01
This paper presents a shared-control approach for Assistive Mobile Robots (AMR), which depends on the user's ability to navigate a semi-autonomous powered wheelchair using a sparse and discrete human-machine interface (HMI). This system is primarily intended to help users with severe motor disabilities that prevent them from using standard human-machine interfaces. Scanning interfaces and Brain Computer Interfaces (BCI), characterized by providing a small set of sparsely issued commands, are possible HMIs. This shared-control approach is intended to be applied in an Assisted Navigation Training Framework (ANTF) that is used to train users' ability to steer a powered wheelchair in an appropriate manner, given the restrictions imposed by their limited motor capabilities. A shared controller based on user characterization is proposed. This controller is able to share the information provided by the local motion-planning level with the commands issued sparsely by the user. Simulation results of the proposed shared-control method are presented.
A Brain-Machine Interface Based on ERD/ERS for an Upper-Limb Exoskeleton Control.
Tang, Zhichuan; Sun, Shouqian; Zhang, Sanyuan; Chen, Yumiao; Li, Chao; Chen, Shi
2016-12-02
To recognize the user's motion intention, brain-machine interfaces (BMI) usually decode movements from cortical activity to control exoskeletons and neuroprostheses for daily activities. The aim of this paper is to investigate whether self-induced variations of the electroencephalogram (EEG) can be useful as control signals for an upper-limb exoskeleton developed by us. A BMI based on event-related desynchronization/synchronization (ERD/ERS) is proposed. In the decoder-training phase, we investigate the offline classification performance of left versus right hand and of left hand versus both feet by using motor execution (ME) or motor imagery (MI). The results indicate that the accuracies of ME sessions are higher than those of MI sessions, and that the left-hand-versus-both-feet paradigm achieves better classification performance, so it was used in the online-control phase. In the online-control phase, the trained decoder is tested in two scenarios (wearing or not wearing the exoskeleton). The MI and ME sessions wearing the exoskeleton achieve mean classification accuracies of 84.29% ± 2.11% and 87.37% ± 3.06%, respectively. The present study demonstrates that the proposed BMI is effective for controlling the upper-limb exoskeleton, and provides a practical non-invasive, EEG-based method associated with natural human behavior for clinical applications.
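For reference, the ERD/ERS quantity exploited above is simply the percentage change in band power relative to a rest baseline; a minimal Python sketch follows, with the mu band, sampling rate, and baseline window chosen as assumptions.

    import numpy as np
    from scipy.signal import butter, filtfilt

    def erd_ers_percent(eeg, fs=250, band=(8.0, 13.0), baseline_samples=500):
        """Percentage band-power change relative to a rest baseline: negative values
        indicate desynchronization (ERD), positive values synchronization (ERS)."""
        b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
        power = filtfilt(b, a, eeg) ** 2                      # instantaneous band power
        ref = power[:baseline_samples].mean()                 # rest-period reference (first 2 s here)
        return 100.0 * (power - ref) / ref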
Marsh, Brandi T; Tarigoppula, Venkata S Aditya; Chen, Chen; Francis, Joseph T
2015-05-13
For decades, neurophysiologists have worked on elucidating the function of the cortical sensorimotor control system from the standpoint of kinematics or dynamics. Recently, computational neuroscientists have developed models that can emulate changes seen in the primary motor cortex during learning. However, these simulations rely on the existence of a reward-like signal in the primary sensorimotor cortex. Reward modulation of the primary sensorimotor cortex has yet to be characterized at the level of neural units. Here we demonstrate that single units/multiunits and local field potentials in the primary motor (M1) cortex of nonhuman primates (Macaca radiata) are modulated by reward expectation during reaching movements and that this modulation is present even while subjects passively view cursor motions that are predictive of either reward or nonreward. After establishing this reward modulation, we set out to determine whether we could correctly classify rewarding versus nonrewarding trials, on a moment-to-moment basis. This reward information could then be used in collaboration with reinforcement learning principles toward an autonomous brain-machine interface. The autonomous brain-machine interface would use M1 for both decoding movement intention and extraction of reward expectation information as evaluative feedback, which would then update the decoding algorithm as necessary. In the work presented here, we show that this, in theory, is possible. Copyright © 2015 the authors.
Flexible Neural Electrode Array Based-on Porous Graphene for Cortical Microstimulation and Sensing
NASA Astrophysics Data System (ADS)
Lu, Yichen; Lyu, Hongming; Richardson, Andrew G.; Lucas, Timothy H.; Kuzum, Duygu
2016-09-01
Neural sensing and stimulation have been the backbone of neuroscience research, brain-machine interfaces and clinical neuromodulation therapies for decades. To date, most neural stimulation systems have relied on sharp metal microelectrodes with poor electrochemical properties that induce extensive damage to the tissue and significantly degrade the long-term stability of implantable systems. Here, we demonstrate a flexible cortical microelectrode array based on porous graphene, which is capable of efficient electrophysiological sensing and stimulation from the brain surface, without penetrating into the tissue. Porous graphene electrodes show superior impedance and charge injection characteristics, making them ideal for high-efficiency cortical sensing and stimulation. They exhibit no physical delamination or degradation even after 1 million biphasic stimulation cycles, confirming high endurance. In in vivo experiments with rodents, the same array is used to sense brain activity patterns with high spatio-temporal resolution and to control leg muscles with high-precision electrical stimulation from the cortical surface. The flexible porous graphene array offers a minimally invasive but high-efficiency neuromodulation scheme with potential applications in cortical mapping, brain-computer interfaces, and the treatment of neurological disorders, where high resolution and simultaneous recording and stimulation of neural activity are crucial.
Grissmann, Sebastian; Zander, Thorsten O; Faller, Josef; Brönstrup, Jonas; Kelava, Augustin; Gramann, Klaus; Gerjets, Peter
2017-01-01
Most brain-computer interfaces (BCIs) focus on detecting single aspects of user states (e.g., motor imagery) in the electroencephalogram (EEG) in order to use these aspects as control input for external systems. This communication can be effective, but unaccounted mental processes can interfere with signals used for classification and thereby introduce changes in the signal properties which could potentially impede BCI classification performance. To improve BCI performance, we propose deploying an approach that could potentially describe different mental states influencing BCI performance. To test this approach, we analyzed neural signatures of potential affective states in data collected in a paradigm where the complex user state of perceived loss of control (LOC) was induced. In this article, source localization methods were used to identify brain dynamics with sources located outside the primary motor areas but affecting the signal of interest originating there, pointing to interfering processes in the brain during natural human-machine interaction. In particular, we found affective correlates which were related to perceived LOC. We conclude that additional context information about the ongoing user state might help to improve the applicability of BCIs to real-world scenarios.
Samwald, Matthias; Lim, Ernest; Masiar, Peter; Marenco, Luis; Chen, Huajun; Morse, Thomas; Mutalik, Pradeep; Shepherd, Gordon; Miller, Perry; Cheung, Kei-Hoi
2009-01-01
The amount of biomedical data available in Semantic Web formats has been rapidly growing in recent years. While these formats are machine-friendly, user-friendly web interfaces allowing easy querying of these data are typically lacking. We present "Entrez Neuron", a pilot neuron-centric interface that allows for keyword-based queries against a coherent repository of OWL ontologies. These ontologies describe neuronal structures, physiology, mathematical models and microscopy images. The returned query results are organized hierarchically according to brain architecture. Where possible, the application makes use of entities from the Open Biomedical Ontologies (OBO) and the 'HCLS knowledgebase' developed by the W3C Interest Group for Health Care and Life Science. It makes use of the emerging RDFa standard to embed ontology fragments and semantic annotations within its HTML-based user interface. The application and underlying ontologies demonstrate how Semantic Web technologies can be used for information integration within a curated information repository and between curated information repositories. It also demonstrates how information integration can be accomplished on the client side, through simple copying and pasting of portions of documents that contain RDFa markup.
The Cybathlon BCI race: Successful longitudinal mutual learning with two tetraplegic users.
Perdikis, Serafeim; Tonin, Luca; Saeedi, Sareh; Schneider, Christoph; Millán, José Del R
2018-05-01
This work aims at corroborating the importance and efficacy of mutual learning in motor imagery (MI) brain-computer interface (BCI) by leveraging the insights obtained through our participation in the BCI race of the Cybathlon event. We hypothesized that, contrary to the popular trend of focusing mostly on the machine learning aspects of MI BCI training, a comprehensive mutual learning methodology that reinstates the three learning pillars (at the machine, subject, and application level) as equally significant could lead to a BCI-user symbiotic system able to succeed in real-world scenarios such as the Cybathlon event. Two severely impaired participants with chronic spinal cord injury (SCI), were trained following our mutual learning approach to control their avatar in a virtual BCI race game. The competition outcomes substantiate the effectiveness of this type of training. Most importantly, the present study is one among very few to provide multifaceted evidence on the efficacy of subject learning during BCI training. Learning correlates could be derived at all levels of the interface-application, BCI output, and electroencephalography (EEG) neuroimaging-with two end-users, sufficiently longitudinal evaluation, and, importantly, under real-world and even adverse conditions.
Bidirectional neural interface: Closed-loop feedback control for hybrid neural systems.
Chou, Zane; Lim, Jeffrey; Brown, Sophie; Keller, Melissa; Bugbee, Joseph; Broccard, Frédéric D; Khraiche, Massoud L; Silva, Gabriel A; Cauwenberghs, Gert
2015-01-01
Closed-loop neural prostheses enable bidirectional communication between the biological and artificial components of a hybrid system. However, a major challenge in this field is the limited understanding of how these components, the two separate neural networks, interact with each other. In this paper, we propose an in vitro model of a closed-loop system that allows for easy experimental testing and modification of both biological and artificial network parameters. The interface closes the system loop in real time by stimulating each network based on recorded activity of the other network, within preset parameters. As a proof of concept we demonstrate that the bidirectional interface is able to establish and control network properties, such as synchrony, in a hybrid system of two neural networks significantly more effectively than the same system without the interface or with unidirectional alternatives. This success holds promise for the application of closed-loop systems in neural prostheses, brain-machine interfaces, and drug testing.
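The simplest form of such a closed loop is an activity-dependent trigger; the Python sketch below illustrates the idea with placeholder record/stimulate hooks and an assumed threshold, and is not the authors' interface.

    def closed_loop_step(record_a, stimulate_b, threshold_hz=20.0, bin_s=0.1):
        """One loop iteration: if the firing rate recorded from network A exceeds a
        preset threshold, deliver a stimulus to network B (hooks are placeholders)."""
        rate = record_a(bin_s) / bin_s        # spikes from network A in this bin -> rate
        if rate > threshold_hz:
            stimulate_b()                     # e.g. a charge-balanced biphasic pulse
        return rate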
A comparison of optimal MIMO linear and nonlinear models for brain machine interfaces
NASA Astrophysics Data System (ADS)
Kim, S.-P.; Sanchez, J. C.; Rao, Y. N.; Erdogmus, D.; Carmena, J. M.; Lebedev, M. A.; Nicolelis, M. A. L.; Principe, J. C.
2006-06-01
The field of brain-machine interfaces requires the estimation of a mapping from spike trains collected in motor cortex areas to the hand kinematics of the behaving animal. This paper presents a systematic investigation of several linear (Wiener filter, LMS adaptive filters, gamma filter, subspace Wiener filters) and nonlinear models (time-delay neural network and local linear switching models) applied to datasets from two experiments in monkeys performing motor tasks (reaching for food and target hitting). Ensembles of 100-200 cortical neurons were simultaneously recorded in these experiments, and even larger neuronal samples are anticipated in the future. Due to the large size of the models (thousands of parameters), the major issue studied was the generalization performance. Every parameter of the models (not only the weights) was selected optimally using signal processing and machine learning techniques. The models were also compared statistically with respect to the Wiener filter as the baseline. Each of the optimization procedures produced improvements over that baseline for either one of the two datasets or both.
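As a point of reference for the baseline named above, a Wiener-filter decoder is just a regularized least-squares map from lagged firing rates to kinematics; the Python sketch below shows this, with the lag count and ridge term as assumptions.

    import numpy as np

    def lagged_design(rates, n_lags):
        T = rates.shape[0]
        X = np.hstack([rates[i:T - n_lags + i] for i in range(n_lags)])   # past n_lags bins per row
        return np.hstack([X, np.ones((X.shape[0], 1))])                   # bias column

    def fit_wiener(rates, kin, n_lags=10, ridge=1e-3):
        """Least-squares mapping from the preceding n_lags bins of firing rates to kinematics."""
        X, Y = lagged_design(rates, n_lags), kin[n_lags:]
        return np.linalg.solve(X.T @ X + ridge * np.eye(X.shape[1]), X.T @ Y)

    def predict_wiener(rates, W, n_lags=10):
        return lagged_design(rates, n_lags) @ W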
Ogawa, Takeshi; Hirayama, Jun-Ichiro; Gupta, Pankaj; Moriya, Hiroki; Yamaguchi, Shumpei; Ishikawa, Akihiro; Inoue, Yoshihiro; Kawanabe, Motoaki; Ishii, Shin
2015-08-01
Smart houses for elderly or physically challenged people need a method to understand residents' intentions during their daily-living behaviors. To explore a new possibility, we developed a novel brain-machine interface (BMI) system integrated with an experimental smart house, based on a prototype of a wearable near-infrared spectroscopy (NIRS) device, and verified the system in a specific task of controlling the house's equipment with the BMI. We recorded NIRS signals of three participants during typical daily-living actions (DLAs), and classified them with a linear support vector machine. In our off-line analysis, four DLAs were classified at about 70% mean accuracy, significantly above the chance level of 25%, in every participant. In an online demonstration in the real smart house, one participant successfully controlled three target appliances by BMI at 81.3% accuracy. Thus we successfully demonstrated the feasibility of using NIRS-BMI in real smart houses, which may enhance new assistive smart-home technologies.
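The classification step described above reduces to fitting a linear SVM on per-trial NIRS features; a minimal scikit-learn sketch follows, with the feature matrix, trial counts, and channel count as placeholder assumptions.

    import numpy as np
    from sklearn.svm import SVC
    from sklearn.model_selection import cross_val_score

    X = np.random.randn(80, 22)          # placeholder: 80 trials x 22 channel features (e.g. mean oxy-Hb change)
    y = np.repeat(np.arange(4), 20)      # four daily-living actions, 20 trials each

    clf = SVC(kernel="linear", C=1.0)
    acc = cross_val_score(clf, X, y, cv=5).mean()   # chance level for four classes is 25%
    print(f"cross-validated accuracy: {acc:.2f}")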
Use of parallel computing for analyzing big data in EEG studies of ambiguous perception
NASA Astrophysics Data System (ADS)
Maksimenko, Vladimir A.; Grubov, Vadim V.; Kirsanov, Daniil V.
2018-02-01
The problem of interaction between humans and machine systems through neuro-interfaces (or brain-computer interfaces) is an urgent task which requires the analysis of large amounts of neurophysiological EEG data. In the present paper we consider the methods of parallel computing as one of the most powerful tools for processing experimental data in real time, given the multichannel structure of EEG. In this context we demonstrate the application of parallel computing to the estimation of the spectral properties of multichannel EEG signals associated with visual perception. Using the CUDA C library we run a wavelet-based algorithm on GPUs and show the possibility of detecting specific patterns in a multichannel set of EEG data in real time.
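The per-channel computation being parallelized is a continuous wavelet transform; the Python sketch below shows a CPU stand-in (PyWavelets for the Morlet transform, a process pool in place of the CUDA kernels), with the frequency grid and sampling rate assumed.

    import numpy as np
    import pywt
    from concurrent.futures import ProcessPoolExecutor

    def channel_wavelet_power(signal, fs=250.0, freqs=np.arange(4.0, 40.0, 1.0)):
        """Time-frequency power of one EEG channel via a continuous Morlet wavelet transform."""
        scales = pywt.central_frequency("morl") * fs / freqs      # convert frequencies to scales
        coef, _ = pywt.cwt(signal, scales, "morl", sampling_period=1.0 / fs)
        return np.abs(coef) ** 2                                   # shape (n_freqs, n_samples)

    def multichannel_power(channels):
        """Process the channels of a multichannel recording in parallel."""
        with ProcessPoolExecutor() as pool:
            return list(pool.map(channel_wavelet_power, channels))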
Classification of change detection and change blindness from near-infrared spectroscopy signals
NASA Astrophysics Data System (ADS)
Tanaka, Hirokazu; Katura, Takusige
2011-08-01
Using a machine-learning classification algorithm applied to near-infrared spectroscopy (NIRS) signals, we classify a success (change detection) or a failure (change blindness) in detecting visual changes for a change-detection task. Five subjects perform a change-detection task, and their brain activities are continuously monitored. A support-vector-machine algorithm is applied to classify the change-detection and change-blindness trials, and correct classification probability of 70-90% is obtained for four subjects. Two types of temporal shapes in classification probabilities are found: one exhibiting a maximum value after the task is completed (postdictive type), and another exhibiting a maximum value during the task (predictive type). As for the postdictive type, the classification probability begins to increase immediately after the task completion and reaches its maximum in about the time scale of neuronal hemodynamic response, reflecting a subjective report of change detection. As for the predictive type, the classification probability shows an increase at the task initiation and is maximal while subjects are performing the task, predicting the task performance in detecting a change. We conclude that decoding change detection and change blindness from NIRS signal is possible and argue some future applications toward brain-machine interfaces.
NASA Astrophysics Data System (ADS)
Abbott, W. W.; Faisal, A. A.
2012-08-01
Eye movements are highly correlated with motor intentions and are often retained by patients with serious motor deficiencies. Despite this, eye tracking is not widely used as control interface for movement in impaired patients due to poor signal interpretation and lack of control flexibility. We propose that tracking the gaze position in 3D rather than 2D provides a considerably richer signal for human machine interfaces by allowing direct interaction with the environment rather than via computer displays. We demonstrate here that by using mass-produced video-game hardware, it is possible to produce an ultra-low-cost binocular eye-tracker with comparable performance to commercial systems, yet 800 times cheaper. Our head-mounted system has 30 USD material costs and operates at over 120 Hz sampling rate with a 0.5-1 degree of visual angle resolution. We perform 2D and 3D gaze estimation, controlling a real-time volumetric cursor essential for driving complex user interfaces. Our approach yields an information throughput of 43 bits s-1, more than ten times that of invasive and semi-invasive brain-machine interfaces (BMIs) that are vastly more expensive. Unlike many BMIs our system yields effective real-time closed loop control of devices (10 ms latency), after just ten minutes of training, which we demonstrate through a novel BMI benchmark—the control of the video arcade game ‘Pong’.
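For context on the throughput figure quoted above, a Fitts-style bit rate is computed from target distance, target width, and movement time; the numbers in the sketch below are purely illustrative and are not the authors' 43 bits per second derivation.

    import numpy as np

    def fitts_throughput(distance, width, movement_time):
        """Index of difficulty ID = log2(D/W + 1); throughput = ID / MT, in bits per second."""
        return np.log2(distance / width + 1.0) / movement_time

    # Illustrative only: a 20 cm reach to a 2 cm target completed in 0.8 s -> about 4.3 bit/s.
    print(fitts_throughput(distance=0.20, width=0.02, movement_time=0.8))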
Classifying BCI signals from novice users with extreme learning machine
NASA Astrophysics Data System (ADS)
Rodríguez-Bermúdez, Germán; Bueno-Crespo, Andrés; José Martinez-Albaladejo, F.
2017-07-01
A brain-computer interface (BCI) allows external devices to be controlled using only the electrical activity of the brain. In order to improve such systems, several approaches have been proposed. However, algorithms are usually tested with standard BCI signals from expert users or from repositories available on the Internet. In this work, an extreme learning machine (ELM) has been tested with signals from five novice users and compared with standard classification algorithms. Experimental results show that ELM is a suitable method for classifying electroencephalogram signals from novice users.
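Since the ELM itself is only a few lines of linear algebra, a self-contained Python sketch is given below (random hidden layer, closed-form output weights); the hidden-layer size and regularization are assumptions, and this is a generic ELM rather than the authors' exact configuration.

    import numpy as np

    class ELM:
        """Single-hidden-layer extreme learning machine: random input weights and biases,
        tanh hidden units, output weights solved by regularized least squares."""
        def __init__(self, n_hidden=100, reg=1e-3, seed=0):
            self.n_hidden, self.reg = n_hidden, reg
            self.rng = np.random.default_rng(seed)

        def _hidden(self, X):
            return np.tanh(X @ self.W + self.b)

        def fit(self, X, y):
            self.W = self.rng.normal(size=(X.shape[1], self.n_hidden))
            self.b = self.rng.normal(size=self.n_hidden)
            H = self._hidden(X)
            T = np.eye(int(y.max()) + 1)[y]                  # one-hot class targets
            self.beta = np.linalg.solve(H.T @ H + self.reg * np.eye(self.n_hidden), H.T @ T)
            return self

        def predict(self, X):
            return np.argmax(self._hidden(X) @ self.beta, axis=1)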
Matsushita, Kojiro; Hirata, Masayuki; Suzuki, Takafumi; Ando, Hiroshi; Ota, Yuki; Sato, Fumihiro; Morris, Shyne; Yoshida, Takeshi; Matsuki, Hidetoshi; Yoshimine, Toshiki
2013-01-01
A Brain-Machine Interface (BMI) is a system that infers the user's intention by analyzing the user's brain activity and controls devices according to that inferred intention. It is considered one of the prospective tools for enhancing paralyzed patients' quality of life. In our group, we especially focus on ECoG (electrocorticogram)-based BMI, which requires surgery to place electrodes on the cortex. We aim to implant all the devices within the patient's head and abdomen and to transmit the data and power wirelessly. Our device consists of 5 parts: (1) high-density multi-electrodes with a 3D-shaped sheet fitting the individual brain surface to effectively record the ECoG signals; (2) a small circuit board with two integrated-circuit chips providing 128 channels of analogue amplification and A/D conversion for the ECoG signals; (3) a Wi-Fi data communication and control circuit linking to the target PC; (4) a non-contact power supply transmitting at least 400 mW of electrical power to the device from 20 mm away. We developed these devices, integrated them, and investigated their performance.
Spatiotemporal source tuning filter bank for multiclass EEG based brain computer interfaces.
Acharya, Soumyadipta; Mollazadeh, Moshen; Murari, Kartikeya; Thakor, Nitish
2006-01-01
Non-invasive brain-computer interfaces (BCI) allow people to communicate by modulating features of their electroencephalogram (EEG). Spatiotemporal filtering has a vital role in multi-class, EEG-based BCI. In this study, we used a novel combination of principal component analysis, independent component analysis and dipole source localization to design a spatiotemporal multiple source tuning (SPAMSORT) filter bank, each channel of which was tuned to the activity of an underlying dipole source. Changes in the event-related spectral perturbation (ERSP) were measured and used to train a linear support vector machine to classify four classes of motor imagery tasks (left hand, right hand, foot and tongue) for one subject. ERSP values were significantly (p<0.01) different across tasks and better (p<0.01) than those obtained with conventional spatial filtering methods (large Laplacian and common average reference). Classification resulted in an average accuracy of 82.5%. This approach could lead to promising BCI applications such as control of a prosthesis with multiple degrees of freedom.
Towards a symbiotic brain-computer interface: exploring the application-decoder interaction
NASA Astrophysics Data System (ADS)
Verhoeven, T.; Buteneers, P.; Wiersema, J. R.; Dambre, J.; Kindermans, P. J.
2015-12-01
Objective. State-of-the-art brain-computer interface (BCI) research focuses on improving individual components such as the application or the decoder that converts the user’s brain activity to control signals. In this study, we investigate the interaction between these components in the P300 speller, a BCI for communication. We introduce a synergistic approach in which the stimulus presentation sequence is modified to enhance the machine-learning decoding. In this way we aim for improved overall BCI performance. Approach. First, a new stimulus presentation paradigm is introduced which provides flexibility in tuning the sequence of visual stimuli presented to the user. Next, an experimental setup in which this paradigm is compared to other paradigms uncovers the underlying mechanism of the interdependence between the application and the performance of the decoder. Main results. Extensive analysis of the experimental results reveals the changing requirements of the decoder concerning the data recorded during the spelling session. When little data has been recorded, the balance between the numbers of target and non-target stimuli shown to the user is more important than the signal-to-noise ratio (SNR) of the recorded response signals. Only when more data has been collected does the SNR become the dominant factor. Significance. For BCIs in general, knowing the dominant factor that affects decoder performance and being able to respond to it is of utmost importance for improving system performance. For the P300 speller, the proposed tunable paradigm offers the possibility to tune the application to the decoder’s needs at any time and, as such, fully exploit this application-decoder interaction.
Human facial neural activities and gesture recognition for machine-interfacing applications.
Hamedi, M; Salleh, Sh-Hussain; Tan, T S; Ismail, K; Ali, J; Dee-Uam, C; Pavaganun, C; Yupapin, P P
2011-01-01
The authors present a new method of recognizing different human facial gestures through their neural activities and muscle movements, which can be used in machine-interfacing applications. Human-machine interface (HMI) technology utilizes human neural activities as input controllers for the machine. Recently, much work has been done on the specific application of facial electromyography (EMG)-based HMIs, which have used limited and fixed numbers of facial gestures. In this work, a multipurpose interface is suggested that can support 2-11 control commands and can be applied to various HMI systems. The significance of this work is finding the most accurate facial gestures for any application with a maximum of eleven control commands. Eleven facial gesture EMGs are recorded from ten volunteers. Detected EMGs are passed through a band-pass filter and root mean square features are extracted. Various combinations of gestures, with a different number of gestures in each group, are made from the existing facial gestures. All combinations are then trained and classified by a fuzzy c-means classifier, and the combinations with the highest recognition accuracy in each group are chosen. An average accuracy above 90% for the chosen combinations demonstrates their suitability as command controllers.
Craniux: A LabVIEW-Based Modular Software Framework for Brain-Machine Interface Research
Degenhart, Alan D.; Kelly, John W.; Ashmore, Robin C.; Collinger, Jennifer L.; Tyler-Kabara, Elizabeth C.; Weber, Douglas J.; Wang, Wei
2011-01-01
This paper presents “Craniux,” an open-access, open-source software framework for brain-machine interface (BMI) research. Developed in LabVIEW, a high-level graphical programming environment, Craniux offers both out-of-the-box functionality and a modular BMI software framework that is easily extendable. Specifically, it allows researchers to take advantage of multiple features inherent to the LabVIEW environment for on-the-fly data visualization, parallel processing, multithreading, and data saving. This paper introduces the basic features and system architecture of Craniux and describes the validation of the system under real-time BMI operation using simulated and real electrocorticographic (ECoG) signals. Our results indicate that Craniux is able to operate consistently in real time, enabling a seamless work flow to achieve brain control of cursor movement. The Craniux software framework is made available to the scientific research community to provide a LabVIEW-based BMI software platform for future BMI research and development. PMID:21687575
Zhao, Ming; Rattanatamrong, Prapaporn; DiGiovanna, Jack; Mahmoudi, Babak; Figueiredo, Renato J; Sanchez, Justin C; Príncipe, José C; Fortes, José A B
2008-01-01
Dynamic data-driven brain-machine interfaces (DDDBMI) have great potential to advance the understanding of neural systems and improve the design of brain-inspired rehabilitative systems. This paper presents a novel cyberinfrastructure that couples in vivo neurophysiology experimentation with massive computational resources to provide seamless and efficient support of DDDBMI research. Closed-loop experiments can be conducted with in vivo data acquisition, reliable network transfer, parallel model computation, and real-time robot control. Behavioral experiments with live animals are supported with real-time guarantees. Offline studies can be performed with various configurations for extensive analysis and training. A Web-based portal is also provided to allow users to conveniently interact with the cyberinfrastructure, conducting both experimentation and analysis. New motor control models are developed based on this approach, which include recursive least squares-based (RLS) and reinforcement learning-based (RLBMI) algorithms. The results from an online RLBMI experiment show that the cyberinfrastructure can successfully support DDDBMI experiments and meet the desired real-time requirements.
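A recursive least squares (RLS) decoder of the general kind mentioned above can be sketched as an online update of a linear map from binned firing rates to kinematics, which is what makes it usable in closed-loop, real-time experiments; the dimensions, forgetting factor, and initialization below are illustrative assumptions rather than the authors' settings.

```python
# Sketch of an RLS decoder: y_t ≈ W @ x_t, updated online at each time step.
import numpy as np

class RLSDecoder:
    def __init__(self, n_inputs, n_outputs, lam=0.99, delta=100.0):
        self.W = np.zeros((n_outputs, n_inputs))   # linear decoding weights
        self.P = np.eye(n_inputs) * delta          # inverse correlation estimate
        self.lam = lam                              # forgetting factor

    def predict(self, x):
        return self.W @ x

    def update(self, x, y):
        """x: firing-rate vector, y: observed kinematics (e.g., cursor velocity)."""
        Px = self.P @ x
        k = Px / (self.lam + x @ Px)                # gain vector
        err = y - self.W @ x
        self.W += np.outer(err, k)
        self.P = (self.P - np.outer(k, Px)) / self.lam
        return err

dec = RLSDecoder(n_inputs=96, n_outputs=2)          # 96 units -> 2D velocity (assumed)
```

The exponential forgetting factor lets the mapping track slow changes in neural tuning during a session, which is one reason recursive formulations are attractive for closed-loop use.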
Silvoni, Stefano; Cavinato, Marianna; Volpato, Chiara; Cisotto, Giulia; Genna, Clara; Agostini, Michela; Turolla, Andrea; Ramos-Murguialday, Ander; Piccione, Francesco
2013-01-01
In a proof-of-principle prototypical demonstration we describe a new type of brain-machine interface (BMI) paradigm for upper limb motor-training. The proposed technique allows a fast contingent and proportionally modulated stimulation of afferent proprioceptive and motor output neural pathways using operant learning. Continuous and immediate assisted-feedback of force proportional to rolandic rhythm oscillations during actual movements was employed and illustrated with a single case experiment. One hemiplegic patient was trained for 2 weeks coupling somatosensory brain oscillations with force-field control during a robot-mediated center-out motor-task whose execution approaches movements of everyday life. The robot facilitated actual movements adding a modulated force directed to the target, thus providing a non-delayed proprioceptive feedback. Neuro-electric, kinematic, and motor-behavioral measures were recorded in pre- and post-assessments without force assistance. The patient's healthy arm was used as a control, since neither a placebo control nor other control conditions were possible. We observed a generalized and significant kinematic improvement in the affected arm and a spatial accuracy improvement in both arms, together with an increase and focalization of the somatosensory rhythm changes used to provide assisted-force-feedback. The interpretation of the neurophysiological and kinematic evidence reported here is strictly related to the repetition of the motor-task and the presence of the assisted-force-feedback. Results are described as systematic observations only, without firm conclusions about the effectiveness of the methodology. In this prototypical view, the design of appropriate control conditions is discussed. This study presents a novel operant-learning-based BMI-application for motor-training coupling brain oscillations and force feedback during an actual movement.
Brain control and information transfer.
Tehovnik, Edward J; Chen, Lewis L
2015-12-01
In this review, we examine why having a body is essential for the brain to transfer information about the outside world and to generate appropriate motor responses. We discuss the context-dependent conditioning of the motor control neural circuits and its dependence on the completion of feedback loops, which is in close agreement with the insights of Hebb and colleagues, who have stressed that for learning to occur the body must be intact and able to interact with the outside world. Finally, we apply information theory to data from published studies to evaluate the robustness of the neuronal signals obtained by bypassing the body (as used for brain-machine interfaces) versus via the body to move in the world. We show that recording from a group of neurons that bypasses the body exhibits a vastly degraded level of transfer of information as compared to that of an entire brain using the body to engage in the normal execution of behaviour. We conclude that body sensations provide more than just feedback for movements; they sustain the necessary transfer of information as animals explore their environment, thereby creating associations through learning. This work has implications for the development of brain-machine interfaces used to move external devices.
Fast mental states decoding in mixed reality.
De Massari, Daniele; Pacheco, Daniel; Malekshahi, Rahim; Betella, Alberto; Verschure, Paul F M J; Birbaumer, Niels; Caria, Andrea
2014-01-01
The combination of Brain-Computer Interface (BCI) technology, allowing online monitoring and decoding of brain activity, with virtual and mixed reality (MR) systems may help to shape and guide implicit and explicit learning using ecological scenarios. Real-time information of ongoing brain states acquired through BCI might be exploited for controlling data presentation in virtual environments. Brain-state discrimination during the mixed reality experience is thus critical for adapting specific data features to contingent brain activity. In this study we recorded electroencephalographic (EEG) data while participants experienced MR scenarios implemented through the eXperience Induction Machine (XIM). The XIM is a novel framework modeling the integration of a sensing system that evaluates and measures physiological and psychological states with a number of actuators and effectors that coherently react to the user's actions. We then assessed continuous EEG-based discrimination of spatial navigation, reading and calculation performed in MR, using linear discriminant analysis (LDA) and support vector machine (SVM) classifiers. Dynamic single-trial classification showed high accuracy of LDA and SVM classifiers in detecting multiple brain states as well as in differentiating between high and low mental workload, using a 5 s time-window shifting every 200 ms. Our results indicate overall better performance of LDA with respect to SVM and suggest applicability of our approach in a BCI-controlled MR scenario. Ultimately, successful prediction of brain states might be used to drive adaptation of data representation in order to boost information processing in MR.
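A rough sketch of the dynamic single-trial scheme described above, classifying a 5 s window shifted every 200 ms; the feature here is reduced to alpha-band power per channel, and all parameters are assumptions rather than the authors' pipeline.

```python
# Sketch: sliding-window classification of EEG mental states with LDA.
import numpy as np
from scipy.signal import welch
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

fs = 250                        # sampling rate (assumed)
win = 5 * fs                    # 5 s window
step = int(0.2 * fs)            # 200 ms shift

def window_features(eeg):
    """eeg: (n_channels, n_samples) -> per-window alpha-band power features."""
    feats = []
    for s in range(0, eeg.shape[1] - win + 1, step):
        f, pxx = welch(eeg[:, s:s + win], fs=fs, nperseg=fs, axis=-1)
        alpha = pxx[:, (f >= 8) & (f <= 12)].mean(axis=-1)
        feats.append(alpha)
    return np.array(feats)       # (n_windows, n_channels)

# Train on labelled windows from calibration runs, then apply per window online.
clf = LinearDiscriminantAnalysis()
# clf.fit(window_features(train_eeg), train_labels)
# state = clf.predict(window_features(test_eeg)[-1:])   # latest window's state
```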
A Brain-Machine-Brain Interface for Rewiring of Cortical Circuitry after Traumatic Brain Injury
2014-09-01
Application of quantum-behaved particle swarm optimization to motor imagery EEG classification.
Hsu, Wei-Yen
2013-12-01
In this study, we propose a recognition system for single-trial analysis of motor imagery (MI) electroencephalogram (EEG) data. Applying event-related brain potential (ERP) data acquired from the sensorimotor cortices, the system chiefly consists of automatic artifact elimination, feature extraction, feature selection and classification. In addition to the use of independent component analysis, a similarity measure is proposed to further remove the electrooculographic (EOG) artifacts automatically. Several potential features, such as wavelet-fractal features, are then extracted for subsequent classification. Next, quantum-behaved particle swarm optimization (QPSO) is used to select features from the feature combination. Finally, selected sub-features are classified by support vector machine (SVM). Compared with approaches that omit artifact elimination, that select features with a genetic algorithm (GA), or that classify with Fisher's linear discriminant (FLD), on MI data from two data sets and eight subjects, the results indicate that the proposed method is promising in brain-computer interface (BCI) applications.
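A simplified sketch of QPSO-based feature selection wrapped around an SVM fitness function, following the standard QPSO position update x = p ± beta·|mbest − x|·ln(1/u) with a sigmoid binarization; swarm size, iteration count, and the contraction-expansion coefficient beta are illustrative assumptions, not the paper's settings.

```python
# Sketch: binary feature selection with quantum-behaved PSO (QPSO) + SVM fitness.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def fitness(X, y, mask):
    if mask.sum() == 0:
        return 0.0
    return cross_val_score(SVC(kernel="rbf"), X[:, mask], y, cv=3).mean()

def qpso_select(X, y, n_particles=20, n_iter=30, beta=0.75, seed=0):
    rng = np.random.default_rng(seed)
    n_feat = X.shape[1]
    pos = rng.normal(size=(n_particles, n_feat))           # real-valued positions
    masks = rng.random((n_particles, n_feat)) < 0.5        # binary feature masks
    pbest, pbest_mask = pos.copy(), masks.copy()           # personal bests
    pbest_fit = np.array([fitness(X, y, m) for m in masks])
    g = pbest[pbest_fit.argmax()].copy()                   # global best position
    for _ in range(n_iter):
        mbest = pbest.mean(axis=0)                         # mean best position
        for i in range(n_particles):
            phi = rng.random(n_feat)
            p = phi * pbest[i] + (1 - phi) * g             # local attractor
            u = rng.uniform(1e-12, 1.0, n_feat)
            sign = np.where(rng.random(n_feat) < 0.5, 1.0, -1.0)
            pos[i] = p + sign * beta * np.abs(mbest - pos[i]) * np.log(1.0 / u)
            masks[i] = 1.0 / (1.0 + np.exp(-pos[i])) > rng.random(n_feat)
            fit = fitness(X, y, masks[i])
            if fit > pbest_fit[i]:
                pbest[i], pbest_mask[i], pbest_fit[i] = pos[i].copy(), masks[i].copy(), fit
        g = pbest[pbest_fit.argmax()].copy()
    best = pbest_fit.argmax()
    return pbest_mask[best], pbest_fit[best]
```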
A Prototype SSVEP Based Real Time BCI Gaming System
Martišius, Ignas
2016-01-01
Although brain-computer interface technology is mainly designed with disabled people in mind, it can also be beneficial to healthy subjects, for example, in gaming or virtual reality systems. In this paper we discuss the typical architecture, paradigms, requirements, and limitations of electroencephalogram-based gaming systems. We have developed a prototype three-class brain-computer interface system, based on the steady state visually evoked potentials paradigm and the Emotiv EPOC headset. An online target shooting game, implemented in the OpenViBE environment, has been used for user feedback. The system utilizes wave atom transform for feature extraction, achieving an average accuracy of 78.2% using linear discriminant analysis classifier, 79.3% using support vector machine classifier with a linear kernel, and 80.5% using a support vector machine classifier with a radial basis function kernel. PMID:27051414
Zhang, Yu; Zhou, Guoxu; Jin, Jing; Wang, Xingyu; Cichocki, Andrzej
2015-11-30
Common spatial pattern (CSP) has been most popularly applied to motor-imagery (MI) feature extraction for classification in brain-computer interface (BCI) applications. Successful application of CSP depends on the filter band selection to a large degree. However, the most appropriate band is typically subject-specific and can hardly be determined manually. This study proposes a sparse filter band common spatial pattern (SFBCSP) for optimizing the spatial patterns. SFBCSP estimates CSP features on multiple signals that are filtered from raw EEG data at a set of overlapping bands. The filter bands that result in significant CSP features are then selected in a supervised way by exploiting sparse regression. A support vector machine (SVM) is implemented on the selected features for MI classification. Two public EEG datasets (BCI Competition III dataset IVa and BCI Competition IV dataset IIb) are used to validate the proposed SFBCSP method. Experimental results demonstrate that SFBCSP helps improve the classification performance of MI. The spatial patterns optimized by SFBCSP give overall better MI classification accuracy in comparison with several competing methods. The proposed SFBCSP is a potential method for improving the performance of MI-based BCI. Copyright © 2015 Elsevier B.V. All rights reserved.
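A sketch of an SFBCSP-style pipeline under stated assumptions: CSP features are computed per overlapping sub-band (here via MNE's CSP class), a Lasso regression selects the bands whose features receive non-zero weights, and an SVM classifies the surviving features. Band edges, filter order, and regularisation strength are assumptions, and the exact sparse-regression formulation may differ from the authors'.

```python
# Sketch of a sparse filter-band CSP pipeline (two-class motor imagery assumed).
import numpy as np
from scipy.signal import butter, filtfilt
from mne.decoding import CSP
from sklearn.linear_model import Lasso
from sklearn.svm import SVC

fs = 250
bands = [(b, b + 8) for b in range(4, 36, 4)]          # overlapping 8 Hz sub-bands

def bandpass(X, lo, hi):
    b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, X, axis=-1)

def filterbank_csp(X, y, n_csp=4):
    """X: (n_trials, n_channels, n_samples). Returns stacked CSP features."""
    feats = []
    for lo, hi in bands:
        csp = CSP(n_components=n_csp, log=True)
        feats.append(csp.fit_transform(bandpass(X, lo, hi), y))
    return np.hstack(feats)                             # (n_trials, n_bands * n_csp)

def select_and_classify(F, y, alpha=0.01):
    lasso = Lasso(alpha=alpha).fit(F, y)                # sparse weights over band features
    keep = np.abs(lasso.coef_) > 1e-6                   # features/bands that survive
    clf = SVC(kernel="linear").fit(F[:, keep], y)       # assumes at least one survivor
    return clf, keep
```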
Formisano, Elia; De Martino, Federico; Valente, Giancarlo
2008-09-01
Machine learning and pattern recognition techniques are being increasingly employed in functional magnetic resonance imaging (fMRI) data analysis. By taking into account the full spatial pattern of brain activity measured simultaneously at many locations, these methods allow detecting subtle, non-strictly localized effects that may remain invisible to the conventional analysis with univariate statistical methods. In typical fMRI applications, pattern recognition algorithms "learn" a functional relationship between brain response patterns and a perceptual, cognitive or behavioral state of a subject expressed in terms of a label, which may assume discrete (classification) or continuous (regression) values. This learned functional relationship is then used to predict the unseen labels from a new data set ("brain reading"). In this article, we describe the mathematical foundations of machine learning applications in fMRI. We focus on two methods, support vector machines and relevance vector machines, which are respectively suited for the classification and regression of fMRI patterns. Furthermore, by means of several examples and applications, we illustrate and discuss the methodological challenges of using machine learning algorithms in the context of fMRI data analysis.
A Brain-Machine-Brain Interface for Rewiring of Cortical Circuitry after Traumatic Brain Injury
2013-09-01
Ljungquist, Bengt; Petersson, Per; Johansson, Anders J; Schouenborg, Jens; Garwicz, Martin
2018-04-01
Recent neuroscientific and technical developments of brain machine interfaces have put increasing demands on neuroinformatic databases and data handling software, especially when managing data in real time from large numbers of neurons. Extrapolating these developments we here set out to construct a scalable software architecture that would enable near-future massive parallel recording, organization and analysis of neurophysiological data on a standard computer. To this end we combined, for the first time in the present context, bit-encoding of spike data with a specific communication format for real time transfer and storage of neuronal data, synchronized by a common time base across all unit sources. We demonstrate that our architecture can simultaneously handle data from more than one million neurons and provide, in real time (< 25 ms), feedback based on analysis of previously recorded data. In addition to managing recordings from very large numbers of neurons in real time, it also has the capacity to handle the extensive periods of recording time necessary in certain scientific and clinical applications. Furthermore, the bit-encoding proposed has the additional advantage of allowing an extremely fast analysis of spatiotemporal spike patterns in a large number of neurons. Thus, we conclude that this architecture is well suited to support current and near-future Brain Machine Interface requirements.
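The bit-encoding idea can be illustrated as follows (a sketch, not the authors' format): within each bin of a common time base, one bit per unit records whether that unit spiked, so a million-unit population fits in roughly 125 kB per bin and spike-pattern comparisons reduce to bitwise operations; the population size and bin width are assumptions.

```python
# Sketch: bit-encode which of N units spiked in a given time bin.
import numpy as np

N_UNITS = 1_000_000
BIN_MS = 1                         # common time base across all sources (assumed)

def encode_bin(spiking_unit_ids):
    """Pack the set of units that spiked in this bin into a byte array."""
    bits = np.zeros(N_UNITS, dtype=np.uint8)
    bits[np.asarray(spiking_unit_ids, dtype=np.int64)] = 1
    return np.packbits(bits)       # N_UNITS / 8 bytes

def decode_bin(packed):
    return np.flatnonzero(np.unpackbits(packed)[:N_UNITS])

def count_coincidences(packed_a, packed_b):
    """Units active in both bins: a bitwise AND plus a bit count."""
    return int(np.unpackbits(packed_a & packed_b).sum())

frame = encode_bin([3, 17, 999_999])
assert decode_bin(frame).tolist() == [3, 17, 999999]
```

Because each bin is a fixed-size byte block stamped on the shared time base, frames from many acquisition sources can be concatenated, transferred, and compared very quickly, which is in the spirit of the fast spatiotemporal pattern analysis the abstract mentions.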
NASA Astrophysics Data System (ADS)
Zander, T. O.; Jatzev, S.
2012-02-01
Brain-computer interface (BCI) systems are usually applied in highly controlled environments such as research laboratories or clinical setups. However, many BCI-based applications are implemented in more complex environments. For example, patients might want to use a BCI system at home, and users without disabilities could benefit from BCI systems in special working environments. In these contexts, it might be more difficult to reliably infer information about brain activity, because many intervening factors add up and disturb the BCI feature space. One solution for this problem would be adding context awareness to the system. We propose to augment the available information space with additional channels carrying information about the user state, the environment and the technical system. In particular, passive BCI systems seem to be capable of adding highly relevant context information—otherwise covert aspects of user state. In this paper, we present a theoretical framework based on general human-machine system research for adding context awareness to a BCI system. Building on that, we present results from a study on a passive BCI, which allows access to the covert aspect of user state related to the perceived loss of control. This study is a proof of concept and demonstrates that context awareness could beneficially be implemented in and combined with a BCI system or a general human-machine system. The EEG data from this experiment are available for public download at www.phypa.org. Parts of this work have already been presented in non-journal publications. This will be indicated specifically by appropriate references in the text.
NASA Astrophysics Data System (ADS)
Tahernezhad-Javazm, Farajollah; Azimirad, Vahid; Shoaran, Maryam
2018-04-01
Objective. Considering the importance and the near-future development of noninvasive brain-machine interface (BMI) systems, this paper presents a comprehensive theoretical-experimental survey on the classification and evolutionary methods for BMI-based systems in which EEG signals are used. Approach. The paper is divided into two main parts. In the first part, a wide range of base and combinatorial classifiers, including boosting and bagging classifiers, and evolutionary algorithms are reviewed and investigated. In the second part, these classifiers and evolutionary algorithms are assessed and compared based on two types of relatively widely used BMI systems, sensory motor rhythm-BMI and event-related potentials-BMI. Moreover, in the second part, some of the improved evolutionary algorithms as well as bi-objective algorithms are experimentally assessed and compared. Main results. In this study, two databases are used, and cross-validation accuracy (CVA) and stability to data volume (SDV) are considered as the evaluation criteria for the classifiers. According to the experimental results on both databases, regarding the base classifiers, linear discriminant analysis and support vector machines with respect to the CVA evaluation metric, and naive Bayes with respect to SDV demonstrated the best performances. Among the combinatorial classifiers, Bagg-DT (bagging decision tree), LogitBoost, and GentleBoost had the best performances with respect to CVA, and Bagging-LR (bagging logistic regression) and AdaBoost (adaptive boosting) with respect to SDV. Finally, regarding the evolutionary algorithms, single-objective invasive weed optimization (IWO) and bi-objective nondominated sorting IWO algorithms demonstrated the best performances. Significance. We present a general survey on the base and the combinatorial classification methods for EEG signals (sensory motor rhythm and event-related potentials) as well as their optimization methods through the evolutionary algorithms. In addition, experimental and statistical significance tests are carried out to study the applicability and effectiveness of the reviewed methods.
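For the base classifiers, the cross-validation accuracy (CVA) comparison can be sketched as below on a precomputed EEG feature matrix; the SDV criterion, the combinatorial classifiers, and the evolutionary optimization stages are not shown, and the model choices and fold count are assumptions.

```python
# Sketch: compare base classifiers by cross-validation accuracy (CVA) on a
# precomputed EEG feature matrix X (n_trials, n_features) with labels y.
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.svm import SVC
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import cross_val_score

def compare_classifiers(X, y, cv=10):
    models = {
        "LDA": LinearDiscriminantAnalysis(),
        "SVM": SVC(kernel="rbf", C=1.0),
        "NaiveBayes": GaussianNB(),
    }
    return {name: cross_val_score(m, X, y, cv=cv).mean()
            for name, m in models.items()}

# scores = compare_classifiers(X, y)   # e.g. {'LDA': 0.81, 'SVM': 0.79, ...}
```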
Lapborisuth, Pawan; Zhang, Xian; Noah, Adam; Hirsch, Joy
2017-01-01
Abstract. Neurofeedback is a method for using neural activity displayed on a computer to regulate one’s own brain function and has been shown to be a promising technique for training individuals to interact with brain–machine interface applications such as neuroprosthetic limbs. The goal of this study was to develop a user-friendly functional near-infrared spectroscopy (fNIRS)-based neurofeedback system to upregulate neural activity associated with motor imagery, which is frequently used in neuroprosthetic applications. We hypothesized that fNIRS neurofeedback would enhance activity in motor cortex during a motor imagery task. Twenty-two participants performed active and imaginary right-handed squeezing movements using an elastic ball while wearing a 98-channel fNIRS device. Neurofeedback traces representing localized cortical hemodynamic responses were graphically presented to participants in real time. Participants were instructed to observe this graphical representation and use the information to increase signal amplitude. Neural activity was compared during active and imaginary squeezing with and without neurofeedback. Active squeezing resulted in activity localized to the left premotor and supplementary motor cortex, and activity in the motor cortex was found to be modulated by neurofeedback. Activity in the motor cortex was also shown in the imaginary squeezing condition only in the presence of neurofeedback. These findings demonstrate that real-time fNIRS neurofeedback is a viable platform for brain–machine interface applications. PMID:28680906
Waytowich, Nicholas R.; Lawhern, Vernon J.; Bohannon, Addison W.; Ball, Kenneth R.; Lance, Brent J.
2016-01-01
Recent advances in signal processing and machine learning techniques have enabled the application of Brain-Computer Interface (BCI) technologies to fields such as medicine, industry, and recreation; however, BCIs still suffer from the requirement of frequent calibration sessions due to the intra- and inter-individual variability of brain-signals, which makes calibration suppression through transfer learning an area of increasing interest for the development of practical BCI systems. In this paper, we present an unsupervised transfer method (spectral transfer using information geometry, STIG), which ranks and combines unlabeled predictions from an ensemble of information geometry classifiers built on data from individual training subjects. The STIG method is validated in both off-line and real-time feedback analysis during a rapid serial visual presentation task (RSVP). For detection of single-trial, event-related potentials (ERPs), the proposed method can significantly outperform existing calibration-free techniques as well as outperform traditional within-subject calibration techniques when limited data is available. This method demonstrates that unsupervised transfer learning for single-trial detection in ERP-based BCIs can be achieved without the requirement of costly training data, representing a step-forward in the overall goal of achieving a practical user-independent BCI system. PMID:27713685
Samwald, Matthias; Lim, Ernest; Masiar, Peter; Marenco, Luis; Chen, Huajun; Morse, Thomas; Mutalik, Pradeep; Shepherd, Gordon; Miller, Perry; Cheung, Kei-Hoi
2013-01-01
The amount of biomedical data available in Semantic Web formats has been rapidly growing in recent years. While these formats are machine-friendly, user-friendly web interfaces allowing easy querying of these data are typically lacking. We present “Entrez Neuron”, a pilot neuron-centric interface that allows for keyword-based queries against a coherent repository of OWL ontologies. These ontologies describe neuronal structures, physiology, mathematical models and microscopy images. The returned query results are organized hierarchically according to brain architecture. Where possible, the application makes use of entities from the Open Biomedical Ontologies (OBO) and the ‘HCLS knowledgebase’ developed by the W3C Interest Group for Health Care and Life Science. It makes use of the emerging RDFa standard to embed ontology fragments and semantic annotations within its HTML-based user interface. The application and underlying ontologies demonstrate how Semantic Web technologies can be used for information integration within a curated information repository and between curated information repositories. It also demonstrates how information integration can be accomplished on the client side, through simple copying and pasting of portions of documents that contain RDFa markup. PMID:19745321
A Brain-Machine-Brain Interface for Rewiring of Cortical Circuitry after Traumatic Brain Injury
2011-09-01
A Bidirectional Brain-Machine Interface Featuring a Neuromorphic Hardware Decoder.
Boi, Fabio; Moraitis, Timoleon; De Feo, Vito; Diotalevi, Francesco; Bartolozzi, Chiara; Indiveri, Giacomo; Vato, Alessandro
2016-01-01
Bidirectional brain-machine interfaces (BMIs) establish a two-way direct communication link between the brain and the external world. A decoder translates recorded neural activity into motor commands and an encoder delivers sensory information collected from the environment directly to the brain creating a closed-loop system. These two modules are typically integrated in bulky external devices. However, the clinical support of patients with severe motor and sensory deficits requires compact, low-power, and fully implantable systems that can decode neural signals to control external devices. As a first step toward this goal, we developed a modular bidirectional BMI setup that uses a compact neuromorphic processor as a decoder. On this chip we implemented a network of spiking neurons built using its ultra-low-power mixed-signal analog/digital circuits. On-chip on-line spike-timing-dependent plasticity synapse circuits enabled the network to learn to decode neural signals recorded from the brain into motor outputs controlling the movements of an external device. The modularity of the BMI allowed us to tune the individual components of the setup without modifying the whole system. In this paper, we present the features of this modular BMI and describe how we configured the network of spiking neuron circuits to implement the decoder and to coordinate it with the encoder in an experimental BMI paradigm that connects bidirectionally the brain of an anesthetized rat with an external object. We show that the chip learned the decoding task correctly, allowing the interfaced brain to control the object's trajectories robustly. Based on our demonstration, we propose that neuromorphic technology is mature enough for the development of BMI modules that are sufficiently low-power and compact, while being highly computationally powerful and adaptive.
Software architecture for time-constrained machine vision applications
NASA Astrophysics Data System (ADS)
Usamentiaga, Rubén; Molleda, Julio; García, Daniel F.; Bulnes, Francisco G.
2013-01-01
Real-time image and video processing applications require skilled architects, and recent trends in the hardware platform make the design and implementation of these applications increasingly complex. Many frameworks and libraries have been proposed or commercialized to simplify the design and tuning of real-time image processing applications. However, they tend to lack flexibility, because they are normally oriented toward particular types of applications, or they impose specific data processing models such as the pipeline. Other issues include large memory footprints, difficulty for reuse, and inefficient execution on multicore processors. We present a novel software architecture for time-constrained machine vision applications that addresses these issues. The architecture is divided into three layers. The platform abstraction layer provides a high-level application programming interface for the rest of the architecture. The messaging layer provides a message-passing interface based on a dynamic publish/subscribe pattern. A topic-based filtering in which messages are published to topics is used to route the messages from the publishers to the subscribers interested in a particular type of message. The application layer provides a repository for reusable application modules designed for machine vision applications. These modules, which include acquisition, visualization, communication, user interface, and data processing, take advantage of the power of well-known libraries such as OpenCV, Intel IPP, or CUDA. Finally, the proposed architecture is applied to a real machine vision application: a jam detector for steel pickling lines.
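The messaging layer's topic-based publish/subscribe pattern can be sketched as a small in-process message bus (an illustration, not the authors' implementation); the topic names and payloads are assumptions.

```python
# Minimal topic-based publish/subscribe sketch: subscribers register callbacks
# per topic, and publishing routes a message only to matching subscribers.
from collections import defaultdict
from typing import Any, Callable

class MessageBus:
    def __init__(self):
        self._subscribers = defaultdict(list)   # topic -> list of callbacks

    def subscribe(self, topic: str, callback: Callable[[Any], None]) -> None:
        self._subscribers[topic].append(callback)

    def publish(self, topic: str, message: Any) -> None:
        # Topic-based filtering: only callbacks registered for this topic run.
        for callback in self._subscribers[topic]:
            callback(message)

bus = MessageBus()
bus.subscribe("frames/raw", lambda frame: print("processing", frame))
bus.subscribe("alarms/jam", lambda evt: print("jam detected:", evt))
bus.publish("frames/raw", {"id": 42})
```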
NASA Astrophysics Data System (ADS)
Lin, Y.; Zhang, W. J.
2005-02-01
This paper presents an approach to human-machine interface design for control room operators of nuclear power plants. The first step in designing an interface for a particular application is to determine the information content that needs to be displayed. The design methodology for this step is called the interface design framework (hereafter, the framework). Several frameworks have been proposed for applications at varying levels, including process plants. However, none is based on the design and manufacture of a plant system for which the interface is designed. This paper presents an interface design framework which originates from design theory and methodology for general technical systems. Specifically, the framework is based on a set of core concepts of a function-behavior-state model originally proposed by the artificial intelligence research community and widely applied in the design research community. Benefits of this new framework include the provision of a model-based fault diagnosis facility, and the seamless integration of the design (manufacture, maintenance) of plants and the design of human-machine interfaces. The missing linkage between design and operation of a plant was one of the causes of the Three Mile Island nuclear reactor incident. A simulated plant system is presented to explain how to apply this framework in designing an interface. The resulting human-machine interface is discussed; specifically, several fault diagnosis examples are elaborated to demonstrate how this interface could support operators' fault diagnosis in an unanticipated situation.
Fukayama, Osamu; Taniguchi, Noriyuki; Suzuki, Takafumi; Mabuchi, Kunihiko
2008-01-01
An online brain-machine interface (BMI) in the form of a small vehicle, the 'RatCar,' has been developed. A rat had neural electrodes implanted in its primary motor cortex and basal ganglia regions to continuously record neural signals. A linear state-space model then represented the correlation between the recorded neural signals and the locomotion states (i.e., moving velocity and azimuthal variances) of the rat. The model parameters were set so as to minimize estimation errors, and the locomotion states were estimated from neural firing rates using a Kalman filter algorithm. The results showed only small oscillations, achieving smooth control of the vehicle despite fluctuating firing rates and noise applied to the model. Most of the variation in the model variables converged within the first 30 seconds of the experiments and remained stable for the rest of the one-hour session.
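A minimal sketch of Kalman-filter decoding with a linear state-space model of the kind described here, mapping binned firing rates to locomotion states; the matrices are assumed to have been fitted beforehand (e.g., by least squares on training data), and the dimensions are illustrative.

```python
# Sketch: Kalman-filter decoding of locomotion state from binned firing rates,
# given a fitted linear state-space model (A, C, W, Q).
import numpy as np

class KalmanDecoder:
    def __init__(self, A, C, W, Q):
        self.A, self.C, self.W, self.Q = A, C, W, Q     # dynamics, observation, noise covariances
        n = A.shape[0]
        self.z = np.zeros(n)                            # state, e.g. [forward velocity, yaw rate]
        self.P = np.eye(n)                              # state covariance

    def step(self, rates):
        # Predict the next state from the dynamics model.
        z_pred = self.A @ self.z
        P_pred = self.A @ self.P @ self.A.T + self.W
        # Update with the new firing-rate observation.
        S = self.C @ P_pred @ self.C.T + self.Q
        K = P_pred @ self.C.T @ np.linalg.inv(S)
        self.z = z_pred + K @ (rates - self.C @ z_pred)
        self.P = (np.eye(len(self.z)) - K @ self.C) @ P_pred
        return self.z
```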
Liu, Jianbo; Khalil, Hassan K; Oweiss, Karim G
2011-10-01
In bi-directional brain-machine interfaces (BMIs), precisely controlling the delivery of microstimulation, both in space and in time, is critical to continuously modulate the neural activity patterns that carry information about the state of the brain-actuated device to sensory areas in the brain. In this paper, we investigate the use of neural feedback to control the spatiotemporal firing patterns of neural ensembles in a model of the thalamocortical pathway. Control of pyramidal (PY) cells in the primary somatosensory cortex (S1) is achieved based on microstimulation of thalamic relay cells through multiple-input multiple-output (MIMO) feedback controllers. This closed loop feedback control mechanism is achieved by simultaneously varying the stimulation parameters across multiple stimulation electrodes in the thalamic circuit based on continuous monitoring of the difference between reference patterns and the evoked responses of the cortical PY cells. We demonstrate that it is feasible to achieve a desired level of performance by controlling the firing activity pattern of a few "key" neural elements in the network. Our results suggest that neural feedback could be an effective method to facilitate the delivery of information to the cortex to substitute lost sensory inputs in cortically controlled BMIs.
LeMoyne, Robert; Tomycz, Nestor; Mastroianni, Timothy; McCandless, Cyrus; Cozza, Michael; Peduto, David
2015-01-01
Essential tremor (ET) is a highly prevalent movement disorder. Patients with ET exhibit a complex progressive and disabling tremor, and medical management often fails. Deep brain stimulation (DBS) has been successfully applied to this disorder; however, there has been no quantifiable way to measure tremor severity or treatment efficacy in this patient population. The quantified amelioration of kinetic tremor via DBS is herein demonstrated through the application of a smartphone (iPhone) as a wireless accelerometer platform. The recorded acceleration signal can be obtained at a setting of the subject's convenience and conveyed by wireless transmission through the Internet for post-processing anywhere in the world. The post-processed acceleration signal can then be classified with a machine learning method, such as the support vector machine. This preliminary application, combining deep brain stimulation with a smartphone for feature-set acquisition and machine learning for classification, was successful. The support vector machine achieved 100% classification between deep brain stimulation in 'on' and 'off' modes based on the recording of an accelerometer signal through a smartphone as a wireless accelerometer platform.
[The current state of the brain-computer interface problem].
Shurkhay, V A; Aleksandrova, E V; Potapov, A A; Goryainov, S A
2015-01-01
It was only 40 years ago that the first PC appeared. Over this period, rather short in historical terms, we have witnessed revolutionary changes in the lives of individuals and of society as a whole. Computer technologies are now tightly connected with virtually every field, either directly or indirectly. We can currently claim that computers far exceed the human mind on a number of parameters; however, machines lack one key feature: they are incapable of independent, human-like thinking. The key to the successful development of humankind is therefore collaboration between the brain and the computer rather than competition. Such collaboration, in which a computer broadens, supplements, or replaces some brain functions, is known as a brain-computer interface. Our review focuses on real-life implementation of this collaboration.
Nurmikko, Arto V; Donoghue, John P; Hochberg, Leigh R; Patterson, William R; Song, Yoon-Kyu; Bull, Christopher W; Borton, David A; Laiwalla, Farah; Park, Sunmee; Ming, Yin; Aceros, Juan
2010-01-01
Acquiring neural signals at high spatial and temporal resolution directly from brain microcircuits and decoding their activity to interpret commands and/or prior planning activity, such as motion of an arm or a leg, is a prime goal of modern neurotechnology. Its practical aims include assistive devices for subjects whose normal neural information pathways are not functioning due to physical damage or disease. On the fundamental side, researchers are striving to decipher the code of multiple neural microcircuits which collectively make up nature's amazing computing machine, the brain. By implanting biocompatible neural sensor probes directly into the brain, in the form of microelectrode arrays, it is now possible to extract information from interacting populations of neural cells with spatial and temporal resolution at the single cell level. With parallel advances in the application of statistical and mathematical tools for deciphering the neural code from extracted populations of correlated neurons, significant understanding has been achieved of those brain commands that control, e.g., the motion of an arm in a primate (monkey or human subject). These developments are accelerating the work on neural prosthetics where brain-derived signals may be employed to bypass, e.g., an injured spinal cord. One key element in achieving the goals for practical and versatile neural prostheses is the development of fully implantable wireless microelectronic "brain-interfaces" within the body, a point of special emphasis of this paper.
Koizumi, Amane; Nagata, Osamu; Togawa, Morio; Sazi, Toshiyuki
2014-01-01
Neuroscience is an expanding field of science that investigates the enigmas of brain and human body function. However, the majority of the public have never had the chance to learn the basics of neuroscience and new knowledge from advanced neuroscience research through hands-on experience. Here, we report that we produced the Muscle Sensor, a simplified electromyograph, to promote educational understanding in neuroscience. The Muscle Sensor can detect myoelectric potentials, which are filtered and processed as 3-V pulse signals to shine a light bulb and emit beep sounds. With this educational tool, we delivered "On-Site Neuroscience Lectures" in Japanese junior-high schools to facilitate hands-on experience of neuroscientific electrophysiology and to connect students' textbook knowledge to advanced neuroscience research. On-site neuroscience lectures with the Muscle Sensor pave the way for a better understanding of the basics of neuroscience and the latest topics, such as how brain-machine-interface technology could help patients with disabilities such as spinal cord injuries. Copyright © 2013 The Authors. Published by Elsevier Ireland Ltd. All rights reserved.
Operation of micro and molecular machines: a new concept with its origins in interface science.
Ariga, Katsuhiko; Ishihara, Shinsuke; Izawa, Hironori; Xia, Hong; Hill, Jonathan P
2011-03-21
A landmark accomplishment of nanotechnology would be successful fabrication of ultrasmall machines that can work like tweezers, motors, or even computing devices. Now we must consider how operation of micro- and molecular machines might be implemented for a wide range of applications. If these machines function only under limited conditions and/or require specialized apparatus then they are useless for practical applications. Therefore, it is important to carefully consider the access of functionality of the molecular or nanoscale systems by conventional stimuli at the macroscopic level. In this perspective, we will outline the position of micro- and molecular machines in current science and technology. Most of these machines are operated by light irradiation, application of electrical or magnetic fields, chemical reactions, and thermal fluctuations, which cannot always be applied in remote machine operation. We also propose strategies for molecular machine operation using the most conventional of stimuli, that of macroscopic mechanical force, achieved through mechanical operation of molecular machines located at an air-water interface. The crucial roles of the characteristics of an interfacial environment, i.e. connection between macroscopic dimension and nanoscopic function, and contact of media with different dielectric natures, are also described.
Low Latency Messages on Distributed Memory Multiprocessors
Rosing, Matt; Saltz, Joel
1995-01-01
This article describes many of the issues in developing an efficient interface for communication on distributed memory machines. Although the hardware component of message latency is less than 1 μs on many distributed memory machines, the software latency associated with sending and receiving typed messages is on the order of 50 μs. The reason for this imbalance is that the software interface does not match the hardware. By changing the interface to match the hardware more closely, applications with fine-grained communication can be put on these machines. This article describes several tests performed and many of the issues involved in supporting low latency messages on distributed memory machines.
A Brain-Machine-Brain Interface for Rewiring of Cortical Circuitry after Traumatic Brain Injury
2015-11-01
Brain-machine interfaces in neurorehabilitation of stroke.
Soekadar, Surjo R; Birbaumer, Niels; Slutzky, Marc W; Cohen, Leonardo G
2015-11-01
Stroke is among the leading causes of long-term disabilities leaving an increasing number of people with cognitive, affective and motor impairments depending on assistance in their daily life. While function after stroke can significantly improve in the first weeks and months, further recovery is often slow or non-existent in the more severe cases encompassing 30-50% of all stroke victims. The neurobiological mechanisms underlying recovery in those patients are incompletely understood. However, recent studies demonstrated the brain's remarkable capacity for functional and structural plasticity and recovery even in severe chronic stroke. As all established rehabilitation strategies require some remaining motor function, there is currently no standardized and accepted treatment for patients with complete chronic muscle paralysis. The development of brain-machine interfaces (BMIs) that translate brain activity into control signals of computers or external devices provides two new strategies to overcome stroke-related motor paralysis. First, BMIs can establish continuous high-dimensional brain-control of robotic devices or functional electric stimulation (FES) to assist in daily life activities (assistive BMI). Second, BMIs could facilitate neuroplasticity, thus enhancing motor learning and motor recovery (rehabilitative BMI). Advances in sensor technology, development of non-invasive and implantable wireless BMI-systems and their combination with brain stimulation, along with evidence for BMI systems' clinical efficacy suggest that BMI-related strategies will play an increasing role in neurorehabilitation of stroke. Copyright © 2014. Published by Elsevier Inc.
Redesigning the Human-Machine Interface for Computer-Mediated Visual Technologies.
ERIC Educational Resources Information Center
Acker, Stephen R.
1986-01-01
This study examined an application of a human-machine interface that relies on optical bar codes incorporated in a computer-based module to teach radio production. The sequencing procedure used establishes the user, rather than the computer, as the locus of control for the mediated instruction. (Author/MBR)
Shahdoost, Shahab; Frost, Shawn; Van Acker, Gustaf; DeJong, Stacey; Dunham, Caleb; Barbay, Scott; Nudo, Randolph; Mohseni, Pedram
2014-01-01
Nearly 6 million people in the United States are currently living with paralysis, and 23% of these cases are related to spinal cord injury (SCI). Miniaturized closed-loop neural interfaces have the potential for restoring function and mobility lost to debilitating neural injuries such as SCI by leveraging recent advancements in bioelectronics and a better understanding of the processes that underlie functional and anatomical reorganization in an injured nervous system. This paper describes our current progress towards developing a miniaturized brain-machine-spinal cord interface (BMSI) that is envisioned to convert in real time the neural command signals recorded from the brain to electrical stimuli delivered to the spinal cord below the injury level. Specifically, the paper reports on a corticospinal interface integrated circuit (IC) as a core building block for such a BMSI that is capable of low-noise recording of extracellular neural spikes from the cerebral cortex as well as muscle activation using intraspinal microstimulation (ISMS) in a rat with contusion injury to the thoracic spinal cord. The paper further presents results from a neurobiological study conducted in both normal and SCI rats to investigate the effect of various ISMS parameters on movement thresholds in the rat hindlimb. Coupled with proper signal-processing algorithms in the future for the transformation between the cortically recorded data and ISMS parameters, such a BMSI has the potential to facilitate functional recovery after an SCI by re-establishing corticospinal communication channels lost due to the injury.
Towards a real-time interface between a biomimetic model of sensorimotor cortex and a robotic arm
Dura-Bernal, Salvador; Chadderdon, George L; Neymotin, Samuel A; Francis, Joseph T; Lytton, William W
2015-01-01
Brain-machine interfaces can greatly improve the performance of prosthetics. Utilizing biomimetic neuronal modeling in brain machine interfaces (BMI) offers the possibility of providing naturalistic motor-control algorithms for control of a robotic limb. This will allow finer control of a robot, while also giving us new tools to better understand the brain’s use of electrical signals. However, the biomimetic approach presents challenges in integrating technologies across multiple hardware and software platforms, so that the different components can communicate in real-time. We present the first steps in an ongoing effort to integrate a biomimetic spiking neuronal model of motor learning with a robotic arm. The biomimetic model (BMM) was used to drive a simple kinematic two-joint virtual arm in a motor task requiring trial-and-error convergence on a single target. We utilized the output of this model in real time to drive mirroring motion of a Barrett Technology WAM robotic arm through a user datagram protocol (UDP) interface. The robotic arm sent back information on its joint positions, which was then used by a visualization tool on the remote computer to display a realistic 3D virtual model of the moving robotic arm in real time. This work paves the way towards a full closed-loop biomimetic brain-effector system that can be incorporated in a neural decoder for prosthetic control, to be used as a platform for developing biomimetic learning algorithms for controlling real-time devices. PMID:26709323
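The UDP link between the model computer and the robotic arm can be sketched as below; the address, port numbers, and two-joint packet format are hypothetical stand-ins, not the protocol used by the authors or by the Barrett WAM.

```python
# Sketch of a UDP link: the model side streams joint-angle commands and the
# robot side replies with its measured joint positions.
import socket
import struct

ROBOT_ADDR = ("192.168.0.10", 9000)     # hypothetical robot controller address
PACK = struct.Struct("!2d")             # two joint angles as network-order doubles

def send_command(sock, shoulder, elbow):
    sock.sendto(PACK.pack(shoulder, elbow), ROBOT_ADDR)

def receive_state(sock):
    data, _ = sock.recvfrom(PACK.size)
    return PACK.unpack(data)            # (shoulder, elbow) as reported by the robot

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("", 9001))                   # local port for the robot's replies (assumed)
# send_command(sock, 0.4, 1.2); shoulder, elbow = receive_state(sock)
```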
A Web-based cost-effective training tool with possible application to brain injury rehabilitation.
Wang, Peijun; Kreutzer, Ina Anna; Bjärnemo, Robert; Davies, Roy C
2004-06-01
Virtual reality (VR) has provoked enormous interest in the medical community. In particular, VR offers therapists new approaches for improving rehabilitation effects. However, most of these VR assistant tools are not very portable, extensible or economical. Due to the vast amount of 3D data, they are not suitable for Internet transfer. Furthermore, in order to run these VR systems smoothly, special hardware devices are needed. As a result, existing VR assistant tools tend to be available in hospitals but not in patients' homes. To overcome these disadvantages, as a case study, this paper proposes a Web-based Virtual Ticket Machine, called WBVTM, using VRML [VRML Consortium, The Virtual Reality Modeling Language: International Standard ISO/IEC DIS 14772-1, 1997, available at ], Java and EAI (External Authoring Interface) [Silicon Graphics, Inc., The External Authoring Interface (EAI), available at ], to help people with acquired brain injury (ABI) to relearn basic living skills at home at a low cost. As these technologies are open standard and feature usability on the Internet, WBVTM achieves the goals of portability, easy accessibility and cost-effectiveness.
Multi-Class Motor Imagery EEG Decoding for Brain-Computer Interfaces
Wang, Deng; Miao, Duoqian; Blohm, Gunnar
2012-01-01
Recent studies show that scalp electroencephalography (EEG), as a non-invasive interface, has great potential for brain-computer interfaces (BCIs). However, one factor that has limited practical applications of EEG-based BCIs so far is the difficulty of decoding brain signals in a reliable and efficient way. This paper proposes a new robust processing framework for decoding multi-class motor imagery (MI) that is based on five main processing steps. (i) Raw EEG segmentation without the need for visual artifact inspection. (ii) Considering that EEG recordings are often contaminated not just by electrooculography (EOG) but also by other types of artifacts, we propose to first implement an automatic artifact correction method that combines regression analysis with independent component analysis to recover the original source signals. (iii) The significant difference between frequency components, based on event-related (de-)synchronization and sample entropy, is then used to find non-contiguous discriminating rhythms. After spectral filtering using the discriminating rhythms, a channel selection algorithm is used to select only relevant channels. (iv) Feature vectors are extracted based on the inter-class diversity and time-varying dynamic characteristics of the signals. (v) Finally, a support vector machine is employed for four-class classification. We tested the proposed algorithm on experimental data from dataset 2a of BCI competition IV (2008). The overall four-class kappa values (between 0.41 and 0.80) were comparable to those of other models but were obtained without removing any artifact-contaminated trials. The performance showed that multi-class MI tasks can be reliably discriminated using artifact-contaminated EEG recordings from a few channels. This may be a promising avenue for online robust EEG-based BCI applications. PMID:23087607
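For context, the sketch below shows a stripped-down multi-class MI decoding pipeline: band-pass filtering, log-variance features, and an SVM evaluated with cross-validation. It omits the paper's artifact correction, sample-entropy-based rhythm selection, and channel selection, and it runs on synthetic data shaped like a BCI Competition IV-2a recording; the band edges and hyperparameters are illustrative assumptions.

    import numpy as np
    from scipy.signal import butter, filtfilt
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    def bandpass(x, lo, hi, fs, order=4):
        b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
        return filtfilt(b, a, x, axis=-1)

    def log_variance_features(trials, fs, bands=((8, 12), (18, 26))):
        """trials: (n_trials, n_channels, n_samples) -> (n_trials, n_channels * n_bands)."""
        feats = []
        for lo, hi in bands:
            filtered = bandpass(trials, lo, hi, fs)
            feats.append(np.log(np.var(filtered, axis=-1)))
        return np.concatenate(feats, axis=1)

    # Synthetic stand-in for a four-class motor-imagery set (IV-2a-like dimensions)
    fs, n_trials, n_channels, n_samples = 250, 120, 22, 750
    X_raw = np.random.randn(n_trials, n_channels, n_samples)
    y = np.random.randint(0, 4, n_trials)

    X = log_variance_features(X_raw, fs)
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
    print(cross_val_score(clf, X, y, cv=5).mean())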
Intention Concepts and Brain-Machine Interfacing
Thinnes-Elker, Franziska; Iljina, Olga; Apostolides, John Kyle; Kraemer, Felicitas; Schulze-Bonhage, Andreas; Aertsen, Ad; Ball, Tonio
2012-01-01
Intentions, including their temporal properties and semantic content, are receiving increased attention, and neuroscientific studies in humans vary with respect to the topography of intention-related neural responses. This may reflect the fact that the kind of intentions investigated in one study may not be exactly the same kind investigated in the other. Fine-grained intention taxonomies developed in the philosophy of mind may be useful to identify the neural correlates of well-defined types of intentions, as well as to disentangle them from other related mental states, such as mere urges to perform an action. Intention-related neural signals may be exploited by brain-machine interfaces (BMIs) that are currently being developed to restore speech and motor control in paralyzed patients. Such BMI devices record the brain activity of the agent, interpret (“decode”) the agent’s intended action, and send the corresponding execution command to an artificial effector system, e.g., a computer cursor or a robotic arm. In the present paper, we evaluate the potential of intention concepts from philosophy of mind to improve the performance and safety of BMIs based on higher-order, intention-related control signals. To this end, we address the distinction between future-, present-directed, and motor intentions, as well as the organization of intentions in time, specifically to what extent it is sequential or hierarchical. This has consequences as to whether these different types of intentions can be expected to occur simultaneously or not. We further illustrate how it may be useful or even necessary to distinguish types of intentions exposited in philosophy, including yes- vs. no-intentions and oblique vs. direct intentions, to accurately decode the agent’s intentions from neural signals in practical BMI applications. PMID:23162504
NASA Astrophysics Data System (ADS)
Perruchoud, David; Pisotta, Iolanda; Carda, Stefano; Murray, Micah M.; Ionta, Silvio
2016-08-01
Objective. Brain-machine interfaces (BMIs) re-establish communication channels between the nervous system and an external device. The use of BMI technology has generated significant developments in rehabilitative medicine, promising new ways to restore lost sensory-motor functions. However and despite high-caliber basic research, only a few prototypes have successfully left the laboratory and are currently home-deployed. Approach. The failure of this laboratory-to-user transfer likely relates to the absence of BMI solutions for providing naturalistic feedback about the consequences of the BMI’s actions. To overcome this limitation, nowadays cutting-edge BMI advances are guided by the principle of biomimicry; i.e. the artificial reproduction of normal neural mechanisms. Main results. Here, we focus on the importance of somatosensory feedback in BMIs devoted to reproducing movements with the goal of serving as a reference framework for future research on innovative rehabilitation procedures. First, we address the correspondence between users’ needs and BMI solutions. Then, we describe the main features of invasive and non-invasive BMIs, including their degree of biomimicry and respective advantages and drawbacks. Furthermore, we explore the prevalent approaches for providing quasi-natural sensory feedback in BMI settings. Finally, we cover special situations that can promote biomimicry and we present the future directions in basic research and clinical applications. Significance. The continued incorporation of biomimetic features into the design of BMIs will surely serve to further ameliorate the realism of BMIs, as well as tremendously improve their actuation, acceptance, and use.
Design and validation of a real-time spiking-neural-network decoder for brain-machine interfaces.
Dethier, Julie; Nuyujukian, Paul; Ryu, Stephen I; Shenoy, Krishna V; Boahen, Kwabena
2013-06-01
Cortically-controlled motor prostheses aim to restore functions lost to neurological disease and injury. Several proof of concept demonstrations have shown encouraging results, but barriers to clinical translation still remain. In particular, intracortical prostheses must satisfy stringent power dissipation constraints so as not to damage cortex. One possible solution is to use ultra-low power neuromorphic chips to decode neural signals for these intracortical implants. The first step is to explore in simulation the feasibility of translating decoding algorithms for brain-machine interface (BMI) applications into spiking neural networks (SNNs). Here we demonstrate the validity of the approach by implementing an existing Kalman-filter-based decoder in a simulated SNN using the Neural Engineering Framework (NEF), a general method for mapping control algorithms onto SNNs. To measure this system's robustness and generalization, we tested it online in closed-loop BMI experiments with two rhesus monkeys. Across both monkeys, a Kalman filter implemented using a 2000-neuron SNN has comparable performance to that of a Kalman filter implemented using standard floating point techniques. These results demonstrate the tractability of SNN implementations of statistical signal processing algorithms on different monkeys and for several tasks, suggesting that a SNN decoder, implemented on a neuromorphic chip, may be a feasible computational platform for low-power fully-implanted prostheses. The validation of this closed-loop decoder system and the demonstration of its robustness and generalization hold promise for SNN implementations on an ultra-low power neuromorphic chip using the NEF.
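The decoder being mapped onto the SNN is a standard linear-Gaussian Kalman filter. A minimal floating-point version is sketched below for reference; the model matrices would normally be fit to training data, and the NEF mapping onto spiking neurons, which is the paper's actual contribution, is not shown.

    import numpy as np

    class KalmanDecoder:
        """Minimal linear-Gaussian decoder: hidden state x_t = [vel_x, vel_y]."""
        def __init__(self, A, W, H, Q):
            self.A, self.W, self.H, self.Q = A, W, H, Q   # state model, state noise, observation model, observation noise

        def reset(self, dim):
            self.x = np.zeros(dim)
            self.P = np.eye(dim)

        def step(self, z):
            # predict
            x_pred = self.A @ self.x
            P_pred = self.A @ self.P @ self.A.T + self.W
            # update with the binned firing-rate vector z
            S = self.H @ P_pred @ self.H.T + self.Q
            K = P_pred @ self.H.T @ np.linalg.inv(S)
            self.x = x_pred + K @ (z - self.H @ x_pred)
            self.P = (np.eye(len(self.x)) - K @ self.H) @ P_pred
            return self.x

    # Toy example: 96 channels of binned spike counts driving a 2-D velocity estimate
    n_units, dim = 96, 2
    rng = np.random.default_rng(0)
    dec = KalmanDecoder(A=np.eye(dim) * 0.95,
                        W=np.eye(dim) * 1e-3,
                        H=rng.normal(size=(n_units, dim)),   # placeholders; fit from data in practice
                        Q=np.eye(n_units) * 0.5)
    dec.reset(dim)
    for _ in range(50):
        vel = dec.step(rng.poisson(5, n_units).astype(float))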
Viventi, Jonathan; Kim, Dae-Hyeong; Vigeland, Leif; Frechette, Eric S; Blanco, Justin A; Kim, Yun-Soung; Avrin, Andrew E; Tiruvadi, Vineet R; Hwang, Suk-Won; Vanleer, Ann C; Wulsin, Drausin F; Davis, Kathryn; Gelber, Casey E; Palmer, Larry; Van der Spiegel, Jan; Wu, Jian; Xiao, Jianliang; Huang, Yonggang; Contreras, Diego; Rogers, John A; Litt, Brian
2011-11-13
Arrays of electrodes for recording and stimulating the brain are used throughout clinical medicine and basic neuroscience research, yet are unable to sample large areas of the brain while maintaining high spatial resolution because of the need to individually wire each passive sensor at the electrode-tissue interface. To overcome this constraint, we developed new devices that integrate ultrathin and flexible silicon nanomembrane transistors into the electrode array, enabling new dense arrays of thousands of amplified and multiplexed sensors that are connected using fewer wires. We used this system to record spatial properties of cat brain activity in vivo, including sleep spindles, single-trial visual evoked responses and electrographic seizures. We found that seizures may manifest as recurrent spiral waves that propagate in the neocortex. The developments reported here herald a new generation of diagnostic and therapeutic brain-machine interface devices.
Barz, F; Livi, A; Lanzilotto, M; Maranesi, M; Bonini, L; Paul, O; Ruther, P
2017-06-01
Application-specific designs of electrode arrays offer an improved effectiveness for providing access to targeted brain regions in neuroscientific research and brain machine interfaces. The simultaneous and stable recording of neuronal ensembles is the main goal in the design of advanced neural interfaces. Here, we describe the development and assembly of highly customizable 3D microelectrode arrays and demonstrate their recording performance in chronic applications in non-human primates. System assembly relies on a microfabricated stacking component that is combined with Michigan-style silicon-based electrode arrays interfacing highly flexible polyimide cables. Based on the novel stacking component, the lead time for implementing prototypes with altered electrode pitches is minimal. Once the fabrication and assembly accuracy of the stacked probes have been characterized, their recording performance is assessed during in vivo chronic experiments in awake rhesus macaques (Macaca mulatta) trained to execute reaching-grasping motor tasks. Using a single set of fabrication tools, we implemented three variants of the stacking component for electrode distances of 250, 300 and 350 µm in the stacking direction. We assembled neural probes with up to 96 channels and an electrode density of 98 electrodes mm⁻². Furthermore, we demonstrate that the shank alignment is accurate to a few µm at an angular alignment better than 1°. Three 64-channel probes were chronically implanted in two monkeys providing single-unit activity on more than 60% of all channels and excellent recording stability. Histological tissue sections, obtained 52 d after implantation from one of the monkeys, showed minimal tissue damage, in accordance with the high quality and stability of the recorded neural activity. The versatility of our fabrication and assembly approach should significantly support the development of ideal interface geometries for a broad spectrum of applications. With the demonstrated performance, these probes are suitable for both semi-chronic and chronic applications.
A Brain-Machine-Brain Interface for Rewiring of Cortical Circuitry after Traumatic Brain Injury
2011-09-01
parietal bones, and a threaded rod was implanted into the interparietal bone. These were affixed to the skull with dental acrylic. A hybrid, 16...then sealed with a silicone polymer (Kwik-Cast, WPI). The base of the probe connector was lowered onto the dental acrylic and fixed into place. An...the skull using a dental drill with a trephine bit over the cortex contralateral to the dominant forelimb. A total of 14 animals received CCI in the
A Brain-Machine-Brain Interface for Rewiring of Cortical Circuitry after Traumatic Brain Injury
2015-11-01
asymmetric biphasic current pulses up to ~100 µA with passive discharge, and a µW-level digital signal processing (DSP) unit for real-time SAR based on...compliance of 4.68 V with a 5 V supply, when configured for monophasic stimulation with passive discharge. The programmable microstimulator could also...severely disrupted. While the underlying white matter was intact, distortion of the most superficial aspects of the corona radiata was evident. In the
Development and experimentation of an eye/brain/task testbed
NASA Technical Reports Server (NTRS)
Harrington, Nora; Villarreal, James
1987-01-01
The principal objective is to develop a laboratory testbed that will provide a unique capability to elicit, control, record, and analyze the relationship of operator task loading, operator eye movement, and operator brain wave data in a computer system environment. The ramifications of an integrated eye/brain monitor for the man-machine interface are staggering. The success of such a system would benefit users in space and defense applications, paraplegics, and operators who must monitor low-event displays (nuclear power plants, air defense, etc.).
Open multi-agent control architecture to support virtual-reality-based man-machine interfaces
NASA Astrophysics Data System (ADS)
Freund, Eckhard; Rossmann, Juergen; Brasch, Marcel
2001-10-01
Projective Virtual Reality is a new and promising approach to intuitively operable man-machine interfaces for the commanding and supervision of complex automation systems. The user-interface part of Projective Virtual Reality builds heavily on the latest Virtual Reality techniques, a task-deduction component and automatic action-planning capabilities. In order to realize man-machine interfaces for complex applications, not only the Virtual Reality part has to be considered; the capabilities of the underlying robot and automation controller are also of great importance. This paper presents a control architecture that has proved to be an ideal basis for the realization of complex robotic and automation systems that are controlled by Virtual Reality based man-machine interfaces. The architecture does not just provide a well-suited framework for the real-time control of a multi-robot system but also supports Virtual Reality metaphors and augmentations which facilitate the user's task of commanding and supervising a complex system. The developed control architecture has already been used for a number of applications. Its capability to integrate sensor information from sensors of different levels of abstraction in real time helps to make the realized automation system very responsive to real-world changes. In this paper, the architecture is described comprehensively, its main building blocks are discussed, and one realization built on an open-source real-time operating system is presented. The software design and the features of the architecture which make it generally applicable to the distributed control of automation agents in real-world applications are explained. Furthermore, its application to the commanding and control of experiments in the Columbus space laboratory, the European contribution to the International Space Station (ISS), is described as one example.
NASA Astrophysics Data System (ADS)
Chun, Honggu; Chung, Taek Dong
2015-07-01
Iontronics is an emerging technology based on the sophisticated control of ions as signal carriers, bridging solid-state electronics and biological systems. Its principles are found in nature, e.g., in the information transduction and processing of the brain, in which neurons are dynamically polarized or depolarized by ion transport across cell membranes. This suggests the operating principle of aqueous circuits made of predesigned structures and functional materials that interact characteristically with ions of various charge, mobility, and affinity. Working in aqueous environments, iontronic devices have profound implications for biocompatible or biodegradable logic circuits for sensing, eco-friendly monitoring, and brain-machine interfacing. Furthermore, iontronics based on multi-ionic carriers sheds light on futuristic biomimetic information processing. In this review, we provide an overview of the historical achievements and the current state of iontronics with regard to theory, fabrication, integration, and applications, concluding with comments on where the technology may advance.
2012-01-01
A brain-computer interface (BCI) is a communication system that can help users interact with the outside environment by translating brain signals into machine commands. The use of electroencephalographic (EEG) signals has become the most common approach for a BCI because of their usability and strong reliability. Many EEG-based BCI devices have been developed with traditional wet or micro-electro-mechanical-system (MEMS) type EEG sensors. However, those traditional sensors are uncomfortable to wear and require conductive gel and skin preparation on the part of the user. Therefore, acquiring EEG signals in a comfortable and convenient manner is an important factor that should be incorporated into a novel BCI device. In the present study, a wearable, wireless and portable EEG-based BCI device with dry foam-based EEG sensors was developed and was demonstrated using a gaming control application. The dry EEG sensors operated without conductive gel; nevertheless, they provided good conductivity and acquired EEG signals effectively by adapting to irregular skin surfaces and by maintaining proper skin-sensor impedance at the forehead site. We also demonstrated a real-time cognitive stage detection application of gaming control using the proposed portable device. The results of the present study indicate that using this portable EEG-based BCI device to conveniently and effectively control devices in the outside world provides a practical approach for rehabilitation engineering research. PMID:22284235
NASA Technical Reports Server (NTRS)
Barrett, Eamon B. (Editor); Pearson, James J. (Editor)
1989-01-01
Image understanding concepts and models, image understanding systems and applications, advanced digital processors and software tools, and advanced man-machine interfaces are among the topics discussed. Particular papers are presented on such topics as neural networks for computer vision, object-based segmentation and color recognition in multispectral images, the application of image algebra to image measurement and feature extraction, and the integration of modeling and graphics to create an infrared signal processing test bed.
Waytowich, Nicholas R.; Lawhern, Vernon J.; Bohannon, Addison W.; ...
2016-09-22
Recent advances in signal processing and machine learning techniques have enabled the application of Brain-Computer Interface (BCI) technologies to fields such as medicine, industry, and recreation; however, BCIs still suffer from the requirement of frequent calibration sessions due to the intra- and inter-individual variability of brain signals, which makes calibration suppression through transfer learning an area of increasing interest for the development of practical BCI systems. In this paper, we present an unsupervised transfer method (spectral transfer using information geometry, STIG), which ranks and combines unlabeled predictions from an ensemble of information geometry classifiers built on data from individual training subjects. The STIG method is validated in both off-line and real-time feedback analysis during a rapid serial visual presentation task (RSVP). For detection of single-trial, event-related potentials (ERPs), the proposed method can significantly outperform existing calibration-free techniques as well as outperform traditional within-subject calibration techniques when limited data are available. This method demonstrates that unsupervised transfer learning for single-trial detection in ERP-based BCIs can be achieved without the requirement of costly training data, representing a step forward in the overall goal of achieving a practical user-independent BCI system.
Selectivity and Longevity of Peripheral-Nerve and Machine Interfaces: A Review
Ghafoor, Usman; Kim, Sohee; Hong, Keum-Shik
2017-01-01
For individuals with upper-extremity amputation, normal activities of daily living are either no longer possible or require additional effort and time. With the aim of restoring their sensory and motor functions, theoretical and technological investigations have been carried out in the field of neuroprosthetic systems. For transmission of sensory feedback, several interfacing modalities including indirect (non-invasive), direct-to-peripheral-nerve (invasive), and cortical stimulation have been applied. Peripheral-nerve interfaces hold an edge over cortical interfaces owing to the difficulty of reliably acquiring cortical brain signals. Peripheral-nerve interfaces are highly dependent on interface design and must be biocompatible with the nerves to achieve prolonged stability and longevity. Another criterion is the selection of nerves that allows minimal invasiveness and damage as well as high selectivity for a large number of nerve fascicles. In this paper, we review the nerve-machine interface modalities noted above, with a focus on peripheral-nerve interfaces, which are responsible for the provision of sensory feedback. The invasive interfaces for recording and stimulation of electro-neurographic signals include intra-fascicular, regenerative-type interfaces that provide multiple contact channels to a group of axons inside the nerve, and extra-neural cuff-type interfaces that enable interaction with many axons around the periphery of the nerve. The section Current Prosthetic Technology summarizes the advancements made to date in the field of neuroprosthetics toward the achievement of a bidirectional nerve-machine interface, with a focus on sensory feedback. In the Discussion section, the authors propose a hybrid interface technique for achieving better selectivity and long-term stability using the available nerve-interfacing techniques. PMID:29163122
Software platform for managing the classification of error- related potentials of observers
NASA Astrophysics Data System (ADS)
Asvestas, P.; Ventouras, E.-C.; Kostopoulos, S.; Sidiropoulos, K.; Korfiatis, V.; Korda, A.; Uzunolglu, A.; Karanasiou, I.; Kalatzis, I.; Matsopoulos, G.
2015-09-01
Human learning is partly based on observation. Electroencephalographic recordings of subjects who perform acts (actors) or observe actors (observers) contain a negative waveform in the Evoked Potentials (EPs) of actors who commit errors and of observers who observe the error-committing actors. This waveform is called the Error-Related Negativity (ERN). Its detection has applications in the context of Brain-Computer Interfaces. The present work describes a software system developed for managing EPs of observers, with the aim of classifying them into observations of either correct or incorrect actions. It consists of an integrated platform for the storage, management, processing and classification of EPs recorded during error-observation experiments. The system was developed using C# and the following development tools and frameworks: MySQL, .NET Framework, Entity Framework and Emgu CV, for interfacing with the machine learning library of OpenCV. Up to six features can be computed per EP recording per electrode. The user can select among various feature selection algorithms and then proceed to train one of three types of classifiers: Artificial Neural Networks, Support Vector Machines, or k-nearest neighbour. The trained classifier can then be used to classify any EP curve that has been entered into the database.
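A rough Python analogue of the platform's feature-selection-plus-classifier workflow is sketched below (the system itself is written in C# against Emgu CV/OpenCV). The synthetic feature matrix, the univariate F-test selector, and the classifier hyperparameters are assumptions used only to illustrate comparing the three classifier types.

    import numpy as np
    from sklearn.feature_selection import SelectKBest, f_classif
    from sklearn.model_selection import cross_val_score
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.neural_network import MLPClassifier
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    # Stand-in for the stored EP features: up to 6 features x 8 electrodes per recording
    rng = np.random.default_rng(1)
    X = rng.normal(size=(200, 48))
    y = rng.integers(0, 2, 200)          # correct- vs error-observation labels

    classifiers = {
        "ANN": MLPClassifier(hidden_layer_sizes=(20,), max_iter=1000, random_state=0),
        "SVM": SVC(kernel="rbf"),
        "kNN": KNeighborsClassifier(n_neighbors=5),
    }
    for name, clf in classifiers.items():
        pipe = make_pipeline(StandardScaler(),
                             SelectKBest(f_classif, k=12),   # one of several possible selectors
                             clf)
        print(name, cross_val_score(pipe, X, y, cv=5).mean())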
Brain-machine interfaces: electrophysiological challenges and limitations.
Lega, Bradley C; Serruya, Mijail D; Zaghloul, Kareem A
2011-01-01
Brain-machine interfaces (BMI) seek to communicate directly with the human nervous system in order to diagnose and treat intrinsic neurological disorders. While the first generation of these devices has realized significant clinical successes, such devices often rely on gross electrical stimulation using empirically derived parameters, through open-loop mechanisms of action that are not yet fully understood. Their limitations reflect the inherent challenge in developing the next generation of these devices. This review identifies lessons learned from the first generation of BMI devices (chiefly deep brain stimulation), identifying key problems whose solutions will aid the development of the next generation of technologies. Our analysis examines four hypotheses for the mechanism by which brain stimulation alters surrounding neurophysiologic activity. We then focus on motor prosthetics, describing various approaches to overcoming the problems of decoding neural signals. We next turn to visual prosthetics, an area in which the challenge of matching signal coding to neural architecture has been partially overcome. Finally, we close with a review of cortical stimulation, examining basic principles that will be incorporated into the design of future devices. Throughout the review, we relate the issues of each specific topic to the common thread of BMI research: translating new knowledge of network neuroscience into improved devices for neuromodulation.
Balasubramanian, Karthikeyan; Southerland, Joshua; Vaidya, Mukta; Qian, Kai; Eleryan, Ahmed; Fagg, Andrew H; Slutzky, Marc; Oweiss, Karim; Hatsopoulos, Nicholas
2013-01-01
Operant conditioning with biofeedback has been shown to be an effective method to modify neural activity to generate goal-directed actions in a brain-machine interface. It is particularly useful when neural activity cannot be mathematically mapped to motor actions of the actual body such as in the case of amputation. Here, we implement an operant conditioning approach with visual feedback in which an amputated monkey is trained to control a multiple degree-of-freedom robot to perform a reach-to-grasp behavior. A key innovation is that each controlled dimension represents a behaviorally relevant synergy among a set of joint degrees-of-freedom. We present a number of behavioral metrics by which to assess improvements in BMI control with exposure to the system. The use of non-human primates with chronic amputation is arguably the most clinically-relevant model of human amputation that could have direct implications for developing a neural prosthesis to treat humans with missing upper limbs.
A Brain-Machine Interface Operating with a Real-Time Spiking Neural Network Control Algorithm.
Dethier, Julie; Nuyujukian, Paul; Eliasmith, Chris; Stewart, Terry; Elassaad, Shauki A; Shenoy, Krishna V; Boahen, Kwabena
2011-01-01
Motor prostheses aim to restore function to disabled patients. Despite compelling proof of concept systems, barriers to clinical translation remain. One challenge is to develop a low-power, fully-implantable system that dissipates only minimal power so as not to damage tissue. To this end, we implemented a Kalman-filter based decoder via a spiking neural network (SNN) and tested it in brain-machine interface (BMI) experiments with a rhesus monkey. The Kalman filter was trained to predict the arm's velocity and mapped on to the SNN using the Neural Engineering Framework (NEF). A 2,000-neuron embedded Matlab SNN implementation runs in real-time and its closed-loop performance is quite comparable to that of the standard Kalman filter. The success of this closed-loop decoder holds promise for hardware SNN implementations of statistical signal processing algorithms on neuromorphic chips, which may offer power savings necessary to overcome a major obstacle to the successful clinical translation of neural motor prostheses.
NASA Technical Reports Server (NTRS)
Roske-Hofstrand, Renate J.
1990-01-01
The man-machine interface and its influence on the characteristics of computer displays in automated air traffic are discussed. The graphical presentation of spatial relationships, the problems it poses for air traffic control, and the solutions to such problems are addressed. Psychological factors involved in the man-machine interface are stressed.
Degenhart, Alan D.; Eles, James; Dum, Richard; Mischel, Jessica L.; Smalianchuk, Ivan; Endler, Bridget; Ashmore, Robin C.; Tyler-Kabara, Elizabeth C.; Hatsopoulos, Nicholas G.; Wang, Wei; Batista, Aaron P.; Cui, X. Tracy
2016-01-01
Electrocorticography (ECoG), used as a neural recording modality for brain-machine interfaces (BMIs), potentially allows for field potentials to be recorded from the surface of the cerebral cortex for long durations without suffering the host-tissue reaction to the extent that it is common with intracortical microelectrodes. Though the stability of signals obtained from chronically-implanted ECoG electrodes has begun receiving attention, to date little work has characterized the effects of long-term implantation of ECoG electrodes on underlying cortical tissue. We implanted a high-density ECoG electrode grid subdurally over cortical motor areas of a Rhesus macaque for 666 days. Histological analysis revealed minimal damage to the cortex underneath the implant, though the grid itself was encapsulated in collagenous tissue. We observed macrophages and foreign body giant cells at the tissue-array interface, indicative of a stereotypical foreign body response. Despite this encapsulation, cortical modulation during reaching movements was observed more than 18 months post-implantation. These results suggest that ECoG may provide a means by which stable chronic cortical recordings can be obtained with comparatively little tissue damage, facilitating the development of clinically-viable brain-machine interface systems. PMID:27351722
Flint, Robert D; Scheid, Michael R; Wright, Zachary A; Solla, Sara A; Slutzky, Marc W
2016-03-23
The human motor system is capable of remarkably precise control of movements--consider the skill of professional baseball pitchers or surgeons. This precise control relies upon stable representations of movements in the brain. Here, we investigated the stability of cortical activity at multiple spatial and temporal scales by recording local field potentials (LFPs) and action potentials (multiunit spikes, MSPs) while two monkeys controlled a cursor either with their hand or directly from the brain using a brain-machine interface. LFPs and some MSPs were remarkably stable over time periods ranging from 3 d to over 3 years; overall, LFPs were significantly more stable than spikes. We then assessed whether the stability of all neural activity, or just a subset of activity, was necessary to achieve stable behavior. We showed that projections of neural activity into the subspace relevant to the task (the "task-relevant space") were significantly more stable than were projections into the task-irrelevant (or "task-null") space. This provides cortical evidence in support of the minimum intervention principle, which proposes that optimal feedback control (OFC) allows the brain to tightly control only activity in the task-relevant space while allowing activity in the task-irrelevant space to vary substantially from trial to trial. We found that the brain appears capable of maintaining stable movement representations for extremely long periods of time, particularly so for neural activity in the task-relevant space, which agrees with OFC predictions. It is unknown whether cortical signals are stable for more than a few weeks. Here, we demonstrate that motor cortical signals can exhibit high stability over several years. This result is particularly important to brain-machine interfaces because it could enable stable performance with infrequent recalibration. Although we can maintain movement accuracy over time, movement components that are unrelated to the goals of a task (such as elbow position during reaching) often vary from trial to trial. This is consistent with the minimum intervention principle of optimal feedback control. We provide evidence that the motor cortex acts according to this principle: cortical activity is more stable in the task-relevant space and more variable in the task-irrelevant space. Copyright © 2016 the authors 0270-6474/16/363623-10$15.00/0.
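The task-relevant/task-null decomposition used in this analysis can be illustrated with a few lines of linear algebra: fit a linear readout from neural activity to cursor kinematics, then project activity onto the row space of the readout (task-relevant) and its orthogonal complement (task-null). The sketch below uses synthetic data and an ordinary-least-squares readout; it is a simplified stand-in for the paper's decoder and across-day stability analysis.

    import numpy as np

    rng = np.random.default_rng(2)
    n_samples, n_units = 5000, 40
    neural = rng.normal(size=(n_samples, n_units))             # binned, mean-subtracted activity
    W_true = rng.normal(size=(2, n_units))                     # ground-truth map to cursor velocity
    cursor = neural @ W_true.T + 0.1 * rng.normal(size=(n_samples, 2))

    # Fit the readout from data, as a decoder calibration would
    W_hat, *_ = np.linalg.lstsq(neural, cursor, rcond=None)    # (n_units, 2)
    W_hat = W_hat.T                                            # (2, n_units)

    # Task-relevant space = row space of the readout; task-null space = its orthogonal complement
    P_rel = W_hat.T @ np.linalg.inv(W_hat @ W_hat.T) @ W_hat   # (n_units, n_units) projector
    P_null = np.eye(n_units) - P_rel

    relevant_part = neural @ P_rel
    null_part = neural @ P_null
    print("variance in task-relevant space:", relevant_part.var())
    print("variance in task-null space:    ", null_part.var())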
Takano, Kouji; Hata, Naoki; Kansaku, Kenji
2011-01-01
The brain–machine interface (BMI) or brain–computer interface is a new interface technology that uses neurophysiological signals from the brain to control external machines or computers. This technology is expected to support daily activities, especially for persons with disabilities. To expand the range of activities enabled by this type of interface, here, we added augmented reality (AR) to a P300-based BMI. In this new system, we used a see-through head-mount display (HMD) to create control panels with flicker visual stimuli to support the user in areas close to controllable devices. When the attached camera detects an AR marker, the position and orientation of the marker are calculated, and the control panel for the pre-assigned appliance is created by the AR system and superimposed on the HMD. The participants were required to control system-compatible devices, and they successfully operated them without significant training. Online performance with the HMD was not different from that using an LCD monitor. Posterior and lateral (right or left) channel selections contributed to operation of the AR–BMI with both the HMD and LCD monitor. Our results indicate that AR–BMI systems operated with a see-through HMD may be useful in building advanced intelligent environments. PMID:21541307
Interface Metaphors for Interactive Machine Learning
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jasper, Robert J.; Blaha, Leslie M.
To promote more interactive and dynamic machine learning, we revisit the notion of user-interface metaphors. User-interface metaphors provide intuitive constructs for supporting user needs through interface design elements. A user-interface metaphor provides a visual or action pattern that leverages a user's knowledge of another domain. Metaphors suggest both the visual representations that should be used in a display as well as the interactions that should be afforded to the user. We argue that user-interface metaphors can also offer a method of extracting interaction-based user feedback for use in machine learning. Metaphors offer indirect, context-based information that can be used in addition to explicit user inputs, such as user-provided labels. Implicit information from user interactions with metaphors can augment explicit user input for active learning paradigms. Or it might be leveraged in systems where explicit user inputs are more challenging to obtain. Each interaction with the metaphor provides an opportunity to gather data and learn. We argue this approach is especially important in streaming applications, where we desire machine learning systems that can adapt to dynamic, changing data.
NASA Astrophysics Data System (ADS)
Milekovic, Tomislav; Fischer, Jörg; Pistohl, Tobias; Ruescher, Johanna; Schulze-Bonhage, Andreas; Aertsen, Ad; Rickert, Jörn; Ball, Tonio; Mehring, Carsten
2012-08-01
A brain-machine interface (BMI) can be used to control movements of an artificial effector, e.g. movements of an arm prosthesis, by motor cortical signals that control the equivalent movements of the corresponding body part, e.g. arm movements. This approach has been successfully applied in monkeys and humans by accurately extracting parameters of movements from the spiking activity of multiple single neurons. We show that the same approach can be realized using brain activity measured directly from the surface of the human cortex using electrocorticography (ECoG). Five subjects, implanted with ECoG implants for the purpose of epilepsy assessment, took part in our study. Subjects used directionally dependent ECoG signals, recorded during active movements of a single arm, to control a computer cursor in one out of two directions. Significant BMI control was achieved in four out of five subjects with correct directional decoding in 69%-86% of the trials (75% on average). Our results demonstrate the feasibility of an online BMI using decoding of movement direction from human ECoG signals. Thus, to achieve such BMIs, ECoG signals might be used in conjunction with or as an alternative to intracortical neural signals.
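A generic sketch of single-trial direction decoding from ECoG is shown below: band-power features (low-frequency and high-gamma) fed to a linear discriminant classifier with cross-validation. The bands, the classifier, and the synthetic data are illustrative assumptions and do not reproduce the online decoder used in the study.

    import numpy as np
    from scipy.signal import welch
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    def band_power(trials, fs, lo, hi):
        """Average spectral power of each channel in a band; trials: (n, ch, samples)."""
        f, pxx = welch(trials, fs=fs, nperseg=256, axis=-1)
        mask = (f >= lo) & (f <= hi)
        return pxx[..., mask].mean(axis=-1)

    fs, n_trials, n_channels, n_samples = 1000, 80, 32, 1000
    rng = np.random.default_rng(3)
    ecog = rng.normal(size=(n_trials, n_channels, n_samples))   # synthetic single-trial ECoG
    direction = rng.integers(0, 2, n_trials)                    # one of two movement directions

    # Low-frequency and high-gamma band power as directional features
    X = np.hstack([band_power(ecog, fs, 0.5, 4), band_power(ecog, fs, 70, 150)])
    clf = make_pipeline(StandardScaler(), LinearDiscriminantAnalysis())
    print(cross_val_score(clf, X, direction, cv=5).mean())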
Zafar, Raheel; Dass, Sarat C; Malik, Aamir Saeed
2017-01-01
Electroencephalogram (EEG)-based decoding of human brain activity is challenging, owing to the low spatial resolution of EEG. However, EEG is an important technique, especially for brain-computer interface applications. In this study, a novel algorithm is proposed to decode brain activity associated with different types of images. In this hybrid algorithm, a convolutional neural network is modified for the extraction of features, a t-test is used for the selection of significant features, and likelihood ratio-based score fusion is used for the prediction of brain activity. The proposed algorithm takes input data from multichannel EEG time series, which is also known as multivariate pattern analysis. A comprehensive analysis was conducted using data from 30 participants. The results from the proposed method are compared with currently recognized feature extraction and classification/prediction techniques. The wavelet transform-support vector machine method, the most widely used feature extraction and prediction method, showed an accuracy of 65.7%, whereas the proposed method predicted the novel data with an improved accuracy of 79.9%. In conclusion, the proposed algorithm outperformed the current feature extraction and prediction methods.
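The selection-and-fusion stages of the hybrid algorithm can be illustrated as below: a two-sample t-test keeps discriminative features, and per-class Gaussian likelihood ratios are summed to score each trial. The CNN feature extractor is replaced here by a synthetic feature matrix, and the Gaussian fusion model is a simplifying assumption rather than the paper's exact formulation.

    import numpy as np
    from scipy.stats import ttest_ind, norm

    rng = np.random.default_rng(4)
    n_trials, n_features = 300, 64
    X = rng.normal(size=(n_trials, n_features))    # stand-in for features from a (modified) CNN
    y = rng.integers(0, 2, n_trials)               # two image categories for simplicity
    train, test = np.arange(0, 200), np.arange(200, 300)

    # (1) keep only features whose class means differ significantly (two-sample t-test)
    _, p = ttest_ind(X[train][y[train] == 0], X[train][y[train] == 1], axis=0)
    selected = np.where(p < 0.05)[0]

    # (2) likelihood-ratio score fusion: model each selected feature per class as Gaussian
    def log_lr(sample):
        lr = 0.0
        for j in selected:
            f0 = X[train][y[train] == 0, j]
            f1 = X[train][y[train] == 1, j]
            lr += norm.logpdf(sample[j], f1.mean(), f1.std() + 1e-6) \
                - norm.logpdf(sample[j], f0.mean(), f0.std() + 1e-6)
        return lr

    pred = np.array([1 if log_lr(X[i]) > 0 else 0 for i in test])
    print("accuracy:", (pred == y[test]).mean())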
Body-Machine Interfaces after Spinal Cord Injury: Rehabilitation and Brain Plasticity.
Seáñez-González, Ismael; Pierella, Camilla; Farshchiansadegh, Ali; Thorp, Elias B; Wang, Xue; Parrish, Todd; Mussa-Ivaldi, Ferdinando A
2016-12-19
The purpose of this study was to identify rehabilitative effects and changes in white matter microstructure in people with high-level spinal cord injury following bilateral upper-extremity motor skill training. Five subjects with high-level (C5-C6) spinal cord injury (SCI) performed five visuo-spatial motor training tasks over 12 sessions (2-3 sessions per week). Subjects controlled a two-dimensional cursor with bilateral simultaneous movements of the shoulders using a non-invasive inertial measurement unit-based body-machine interface. Subjects' upper-body ability was evaluated before the start, in the middle and a day after the completion of training. MR imaging data were acquired before the start and within two days of the completion of training. Subjects learned to use upper-body movements that survived the injury to control the body-machine interface and improved their performance with practice. Motor training increased Manual Muscle Test scores and the isometric force of subjects' shoulders and upper arms. Moreover, motor training increased fractional anisotropy (FA) values in the cingulum of the left hemisphere by 6.02% on average, indicating localized white matter microstructure changes induced by activity-dependent modulation of axon diameter, myelin thickness or axon number. This body-machine interface may serve as a platform to develop a new generation of assistive-rehabilitative devices that promote the use of, and that re-strengthen, the motor and sensory functions that survived the injury.
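Body-machine interfaces of this kind are often built by reducing the high-dimensional body signals to a two-dimensional cursor command, for example with PCA over a calibration recording. The sketch below illustrates that idea; the four-channel shoulder signal, the calibration data, and the gain are placeholders and not the study's actual IMU mapping.

    import numpy as np
    from sklearn.decomposition import PCA

    # Stand-in for calibration data: 4 IMU-derived shoulder signals sampled during free motion
    rng = np.random.default_rng(5)
    calibration = rng.normal(size=(3000, 4))

    # The first two principal components define the body-to-cursor map
    pca = PCA(n_components=2).fit(calibration)

    def body_to_cursor(shoulder_sample, gain=1.0):
        """Project one 4-D shoulder reading onto the 2-D cursor plane."""
        return gain * pca.transform(shoulder_sample.reshape(1, -1))[0]

    print(body_to_cursor(rng.normal(size=4)))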
Mundahl, John; Jianjun Meng; He, Jeffrey; Bin He
2016-08-01
Brain-computer interface (BCI) systems allow users to directly control computers and other machines by modulating their brain waves. In the present study, we investigated the effect of soft drinks on resting-state (RS) EEG signals and BCI control. Eight healthy human volunteers each participated in three sessions of BCI cursor tasks and resting-state EEG. During each session, the subjects drank an unlabeled soft drink containing either sugar, caffeine, or neither ingredient. A comparison of resting-state spectral power shows a substantial decrease in alpha and beta power after caffeine consumption relative to control. Despite this attenuation of the frequency range used for the control signal, average BCI performance after caffeine consumption was the same as control. Our work provides a useful characterization of the effects of caffeine, the world's most popular stimulant, on brain signal frequencies and on BCI performance.
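The resting-state spectral comparison reported here comes down to band-power estimates. A minimal single-channel version using Welch's method is sketched below; the sampling rate, band edges, and normalization range are assumptions for illustration.

    import numpy as np
    from scipy.signal import welch

    def relative_band_power(eeg, fs, band, total=(1, 40)):
        """Relative power of one channel's resting-state EEG in a frequency band."""
        f, pxx = welch(eeg, fs=fs, nperseg=2 * fs)
        band_mask = (f >= band[0]) & (f <= band[1])
        total_mask = (f >= total[0]) & (f <= total[1])
        return pxx[band_mask].sum() / pxx[total_mask].sum()

    fs = 256
    eeg = np.random.randn(60 * fs)           # one minute of synthetic single-channel EEG
    alpha = relative_band_power(eeg, fs, (8, 13))
    beta = relative_band_power(eeg, fs, (13, 30))
    print(alpha, beta)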
Astrand, Elaine; Wardak, Claire; Ben Hamed, Suliann
2014-01-01
Brain–machine interfaces (BMIs) using motor cortical activity to drive an external effector like a screen cursor or a robotic arm have seen enormous success and proven their great rehabilitation potential. An emerging parallel effort is now directed to BMIs controlled by endogenous cognitive activity, also called cognitive BMIs. While more challenging, this approach opens new dimensions to the rehabilitation of cognitive disorders. In the present work, we focus on BMIs driven by visuospatial attention signals and we provide a critical review of these studies in the light of the accumulated knowledge about the psychophysics, anatomy, and neurophysiology of visual spatial attention. Importantly, we provide a unique comparative overview of the several studies, ranging from non-invasive to invasive human and non-human primates studies, that decode attention-related information from ongoing neuronal activity. We discuss these studies in the light of the challenges attention-driven cognitive BMIs have to face. In a second part of the review, we discuss past and current attention-based neurofeedback studies, describing both the covert effects of neurofeedback onto neuronal activity and its overt behavioral effects. Importantly, we compare neurofeedback studies based on the amplitude of cortical activity to studies based on the enhancement of cortical information content. Last, we discuss several lines of future research and applications for attention-driven cognitive brain-computer interfaces (BCIs), including the rehabilitation of cognitive deficits, restored communication in locked-in patients, and open-field applications for enhanced cognition in normal subjects. The core motivation of this work is the key idea that the improvement of current cognitive BMIs for therapeutic and open field applications needs to be grounded in a proper interdisciplinary understanding of the physiology of the cognitive function of interest, be it spatial attention, working memory or any other cognitive signal. PMID:25161613
All printed touchless human-machine interface based on only five functional materials
NASA Astrophysics Data System (ADS)
Scheipl, G.; Zirkl, M.; Sawatdee, A.; Helbig, U.; Krause, M.; Kraker, E.; Andersson Ersman, P.; Nilsson, D.; Platt, D.; Bodö, P.; Bauer, S.; Domann, G.; Mogessie, A.; Hartmann, Paul; Stadlober, B.
2012-02-01
We demonstrate the printing of a complex smart integrated system using only five functional inks: the fluoropolymer P(VDF:TrFE) (poly(vinylidene fluoride-trifluoroethylene)) sensor ink, the conductive polymer PEDOT:PSS (poly(3,4-ethylenedioxythiophene):poly(styrene sulfonic acid)) ink, a conductive carbon paste, a polymeric electrolyte and SU8 for separation. The result is a touchless human-machine interface, including piezo- and pyroelectric sensor pixels (sensitive to pressure changes and impinging infrared light), transistors for impedance matching and signal conditioning, and an electrochromic display. Applications may emerge not only in human-machine interfaces, but also in transient temperature or pressure sensing used in safety technology, in artificial skins and in disposable sensor labels.
Closed-loop brain training: the science of neurofeedback.
Sitaram, Ranganatha; Ros, Tomas; Stoeckel, Luke; Haller, Sven; Scharnowski, Frank; Lewis-Peacock, Jarrod; Weiskopf, Nikolaus; Blefari, Maria Laura; Rana, Mohit; Oblak, Ethan; Birbaumer, Niels; Sulzer, James
2017-02-01
Neurofeedback is a psychophysiological procedure in which online feedback of neural activation is provided to the participant for the purpose of self-regulation. Learning control over specific neural substrates has been shown to change specific behaviours. As a progenitor of brain-machine interfaces, neurofeedback has provided a novel way to investigate brain function and neuroplasticity. In this Review, we examine the mechanisms underlying neurofeedback, which have started to be uncovered. We also discuss how neurofeedback is being used in novel experimental and clinical paradigms from a multidisciplinary perspective, encompassing neuroscientific, neuroengineering and learning-science viewpoints.
The Technology Review 10: Emerging Technologies that Will Change the World.
ERIC Educational Resources Information Center
Technology Review, 2001
2001-01-01
Identifies 10 emerging areas of technology that will soon have a profound impact on the economy and on how people live and work: brain-machine interfaces; flexible transistors; data mining; digital rights management; biometrics; natural language processing; microphotonics; untangling code; robot design; and microfluidics. In each area, one…
Design of a 32-Channel EEG System for Brain Control Interface Applications
Wang, Ching-Sung
2012-01-01
This study integrates hardware circuit design and software interface development to achieve a 32-channel EEG system for BCI applications. Since the EEG signals of the human body are generally very weak, in addition to preventing noise interference, waveform distortion and waveform offset must also be avoided; therefore, the design of a preamplifier with a high common-mode rejection ratio and a high signal-to-noise ratio is very important. Moreover, the friction between the electrode pads and the skin, as well as the dual power supply design, generates a DC bias that affects the measured signals. For this reason, this study designs an improved single-power AC-coupled circuit, which effectively reduces the DC bias and the error caused by component tolerances. At the same time, adjustable amplification and filtering are implemented digitally, so the system can be configured for different EEG frequency bands. In the analog circuit, a frequency band is first extracted by a filtering circuit, and digital filtering is then used to adjust the extracted band to the target band; this is combined with a MATLAB-based man-machine interface for displaying brain waves. Finally, the measured signals are compared with those of a traditional 32-channel EEG system. In addition to meeting the IFCN standards, the system design was also verified by measurements in a standard EEG isolation room in order to demonstrate its accuracy and reliability. PMID:22778545
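Digital extraction of a target EEG band, as described for this system, can be prototyped in a few lines: a notch filter for mains interference followed by a zero-phase band-pass for the band of interest. The sampling rate, band definitions, and filter orders below are assumptions; the system's own analog front end and MATLAB interface are not reproduced.

    import numpy as np
    from scipy.signal import butter, filtfilt, iirnotch

    fs = 500                                   # assumed sampling rate
    bands = {"delta": (0.5, 4), "theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

    def make_band_filter(lo, hi, fs, order=4):
        return butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")

    def extract_band(x, name):
        """Apply a zero-phase digital band-pass for the requested EEG band."""
        b, a = make_band_filter(*bands[name], fs)
        return filtfilt(b, a, x)

    # Mains interference removal, then band extraction
    b_notch, a_notch = iirnotch(w0=50, Q=30, fs=fs)
    x = np.random.randn(10 * fs)               # ten seconds of synthetic single-channel EEG
    x_clean = filtfilt(b_notch, a_notch, x)
    alpha = extract_band(x_clean, "alpha")
    print(alpha[:5])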
Herbert, Robert; Kim, Jong-Hoon; Kim, Yun Soung; Lee, Hye Moon; Yeo, Woon-Hong
2018-01-24
Flexible hybrid electronics (FHE), designed in wearable and implantable configurations, have enormous applications in advanced healthcare, rapid disease diagnostics, and persistent human-machine interfaces. Soft, contoured geometries and time-dynamic deformation of the targeted tissues require high flexibility and stretchability of the integrated bioelectronics. Recent progress in developing and engineering soft materials has provided a unique opportunity to design various types of mechanically compliant and deformable systems. Here, we summarize the required properties of soft materials and their characteristics for configuring sensing and substrate components in wearable and implantable devices and systems. Details of functionality and sensitivity of the recently developed FHE are discussed with the application areas in medicine, healthcare, and machine interactions. This review concludes with a discussion on limitations of current materials, key requirements for next generation materials, and new application areas.
Designing Guiding Systems for Brain-Computer Interfaces
Kosmyna, Nataliya; Lécuyer, Anatole
2017-01-01
The Brain–Computer Interface (BCI) community has focused the majority of its research efforts on signal processing and machine learning, mostly neglecting the human in the loop. Guiding users on how to use a BCI is crucial in order to teach them to produce stable brain patterns. In this work, we explore instructions and feedback for BCIs in order to provide a systematic taxonomy for describing BCI guiding systems. The purpose of our work is to give researchers and designers in Human–Computer Interaction (HCI) the clues necessary to make the fusion between BCIs and HCI more fruitful, but also to better understand the possibilities BCIs can provide to them. PMID:28824400
Peikon, Ian D; Fitzsimmons, Nathan A; Lebedev, Mikhail A; Nicolelis, Miguel A L
2009-06-15
Collection and analysis of limb kinematic data are essential components of the study of biological motion, including research into biomechanics, kinesiology, neurophysiology and brain-machine interfaces (BMIs). In particular, BMI research requires advanced, real-time systems capable of sampling limb kinematics with minimal contact to the subject's body. To answer this demand, we have developed an automated video tracking system for real-time tracking of multiple body parts in freely behaving primates. The system employs high-contrast markers painted on the animal's joints to continuously track the three-dimensional positions of their limbs during activity. Two-dimensional coordinates captured by each video camera are combined and converted to three-dimensional coordinates using a quadratic fitting algorithm. Real-time operation of the system is accomplished using direct memory access (DMA). The system tracks the markers at a rate of 52 frames per second (fps) in real time and up to 100 fps if video recordings are captured for later off-line analysis. The system has been tested in several BMI primate experiments, in which limb position was sampled simultaneously with chronic recordings of the extracellular activity of hundreds of cortical cells. During these recordings, multiple computational models were employed to extract a series of kinematic parameters from neuronal ensemble activity in real time. The system operated reliably under these experimental conditions and was able to compensate for marker occlusions that occurred during natural movements. We propose that this system could also be extended to applications that include other classes of biological motion.
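The 2-D-to-3-D conversion by quadratic fitting can be sketched as a least-squares polynomial regression calibrated on markers with known 3-D positions. The sketch below is an assumption-laden illustration, not the authors' algorithm: the feature expansion, calibration data, and function names are all hypothetical.

    # Hedged sketch: fit a quadratic map from paired 2-D camera coordinates (u1,v1,u2,v2)
    # to 3-D marker positions using calibration data, then apply it to new observations.
    import numpy as np

    def quad_features(uv):
        """Expand (u1, v1, u2, v2) rows into constant, linear, and quadratic terms."""
        u = np.asarray(uv, dtype=float)
        cols = [np.ones(len(u))] + [u[:, i] for i in range(4)]
        cols += [u[:, i] * u[:, j] for i in range(4) for j in range(i, 4)]
        return np.column_stack(cols)

    def fit_quadratic_map(uv_calib, xyz_calib):
        """Least-squares coefficients mapping camera pairs to 3-D calibration points."""
        A = quad_features(uv_calib)
        coef, *_ = np.linalg.lstsq(A, xyz_calib, rcond=None)
        return coef

    def reconstruct(uv, coef):
        return quad_features(uv) @ coef

    # Usage with synthetic calibration data (real systems use surveyed marker positions).
    uv_calib = np.random.rand(200, 4)
    xyz_calib = np.random.rand(200, 3)
    coef = fit_quadratic_map(uv_calib, xyz_calib)
    xyz_est = reconstruct(uv_calib[:5], coef)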
Design and validation of a real-time spiking-neural-network decoder for brain-machine interfaces
NASA Astrophysics Data System (ADS)
Dethier, Julie; Nuyujukian, Paul; Ryu, Stephen I.; Shenoy, Krishna V.; Boahen, Kwabena
2013-06-01
Objective. Cortically-controlled motor prostheses aim to restore functions lost to neurological disease and injury. Several proof of concept demonstrations have shown encouraging results, but barriers to clinical translation still remain. In particular, intracortical prostheses must satisfy stringent power dissipation constraints so as not to damage cortex. Approach. One possible solution is to use ultra-low power neuromorphic chips to decode neural signals for these intracortical implants. The first step is to explore in simulation the feasibility of translating decoding algorithms for brain-machine interface (BMI) applications into spiking neural networks (SNNs). Main results. Here we demonstrate the validity of the approach by implementing an existing Kalman-filter-based decoder in a simulated SNN using the Neural Engineering Framework (NEF), a general method for mapping control algorithms onto SNNs. To measure this system’s robustness and generalization, we tested it online in closed-loop BMI experiments with two rhesus monkeys. Across both monkeys, a Kalman filter implemented using a 2000-neuron SNN has comparable performance to that of a Kalman filter implemented using standard floating point techniques. Significance. These results demonstrate the tractability of SNN implementations of statistical signal processing algorithms on different monkeys and for several tasks, suggesting that a SNN decoder, implemented on a neuromorphic chip, may be a feasible computational platform for low-power fully-implanted prostheses. The validation of this closed-loop decoder system and the demonstration of its robustness and generalization hold promise for SNN implementations on an ultra-low power neuromorphic chip using the NEF.
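For readers unfamiliar with the decoder being re-implemented as a spiking network, the following is a minimal sketch of a Kalman-filter velocity decoder driven by binned spike counts. It is illustrative only: the model matrices would normally be fit from training data, and the shapes and values here are assumptions.

    # Minimal Kalman-filter decoder sketch: predict a low-dimensional state (e.g., cursor
    # velocity) and update it with each new vector of binned firing rates.
    import numpy as np

    class KalmanDecoder:
        def __init__(self, A, W, H, Q):
            self.A, self.W, self.H, self.Q = A, W, H, Q  # state model, state noise, observation model, obs noise
            self.x = np.zeros(A.shape[0])                # decoded state
            self.P = np.eye(A.shape[0])

        def step(self, y):
            # Predict
            x_pred = self.A @ self.x
            P_pred = self.A @ self.P @ self.A.T + self.W
            # Update with binned firing rates y
            S = self.H @ P_pred @ self.H.T + self.Q
            K = P_pred @ self.H.T @ np.linalg.inv(S)
            self.x = x_pred + K @ (y - self.H @ x_pred)
            self.P = (np.eye(len(self.x)) - K @ self.H) @ P_pred
            return self.x

    # Toy usage: 2-D velocity state decoded from 30 recorded units.
    n_state, n_units = 2, 30
    dec = KalmanDecoder(np.eye(n_state) * 0.95, np.eye(n_state) * 0.01,
                        np.random.randn(n_units, n_state), np.eye(n_units))
    velocity = dec.step(np.random.poisson(5, n_units).astype(float))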
Brain-controlled muscle stimulation for the restoration of motor function
Ethier, Christian; Miller, Lee E
2014-01-01
Loss of the ability to move, as a consequence of spinal cord injury or neuromuscular disorder, has devastating consequences for the paralyzed individual, and great economic consequences for society. Functional Electrical Stimulation (FES) offers one means to restore some mobility to these individuals, improving not only their autonomy, but potentially their general health and well-being as well. FES uses electrical stimulation to cause the paralyzed muscles to contract. Existing clinical systems require the stimulation to be preprogrammed, with the patient typically using residual voluntary movement of another body part to trigger and control the patterned stimulation. The rapid development of neural interfacing in the past decade offers the promise of dramatically improved control for these patients, potentially allowing continuous control of FES through signals recorded from motor cortex, as the patient attempts to control the paralyzed body part. While application of these ‘Brain Machine Interfaces’ (BMIs) has undergone dramatic development for control of computer cursors and even robotic limbs, their use as an interface for FES has been much more limited. In this review, we consider both FES and BMI technologies and discuss the prospect for combining the two to provide important new options for paralyzed individuals. PMID:25447224
NASA Astrophysics Data System (ADS)
Hekmatmanesh, Amin; Jamaloo, Fatemeh; Wu, Huapeng; Handroos, Heikki; Kilpeläinen, Asko
2018-04-01
Brain-computer interfaces (BCIs) pose a challenge for the development of robotic, prosthetic, and other human-controlled systems. This work focuses on the implementation of a common spatial pattern (CSP)-based algorithm to detect event-related desynchronization patterns. Following well-known previous work in this area, features are extracted with the filter-bank common spatial pattern (FBCSP) method and then weighted by a sensitive learning vector quantization (SLVQ) algorithm. In the current work, applying a radial basis function (RBF) as the mapping kernel of a kernel linear discriminant analysis (KLDA) to the weighted features transfers the data into a higher-dimensional space in which the classes are more clearly separated. Afterwards, a support vector machine (SVM) with a generalized radial basis function (GRBF) kernel is employed to improve the efficiency and robustness of the classification. On average, 89.60% accuracy and 74.19% robustness are achieved. BCI Competition III data set IVa is used to evaluate the algorithm for detecting right-hand and foot imagery movement patterns. Results show that combining KLDA with the SVM-GRBF classifier yields improvements of 8.9% and 14.19% in accuracy and robustness, respectively. For all subjects, it is concluded that mapping the CSP features into a higher dimension with the RBF and using the GRBF as the SVM kernel improve the accuracy and reliability of the proposed method.
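A simplified front end of such a pipeline is sketched below: ordinary CSP spatial filtering followed by log-variance features and an RBF-kernel SVM. The SLVQ weighting, kernel LDA mapping, and generalized RBF kernel used in the paper are omitted; data shapes and parameters are assumptions.

    # Sketch: CSP filters from two-class covariance matrices, log-variance features, RBF SVM.
    import numpy as np
    from scipy.linalg import eigh
    from sklearn.svm import SVC

    def csp_filters(trials_a, trials_b, n_pairs=3):
        """trials_*: arrays of shape (n_trials, n_channels, n_samples)."""
        cov = lambda X: np.mean([t @ t.T / np.trace(t @ t.T) for t in X], axis=0)
        Ca, Cb = cov(trials_a), cov(trials_b)
        vals, vecs = eigh(Ca, Ca + Cb)          # generalized eigendecomposition
        order = np.argsort(vals)
        picks = np.r_[order[:n_pairs], order[-n_pairs:]]
        return vecs[:, picks].T                  # spatial filters, shape (2*n_pairs, n_channels)

    def log_var_features(trials, W):
        Z = np.einsum("fc,ncs->nfs", W, trials)  # spatially filtered trials
        var = Z.var(axis=2)
        return np.log(var / var.sum(axis=1, keepdims=True))

    # Toy usage on random "EEG" trials for two imagery classes.
    rng = np.random.default_rng(0)
    A, B = rng.standard_normal((40, 22, 250)), rng.standard_normal((40, 22, 250))
    W = csp_filters(A, B)
    X = np.vstack([log_var_features(A, W), log_var_features(B, W)])
    y = np.r_[np.zeros(40), np.ones(40)]
    clf = SVC(kernel="rbf", C=1.0, gamma="scale").fit(X, y)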
Real-Time fMRI Pattern Decoding and Neurofeedback Using FRIEND: An FSL-Integrated BCI Toolbox
Sato, João R.; Basilio, Rodrigo; Paiva, Fernando F.; Garrido, Griselda J.; Bramati, Ivanei E.; Bado, Patricia; Tovar-Moll, Fernanda; Zahn, Roland; Moll, Jorge
2013-01-01
The demonstration that humans can learn to modulate their own brain activity based on feedback of neurophysiological signals opened up exciting opportunities for fundamental and applied neuroscience. Although EEG-based neurofeedback has been long employed both in experimental and clinical investigation, functional MRI (fMRI)-based neurofeedback emerged as a promising method, given its superior spatial resolution and ability to gauge deep cortical and subcortical brain regions. In combination with improved computational approaches, such as pattern recognition analysis (e.g., Support Vector Machines, SVM), fMRI neurofeedback and brain decoding represent key innovations in the field of neuromodulation and functional plasticity. Expansion in this field and its applications critically depend on the existence of freely available, integrated and user-friendly tools for the neuroimaging research community. Here, we introduce FRIEND, a graphic-oriented user-friendly interface package for fMRI neurofeedback and real-time multivoxel pattern decoding. The package integrates routines for image preprocessing in real-time, ROI-based feedback (single-ROI BOLD level and functional connectivity) and brain decoding-based feedback using SVM. FRIEND delivers an intuitive graphic interface with flexible processing pipelines involving optimized procedures embedding widely validated packages, such as FSL and libSVM. In addition, a user-defined visual neurofeedback module allows users to easily design and run fMRI neurofeedback experiments using ROI-based or multivariate classification approaches. FRIEND is open-source and free for non-commercial use. Processing tutorials and extensive documentation are available. PMID:24312569
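The single-ROI feedback mode mentioned above can be sketched as a percent-signal-change computation against a moving baseline. This is a generic illustration of ROI-based neurofeedback, not FRIEND's code; the ROI mask, baseline handling, and scaling are assumptions.

    # Sketch: convert one preprocessed fMRI volume into a 0-1 feedback level from the
    # ROI-averaged BOLD signal relative to a baseline.
    import numpy as np

    def roi_feedback(volume, roi_mask, baseline_values, scale=2.0):
        """Return (feedback level in [0, 1], current ROI mean)."""
        roi_mean = volume[roi_mask].mean()
        baseline = np.mean(baseline_values) if len(baseline_values) else roi_mean
        psc = 100.0 * (roi_mean - baseline) / baseline        # percent signal change
        return float(np.clip(psc / scale, 0.0, 1.0)), roi_mean

    # Toy usage: stream of random "volumes"; the first 10 are treated as baseline.
    rng = np.random.default_rng(7)
    mask = rng.random((10, 10, 10)) > 0.9
    baseline_vals = []
    for t in range(20):
        vol = rng.normal(100.0, 1.0, (10, 10, 10)) + (0.5 if t >= 10 else 0.0)
        level, roi_mean = roi_feedback(vol, mask, baseline_vals if t >= 10 else [])
        if t < 10:
            baseline_vals.append(roi_mean)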
A BMI-based occupational therapy assist suit: asynchronous control by SSVEP
Sakurada, Takeshi; Kawase, Toshihiro; Takano, Kouji; Komatsu, Tomoaki; Kansaku, Kenji
2013-01-01
A brain-machine interface (BMI) is an interface technology that uses neurophysiological signals from the brain to control external machines. Recent invasive BMI technologies have succeeded in the asynchronous control of robot arms for a useful series of actions, such as reaching and grasping. In this study, we developed non-invasive BMI technologies aiming to make such useful movements using the subject's own hands by preparing a BMI-based occupational therapy assist suit (BOTAS). We prepared a pre-recorded series of useful actions—a grasping-a-ball movement and a carrying-the-ball movement—and added asynchronous control using steady-state visual evoked potential (SSVEP) signals. A SSVEP signal was used to trigger the grasping-a-ball movement and another SSVEP signal was used to trigger the carrying-the-ball movement. A support vector machine was used to classify EEG signals recorded from the visual cortex (Oz) in real time. Untrained, able-bodied participants (n = 12) operated the system successfully. Classification accuracy and time required for SSVEP detection were ~88% and 3 s, respectively. We further recruited three patients with upper cervical spinal cord injuries (SCIs); they also succeeded in operating the system without training. These data suggest that our BOTAS system is potentially useful in terms of rehabilitation of patients with upper limb disabilities. PMID:24068982
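The SSVEP classification step can be illustrated with a sliding-window spectral-power feature and an SVM, broadly in the spirit of the system described. The flicker frequencies, window length, sampling rate, and class labels below are assumptions, not values from the paper.

    # Sketch: power at each assumed flicker frequency (and its 2nd harmonic) from a single
    # occipital channel, classified into idle / grasp-trigger / carry-trigger commands.
    import numpy as np
    from sklearn.svm import SVC

    FS = 256.0                       # assumed sampling rate
    FLICKER_HZ = (8.0, 13.0)         # hypothetical stimulus frequencies for the two movements

    def ssvep_features(window):
        freqs = np.fft.rfftfreq(window.size, d=1 / FS)
        power = np.abs(np.fft.rfft(window * np.hanning(window.size))) ** 2
        feats = []
        for f0 in FLICKER_HZ:
            for harm in (f0, 2 * f0):
                feats.append(power[np.argmin(np.abs(freqs - harm))])
        return np.log(np.asarray(feats) + 1e-12)

    # Toy training: windows labelled 0 (idle), 1 (grasp trigger), 2 (carry trigger).
    rng = np.random.default_rng(1)
    X = np.array([ssvep_features(rng.standard_normal(int(3 * FS))) for _ in range(90)])
    y = np.repeat([0, 1, 2], 30)
    clf = SVC(kernel="rbf").fit(X, y)
    command = clf.predict(X[:1])     # online: slide a 3 s window and predict at each step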
Lee, Giljae; Matsunaga, Andréa; Dura-Bernal, Salvador; Zhang, Wenjie; Lytton, William W; Francis, Joseph T; Fortes, José Ab
2014-11-01
Development of more sophisticated implantable brain-machine interfaces (BMIs) will require both interpretation of the neurophysiological data being measured and subsequent determination of signals to be delivered back to the brain. Computational models are at the heart of the BMI machinery and are therefore an essential tool in both of these processes. One approach is to utilize brain biomimetic models (BMMs) to develop and instantiate these algorithms. These then must be connected as hybrid systems in order to interface the BMM with in vivo data acquisition devices and prosthetic devices. The combined system then provides a test bed for neuroprosthetic rehabilitative solutions and medical devices for the repair and enhancement of the damaged brain. We propose here a computer network-based design for this purpose, detailing its internal modules and data flows. We describe a prototype implementation of the design, enabling interaction between the Plexon Multichannel Acquisition Processor (MAP) server, a commercial tool to collect signals from microelectrodes implanted in a live subject, and a BMM, a NEURON-based model of sensorimotor cortex capable of controlling a virtual arm. The prototype implementation supports an online mode for real-time simulations, as well as an offline mode for data analysis and simulations without real-time constraints, and provides binning operations to discretize continuous input to the BMM and filtering operations for dealing with noise. Evaluation demonstrated that the implementation successfully delivered monkey spiking activity to the BMM through LAN environments, respecting real-time constraints.
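The binning operation mentioned above, which discretizes continuous spike-time input before it reaches the biomimetic model, can be sketched as follows. Bin width and data layout are assumptions for illustration; this is not the prototype's code.

    # Sketch: convert per-unit spike timestamps into fixed-width counts per unit.
    import numpy as np

    def bin_spikes(spike_times_per_unit, t_start, t_stop, bin_ms=50.0):
        """spike_times_per_unit: list of 1-D arrays of spike times (s) for each recorded unit."""
        edges = np.arange(t_start, t_stop + bin_ms / 1000.0, bin_ms / 1000.0)
        counts = np.vstack([np.histogram(st, bins=edges)[0] for st in spike_times_per_unit])
        return counts, edges  # counts: (n_units, n_bins)

    # Usage on simulated spike trains from three units over one second.
    rng = np.random.default_rng(2)
    trains = [np.sort(rng.uniform(0, 1, rng.poisson(30))) for _ in range(3)]
    counts, edges = bin_spikes(trains, 0.0, 1.0)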
EDITORIAL: Focus on the neural interface Focus on the neural interface
NASA Astrophysics Data System (ADS)
Durand, Dominique M.
2009-10-01
The possibility of an effective connection between neural tissue and computers has inspired scientists and engineers to develop new ways of controlling and obtaining information from the nervous system. These applications range from 'brain hacking' to neural control of artificial limbs with brain signals. Notwithstanding the significant advances in neural prosthetics in the last few decades and the success of some stimulation devices such as the cochlear prosthesis, neurotechnology remains below its potential for restoring neural function in patients with nervous system disorders. One of the reasons for this limited impact can be found at the neural interface, and close attention to the integration between electrodes and tissue should improve the possibility of successful outcomes. The neural interfaces research community consists of investigators working in areas such as deep brain stimulation, functional neuromuscular/electrical stimulation, auditory prostheses, cortical prostheses, neuromodulation, microelectrode array technology, and brain-computer/machine interfaces. Following the success of previous neuroprostheses and neural interfaces workshops, funding (from NIH) was obtained to establish a biennial conference in the area of neural interfaces. The first Neural Interfaces Conference took place in Cleveland, OH in 2008, and several topics from this conference have been selected for publication in this special section of the Journal of Neural Engineering. Three 'perspectives' review the areas of neural regeneration (Corredor and Goldberg), cochlear implants (O'Leary et al) and neural prostheses (Anderson). Seven articles focus on various aspects of neural interfacing. One of the most popular of these areas is the field of brain-computer interfaces. Fraser et al report on a method to generate robust control using simple signal-processing algorithms applied to signals obtained from electrodes implanted in the brain. One problem with implanted electrode arrays, however, is that they can fail to record neural signals reliably for long periods of time. McConnell et al show that by measuring the impedance of the tissue, one can evaluate the extent of the tissue response to the presence of the electrode. Another problem with the neural interface is the mismatch of the mechanical properties between electrode and tissue. Basinger et al use finite element modeling to analyze this mismatch in retinal prostheses and guide the design of new implantable devices. Electrical stimulation has been the method of choice for externally activating the nervous system. However, Zhang et al show that a novel dual hybrid device integrating electrical and optical stimulation can provide an effective interface for simultaneous recording and stimulation. By interfacing an EMG recording system and a movement detection system, Johnson and Fuglevand develop a model capable of predicting muscle activity during movement that could be important for the development of motor prostheses. Sensory restoration is another unsolved problem in neural prostheses. By developing a novel interface between the dorsal root ganglia and electrode arrays, Gaunt et al show that it is possible to recruit afferent fibers for sensory substitution. Finally, by interfacing directly with muscles, Jung and colleagues show that stimulation of muscles involved in locomotion following spinal cord damage in rats can provide an effective treatment modality for incomplete spinal cord injury.
This series of articles clearly shows that the interface is indeed one of the keys to successful therapeutic neural devices. The next Neural Interfaces Conference will take place in Los Angeles, CA in June 2010 and one can expect to see new developments in neural engineering obtained by focusing on the neural interface.
Causal network in a deafferented non-human primate brain.
Balasubramanian, Karthikeyan; Takahashi, Kazutaka; Hatsopoulos, Nicholas G
2015-01-01
De-afferented/efferented neural ensembles can undergo causal changes when interfaced to neuroprosthetic devices. These changes occur via recruitment or isolation of neurons, alterations in functional connectivity within the ensemble, and/or changes in the role of neurons, i.e., excitatory/inhibitory. In this work, the emergence of a causal network and changes in its dynamics are demonstrated for a deafferented brain region exposed to brain-machine interface (BMI) learning. The BMI controlled a robot for reach-and-grasp behavior; the motor cortical regions used for the BMI had been deafferented by chronic amputation, and ensembles of neurons were decoded for velocity control of the multi-DOF robot. A generalized linear model-framework Granger causality (GLM-GC) technique was used to estimate the ensemble connectivity, with model selection based on the Akaike information criterion (AIC).
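GLM-framework Granger causality between a pair of spike trains can be sketched as comparing a Poisson GLM that predicts neuron A's binned counts from its own history against one that also includes neuron B's history, with AIC as the selection criterion, as in the abstract. History length, bin size, and the use of statsmodels are assumptions for illustration, not the authors' implementation.

    # Sketch: history-based Poisson GLMs compared by AIC to test for a B -> A influence.
    import numpy as np
    import statsmodels.api as sm

    def history_matrix(counts, lags):
        """Stack lagged copies of a count vector as regressors."""
        return np.column_stack([np.roll(counts, k) for k in range(1, lags + 1)])[lags:]

    def gc_aic(a_counts, b_counts, lags=5):
        y = a_counts[lags:]
        Ha, Hb = history_matrix(a_counts, lags), history_matrix(b_counts, lags)
        reduced = sm.GLM(y, sm.add_constant(Ha), family=sm.families.Poisson()).fit()
        full = sm.GLM(y, sm.add_constant(np.hstack([Ha, Hb])), family=sm.families.Poisson()).fit()
        return reduced.aic, full.aic   # lower AIC for the full model suggests a causal B -> A link

    # Toy usage with simulated binned counts: A is weakly driven by B two bins earlier.
    rng = np.random.default_rng(3)
    b = rng.poisson(0.2, 2000)
    a = rng.poisson(0.1 + 0.5 * np.roll(b, 2))
    print(gc_aic(a.astype(float), b.astype(float)))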
Low latency messages on distributed memory multiprocessors
NASA Technical Reports Server (NTRS)
Rosing, Matthew; Saltz, Joel
1993-01-01
Many of the issues in developing an efficient interface for communication on distributed memory machines are described and a portable interface is proposed. Although the hardware component of message latency is less than one microsecond on many distributed memory machines, the software latency associated with sending and receiving typed messages is on the order of 50 microseconds. The reason for this imbalance is that the software interface does not match the hardware. By changing the interface to match the hardware more closely, applications with fine-grained communication can be put on these machines. Based on several tests that were run on the iPSC/860, an interface that will better match current distributed memory machines is proposed. The model used in the proposed interface consists of a computation processor and a communication processor on each node. Communication between these processors and other nodes in the system is done through a buffered network. Information that is transmitted is either data or procedures to be executed on the remote processor. The dual-processor system is better suited to efficiently handling asynchronous communication than a single-processor system. The ability to send either data or procedures provides flexibility for minimizing message latency, depending on the type of communication being performed. The tests performed and the proposed interface are described.
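The node model described, a communication processor that asynchronously drains a buffered network and handles messages carrying either data or procedures to execute remotely, can be sketched in a few lines. This is an illustrative toy using threads and queues, not the paper's interface or its performance characteristics; all names are hypothetical.

    # Sketch: each node runs a "communication processor" thread; messages are either data
    # (stored locally) or procedures executed on arrival (active-message style).
    import queue
    import threading
    import time

    class Node:
        def __init__(self, name):
            self.name = name
            self.inbox = queue.Queue()          # stands in for the buffered network
            self.data_store = {}
            threading.Thread(target=self._comm_loop, daemon=True).start()

        def _comm_loop(self):
            """Communication processor: drains incoming messages asynchronously."""
            while True:
                kind, payload = self.inbox.get()
                if kind == "data":
                    key, value = payload
                    self.data_store[key] = value
                elif kind == "proc":              # procedure to run on the remote node
                    func, args = payload
                    func(self, *args)

        def send(self, other, kind, payload):
            other.inbox.put((kind, payload))

    # Usage: node A sends data and then a remote procedure to node B.
    a, b = Node("A"), Node("B")
    a.send(b, "data", ("halo", [1, 2, 3]))
    a.send(b, "proc", (lambda node, k: print(node.name, "has", node.data_store.get(k)), ("halo",)))
    time.sleep(0.2)  # give the daemon comm thread time to process before exit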
On the role of cost-sensitive learning in multi-class brain-computer interfaces.
Devlaminck, Dieter; Waegeman, Willem; Wyns, Bart; Otte, Georges; Santens, Patrick
2010-06-01
Brain-computer interfaces (BCIs) present an alternative way of communication for people with severe disabilities. One of the shortcomings in current BCI systems, recently put forward in the fourth BCI competition, is the asynchronous detection of motor imagery versus resting state. We investigated this extension to the three-class case, in which the resting state is considered virtually lying between two motor classes, resulting in a large penalty when one motor task is misclassified into the other motor class. We particularly focus on the behavior of different machine-learning techniques and on the role of multi-class cost-sensitive learning in such a context. To this end, four different kernel methods are empirically compared, namely pairwise multi-class support vector machines (SVMs), two cost-sensitive multi-class SVMs and kernel-based ordinal regression. The experimental results illustrate that ordinal regression performs better than the other three approaches when a cost-sensitive performance measure such as the mean-squared error is considered. By contrast, multi-class cost-sensitive learning enables us to control the number of large errors made between two motor tasks.
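The ordinal cost structure the study exploits, with rest placed between the two motor classes, can be made concrete with a small scoring example: under mean-squared error over the ordered labels, a left/right confusion costs four times as much as a motor/rest confusion. The classifier below is a plain multi-class SVM on synthetic data, not the paper's cost-sensitive variants or ordinal regression.

    # Sketch: ordered class encoding and MSE scoring that penalises motor-to-motor errors more.
    import numpy as np
    from sklearn.metrics import mean_squared_error
    from sklearn.svm import SVC

    LEFT, REST, RIGHT = 0, 1, 2          # ordered encoding: rest lies between the motor classes

    rng = np.random.default_rng(4)
    X = np.vstack([rng.normal(m, 1.0, (50, 10)) for m in (-1.0, 0.0, 1.0)])
    y = np.repeat([LEFT, REST, RIGHT], 50)

    clf = SVC(kernel="rbf").fit(X, y)
    pred = clf.predict(X)
    print("MSE over ordinal labels:", mean_squared_error(y, pred))
    # A LEFT prediction for a true RIGHT trial contributes (2 - 0)**2 = 4 to this score,
    # whereas a REST prediction contributes only (2 - 1)**2 = 1.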
MARTI: man-machine animation real-time interface
NASA Astrophysics Data System (ADS)
Jones, Christian M.; Dlay, Satnam S.
1997-05-01
The research introduces MARTI (man-machine animation real-time interface) for the realization of natural human-machine interfacing. The system uses simple vocal sound-tracks of human speakers to provide lip synchronization of computer graphical facial models. We present novel research in a number of engineering disciplines, which include speech recognition, facial modeling, and computer animation. This interdisciplinary research utilizes the latest hybrid connectionist/hidden Markov model speech recognition system to provide very accurate phone recognition and timing for speaker-independent continuous speech, and expands on knowledge from the animation industry in the development of accurate facial models and automated animation. The research has many real-world applications, which include the provision of a highly accurate and 'natural' man-machine interface to assist user interactions with computer systems and communication with one another using human idiosyncrasies; a complete special effects and animation toolbox providing automatic lip synchronization without the normal constraints of head-sets, joysticks, and skilled animators; compression of video data to well below standard telecommunication channel bandwidth for video communications and multi-media systems; assisting speech training and aids for the handicapped; and facilitating player interaction for 'video gaming' and 'virtual worlds.' MARTI has introduced a previously unseen level of realism to man-machine interfacing and special effect animation.
The Self-Paced Graz Brain-Computer Interface: Methods and Applications
Scherer, Reinhold; Schloegl, Alois; Lee, Felix; Bischof, Horst; Janša, Janez; Pfurtscheller, Gert
2007-01-01
We present the self-paced 3-class Graz brain-computer interface (BCI) which is based on the detection of sensorimotor electroencephalogram (EEG) rhythms induced by motor imagery. Self-paced operation means that the BCI is able to determine whether the ongoing brain activity is intended as control signal (intentional control) or not (non-control state). The presented system is able to automatically reduce electrooculogram (EOG) artifacts, to detect electromyographic (EMG) activity, and uses only three bipolar EEG channels. Two applications are presented: the freeSpace virtual environment (VE) and the Brainloop interface. The freeSpace is a computer-game-like application where subjects have to navigate through the environment and collect coins by autonomously selecting navigation commands. Three subjects participated in these feedback experiments and each learned to navigate through the VE and collect coins. Two out of the three succeeded in collecting all three coins. The Brainloop interface provides an interface between the Graz-BCI and Google Earth. PMID:18350133
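The EOG artifact reduction step mentioned above is commonly implemented as a regression-based correction: propagation coefficients from the EOG channels to each EEG channel are estimated by least squares and the scaled EOG is subtracted. The sketch below illustrates that standard approach; it is not claimed to be the exact procedure or parameters used in this system.

    # Sketch: regression-based EOG removal from multi-channel EEG.
    import numpy as np

    def remove_eog(eeg, eog):
        """eeg: (n_channels, n_samples); eog: (n_eog_channels, n_samples)."""
        # Least-squares propagation coefficients b solving eeg ~= b @ eog
        b = eeg @ eog.T @ np.linalg.pinv(eog @ eog.T)
        return eeg - b @ eog

    rng = np.random.default_rng(5)
    eog = rng.standard_normal((2, 5000))
    clean = rng.standard_normal((3, 5000))
    contaminated = clean + np.array([[0.4, 0.1], [0.2, 0.3], [0.05, 0.02]]) @ eog
    corrected = remove_eog(contaminated, eog)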
Toward a workbench for rodent brain image data: systems architecture and design.
Moene, Ivar A; Subramaniam, Shankar; Darin, Dmitri; Leergaard, Trygve B; Bjaalie, Jan G
2007-01-01
We present a novel system for storing and manipulating microscopic images from sections through the brain and higher-level data extracted from such images. The system is designed and built on a three-tier paradigm and provides the research community with a web-based interface for facile use in neuroscience research. The Oracle relational database management system provides the ability to store a variety of objects relevant to the images and provides the framework for complex querying of data stored in the system. Further, the suite of applications intimately tied into the infrastructure in the application layer provides the user the ability not only to query and visualize the data, but also to perform analysis operations based on the tools embedded into the system. The presentation layer uses extant protocols of the modern web browser, and this provides ease of use of the system. The present release, named Functional Anatomy of the Cerebro-Cerebellar System (FACCS), available through The Rodent Brain Workbench (http://rbwb.org/), is targeted at the functional anatomy of the cerebro-cerebellar system in rats, and holds axonal tracing data from these projections. The system is extensible to other circuits and projections and to other categories of image data and provides a unique environment for analysis of rodent brain maps in the context of anatomical data. The FACCS application assumes standard animal brain atlas models and can be extended to future models. The system is available both for interactive use from a remote web-browser client as well as for download to a local server machine.
Biomarkers for Musculoskeletal Pain Conditions: Use of Brain Imaging and Machine Learning.
Boissoneault, Jeff; Sevel, Landrew; Letzen, Janelle; Robinson, Michael; Staud, Roland
2017-01-01
Chronic musculoskeletal pain conditions often show poor correlations between tissue abnormalities and clinical pain. Therefore, classification of pain conditions like chronic low back pain, osteoarthritis, and fibromyalgia depends mostly on self-report and less on objective findings like X-ray or magnetic resonance imaging (MRI) changes. However, recent advances in structural and functional brain imaging have identified brain abnormalities in chronic pain conditions that can be used for illness classification. Because the analysis of complex and multivariate brain imaging data is challenging, machine learning techniques have been increasingly utilized for this purpose. The goal of machine learning is to train specific classifiers to best identify variables of interest on brain MRIs (i.e., biomarkers). This report describes classification techniques capable of separating MRI-based brain biomarkers of chronic pain patients from healthy controls with high accuracy (70-92%) using machine learning, as well as critical scientific, practical, and ethical considerations related to their potential clinical application. Although self-report remains the gold standard for pain assessment, machine learning may aid in the classification of chronic pain disorders like chronic back pain and fibromyalgia as well as provide mechanistic information regarding their neural correlates.
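The generic workflow behind the classification accuracies reported above is cross-validated training of a classifier on vectorized imaging features. The sketch below is illustrative only: feature extraction from actual MRI volumes is assumed to have happened upstream, and the data here are synthetic.

    # Sketch: cross-validated patient-vs-control classification from imaging features.
    import numpy as np
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    rng = np.random.default_rng(6)
    X_patients = rng.normal(0.3, 1.0, (40, 500))   # 40 patients x 500 imaging features
    X_controls = rng.normal(0.0, 1.0, (40, 500))
    X = np.vstack([X_patients, X_controls])
    y = np.r_[np.ones(40), np.zeros(40)]

    clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
    acc = cross_val_score(clf, X, y, cv=5).mean()
    print(f"cross-validated accuracy: {acc:.2f}")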
Resquin, F; Ibañez, J; Gonzalez-Vargas, J; Brunetti, F; Dimbwadyo, I; Alves, S; Carrasco, L; Torres, L; Pons, Jose Luis
2016-08-01
Reaching and grasping are two of the most affected functions after stroke. Hybrid rehabilitation systems combining Functional Electrical Stimulation with Robotic devices have been proposed in the literature to improve rehabilitation outcomes. In this work, we present the combined use of a hybrid robotic system with an EEG-based Brain-Machine Interface to detect the user's movement intentions to trigger the assistance. The platform has been tested in a single session with a stroke patient. The results show how the patient could successfully interact with the BMI and command the assistance of the hybrid system with low latencies. Also, the Feedback Error Learning controller implemented in this system could adjust the required FES intensity to perform the task.
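The feedback-error-learning idea mentioned above, in which a feedforward model is trained using the feedback controller's correction as its teaching signal, can be illustrated with a toy loop for choosing an FES intensity. The plant, gains, and learning rate are assumptions; this is not the study's controller.

    # Sketch: feedback error learning of a feedforward FES intensity across trials.
    import numpy as np

    def plant(intensity):
        """Unknown muscle response: achieved movement amplitude for a given FES intensity."""
        return 0.8 * intensity + 2.0

    target = 20.0          # desired movement amplitude
    w = np.zeros(2)        # feedforward model: intensity = w[0] + w[1] * target
    k_fb, lr = 0.05, 0.01  # feedback gain and learning rate

    for trial in range(300):
        u_ff = w[0] + w[1] * target
        u_fb = k_fb * (target - plant(u_ff))       # feedback correction for the observed error
        achieved = plant(u_ff + u_fb)              # command actually delivered this trial
        # Feedback error learning: the feedback output trains the feedforward model,
        # so the feedback term shrinks as the feedforward model improves.
        w += lr * u_fb * np.array([1.0, target])

    print("learned feedforward intensity:", w[0] + w[1] * target, "ideal:", (target - 2.0) / 0.8)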
Neurofeedback Training for BCI Control
NASA Astrophysics Data System (ADS)
Neuper, Christa; Pfurtscheller, Gert
Brain-computer interface (BCI) systems detect changes in brain signals that reflect human intention, then translate these signals to control monitors or external devices (for a comprehensive review, see [1]). BCIs typically measure electrical signals resulting from neural firing (i.e., neuronal action potentials, the electrocorticogram (ECoG), or the electroencephalogram (EEG)). Sophisticated pattern recognition and classification algorithms convert neural activity into the required control signals. BCI research has focused heavily on developing powerful signal processing and machine learning techniques to accurately classify neural activity [2-4].
Flexible Organic Electronics for Use in Neural Sensing
Bink, Hank; Lai, Yuming; Saudari, Sangameshwar R.; Helfer, Brian; Viventi, Jonathan; Van der Spiegel, Jan; Litt, Brian; Kagan, Cherie
2016-01-01
Recent research in brain-machine interfaces and devices to treat neurological disease indicates that important network activity exists at temporal and spatial scales beyond the resolution of existing implantable devices. High-density, active electrode arrays hold great promise in enabling a high-resolution interface with the brain to access and influence this network activity. Integrating flexible electronic devices directly at the neural interface can enable thousands of multiplexed electrodes to be connected using many fewer wires. Active electrode arrays have been demonstrated using flexible, inorganic silicon transistors. However, these approaches may be limited in their ability to be cost-effectively scaled to large array sizes (8×8 cm). Here we show amplifiers built using flexible organic transistors with sufficient performance for neural signal recording. We also demonstrate a pathway for a fully integrated, amplified and multiplexed electrode array built from these devices. PMID:22255558
Emergent coordination underlying learning to reach to grasp with a brain-machine interface.
Vaidya, Mukta; Balasubramanian, Karthikeyan; Southerland, Joshua; Badreldin, Islam; Eleryan, Ahmed; Shattuck, Kelsey; Gururangan, Suchin; Slutzky, Marc; Osborne, Leslie; Fagg, Andrew; Oweiss, Karim; Hatsopoulos, Nicholas G
2018-04-01
The development of coordinated reach-to-grasp movement has been well studied in infants and children. However, the role of motor cortex during this development is unclear because it is difficult to study in humans. We took the approach of using a brain-machine interface (BMI) paradigm in rhesus macaques with prior therapeutic amputations to examine the emergence of novel, coordinated reach to grasp. Previous research has shown that after amputation, the cortical area previously involved in the control of the lost limb undergoes reorganization, but prior BMI work has largely relied on finding neurons that already encode specific movement-related information. In this study, we taught macaques to cortically control a robotic arm and hand through operant conditioning, using neurons that were not explicitly reach or grasp related. Over the course of training, stereotypical patterns emerged and stabilized in the cross-covariance between the reaching and grasping velocity profiles, between pairs of neurons involved in controlling reach and grasp, and to a comparable, but lesser, extent between other stable neurons in the network. In fact, we found evidence of this structured coordination between pairs composed of all combinations of neurons decoding reach or grasp and other stable neurons in the network. The degree of and participation in coordination was highly correlated across all pair types. Our approach provides a unique model for studying the development of novel, coordinated reach-to-grasp movement at the behavioral and cortical levels. NEW & NOTEWORTHY Given that motor cortex undergoes reorganization after amputation, our work focuses on training nonhuman primates with chronic amputations to use neurons that are not reach or grasp related to control a robotic arm to reach to grasp through the use of operant conditioning, mimicking early development. We studied the development of a novel, coordinated behavior at the behavioral and cortical level, and the neural plasticity in M1 associated with learning to use a brain-machine interface.
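The coordination measure described, the cross-covariance between the reach (transport) velocity profile and the grasp (aperture) velocity profile, can be computed directly. The sketch below uses synthetic bell-shaped profiles purely for illustration; the real profiles come from the BMI-controlled robot.

    # Sketch: cross-covariance between reach and grasp velocity profiles of one trial.
    import numpy as np

    def cross_covariance(x, y, max_lag):
        """Biased cross-covariance of two equal-length signals for lags -max_lag..max_lag."""
        x, y = x - x.mean(), y - y.mean()
        n = len(x)
        return np.array([np.sum(x[max(0, -k):n - max(0, k)] * y[max(0, k):n - max(0, -k)]) / n
                         for k in range(-max_lag, max_lag + 1)])

    t = np.linspace(0, 1, 200)
    reach_vel = np.exp(-((t - 0.4) ** 2) / 0.01)    # bell-shaped transport velocity
    grasp_vel = np.exp(-((t - 0.55) ** 2) / 0.01)   # aperture velocity lagging the reach
    xcov = cross_covariance(reach_vel, grasp_vel, max_lag=50)
    lag_of_peak = np.argmax(xcov) - 50               # positive lag: grasp follows the reach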
TAE+ 5.1 - TRANSPORTABLE APPLICATIONS ENVIRONMENT PLUS, VERSION 5.1 (HP9000 SERIES 300/400 VERSION)
NASA Technical Reports Server (NTRS)
TAE SUPPORT OFFICE
1994-01-01
TAE (Transportable Applications Environment) Plus is an integrated, portable environment for developing and running interactive window, text, and graphical object-based application systems. The program allows both programmers and non-programmers to easily construct their own custom application interface and to move that interface and application to different machine environments. TAE Plus makes both the application and the machine environment transparent, with noticeable improvements in the learning curve. The main components of TAE Plus are as follows: (1) the WorkBench, a What You See Is What You Get (WYSIWYG) tool for the design and layout of a user interface; (2) the Window Programming Tools Package (WPT), a set of callable subroutines that control an application's user interface; and (3) TAE Command Language (TCL), an easy-to-learn command language that provides an easy way to develop an executable application prototype with a run-time interpreted language. The WorkBench tool allows the application developer to interactively construct the layout of an application's display screen by manipulating a set of interaction objects including input items such as buttons, icons, and scrolling text lists. User interface interactive objects include data-driven graphical objects such as dials, thermometers, and strip charts as well as menubars, option menus, file selection items, message items, push buttons, and color loggers. The WorkBench user specifies the windows and interaction objects that will make up the user interface, then specifies the sequence of the user interface dialogue. The description of the designed user interface is then saved into resource files. For those who desire to develop the designed user interface into an operational application, the WorkBench tool also generates source code (C, C++, Ada, and TCL) which fully controls the application's user interface through function calls to the WPTs. The WPTs are the runtime services used by application programs to display and control the user interfaces. Since the WPTs access the workbench-generated resource files during each execution, details such as color, font, location, and object type remain independent from the application code, allowing changes to the user interface without recompiling and relinking. In addition to WPTs, TAE Plus can control interaction of objects from the interpreted TAE Command Language. TCL provides a means for the more experienced developer to quickly prototype an application's use of TAE Plus interaction objects and add programming logic without the overhead of compiling or linking. TAE Plus requires MIT's X Window System, Version 11 Release 4, and the Open Software Foundation's Motif. The Workbench and WPTs are written in C++ and the remaining code is written in C. TAE Plus is available by license for an unlimited time period. The licensed program product includes the TAE Plus source code and one set of supporting documentation. Additional documentation may be purchased separately at the price indicated below. The amount of disk space required to load the TAE Plus tar format tape is between 35Mb and 67Mb depending on the machine version. The recommended minimum memory is 12Mb. Each TAE Plus platform delivery tape includes pre-built libraries and executable binary code for that particular machine, as well as source code, so users do not have to do an installation. 
Users wishing to recompile the source will need both a C compiler and either GNU's C++ Version 1.39 or later, or a C++ compiler based on AT&T 2.0 cfront. TAE Plus was developed in 1989 and version 5.2 was released in 1993. TAE Plus 5.2 is expected to be available on media suitable for seven different machine platforms: 1) DEC VAX computers running VMS (TK50 cartridge in VAX BACKUP format), 2) IBM RS/6000 series workstations running AIX (.25 inch tape cartridge in UNIX tar format), 3) DEC RISC workstations running ULTRIX (TK50 cartridge in UNIX tar format), 4) HP9000 Series 300/400 computers running HP-UX (.25 inch HP-preformatted tape cartridge in UNIX tar format), 5) HP9000 Series 700 computers running HP-UX (HP 4mm DDS DAT tape cartridge in UNIX tar format), 6) Sun4 (SPARC) series computers running SunOS (.25 inch tape cartridge in UNIX tar format), and 7) SGI Indigo computers running IRIX (.25 inch IRIS tape cartridge in UNIX tar format). Please contact COSMIC to obtain detailed information about the supported operating system and OSF/Motif releases required for each of these machine versions. An optional Motif Object Code License is available for the Sun4 version of TAE Plus 5.2.
TAE+ 5.1 - TRANSPORTABLE APPLICATIONS ENVIRONMENT PLUS, VERSION 5.1 (VAX VMS VERSION)
NASA Technical Reports Server (NTRS)
TAE SUPPORT OFFICE
1994-01-01
TAE (Transportable Applications Environment) Plus is an integrated, portable environment for developing and running interactive window, text, and graphical object-based application systems. The program allows both programmers and non-programmers to easily construct their own custom application interface and to move that interface and application to different machine environments. TAE Plus makes both the application and the machine environment transparent, with noticeable improvements in the learning curve. The main components of TAE Plus are as follows: (1) the WorkBench, a What You See Is What You Get (WYSIWYG) tool for the design and layout of a user interface; (2) the Window Programming Tools Package (WPT), a set of callable subroutines that control an application's user interface; and (3) TAE Command Language (TCL), an easy-to-learn command language that provides an easy way to develop an executable application prototype with a run-time interpreted language. The WorkBench tool allows the application developer to interactively construct the layout of an application's display screen by manipulating a set of interaction objects including input items such as buttons, icons, and scrolling text lists. User interface interactive objects include data-driven graphical objects such as dials, thermometers, and strip charts as well as menubars, option menus, file selection items, message items, push buttons, and color loggers. The WorkBench user specifies the windows and interaction objects that will make up the user interface, then specifies the sequence of the user interface dialogue. The description of the designed user interface is then saved into resource files. For those who desire to develop the designed user interface into an operational application, the WorkBench tool also generates source code (C, C++, Ada, and TCL) which fully controls the application's user interface through function calls to the WPTs. The WPTs are the runtime services used by application programs to display and control the user interfaces. Since the WPTs access the workbench-generated resource files during each execution, details such as color, font, location, and object type remain independent from the application code, allowing changes to the user interface without recompiling and relinking. In addition to WPTs, TAE Plus can control interaction of objects from the interpreted TAE Command Language. TCL provides a means for the more experienced developer to quickly prototype an application's use of TAE Plus interaction objects and add programming logic without the overhead of compiling or linking. TAE Plus requires MIT's X Window System, Version 11 Release 4, and the Open Software Foundation's Motif. The Workbench and WPTs are written in C++ and the remaining code is written in C. TAE Plus is available by license for an unlimited time period. The licensed program product includes the TAE Plus source code and one set of supporting documentation. Additional documentation may be purchased separately at the price indicated below. The amount of disk space required to load the TAE Plus tar format tape is between 35Mb and 67Mb depending on the machine version. The recommended minimum memory is 12Mb. Each TAE Plus platform delivery tape includes pre-built libraries and executable binary code for that particular machine, as well as source code, so users do not have to do an installation. 
Users wishing to recompile the source will need both a C compiler and either GNU's C++ Version 1.39 or later, or a C++ compiler based on AT&T 2.0 cfront. TAE Plus was developed in 1989 and version 5.2 was released in 1993. TAE Plus 5.2 is expected to be available on media suitable for seven different machine platforms: 1) DEC VAX computers running VMS (TK50 cartridge in VAX BACKUP format), 2) IBM RS/6000 series workstations running AIX (.25 inch tape cartridge in UNIX tar format), 3) DEC RISC workstations running ULTRIX (TK50 cartridge in UNIX tar format), 4) HP9000 Series 300/400 computers running HP-UX (.25 inch HP-preformatted tape cartridge in UNIX tar format), 5) HP9000 Series 700 computers running HP-UX (HP 4mm DDS DAT tape cartridge in UNIX tar format), 6) Sun4 (SPARC) series computers running SunOS (.25 inch tape cartridge in UNIX tar format), and 7) SGI Indigo computers running IRIX (.25 inch IRIS tape cartridge in UNIX tar format). Please contact COSMIC to obtain detailed information about the supported operating system and OSF/Motif releases required for each of these machine versions. An optional Motif Object Code License is available for the Sun4 version of TAE Plus 5.2.
Kaplan, A Ya
2016-01-01
Brain-computer interface (BCI) technology based on the recording and interpretation of the EEG has recently become one of the most popular developments in neuroscience and psychophysiology. This is due not only to the intended future use of these technologies in many areas of practical human activity, but also to the fact that the BCI is a completely new paradigm in psychophysiology, allowing hypotheses to be tested about the human brain's capacity to develop skills for interacting with the outside world without the mediation of the motor system, i.e., solely through voluntary modulation of EEG generators. This paper examines the theoretical and experimental basis, the current state, and the prospects for the development of training, communication, and assistive systems based on BCIs, controlled without muscular effort by mental commands detected in the EEG of patients with severely impaired speech and motor function.
TAE+ 5.2 - TRANSPORTABLE APPLICATIONS ENVIRONMENT PLUS, VERSION 5.2 (HP9000 SERIES 700/800 VERSION)
NASA Technical Reports Server (NTRS)
TAE SUPPORT OFFICE
1994-01-01
TAE (Transportable Applications Environment) Plus is an integrated, portable environment for developing and running interactive window, text, and graphical object-based application systems. The program allows both programmers and non-programmers to easily construct their own custom application interface and to move that interface and application to different machine environments. TAE Plus makes both the application and the machine environment transparent, with noticeable improvements in the learning curve. The main components of TAE Plus are as follows: (1) the WorkBench, a What You See Is What You Get (WYSIWYG) tool for the design and layout of a user interface; (2) the Window Programming Tools Package (WPT), a set of callable subroutines that control an application's user interface; and (3) TAE Command Language (TCL), an easy-to-learn command language that provides an easy way to develop an executable application prototype with a run-time interpreted language. The WorkBench tool allows the application developer to interactively construct the layout of an application's display screen by manipulating a set of interaction objects including input items such as buttons, icons, and scrolling text lists. User interface interactive objects include data-driven graphical objects such as dials, thermometers, and strip charts as well as menubars, option menus, file selection items, message items, push buttons, and color loggers. The WorkBench user specifies the windows and interaction objects that will make up the user interface, then specifies the sequence of the user interface dialogue. The description of the designed user interface is then saved into resource files. For those who desire to develop the designed user interface into an operational application, the WorkBench tool also generates source code (C, C++, Ada, and TCL) which fully controls the application's user interface through function calls to the WPTs. The WPTs are the runtime services used by application programs to display and control the user interfaces. Since the WPTs access the workbench-generated resource files during each execution, details such as color, font, location, and object type remain independent from the application code, allowing changes to the user interface without recompiling and relinking. In addition to WPTs, TAE Plus can control interaction of objects from the interpreted TAE Command Language. TCL provides a means for the more experienced developer to quickly prototype an application's use of TAE Plus interaction objects and add programming logic without the overhead of compiling or linking. TAE Plus requires MIT's X Window System and the Open Software Foundation's Motif. The HP 9000 Series 700/800 version of TAE 5.2 requires Version 11 Release 5 of the X Window System. All other machine versions of TAE 5.2 require Version 11, Release 4 of the X Window System. The Workbench and WPTs are written in C++ and the remaining code is written in C. TAE Plus is available by license for an unlimited time period. The licensed program product includes the TAE Plus source code and one set of supporting documentation. Additional documentation may be purchased separately at the price indicated below. The amount of disk space required to load the TAE Plus tar format tape is between 35Mb and 67Mb depending on the machine version. The recommended minimum memory is 12Mb. 
Each TAE Plus platform delivery tape includes pre-built libraries and executable binary code for that particular machine, as well as source code, so users do not have to do an installation. Users wishing to recompile the source will need both a C compiler and either GNU's C++ Version 1.39 or later, or a C++ compiler based on AT&T 2.0 cfront. TAE Plus was developed in 1989 and version 5.2 was released in 1993. TAE Plus 5.2 is available on media suitable for five different machine platforms: (1) IBM RS/6000 series workstations running AIX (.25 inch tape cartridge in UNIX tar format), (2) DEC RISC workstations running ULTRIX (TK50 cartridge in UNIX tar format), (3) HP9000 Series 700/800 computers running HP-UX 9.x and X11/R5 (HP 4mm DDS DAT tape cartridge in UNIX tar format), (4) Sun4 (SPARC) series computers running SunOS (.25 inch tape cartridge in UNIX tar format), and (5) SGI Indigo computers running IRIX (.25 inch IRIS tape cartridge in UNIX tar format). Please contact COSMIC to obtain detailed information about the supported operating system and OSF/Motif releases required for each of these machine versions. An optional Motif Object Code License is available for the Sun4 version of TAE Plus 5.2. Version 5.1 of TAE Plus remains available for DEC VAX computers running VMS, HP9000 Series 300/400 computers running HP-UX, and HP 9000 Series 700/800 computers running HP-UX 8.x and X11/R4. Please contact COSMIC for details on these versions of TAE Plus.
TAE+ 5.2 - TRANSPORTABLE APPLICATIONS ENVIRONMENT PLUS, VERSION 5.2 (IBM RS/6000 VERSION)
NASA Technical Reports Server (NTRS)
TAE SUPPORT OFFICE
1994-01-01
TAE (Transportable Applications Environment) Plus is an integrated, portable environment for developing and running interactive window, text, and graphical object-based application systems. The program allows both programmers and non-programmers to easily construct their own custom application interface and to move that interface and application to different machine environments. TAE Plus makes both the application and the machine environment transparent, with noticeable improvements in the learning curve. The main components of TAE Plus are as follows: (1) the WorkBench, a What You See Is What You Get (WYSIWYG) tool for the design and layout of a user interface; (2) the Window Programming Tools Package (WPT), a set of callable subroutines that control an application's user interface; and (3) TAE Command Language (TCL), an easy-to-learn command language that provides an easy way to develop an executable application prototype with a run-time interpreted language. The WorkBench tool allows the application developer to interactively construct the layout of an application's display screen by manipulating a set of interaction objects including input items such as buttons, icons, and scrolling text lists. User interface interactive objects include data-driven graphical objects such as dials, thermometers, and strip charts as well as menubars, option menus, file selection items, message items, push buttons, and color loggers. The WorkBench user specifies the windows and interaction objects that will make up the user interface, then specifies the sequence of the user interface dialogue. The description of the designed user interface is then saved into resource files. For those who desire to develop the designed user interface into an operational application, the WorkBench tool also generates source code (C, C++, Ada, and TCL) which fully controls the application's user interface through function calls to the WPTs. The WPTs are the runtime services used by application programs to display and control the user interfaces. Since the WPTs access the workbench-generated resource files during each execution, details such as color, font, location, and object type remain independent from the application code, allowing changes to the user interface without recompiling and relinking. In addition to WPTs, TAE Plus can control interaction of objects from the interpreted TAE Command Language. TCL provides a means for the more experienced developer to quickly prototype an application's use of TAE Plus interaction objects and add programming logic without the overhead of compiling or linking. TAE Plus requires MIT's X Window System and the Open Software Foundation's Motif. The HP 9000 Series 700/800 version of TAE 5.2 requires Version 11 Release 5 of the X Window System. All other machine versions of TAE 5.2 require Version 11, Release 4 of the X Window System. The Workbench and WPTs are written in C++ and the remaining code is written in C. TAE Plus is available by license for an unlimited time period. The licensed program product includes the TAE Plus source code and one set of supporting documentation. Additional documentation may be purchased separately at the price indicated below. The amount of disk space required to load the TAE Plus tar format tape is between 35Mb and 67Mb depending on the machine version. The recommended minimum memory is 12Mb. 
Each TAE Plus platform delivery tape includes pre-built libraries and executable binary code for that particular machine, as well as source code, so users do not have to do an installation. Users wishing to recompile the source will need both a C compiler and either GNU's C++ Version 1.39 or later, or a C++ compiler based on AT&T 2.0 cfront. TAE Plus was developed in 1989 and version 5.2 was released in 1993. TAE Plus 5.2 is available on media suitable for five different machine platforms: (1) IBM RS/6000 series workstations running AIX (.25 inch tape cartridge in UNIX tar format), (2) DEC RISC workstations running ULTRIX (TK50 cartridge in UNIX tar format), (3) HP9000 Series 700/800 computers running HP-UX 9.x and X11/R5 (HP 4mm DDS DAT tape cartridge in UNIX tar format), (4) Sun4 (SPARC) series computers running SunOS (.25 inch tape cartridge in UNIX tar format), and (5) SGI Indigo computers running IRIX (.25 inch IRIS tape cartridge in UNIX tar format). Please contact COSMIC to obtain detailed information about the supported operating system and OSF/Motif releases required for each of these machine versions. An optional Motif Object Code License is available for the Sun4 version of TAE Plus 5.2. Version 5.1 of TAE Plus remains available for DEC VAX computers running VMS, HP9000 Series 300/400 computers running HP-UX, and HP 9000 Series 700/800 computers running HP-UX 8.x and X11/R4. Please contact COSMIC for details on these versions of TAE Plus.
TAE+ 5.2 - TRANSPORTABLE APPLICATIONS ENVIRONMENT PLUS, VERSION 5.2 (SUN4 VERSION WITH MOTIF)
NASA Technical Reports Server (NTRS)
TAE SUPPORT OFFICE
1994-01-01
TAE (Transportable Applications Environment) Plus is an integrated, portable environment for developing and running interactive window, text, and graphical object-based application systems. The program allows both programmers and non-programmers to easily construct their own custom application interface and to move that interface and application to different machine environments. TAE Plus makes both the application and the machine environment transparent, with noticeable improvements in the learning curve. The main components of TAE Plus are as follows: (1) the WorkBench, a What You See Is What You Get (WYSIWYG) tool for the design and layout of a user interface; (2) the Window Programming Tools Package (WPT), a set of callable subroutines that control an application's user interface; and (3) TAE Command Language (TCL), an easy-to-learn command language that provides an easy way to develop an executable application prototype with a run-time interpreted language. The WorkBench tool allows the application developer to interactively construct the layout of an application's display screen by manipulating a set of interaction objects including input items such as buttons, icons, and scrolling text lists. User interface interactive objects include data-driven graphical objects such as dials, thermometers, and strip charts as well as menubars, option menus, file selection items, message items, push buttons, and color loggers. The WorkBench user specifies the windows and interaction objects that will make up the user interface, then specifies the sequence of the user interface dialogue. The description of the designed user interface is then saved into resource files. For those who desire to develop the designed user interface into an operational application, the WorkBench tool also generates source code (C, C++, Ada, and TCL) which fully controls the application's user interface through function calls to the WPTs. The WPTs are the runtime services used by application programs to display and control the user interfaces. Since the WPTs access the workbench-generated resource files during each execution, details such as color, font, location, and object type remain independent from the application code, allowing changes to the user interface without recompiling and relinking. In addition to WPTs, TAE Plus can control interaction of objects from the interpreted TAE Command Language. TCL provides a means for the more experienced developer to quickly prototype an application's use of TAE Plus interaction objects and add programming logic without the overhead of compiling or linking. TAE Plus requires MIT's X Window System and the Open Software Foundation's Motif. The HP 9000 Series 700/800 version of TAE 5.2 requires Version 11 Release 5 of the X Window System. All other machine versions of TAE 5.2 require Version 11, Release 4 of the X Window System. The Workbench and WPTs are written in C++ and the remaining code is written in C. TAE Plus is available by license for an unlimited time period. The licensed program product includes the TAE Plus source code and one set of supporting documentation. Additional documentation may be purchased separately at the price indicated below. The amount of disk space required to load the TAE Plus tar format tape is between 35Mb and 67Mb depending on the machine version. The recommended minimum memory is 12Mb. 
Each TAE Plus platform delivery tape includes pre-built libraries and executable binary code for that particular machine, as well as source code, so users do not have to do an installation. Users wishing to recompile the source will need both a C compiler and either GNU's C++ Version 1.39 or later, or a C++ compiler based on AT&T 2.0 cfront. TAE Plus was developed in 1989 and version 5.2 was released in 1993. TAE Plus 5.2 is available on media suitable for five different machine platforms: (1) IBM RS/6000 series workstations running AIX (.25 inch tape cartridge in UNIX tar format), (2) DEC RISC workstations running ULTRIX (TK50 cartridge in UNIX tar format), (3) HP9000 Series 700/800 computers running HP-UX 9.x and X11/R5 (HP 4mm DDS DAT tape cartridge in UNIX tar format), (4) Sun4 (SPARC) series computers running SunOS (.25 inch tape cartridge in UNIX tar format), and (5) SGI Indigo computers running IRIX (.25 inch IRIS tape cartridge in UNIX tar format). Please contact COSMIC to obtain detailed information about the supported operating system and OSF/Motif releases required for each of these machine versions. An optional Motif Object Code License is available for the Sun4 version of TAE Plus 5.2. Version 5.1 of TAE Plus remains available for DEC VAX computers running VMS, HP9000 Series 300/400 computers running HP-UX, and HP 9000 Series 700/800 computers running HP-UX 8.x and X11/R4. Please contact COSMIC for details on these versions of TAE Plus.
TAE+ 5.2 - TRANSPORTABLE APPLICATIONS ENVIRONMENT PLUS, VERSION 5.2 (SILICON GRAPHICS VERSION)
NASA Technical Reports Server (NTRS)
TAE SUPPORT OFFICE
1994-01-01
TAE (Transportable Applications Environment) Plus is an integrated, portable environment for developing and running interactive window, text, and graphical object-based application systems. The program allows both programmers and non-programmers to easily construct their own custom application interface and to move that interface and application to different machine environments. TAE Plus makes both the application and the machine environment transparent, with noticeable improvements in the learning curve. The main components of TAE Plus are as follows: (1) the WorkBench, a What You See Is What You Get (WYSIWYG) tool for the design and layout of a user interface; (2) the Window Programming Tools Package (WPT), a set of callable subroutines that control an application's user interface; and (3) TAE Command Language (TCL), an easy-to-learn command language that provides an easy way to develop an executable application prototype with a run-time interpreted language. The WorkBench tool allows the application developer to interactively construct the layout of an application's display screen by manipulating a set of interaction objects including input items such as buttons, icons, and scrolling text lists. User interface interactive objects include data-driven graphical objects such as dials, thermometers, and strip charts as well as menubars, option menus, file selection items, message items, push buttons, and color loggers. The WorkBench user specifies the windows and interaction objects that will make up the user interface, then specifies the sequence of the user interface dialogue. The description of the designed user interface is then saved into resource files. For those who desire to develop the designed user interface into an operational application, the WorkBench tool also generates source code (C, C++, Ada, and TCL) which fully controls the application's user interface through function calls to the WPTs. The WPTs are the runtime services used by application programs to display and control the user interfaces. Since the WPTs access the workbench-generated resource files during each execution, details such as color, font, location, and object type remain independent from the application code, allowing changes to the user interface without recompiling and relinking. In addition to WPTs, TAE Plus can control interaction of objects from the interpreted TAE Command Language. TCL provides a means for the more experienced developer to quickly prototype an application's use of TAE Plus interaction objects and add programming logic without the overhead of compiling or linking. TAE Plus requires MIT's X Window System and the Open Software Foundation's Motif. The HP 9000 Series 700/800 version of TAE 5.2 requires Version 11 Release 5 of the X Window System. All other machine versions of TAE 5.2 require Version 11, Release 4 of the X Window System. The Workbench and WPTs are written in C++ and the remaining code is written in C. TAE Plus is available by license for an unlimited time period. The licensed program product includes the TAE Plus source code and one set of supporting documentation. Additional documentation may be purchased separately at the price indicated below. The amount of disk space required to load the TAE Plus tar format tape is between 35Mb and 67Mb depending on the machine version. The recommended minimum memory is 12Mb. 
Each TAE Plus platform delivery tape includes pre-built libraries and executable binary code for that particular machine, as well as source code, so users do not have to do an installation. Users wishing to recompile the source will need both a C compiler and either GNU's C++ Version 1.39 or later, or a C++ compiler based on AT&T 2.0 cfront. TAE Plus was developed in 1989 and version 5.2 was released in 1993. TAE Plus 5.2 is available on media suitable for five different machine platforms: (1) IBM RS/6000 series workstations running AIX (.25 inch tape cartridge in UNIX tar format), (2) DEC RISC workstations running ULTRIX (TK50 cartridge in UNIX tar format), (3) HP9000 Series 700/800 computers running HP-UX 9.x and X11/R5 (HP 4mm DDS DAT tape cartridge in UNIX tar format), (4) Sun4 (SPARC) series computers running SunOS (.25 inch tape cartridge in UNIX tar format), and (5) SGI Indigo computers running IRIX (.25 inch IRIS tape cartridge in UNIX tar format). Please contact COSMIC to obtain detailed information about the supported operating system and OSF/Motif releases required for each of these machine versions. An optional Motif Object Code License is available for the Sun4 version of TAE Plus 5.2. Version 5.1 of TAE Plus remains available for DEC VAX computers running VMS, HP9000 Series 300/400 computers running HP-UX, and HP 9000 Series 700/800 computers running HP-UX 8.x and X11/R4. Please contact COSMIC for details on these versions of TAE Plus.
TAE+ 5.2 - TRANSPORTABLE APPLICATIONS ENVIRONMENT PLUS, VERSION 5.2 (SUN4 VERSION)
NASA Technical Reports Server (NTRS)
TAE SUPPORT OFFICE
1994-01-01
TAE (Transportable Applications Environment) Plus is an integrated, portable environment for developing and running interactive window, text, and graphical object-based application systems. The program allows both programmers and non-programmers to easily construct their own custom application interface and to move that interface and application to different machine environments. TAE Plus makes both the application and the machine environment transparent, with noticeable improvements in the learning curve. The main components of TAE Plus are as follows: (1) the WorkBench, a What You See Is What You Get (WYSIWYG) tool for the design and layout of a user interface; (2) the Window Programming Tools Package (WPT), a set of callable subroutines that control an application's user interface; and (3) TAE Command Language (TCL), an easy-to-learn command language that provides an easy way to develop an executable application prototype with a run-time interpreted language. The WorkBench tool allows the application developer to interactively construct the layout of an application's display screen by manipulating a set of interaction objects including input items such as buttons, icons, and scrolling text lists. User interface interactive objects include data-driven graphical objects such as dials, thermometers, and strip charts as well as menubars, option menus, file selection items, message items, push buttons, and color loggers. The WorkBench user specifies the windows and interaction objects that will make up the user interface, then specifies the sequence of the user interface dialogue. The description of the designed user interface is then saved into resource files. For those who desire to develop the designed user interface into an operational application, the WorkBench tool also generates source code (C, C++, Ada, and TCL) which fully controls the application's user interface through function calls to the WPTs. The WPTs are the runtime services used by application programs to display and control the user interfaces. Since the WPTs access the workbench-generated resource files during each execution, details such as color, font, location, and object type remain independent from the application code, allowing changes to the user interface without recompiling and relinking. In addition to WPTs, TAE Plus can control interaction of objects from the interpreted TAE Command Language. TCL provides a means for the more experienced developer to quickly prototype an application's use of TAE Plus interaction objects and add programming logic without the overhead of compiling or linking. TAE Plus requires MIT's X Window System and the Open Software Foundation's Motif. The HP 9000 Series 700/800 version of TAE 5.2 requires Version 11 Release 5 of the X Window System. All other machine versions of TAE 5.2 require Version 11, Release 4 of the X Window System. The Workbench and WPTs are written in C++ and the remaining code is written in C. TAE Plus is available by license for an unlimited time period. The licensed program product includes the TAE Plus source code and one set of supporting documentation. Additional documentation may be purchased separately at the price indicated below. The amount of disk space required to load the TAE Plus tar format tape is between 35Mb and 67Mb depending on the machine version. The recommended minimum memory is 12Mb. 
Each TAE Plus platform delivery tape includes pre-built libraries and executable binary code for that particular machine, as well as source code, so users do not have to do an installation. Users wishing to recompile the source will need both a C compiler and either GNU's C++ Version 1.39 or later, or a C++ compiler based on AT&T 2.0 cfront. TAE Plus was developed in 1989 and version 5.2 was released in 1993. TAE Plus 5.2 is available on media suitable for five different machine platforms: (1) IBM RS/6000 series workstations running AIX (.25 inch tape cartridge in UNIX tar format), (2) DEC RISC workstations running ULTRIX (TK50 cartridge in UNIX tar format), (3) HP9000 Series 700/800 computers running HP-UX 9.x and X11/R5 (HP 4mm DDS DAT tape cartridge in UNIX tar format), (4) Sun4 (SPARC) series computers running SunOS (.25 inch tape cartridge in UNIX tar format), and (5) SGI Indigo computers running IRIX (.25 inch IRIS tape cartridge in UNIX tar format). Please contact COSMIC to obtain detailed information about the supported operating system and OSF/Motif releases required for each of these machine versions. An optional Motif Object Code License is available for the Sun4 version of TAE Plus 5.2. Version 5.1 of TAE Plus remains available for DEC VAX computers running VMS, HP9000 Series 300/400 computers running HP-UX, and HP 9000 Series 700/800 computers running HP-UX 8.x and X11/R4. Please contact COSMIC for details on these versions of TAE Plus.
TAE+ 5.2 - TRANSPORTABLE APPLICATIONS ENVIRONMENT PLUS, VERSION 5.2 (DEC RISC ULTRIX VERSION)
NASA Technical Reports Server (NTRS)
TAE SUPPORT OFFICE
1994-01-01
TAE (Transportable Applications Environment) Plus is an integrated, portable environment for developing and running interactive window, text, and graphical object-based application systems. The program allows both programmers and non-programmers to easily construct their own custom application interface and to move that interface and application to different machine environments. TAE Plus makes both the application and the machine environment transparent, with noticeable improvements in the learning curve. The main components of TAE Plus are as follows: (1) the WorkBench, a What You See Is What You Get (WYSIWYG) tool for the design and layout of a user interface; (2) the Window Programming Tools Package (WPT), a set of callable subroutines that control an application's user interface; and (3) TAE Command Language (TCL), an easy-to-learn command language that provides an easy way to develop an executable application prototype with a run-time interpreted language. The WorkBench tool allows the application developer to interactively construct the layout of an application's display screen by manipulating a set of interaction objects including input items such as buttons, icons, and scrolling text lists. User interface interactive objects include data-driven graphical objects such as dials, thermometers, and strip charts as well as menubars, option menus, file selection items, message items, push buttons, and color loggers. The WorkBench user specifies the windows and interaction objects that will make up the user interface, then specifies the sequence of the user interface dialogue. The description of the designed user interface is then saved into resource files. For those who desire to develop the designed user interface into an operational application, the WorkBench tool also generates source code (C, C++, Ada, and TCL) which fully controls the application's user interface through function calls to the WPTs. The WPTs are the runtime services used by application programs to display and control the user interfaces. Since the WPTs access the workbench-generated resource files during each execution, details such as color, font, location, and object type remain independent from the application code, allowing changes to the user interface without recompiling and relinking. In addition to WPTs, TAE Plus can control interaction of objects from the interpreted TAE Command Language. TCL provides a means for the more experienced developer to quickly prototype an application's use of TAE Plus interaction objects and add programming logic without the overhead of compiling or linking. TAE Plus requires MIT's X Window System and the Open Software Foundation's Motif. The HP 9000 Series 700/800 version of TAE 5.2 requires Version 11 Release 5 of the X Window System. All other machine versions of TAE 5.2 require Version 11, Release 4 of the X Window System. The Workbench and WPTs are written in C++ and the remaining code is written in C. TAE Plus is available by license for an unlimited time period. The licensed program product includes the TAE Plus source code and one set of supporting documentation. Additional documentation may be purchased separately at the price indicated below. The amount of disk space required to load the TAE Plus tar format tape is between 35Mb and 67Mb depending on the machine version. The recommended minimum memory is 12Mb. 
Each TAE Plus platform delivery tape includes pre-built libraries and executable binary code for that particular machine, as well as source code, so users do not have to do an installation. Users wishing to recompile the source will need both a C compiler and either GNU's C++ Version 1.39 or later, or a C++ compiler based on AT&T 2.0 cfront. TAE Plus was developed in 1989 and version 5.2 was released in 1993. TAE Plus 5.2 is available on media suitable for five different machine platforms: (1) IBM RS/6000 series workstations running AIX (.25 inch tape cartridge in UNIX tar format), (2) DEC RISC workstations running ULTRIX (TK50 cartridge in UNIX tar format), (3) HP9000 Series 700/800 computers running HP-UX 9.x and X11/R5 (HP 4mm DDS DAT tape cartridge in UNIX tar format), (4) Sun4 (SPARC) series computers running SunOS (.25 inch tape cartridge in UNIX tar format), and (5) SGI Indigo computers running IRIX (.25 inch IRIS tape cartridge in UNIX tar format). Please contact COSMIC to obtain detailed information about the supported operating system and OSF/Motif releases required for each of these machine versions. An optional Motif Object Code License is available for the Sun4 version of TAE Plus 5.2. Version 5.1 of TAE Plus remains available for DEC VAX computers running VMS, HP9000 Series 300/400 computers running HP-UX, and HP 9000 Series 700/800 computers running HP-UX 8.x and X11/R4. Please contact COSMIC for details on these versions of TAE Plus.
Dynamic XML-based exchange of relational data: application to the Human Brain Project.
Tang, Zhengming; Kadiyska, Yana; Li, Hao; Suciu, Dan; Brinkley, James F
2003-01-01
This paper discusses an approach to exporting relational data in XML format for data exchange over the web. We describe the first real-world application of SilkRoute, a middleware program that dynamically converts existing relational data to a user-defined XML DTD. The application, called XBrain, wraps SilkRoute in a Java Server Pages framework, thus permitting a web-based XQuery interface to a legacy relational database. The application is demonstrated as a query interface to the University of Washington Brain Project's Language Map Experiment Management System, which is used to manage data about language organization in the brain.
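A minimal sketch of the general pattern XBrain relies on, not SilkRoute itself: rows from a relational store are exported as XML whose element structure follows a user-defined mapping. The table schema, element names, and sample rows below are illustrative assumptions only.

import sqlite3
import xml.etree.ElementTree as ET

# Stand-in relational store with a tiny illustrative table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE experiment (id INTEGER, subject TEXT, site TEXT)")
conn.executemany("INSERT INTO experiment VALUES (?, ?, ?)",
                 [(1, "P01", "inferior frontal"), (2, "P02", "superior temporal")])

# Build XML whose structure follows a user-defined mapping (element names are hypothetical).
root = ET.Element("experiments")
for exp_id, subject, site in conn.execute("SELECT id, subject, site FROM experiment"):
    exp = ET.SubElement(root, "experiment", id=str(exp_id))
    ET.SubElement(exp, "subject").text = subject
    ET.SubElement(exp, "stimulationSite").text = site

print(ET.tostring(root, encoding="unicode"))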
Cortical Spiking Network Interfaced with Virtual Musculoskeletal Arm and Robotic Arm.
Dura-Bernal, Salvador; Zhou, Xianlian; Neymotin, Samuel A; Przekwas, Andrzej; Francis, Joseph T; Lytton, William W
2015-01-01
Embedding computational models in the physical world is a critical step towards constraining their behavior and building practical applications. Here we aim to drive a realistic musculoskeletal arm model using a biomimetic cortical spiking model, and to make a robot arm reproduce the same trajectories in real time. Our cortical model consisted of a 3-layered cortex, composed of several hundred spiking model neurons, which displayed physiologically realistic dynamics. We interconnected the cortical model to a two-joint musculoskeletal model of a human arm, with realistic anatomical and biomechanical properties. The virtual arm received muscle excitations from the neuronal model, and fed back proprioceptive information, forming a closed-loop system. The cortical model was trained using spike timing-dependent reinforcement learning to drive the virtual arm in a 2D reaching task. Limb position was used to simultaneously control a robot arm using an improved network interface. Virtual arm muscle activations responded to motoneuron firing rates, with virtual arm muscle lengths encoded via population coding in the proprioceptive population. After training, the virtual arm performed reaching movements which were smoother and more realistic than those obtained using a simplistic arm model. This system provided access both to spiking network properties and to arm biophysical properties, including muscle forces. The use of a musculoskeletal virtual arm and the improved control system allowed the robot arm to perform movements which were smoother than those reported in our previous paper using a simplistic arm. This work provides a novel approach consisting of bidirectionally connecting a cortical model to a realistic virtual arm, and using the system output to drive a robotic arm in real time. Our techniques are applicable to the future development of brain neuroprosthetic control systems, and may enable enhanced brain-machine interfaces with the possibility of finer control of limb prosthetics.
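As a rough illustration of the population-coding step mentioned above, the sketch below encodes a normalized muscle length as graded firing rates across a proprioceptive population with Gaussian tuning curves and reads it back out with a rate-weighted average. The cell count, tuning width, and peak rate are assumptions for illustration, not the authors' model parameters.

import numpy as np

def proprioceptive_rates(muscle_length, preferred_lengths, sigma=0.05, peak_hz=50.0):
    """Firing rate of each proprioceptive cell for a given normalized muscle length."""
    return peak_hz * np.exp(-((muscle_length - preferred_lengths) ** 2) / (2 * sigma ** 2))

def decode_length(rates, preferred_lengths):
    """Population-vector style readout: rate-weighted average of preferred lengths."""
    return float(np.sum(rates * preferred_lengths) / np.sum(rates))

preferred = np.linspace(0.0, 1.0, 32)            # 32 cells tiling normalized muscle length
rates = proprioceptive_rates(0.63, preferred)
print(round(decode_length(rates, preferred), 3))  # approximately 0.63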
A Brain-Machine Interface Instructed by Direct Intracortical Microstimulation
O'Doherty, Joseph E.; Lebedev, Mikhail A.; Hanson, Timothy L.; Fitzsimmons, Nathan A.; Nicolelis, Miguel A. L.
2009-01-01
Brain–machine interfaces (BMIs) establish direct communication between the brain and artificial actuators. As such, they hold considerable promise for restoring mobility and communication in patients suffering from severe body paralysis. To achieve this end, future BMIs must also provide a means for delivering sensory signals from the actuators back to the brain. Prosthetic sensation is needed so that neuroprostheses can be better perceived and controlled. Here we show that a direct intracortical input can be added to a BMI to instruct rhesus monkeys in choosing the direction of reaching movements generated by the BMI. Somatosensory instructions were provided to two monkeys operating the BMI using either: (a) vibrotactile stimulation of the monkey's hands or (b) multi-channel intracortical microstimulation (ICMS) delivered to the primary somatosensory cortex (S1) in one monkey and posterior parietal cortex (PP) in the other. Stimulus delivery was contingent on the position of the computer cursor: the monkey placed it in the center of the screen to receive machine–brain recursive input. After 2 weeks of training, the same level of proficiency in utilizing somatosensory information was achieved with ICMS of S1 as with the stimulus delivered to the hand skin. ICMS of PP was not effective. These results indicate that direct, bi-directional communication between the brain and neuroprosthetic devices can be achieved through the combination of chronic multi-electrode recording and microstimulation of S1. We propose that in the future, bidirectional BMIs incorporating ICMS may become an effective paradigm for sensorizing neuroprosthetic devices. PMID:19750199
Benyamini, Miri; Zacksenhouse, Miriam
2015-01-01
Recent experiments with brain-machine-interfaces (BMIs) indicate that the extent of neural modulations increased abruptly upon starting to operate the interface, and especially after the monkey stopped moving its hand. In contrast, neural modulations that are correlated with the kinematics of the movement remained relatively unchanged. Here we demonstrate that similar changes are produced by simulated neurons that encode the relevant signals generated by an optimal feedback controller during simulated BMI experiments. The optimal feedback controller relies on state estimation that integrates both visual and proprioceptive feedback with prior estimations from an internal model. The processing required for optimal state estimation and control were conducted in the state-space, and neural recording was simulated by modeling two populations of neurons that encode either only the estimated state or also the control signal. Spike counts were generated as realizations of doubly stochastic Poisson processes with linear tuning curves. The model successfully reconstructs the main features of the kinematics and neural activity during regular reaching movements. Most importantly, the activity of the simulated neurons successfully reproduces the observed changes in neural modulations upon switching to brain control. Further theoretical analysis and simulations indicate that increasing the process noise during normal reaching movement results in similar changes in neural modulations. Thus, we conclude that the observed changes in neural modulations during BMI experiments can be attributed to increasing process noise associated with the imperfect BMI filter, and, more directly, to the resulting increase in the variance of the encoded signals associated with state estimation and the required control signal. PMID:26042002
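A compact sketch of the simulated recording scheme described above: spike counts are drawn as doubly stochastic Poisson realizations of linear tuning curves applied to an encoded signal. The encoded signal, tuning weights, and gamma-distributed gain noise below are illustrative assumptions rather than the paper's exact model.

import numpy as np

rng = np.random.default_rng(0)
n_neurons, n_bins, dt = 20, 200, 0.05                          # 50 ms bins

# Encoded 2D signal (e.g. estimated state components) and linear tuning.
signal = np.column_stack([np.sin(np.linspace(0, 2 * np.pi, n_bins)),
                          np.cos(np.linspace(0, 2 * np.pi, n_bins))])
baseline = rng.uniform(5, 15, n_neurons)                       # baseline rates in Hz
gains = rng.normal(0, 4, (n_neurons, 2))                       # linear tuning weights

rates = np.clip(baseline + signal @ gains.T, 0, None)          # deterministic tuning curves
rates *= rng.gamma(shape=10, scale=0.1, size=rates.shape)      # doubly stochastic modulation
counts = rng.poisson(rates * dt)                               # spike counts per bin
print(counts.shape, counts.mean())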
An Investment Behavior Analysis using by Brain Computer Interface
NASA Astrophysics Data System (ADS)
Suzuki, Kyoko; Kinoshita, Kanta; Miyagawa, Kazuhiro; Shiomi, Shinichi; Misawa, Tadanobu; Shimokawa, Tetsuya
In this paper, we construct a new brain-computer interface (BCI) for analyzing human investment decision making. The BCI is made up of three functional parts: measuring brain information, determining market prices in an artificial market, and specifying an investment decision model. When subjects make decisions, their brain information is conveyed to the decision-model part through the measurement part, while their investment orders are sent to the artificial market to form market prices. Both a support vector machine and a three-layer perceptron are used to fit the investment decision model. To evaluate the BCI, we conduct an experiment in which subjects and a computer trading agent trade shares of stock in the artificial market, and we test how well the agent can forecast market price formation and investment decisions from the subjects' brain information. The results show that brain information improves forecast accuracy, so the computer trading agent can supply market liquidity to stabilize market volatility without incurring losses.
He, Yongtian; Nathan, Kevin; Venkatakrishnan, Anusha; Rovekamp, Roger; Beck, Christopher; Ozdemir, Recep; Francisco, Gerard E; Contreras-Vidal, Jose L
2014-01-01
Stroke remains a leading cause of disability, limiting independent ambulation in survivors, and consequently affecting quality of life (QOL). Recent technological advances in neural interfacing with robotic rehabilitation devices are promising in the context of gait rehabilitation. Here, the X1, NASA's powered robotic lower limb exoskeleton, is introduced as a potential diagnostic, assistive, and therapeutic tool for stroke rehabilitation. Additionally, we demonstrate the feasibility of decoding lower limb joint kinematics and kinetics during walking with the X1 from scalp electroencephalographic (EEG) signals, the first step towards the development of a brain-machine interface (BMI) system for the X1 exoskeleton.
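The abstract does not specify the decoder, so the sketch below shows a common approach in this literature as a stand-in: a time-lagged linear (ridge) decoder of a joint angle from band-limited EEG. All data here are synthetic and the lag count and regularization strength are assumptions.

import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(1)
n_samples, n_channels, n_lags = 2000, 8, 10

eeg = rng.standard_normal((n_samples, n_channels))                 # stand-in for low-frequency EEG
true_w = rng.standard_normal(n_channels)
knee_angle = np.convolve(eeg @ true_w, np.ones(25) / 25, mode="same")  # synthetic kinematic target

# Time-lagged design matrix: each row holds the last n_lags samples of every channel.
X = np.column_stack([np.roll(eeg, lag, axis=0) for lag in range(n_lags)])[n_lags:]
y = knee_angle[n_lags:]

decoder = Ridge(alpha=10.0).fit(X[:1500], y[:1500])
print("held-out R^2:", round(decoder.score(X[1500:], y[1500:]), 2))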
NASA Astrophysics Data System (ADS)
Gao, Lin; Cheng, Wei; Zhang, Jinhua; Wang, Jue
2016-08-01
Brain-computer interface (BCI) systems provide an alternative communication and control approach for people with limited motor function. Therefore, the feature extraction and classification approach should differentiate the relatively unusual state of motion intention from the common resting state. In this paper, we sought a novel approach for multi-class classification in BCI applications. We collected electroencephalographic (EEG) signals registered by electrodes placed over the scalp during left hand motor imagery, right hand motor imagery, and resting state for ten healthy human subjects. We proposed using Kolmogorov complexity (Kc) for feature extraction and a multi-class AdaBoost classifier with an extreme learning machine as the base classifier to classify the three-class EEG samples. An average classification accuracy of 79.5% was obtained for ten subjects, which greatly outperformed commonly used approaches. Thus, it is concluded that the proposed method improves multi-class classification performance for motor imagery tasks. It could be applied in further studies to generate the control commands that initiate the movement of a robotic exoskeleton or orthosis, ultimately facilitating the rehabilitation of disabled people.
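A hedged sketch of this feature/classifier pipeline with two substitutions stated plainly: Kolmogorov complexity is approximated by an LZ78-style phrase count on median-binarized EEG, and the extreme-learning-machine base classifier is replaced by scikit-learn's default decision-stump AdaBoost. The data are synthetic placeholders, so accuracy stays near chance.

import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import cross_val_score

def lz_complexity(bits):
    """LZ78-style phrase count of a binary sequence (a simple complexity proxy)."""
    phrases, current = set(), ""
    for b in bits:
        current += str(b)
        if current not in phrases:
            phrases.add(current)
            current = ""
    return len(phrases) + (1 if current else 0)

def kc_features(trial):
    """One complexity value per channel, binarizing each channel around its median."""
    return [lz_complexity((ch > np.median(ch)).astype(int)) for ch in trial]

rng = np.random.default_rng(2)
trials = rng.standard_normal((90, 16, 500))        # 90 trials x 16 channels x 500 samples
labels = np.repeat([0, 1, 2], 30)                  # left MI, right MI, rest (synthetic labels)

X = np.array([kc_features(t) for t in trials])
clf = AdaBoostClassifier(n_estimators=50, random_state=0)
print("CV accuracy on random data (chance level):", cross_val_score(clf, X, labels, cv=3).mean())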
Pirooznia, Mehdi; Deng, Youping
2006-12-12
Graphical user interface (GUI) software promotes novelty by allowing users to extend the functionality. SVM Classifier is a cross-platform graphical application that handles very large datasets well. The purpose of this study is to create a GUI application that allows SVM users to perform SVM training, classification and prediction. The GUI provides user-friendly access to state-of-the-art SVM methods embodied in the LIBSVM implementation of Support Vector Machine. We implemented the Java interface using standard Swing libraries. We used sample data from a breast cancer study for testing classification accuracy. We achieved 100% accuracy in classification among the BRCA1-BRCA2 samples with the RBF kernel of the SVM. We have developed a Java GUI application that allows SVM users to perform SVM training, classification and prediction. We have demonstrated that support vector machines can accurately classify genes into functional categories based upon expression data from DNA microarray hybridization experiments. Among the different kernel functions that we examined, the SVM that uses a radial basis kernel function provides the best performance. The SVM Classifier is available at http://mfgn.usm.edu/ebl/svm/.
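For concreteness, the classification step described above can be reproduced in a few lines with scikit-learn's LIBSVM-backed SVC and an RBF kernel. The expression matrix below is random placeholder data with an injected separable signature, not the study's BRCA1/BRCA2 data.

import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
expression = rng.standard_normal((40, 200))       # 40 samples x 200 genes (synthetic)
labels = np.array([0] * 20 + [1] * 20)            # two hypothetical sample groups
expression[labels == 1, :10] += 1.5               # inject a separable expression signature

model = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
print("5-fold CV accuracy:", cross_val_score(model, expression, labels, cv=5).mean())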
Comparison of spike-sorting algorithms for future hardware implementation.
Gibson, Sarah; Judy, Jack W; Markovic, Dejan
2008-01-01
Applications such as brain-machine interfaces require hardware spike sorting in order to (1) obtain single-unit activity and (2) perform data reduction for wireless transmission of data. Such systems must be low-power, low-area, high-accuracy, automatic, and able to operate in real time. Several detection and feature extraction algorithms for spike sorting are described briefly and evaluated in terms of accuracy versus computational complexity. The nonlinear energy operator method is chosen as the optimal spike detection algorithm, being the most robust to noise and relatively simple. The discrete derivatives method [1] is chosen as the optimal feature extraction method, maintaining high accuracy across SNRs with a complexity orders of magnitude less than that of traditional methods such as PCA.
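A brief sketch of the two selected algorithms: the nonlinear energy operator (NEO) for spike detection and discrete-derivative features for each detected event. The synthetic signal, threshold multiplier, refractory gap, and choice of delays are illustrative assumptions, not the paper's evaluated settings.

import numpy as np

rng = np.random.default_rng(4)
fs = 24000
x = rng.normal(0, 1, fs)                            # 1 s of noise standing in for wideband data
for t in rng.choice(np.arange(100, fs - 100), 20, replace=False):
    x[t:t + 6] += 10 * np.hanning(6)                # injected spike-like events

# NEO: psi[n] = x[n]^2 - x[n-1] * x[n+1]; threshold at a multiple of its mean.
psi = x[1:-1] ** 2 - x[:-2] * x[2:]
threshold = 15 * psi.mean()
crossings = np.flatnonzero(psi > threshold) + 1
# keep one detection per event by enforcing a refractory gap of about 1 ms
spikes = [crossings[0]] if len(crossings) else []
for c in crossings[1:]:
    if c - spikes[-1] > fs // 1000:
        spikes.append(c)

def discrete_derivative_features(waveform, delays=(1, 3, 7)):
    """dd_d[n] = w[n] - w[n-d] for several delays, concatenated into one feature vector."""
    return np.concatenate([waveform[d:] - waveform[:-d] for d in delays])

features = np.array([discrete_derivative_features(x[s - 8:s + 24]) for s in spikes
                     if 8 <= s <= len(x) - 24])
print(len(spikes), "detections,", features.shape, "feature matrix")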
Shen, Guohua; Zhang, Jing; Wang, Mengxing; Lei, Du; Yang, Guang; Zhang, Shanmin; Du, Xiaoxia
2014-06-01
Multivariate pattern classification analysis (MVPA) has been applied to functional magnetic resonance imaging (fMRI) data to decode brain states from spatially distributed activation patterns. Decoding upper limb movements from non-invasively recorded human brain activation is crucial for implementing a brain-machine interface that directly harnesses an individual's thoughts to control external devices or computers. The aim of this study was to decode individual finger movements from fMRI single-trial data. Thirteen healthy human subjects participated in a visually cued delayed finger movement task, and only one slight button press was performed in each trial. Using MVPA, the decoding accuracy (DA) was computed separately for the different motor-related regions of interest. Feature vectors were constructed by concatenating the voxel patterns from two successive volumes in the image series for each trial. With these spatial-temporal feature vectors, we obtained a 63.1% average DA (84.7% for the best subject) for the contralateral primary somatosensory cortex and a 46.0% average DA (71.0% for the best subject) for the contralateral primary motor cortex; both of these values were significantly above the chance level (20%). In addition, we implemented searchlight MVPA to search for informative regions in an unbiased manner across the whole brain. Furthermore, by applying searchlight MVPA to each volume of a trial, we visually demonstrated the information for decoding, both spatially and temporally. The results suggest that non-invasive fMRI may provide informative features for decoding individual finger movements and show the potential for developing an fMRI-based brain-machine interface for finger movement. © 2014 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.
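To make the spatial-temporal feature construction concrete, the sketch below concatenates voxel patterns from two successive volumes of each trial into a single feature vector and classifies them with a linear SVM. The ROI size, the injected finger-specific pattern, and the classifier choice are assumptions for illustration; the study applied MVPA to real fMRI data.

import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(5)
n_trials, n_voxels = 100, 150                      # e.g. voxels in one motor-related ROI
vol_t1 = rng.standard_normal((n_trials, n_voxels))
vol_t2 = rng.standard_normal((n_trials, n_voxels))
fingers = rng.integers(0, 5, n_trials)             # 5 classes: one per finger

# add a weak finger-specific pattern so the example is not purely at chance level
for f in range(5):
    vol_t1[fingers == f, f * 20:(f + 1) * 20] += 0.5

X = np.hstack([vol_t1, vol_t2])                    # spatial-temporal feature vector per trial
print("CV accuracy:", cross_val_score(LinearSVC(dual=False), X, fingers, cv=5).mean())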
Jeyabalan, Vickneswaran; Samraj, Andrews; Loo, Chu Kiong
2010-10-01
Aiming at the implementation of brain-machine interfaces (BMIs) for the aid of disabled people, this paper presents a system design for real-time communication between the BMI and programmable logic controllers (PLCs) to control an electrical actuator that could be used in devices to help the disabled. Motor imagery signals extracted from the brain's motor cortex using an electroencephalogram (EEG) were used as a control signal. The EEG signals were pre-processed by means of adaptive recursive band-pass filtrations (ARBF) and classified using simplified fuzzy adaptive resonance theory mapping (ARTMAP); the classified signals were then translated into control commands sent to the machine via the PLC. A real-time test system was designed using MATLAB for signal processing, KEP-Ware V4 OLE for process control (OPC), a wireless local area network router, an Omron Sysmac CPM1 PLC and a 5 V/0.3 A motor. This paper explains the signal processing techniques, the PLC's hardware configuration, the OPC configuration and real-time data exchange between MATLAB and the PLC using the MATLAB OPC toolbox. The test results indicate that real-time data exchange can be attained between the BMI and the PLC through an OPC server, and show that this is an effective and feasible approach for devices such as wheelchairs or electronic equipment.
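The adaptive recursive band-pass filtration (ARBF) step is not specified in detail in the abstract, so the sketch below substitutes a fixed recursive (IIR Butterworth) band-pass over an assumed 8-30 Hz motor-imagery band, applied to a synthetic signal. It is a stand-in for the pre-processing stage, not the authors' filter.

import numpy as np
from scipy.signal import butter, filtfilt

fs = 256.0
t = np.arange(0, 4, 1 / fs)
# synthetic EEG: a 12 Hz mu-band component, 50 Hz line noise, and broadband noise
eeg = np.sin(2 * np.pi * 12 * t) + 0.5 * np.sin(2 * np.pi * 50 * t) + np.random.randn(t.size)

b, a = butter(4, [8 / (fs / 2), 30 / (fs / 2)], btype="bandpass")
filtered = filtfilt(b, a, eeg)                     # zero-phase recursive band-pass
print(filtered.shape, round(filtered.std(), 2))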
Functional near-infrared spectroscopy for adaptive human-computer interfaces
NASA Astrophysics Data System (ADS)
Yuksel, Beste F.; Peck, Evan M.; Afergan, Daniel; Hincks, Samuel W.; Shibata, Tomoki; Kainerstorfer, Jana; Tgavalekos, Kristen; Sassaroli, Angelo; Fantini, Sergio; Jacob, Robert J. K.
2015-03-01
We present a brain-computer interface (BCI) that detects, analyzes and responds to user cognitive state in real-time using machine learning classifications of functional near-infrared spectroscopy (fNIRS) data. Our work is aimed at increasing the narrow communication bandwidth between the human and computer by implicitly measuring users' cognitive state without any additional effort on the part of the user. Traditionally, BCIs have been designed to explicitly send signals as the primary input. However, such systems are usually designed for people with severe motor disabilities and are too slow and inaccurate for the general population. In this paper, we demonstrate, together with previous work, that a BCI that implicitly measures cognitive workload can improve user performance and awareness compared to a control condition by adapting to user cognitive state in real-time. We also discuss some of the other applications we have used in this field to measure and respond to cognitive states such as cognitive workload, multitasking, and user preference.
Höhne, Johannes; Holz, Elisa; Staiger-Sälzer, Pit; Müller, Klaus-Robert; Kübler, Andrea; Tangermann, Michael
2014-01-01
Brain-Computer Interfaces (BCIs) strive to decode brain signals into control commands for severely handicapped people with no means of muscular control. These potential users of noninvasive BCIs display a large range of physical and mental conditions. Prior studies have shown the general applicability of BCI with patients, with the conflict of either using many training sessions or studying only moderately restricted patients. We present a BCI system designed to establish external control for severely motor-impaired patients within a very short time. Within only six experimental sessions, three out of four patients were able to gain significant control over the BCI, which was based on motor imagery or attempted execution. For the most affected patient, we found evidence that the BCI could outperform the best assistive technology (AT) of the patient in terms of control accuracy, reaction time and information transfer rate. We credit this success to the applied user-centered design approach and to a highly flexible technical setup. State-of-the art machine learning methods allowed the exploitation and combination of multiple relevant features contained in the EEG, which rapidly enabled the patients to gain substantial BCI control. Thus, we could show the feasibility of a flexible and tailorable BCI application in severely disabled users. This can be considered a significant success for two reasons: Firstly, the results were obtained within a short period of time, matching the tight clinical requirements. Secondly, the participating patients showed, compared to most other studies, very severe communication deficits. They were dependent on everyday use of AT and two patients were in a locked-in state. For the most affected patient a reliable communication was rarely possible with existing AT. PMID:25162231
Papers from the Fifth International Brain-Computer Interface Meeting
NASA Astrophysics Data System (ADS)
Huggins, Jane E.; Wolpaw, Jonathan R.
2014-06-01
Brain-computer interfaces (BCIs), also known as brain-machine interfaces (BMIs), translate brain activity into new outputs that replace, restore, enhance, supplement or improve natural brain outputs. BCI research and development has grown rapidly for the past two decades. It is beginning to provide useful communication and control capacities to people with severe neuromuscular disabilities; and it is expanding into new areas such as neurorehabilitation that may greatly increase its clinical impact. At the same time, significant challenges remain, particularly in regard to translating laboratory advances into clinical use. The papers in this special section report some of the work presented at the Fifth International BCI Meeting held on 3-7 June 2013 at the Asilomar Conference Center in Pacific Grove, California, USA. Like its predecessors over the past 15 years, this meeting was supported by the National Institutes of Health, the National Science Foundation, and a variety of other governmental and private sponsors [1]. This fifth meeting was organized and managed by a program committee of BCI researchers from throughout the world [2]. It retained the distinctive retreat-style format developed by the Wadsworth Center researchers who organized and managed the first four meetings. The 301 attendees came from 165 research groups in 29 countries; 37% were students or postdoctoral fellows. Of more than 200 extended abstracts submitted for peer review, 25 were selected for oral presentation [3], and 181 were presented as posters [4] and published in the open-access conference proceedings [5]. The meeting featured 19 highly interactive workshops [6] covering the broad spectrum of BCI research and development, as well as many demonstrations of BCI systems and associated technology. Like the first four meetings, this one included attendees and embraced topics from across the broad spectrum of disciplines essential to effective BCI research and development, including neuroscience, engineering, applied mathematics, computer science, psychology and rehabilitation. In addition, this fifth meeting extended the spectrum in two very important ways. For the first time, presentations were given by several people who could potentially benefit from current BCI technology-people with severe disabilities who need assistive technology for communication. One presented in person and one remotely. A Virtual BCI User's Forum allowed these presenters and other potential BCI users to speak directly to the BCI research community about the advantages and disadvantages of current BCIs and important directions for future study (see [7]). Their personal experiences and desires can help guide BCI research and development. Their active participation, particularly in regard to the selection of goals and the evaluation and optimization of new methods and systems, is essential if BCIs are to become clinically valuable and widely used technology. The second major innovation in this meeting was the strong emphasis on ethical issues related to BCI development and use. The meeting opened with a keynote presentation entitled 'Neuroethics, BCIs and the Cyborg Myth' by Dr Joseph Fins, a noted authority on neuroethics from the Weill Cornell Medical College and the Rockefeller University. He focused on the ability of BCIs to relieve suffering and restore function, while cautioning against applications that take intentional control away from the user. 
Ethical issues were also addressed in several of the workshops, and arose on multiple occasions and in multiple contexts over the course of the meeting. Their prominence reflected the growing importance and difficulty of ethical issues as BCI capacities and applications grow and extend to potentially enhancing or supplementing normal nervous system function. The 16 articles in this special section reflect the breadth, depth, growing maturity and future directions of BCI research. The first paper presents a tutorial on best practices in BCI performance measurement [8]. The following eight papers focus on specific BCI applications and on methods for increasing their usefulness for people with severe disabilities. The next two examine how brain activity and BCI use affect each other. The final five studies investigate brain signals and evaluate new signal processing algorithms in order to improve BCI performance and broaden its possible applications in some of the newest areas of BCI research, including the direct interpretation of speech from electrocorticographic (ECoG) activity [9]. Together, these papers span many aspects of BCI research, including different recording modalities (i.e. electroencephalogram (EEG), ECoG, functional magnetic resonance imaging (fMRI)) and signal types (e.g. P300 event-related potentials (ERPs), sensorimotor rhythms, steady-state visual evoked potentials (SSVEPs)). Furthermore, additional clinically related studies that were presented at the meeting but were considered to be outside the scope of the Journal of Neural Engineering will appear in a special issue of the Archives of Physical Medicine and Rehabilitation. With a theme of 'Defining the Future', the Fifth International BCI Meeting tackled the issues of a rapidly growing multidisciplinary research and development enterprise that is now entering clinical use. Important new areas that received attention included the need for active involvement of the people with severe disabilities who are the primary initial users of BCI technology and the growing importance and difficulty of the multiple ethical questions raised by BCIs and their potential applications. The meeting also marked the launching of the new journal Brain-Computer Interfaces, dedicated to BCI research and development, and initiated the establishment of the Brain-Computer Interface Society, which will organize and manage the Sixth International BCI Meeting to be held in 2016. References [1] http://bcimeeting.org/2013/sponsors.html [2] http://bcimeeting.org/2013/meetinginfo.html [3] http://bcimeeting.org/2013/researchsessions.html (indexes individual abstracts) [4] http://bcimeeting.org/2013/posters.html (indexes individual abstracts) [5] http://castor.tugraz.at/doku/BCIMeeting2013/BCIMeeting2013_all.pdf [6] Huggins J E et al 2014 Workshops of the Fifth International Brain-Computer Interface Meeting: Defining the Future Brain-Computer Interfaces J. 1 27-49 [7] Peters B, Bieker G, Heckman S M, Huggins J E, Wolf C, Zeitlin D and Fried-Oken M 2014 Brain-computer interface users speak up: the Virtual Users' Forum at the 2013 International BCI Meeting Archives of Physical Medicine and Rehabilitation vol 95 fall supplement at press [8] Thompson D E et al 2014 Performance measurement for brain-computer or brain-machine interfaces: a tutorial J. Neural Eng. 11 035001 [9] Mugler E, Patton J, Flint R, Wright Z, Schuele S, Rosenow J, Shih J, Krusienski D and Slutzky M 2014 Direct classification of all American English phonemes using signals from functional speech motor cortex J. Neural Eng. 11 035015
Insect-machine interface based neurocybernetics.
Bozkurt, Alper; Gilmour, Robert F; Sinha, Ayesa; Stern, David; Lal, Amit
2009-06-01
We present details of a novel bioelectric interface formed by placing microfabricated probes into insects during their metamorphic growth cycles. The inserted microprobes emerge with the insect: tissue develops around the electronics during pupal development, yielding mechanically stable and electrically reliable structures coupled to the insect. Remarkably, the insects do not react adversely, or otherwise, to electronics inserted at the pupal stage, as they do when electrodes are inserted at the adult stage. We report on the electrical and mechanical characteristics of this novel bioelectronic interface, which we believe will be adopted by investigators studying insect behavior because it avoids the trauma encountered when probes are inserted at the adult stage. This novel insect-machine interface also enables hybrid insect-machine platforms for further studies. As an application, we demonstrate our first results toward flight navigation in moths. When instrumented with equipment for environmental sensing, such insects could potentially help monitor the ecosystems that we share with them. The simplicity of the optimized surgical procedure we developed allows batch insertions for automated, mass production of such hybrid insect-machine platforms. Therefore, our bioelectronic interface and hybrid insect-machine platform enable multidisciplinary scientific and engineering studies not only to investigate the details of insect behavioral physiology but also to control it.
Artificial Intelligence/Robotics Applications to Navy Aircraft Maintenance.
1984-06-01
The report surveys robotics technologies and relevant AI technologies for Navy aircraft maintenance, including expert systems, automatic planning, natural language, and machine vision. Recoverable fragments note that robots can operate automatic machinery such as presses, molding machines, and numerically-controlled machine tools, just as people do, and that artificial intelligence is concerned with the functions of the brain, whereas robotics is concerned with building machines that imitate human behavior.
Noise Reduction in Brainwaves by Using Both EEG Signals and Frontal Viewing Camera Images
Bang, Jae Won; Choi, Jong-Suk; Park, Kang Ryoung
2013-01-01
Electroencephalogram (EEG)-based brain-computer interfaces (BCIs) have been used in various applications, including human–computer interfaces, diagnosis of brain diseases, and measurement of cognitive status. However, EEG signals can be contaminated with noise caused by user's head movements. Therefore, we propose a new method that combines an EEG acquisition device and a frontal viewing camera to isolate and exclude the sections of EEG data containing these noises. This method is novel in the following three ways. First, we compare the accuracies of detecting head movements based on the features of EEG signals in the frequency and time domains and on the motion features of images captured by the frontal viewing camera. Second, the features of EEG signals in the frequency domain and the motion features captured by the frontal viewing camera are selected as optimal ones. The dimension reduction of the features and feature selection are performed using linear discriminant analysis. Third, the combined features are used as inputs to support vector machine (SVM), which improves the accuracy in detecting head movements. The experimental results show that the proposed method can detect head movements with an average error rate of approximately 3.22%, which is smaller than that of other methods. PMID:23669713
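A rough sketch of the feature-fusion idea above: EEG features and frontal-camera motion features are concatenated, reduced with linear discriminant analysis, and classified with an SVM to flag head-movement segments. The feature dimensions, synthetic data, and pipeline details are assumptions, not the paper's exact configuration.

import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(6)
n_seg = 200
eeg_feats = rng.standard_normal((n_seg, 40))       # e.g. band powers per channel
cam_feats = rng.standard_normal((n_seg, 6))        # e.g. optical-flow motion statistics
moved = rng.integers(0, 2, n_seg)                  # 1 = head movement in this segment
cam_feats[moved == 1] += 1.0                       # movement shows up strongly in the camera

X = np.hstack([eeg_feats, cam_feats])              # combined feature vector per segment
model = make_pipeline(LinearDiscriminantAnalysis(n_components=1), SVC(kernel="rbf"))
print("CV accuracy:", cross_val_score(model, X, moved, cv=5).mean())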
An online BCI game based on the decoding of users' attention to color stimulus.
Yang, Lingling; Leung, Howard
2013-01-01
Studies have shown that statistically there are differences in theta, alpha and beta band powers when people look at blue and red colors. In this paper, a game has been developed to test whether these statistical differences are good enough for an online brain-computer interface (BCI) application. We implemented a two-choice BCI game in which the subject makes the choice by looking at a color option and our system decodes the subject's intention by analyzing the EEG signal. In our system, band power features of the EEG data were used to train a support vector machine (SVM) classification model. An online mechanism was adopted to update the classification model during the training stage to account for individual differences. Our results showed that an accuracy of 70%-80% could be achieved, providing evidence for the possibility of applying color stimuli in BCI applications.
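For illustration, the sketch below computes per-trial theta, alpha, and beta band powers with Welch's method and refits an SVM as labeled trials accumulate, a simple stand-in for the paper's online model update during training. The band edges, synthetic EEG, and update schedule are assumptions.

import numpy as np
from scipy.signal import welch
from sklearn.svm import SVC

fs, rng = 256, np.random.default_rng(7)

def band_powers(trial, bands=((4, 8), (8, 13), (13, 30))):
    """Mean Welch power in theta, alpha and beta bands, per channel, flattened."""
    f, pxx = welch(trial, fs=fs, nperseg=128)
    return np.concatenate([pxx[:, (f >= lo) & (f < hi)].mean(axis=1) for lo, hi in bands])

trials = rng.standard_normal((60, 4, 512))         # 60 trials x 4 channels x 2 s
colors = rng.integers(0, 2, 60)                    # 0 = blue option, 1 = red option (synthetic)
X = np.array([band_powers(t) for t in trials])

clf = SVC(kernel="rbf")
for n in (20, 40, 60):                             # refit as more labeled trials accumulate
    clf.fit(X[:n], colors[:n])
print("decision for a new trial:", int(clf.predict(X[-1:])[0]))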
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stewart, Emma M.; Hendrix, Val; Chertkov, Michael
This white paper introduces the application of advanced data analytics to the modernized grid. In particular, we consider the field of machine learning and where it is both useful, and not useful, for the particular field of the distribution grid and buildings interface. While analytics, in general, is a growing field of interest, and often seen as the golden goose in the burgeoning distribution grid industry, its application is often limited by communications infrastructure, or lack of a focused technical application. Overall, the linkage of analytics to purposeful application in the grid space has been limited. In this paper we consider the field of machine learning as a subset of analytical techniques, and discuss its ability and limitations to enable the future distribution grid and the building-to-grid interface. To that end, we also consider the potential for mixing distributed and centralized analytics and the pros and cons of these approaches. Machine learning is a subfield of computer science that studies and constructs algorithms that can learn from data and make predictions and improve forecasts. Incorporation of machine learning in grid monitoring and analysis tools may have the potential to solve data and operational challenges that result from increasing penetration of distributed and behind-the-meter energy resources. There is an exponentially expanding volume of measured data being generated on the distribution grid, which, with appropriate application of analytics, may be transformed into intelligible, actionable information that can be provided to the right actors, such as grid and building operators, at the appropriate time to enhance grid or building resilience, efficiency, and operations against various metrics or goals, such as total carbon reduction or other economic benefit to customers. While some basic analysis into these data streams can provide a wealth of information, computational and human boundaries on performing the analysis are becoming significant, with more data and multi-objective concerns. Efficient applications of analysis and the machine learning field are being considered in the loop.
An adaptive brain actuated system for augmenting rehabilitation
Roset, Scott A.; Gant, Katie; Prasad, Abhishek; Sanchez, Justin C.
2014-01-01
For people living with paralysis, restoration of hand function remains the top priority because it leads to independence and improvement in quality of life. In approaches to restore hand and arm function, a goal is to better engage voluntary control and counteract maladaptive brain reorganization that results from non-use. Standard rehabilitation augmented with developments from the study of brain-computer interfaces could provide a combined therapy approach for motor cortex rehabilitation and to alleviate motor impairments. In this paper, an adaptive brain-computer interface system intended for application to control a functional electrical stimulation (FES) device is developed as an experimental test bed for augmenting rehabilitation with a brain-computer interface. The system's performance is improved throughout rehabilitation by passive user feedback and reinforcement learning. By continuously adapting to the user's brain activity, similar adaptive systems could be used to support clinical brain-computer interface neurorehabilitation over multiple days. PMID:25565945
Münßinger, Jana I.; Halder, Sebastian; Kleih, Sonja C.; Furdea, Adrian; Raco, Valerio; Hösle, Adi; Kübler, Andrea
2010-01-01
Brain–computer interfaces (BCIs) enable paralyzed patients to communicate; to date, however, they have not supported creative expression. The current study investigated the accuracy and user-friendliness of P300-Brain Painting, a new BCI application developed to paint pictures using brain activity only. Two different versions of the P300-Brain Painting application were tested: a colored matrix tested by a group of ALS patients (n = 3) and healthy participants (n = 10), and a black and white matrix tested by healthy participants (n = 10). The three ALS patients achieved high accuracies, with two of them reaching above 89% accuracy. In healthy subjects, a comparison between the P300-Brain Painting application (colored matrix) and the P300-Spelling application revealed significantly lower accuracy and P300 amplitudes for the P300-Brain Painting application. This drop in accuracy and P300 amplitudes was not found when comparing the P300-Spelling application to an adapted, black and white matrix of the P300-Brain Painting application. By employing a black and white matrix, the accuracy of the P300-Brain Painting application was significantly enhanced and reached the accuracy of the P300-Spelling application. ALS patients greatly enjoyed P300-Brain Painting and were able to use the application with the same accuracy as healthy subjects. P300-Brain Painting enables paralyzed patients to express themselves creatively and to participate in society through exhibitions of their work. PMID:21151375
Artificial bee colony algorithm for single-trial electroencephalogram analysis.
Hsu, Wei-Yen; Hu, Ya-Ping
2015-04-01
In this study, we propose an analysis system combined with feature selection to further improve the classification accuracy of single-trial electroencephalogram (EEG) data. Acquiring event-related brain potential data from the sensorimotor cortices, the system comprises artifact and background noise removal, feature extraction, feature selection, and feature classification. First, the artifacts and background noise are removed automatically by means of independent component analysis and a surface Laplacian filter, respectively. Several potential features, such as band power, autoregressive model parameters, and coherence and phase-locking values, are then extracted for subsequent classification. Next, the artificial bee colony (ABC) algorithm is used to select features from the aforementioned feature combination. Finally, the selected subfeatures are classified by a support vector machine. Compared with processing without artifact removal or feature selection, and with feature selection using a genetic algorithm, on single-trial EEG data from six subjects, the results indicate that the proposed system is promising and suitable for brain-computer interface applications. © EEG and Clinical Neuroscience Society (ECNS) 2014.
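A hedged sketch of artificial bee colony (ABC) feature selection wrapped around an SVM, in the spirit of the approach above: food sources are binary feature masks, neighbors flip a random bit, and cross-validated SVM accuracy is the fitness. The data and ABC parameters (colony size, iterations, abandonment limit) are illustrative assumptions, not the paper's settings.

import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(8)
n_trials, n_feats = 80, 30
X = rng.standard_normal((n_trials, n_feats))
y = rng.integers(0, 2, n_trials)
X[y == 1, :5] += 1.0                                # only the first 5 features carry signal

def fitness(mask):
    """Cross-validated SVM accuracy using only the selected feature columns."""
    if not mask.any():
        return 0.0
    return cross_val_score(SVC(kernel="rbf"), X[:, mask], y, cv=3).mean()

def neighbor(mask):
    """Neighborhood move: flip one randomly chosen feature bit."""
    new = mask.copy()
    new[rng.integers(n_feats)] ^= True
    return new

n_sources, limit, iterations = 8, 5, 15
sources = rng.integers(0, 2, (n_sources, n_feats)).astype(bool)
fits = np.array([fitness(m) for m in sources])
stagnation = np.zeros(n_sources, int)

for _ in range(iterations):
    # employed and onlooker phases share the same greedy neighborhood move
    probs = fits / fits.sum() if fits.sum() > 0 else np.full(n_sources, 1 / n_sources)
    onlooker_picks = rng.choice(n_sources, n_sources, p=probs)
    for i in list(range(n_sources)) + list(onlooker_picks):
        cand = neighbor(sources[i])
        f = fitness(cand)
        if f > fits[i]:
            sources[i], fits[i], stagnation[i] = cand, f, 0
        else:
            stagnation[i] += 1
    # scout phase: abandon sources that have stagnated too long
    for i in np.flatnonzero(stagnation > limit):
        sources[i] = rng.integers(0, 2, n_feats).astype(bool)
        fits[i], stagnation[i] = fitness(sources[i]), 0

best = sources[np.argmax(fits)]
print("selected features:", np.flatnonzero(best), "cv accuracy:", round(fits.max(), 2))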
Concurrent Image Processing Executive (CIPE)
NASA Technical Reports Server (NTRS)
Lee, Meemong; Cooper, Gregory T.; Groom, Steven L.; Mazer, Alan S.; Williams, Winifred I.
1988-01-01
The design and implementation of a Concurrent Image Processing Executive (CIPE), which is intended to become the support system software for a prototype high performance science analysis workstation are discussed. The target machine for this software is a JPL/Caltech Mark IIIfp Hypercube hosted by either a MASSCOMP 5600 or a Sun-3, Sun-4 workstation; however, the design will accommodate other concurrent machines of similar architecture, i.e., local memory, multiple-instruction-multiple-data (MIMD) machines. The CIPE system provides both a multimode user interface and an applications programmer interface, and has been designed around four loosely coupled modules; (1) user interface, (2) host-resident executive, (3) hypercube-resident executive, and (4) application functions. The loose coupling between modules allows modification of a particular module without significantly affecting the other modules in the system. In order to enhance hypercube memory utilization and to allow expansion of image processing capabilities, a specialized program management method, incremental loading, was devised. To minimize data transfer between host and hypercube a data management method which distributes, redistributes, and tracks data set information was implemented.
Cao, Ran; Pu, Xianjie; Du, Xinyu; Yang, Wei; Wang, Jiaona; Guo, Hengyu; Zhao, Shuyu; Yuan, Zuqing; Zhang, Chi; Li, Congju; Wang, Zhong Lin
2018-05-22
Multifunctional electronic textiles (E-textiles) with embedded electric circuits hold great application prospects for future wearable electronics. However, most E-textiles still have critical challenges, including air permeability, satisfactory washability, and mass fabrication. In this work, we fabricate a washable E-textile that addresses all of the concerns and shows its application as a self-powered triboelectric gesture textile for intelligent human-machine interfacing. Utilizing conductive carbon nanotubes (CNTs) and screen-printing technology, this kind of E-textile embraces high conductivity (0.2 kΩ/sq), high air permeability (88.2 mm/s), and can be manufactured on common fabric at large scales. Due to the advantage of the interaction between the CNTs and the fabrics, the electrode shows excellent stability under harsh mechanical deformation and even after being washed. Moreover, based on a single-electrode mode triboelectric nanogenerator and electrode pattern design, our E-textile exhibits highly sensitive touch/gesture sensing performance and has potential applications for human-machine interfacing.
McMullen, David P.; Hotson, Guy; Katyal, Kapil D.; Wester, Brock A.; Fifer, Matthew S.; McGee, Timothy G.; Harris, Andrew; Johannes, Matthew S.; Vogelstein, R. Jacob; Ravitz, Alan D.; Anderson, William S.; Thakor, Nitish V.; Crone, Nathan E.
2014-01-01
To increase the ability of brain-machine interfaces (BMIs) to control advanced prostheses such as the modular prosthetic limb (MPL), we are developing a novel system: the Hybrid Augmented Reality Multimodal Operation Neural Integration Environment (HARMONIE). This system utilizes hybrid input, supervisory control, and intelligent robotics to allow users to identify an object (via eye tracking and computer vision) and initiate (via brain-control) a semi-autonomous reach-grasp-and-drop of the object by the MPL. Sequential iterations of HARMONIE were tested in two pilot subjects implanted with electrocorticographic (ECoG) and depth electrodes within motor areas. The subjects performed the complex task in 71.4% (20/28) and 67.7% (21/31) of trials after minimal training. Balanced accuracy for detecting movements was 91.1% and 92.9%, significantly greater than chance accuracies (p < 0.05). After BMI-based initiation, the MPL completed the entire task 100% (one object) and 70% (three objects) of the time. The MPL took approximately 12.2 seconds for task completion after system improvements implemented for the second subject. Our hybrid-BMI design prevented all but one baseline false positive from initiating the system. The novel approach demonstrated in this proof-of-principle study, using hybrid input, supervisory control, and intelligent robotics, addresses limitations of current BMIs. PMID:24760914
McMullen, David P; Hotson, Guy; Katyal, Kapil D; Wester, Brock A; Fifer, Matthew S; McGee, Timothy G; Harris, Andrew; Johannes, Matthew S; Vogelstein, R Jacob; Ravitz, Alan D; Anderson, William S; Thakor, Nitish V; Crone, Nathan E
2014-07-01
To increase the ability of brain-machine interfaces (BMIs) to control advanced prostheses such as the modular prosthetic limb (MPL), we are developing a novel system: the Hybrid Augmented Reality Multimodal Operation Neural Integration Environment (HARMONIE). This system utilizes hybrid input, supervisory control, and intelligent robotics to allow users to identify an object (via eye tracking and computer vision) and initiate (via brain-control) a semi-autonomous reach-grasp-and-drop of the object by the MPL. Sequential iterations of HARMONIE were tested in two pilot subjects implanted with electrocorticographic (ECoG) and depth electrodes within motor areas. The subjects performed the complex task in 71.4% (20/28) and 67.7% (21/31) of trials after minimal training. Balanced accuracy for detecting movements was 91.1% and 92.9%, significantly greater than chance accuracies (p < 0.05). After BMI-based initiation, the MPL completed the entire task 100% (one object) and 70% (three objects) of the time. The MPL took approximately 12.2 s for task completion after system improvements implemented for the second subject. Our hybrid-BMI design prevented all but one baseline false positive from initiating the system. The novel approach demonstrated in this proof-of-principle study, using hybrid input, supervisory control, and intelligent robotics, addresses limitations of current BMIs.
Region based Brain Computer Interface for a home control application.
Akman Aydin, Eda; Bay, Omer Faruk; Guler, Inan
2015-08-01
Environment control is one of the important challenges for disabled people who suffer from neuromuscular diseases. A Brain Computer Interface (BCI) provides a communication channel between the human brain and the environment without requiring any muscular activation. The most important expectations for a home control application are high accuracy and reliable control. The region-based paradigm is a stimulus paradigm based on the oddball principle and requires selection of a target at two levels. This paper presents an application of the region-based paradigm to smart home control for people with neuromuscular diseases. In this study, a region-based stimulus interface containing 49 commands was designed. Five non-disabled subjects participated in the experiments. Offline analysis of the experiments yielded 95% accuracy for five flashes. This result showed that the region-based paradigm can successfully be used to select commands of a smart home control application with high accuracy using a low number of repetitions. Furthermore, no statistically significant difference was observed between the accuracies of the two levels.
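To make the two-level selection concrete, a 49-command set can be arranged as 7 regions of 7 items, so each command is reached by one region choice followed by one item choice; the layout and command names below are hypothetical and not the interface described in the study.

```python
# Hypothetical 7x7 layout for a 49-command region-based interface:
# level 1 selects a region, level 2 selects an item inside that region.
COMMANDS = [f"cmd_{i:02d}" for i in range(49)]      # placeholder command names
N_REGIONS, N_ITEMS = 7, 7

def split_into_regions(commands):
    return [commands[r * N_ITEMS:(r + 1) * N_ITEMS] for r in range(N_REGIONS)]

def select(region_idx, item_idx):
    """Two-level selection: first a region, then an item within it."""
    regions = split_into_regions(COMMANDS)
    return regions[region_idx][item_idx]

print(select(3, 5))   # -> 'cmd_26'
```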
Optimal Achievable Encoding for Brain Machine Interface
2017-12-22
dictionary-based encoding approach to translate a visual image into sequential patterns of electrical stimulation in real time, in a manner that... networks, and by applying linear decoding to complete recorded populations of retinal ganglion cells for the first time. Third, we developed a greedy...
Quantifying the role of motor imagery in brain-machine interfaces
NASA Astrophysics Data System (ADS)
Marchesotti, Silvia; Bassolino, Michela; Serino, Andrea; Bleuler, Hannes; Blanke, Olaf
2016-04-01
Despite technical advances in brain machine interfaces (BMI), for as-yet unknown reasons the ability to control a BMI remains limited to a subset of users. We investigate whether individual differences in BMI control based on motor imagery (MI) are related to differences in MI ability. We assessed whether differences in kinesthetic and visual MI, in the behavioral accuracy of MI, and in electroencephalographic variables, were able to differentiate between high- versus low-aptitude BMI users. High-aptitude BMI users showed higher MI accuracy as captured by subjective and behavioral measurements, pointing to a prominent role of kinesthetic rather than visual imagery. Additionally, for the first time, we applied mental chronometry, a measure quantifying the degree to which imagined and executed movements share a similar temporal profile. We also identified enhanced lateralized μ-band oscillations over sensorimotor cortices during MI in high- versus low-aptitude BMI users. These findings reveal that subjective, behavioral, and EEG measurements of MI are intimately linked to BMI control. We propose that poor BMI control cannot be ascribed only to intrinsic limitations of EEG recordings and that specific questionnaires and mental chronometry can be used as predictors of BMI performance (without the need to record EEG activity).
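One common way to quantify the lateralized mu-band activity mentioned here (a generic sketch, not the authors' analysis pipeline) is to compare 8-12 Hz band power over left and right sensorimotor electrodes such as C3 and C4 with a simple lateralization index; the EEG below is synthetic and the sampling rate is an assumption.

```python
# Generic mu-band (8-12 Hz) lateralization index from two sensorimotor channels.
import numpy as np
from scipy.signal import welch

fs = 250.0                                            # assumed sampling rate (Hz)
rng = np.random.default_rng(7)
t = np.arange(0, 4, 1 / fs)
c3 = rng.standard_normal(t.size) + 0.5 * np.sin(2 * np.pi * 10 * t)   # synthetic EEG at C3
c4 = rng.standard_normal(t.size) + 1.0 * np.sin(2 * np.pi * 10 * t)   # synthetic EEG at C4

def mu_power(x):
    f, pxx = welch(x, fs=fs, nperseg=512)
    band = (f >= 8) & (f <= 12)
    return pxx[band].mean()                           # mean PSD in the mu band

p3, p4 = mu_power(c3), mu_power(c4)
lateralization = (p4 - p3) / (p4 + p3)                # >0: stronger mu power over C4
print(f"mu power C3={p3:.3f}, C4={p4:.3f}, lateralization index={lateralization:.2f}")
```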
A four-dimensional virtual hand brain-machine interface using active dimension selection.
Rouse, Adam G
2016-06-01
Brain-machine interfaces (BMI) traditionally rely on a fixed, linear transformation from neural signals to an output state-space. In this study, the assumption that a BMI must control a fixed, orthogonal basis set was challenged and a novel active dimension selection (ADS) decoder was explored. ADS utilizes a two-stage decoder by using neural signals to both (i) select an active dimension being controlled and (ii) control the velocity along the selected dimension. ADS decoding was tested in a monkey using 16 single units from premotor and primary motor cortex to successfully control a virtual hand avatar to move to eight different postures. Following training with the ADS decoder to control 2, 3, and then 4 dimensions, each emulating a grasp shape of the hand, performance reached 93% correct with a bit rate of 2.4 bits/s for eight targets. Selection of eight targets using ADS control was more efficient, as measured by bit rate, than either full four-dimensional control or computer-assisted one-dimensional control. ADS decoding allows a user to quickly and efficiently select different hand postures. This novel decoding scheme represents a potential method to reduce the complexity of high-dimensional BMI control of the hand.
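For context, these performance figures are consistent with the widely used Wolpaw information-transfer-rate formula, assuming that is the metric intended and that roughly one selection was completed per second (neither assumption is stated in the abstract):

```latex
% Wolpaw bits per selection for N targets at accuracy P
B = \log_2 N + P\log_2 P + (1-P)\log_2\!\frac{1-P}{N-1}
% With N = 8 and P = 0.93:
B \approx 3 - 0.097 - 0.465 \approx 2.44 \ \text{bits per selection},
% i.e. about 2.4 bits/s if one selection takes roughly one second.
```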
Quantifying the role of motor imagery in brain-machine interfaces
Marchesotti, Silvia; Bassolino, Michela; Serino, Andrea; Bleuler, Hannes; Blanke, Olaf
2016-01-01
Despite technical advances in brain machine interfaces (BMI), for as-yet unknown reasons the ability to control a BMI remains limited to a subset of users. We investigate whether individual differences in BMI control based on motor imagery (MI) are related to differences in MI ability. We assessed whether differences in kinesthetic and visual MI, in the behavioral accuracy of MI, and in electroencephalographic variables, were able to differentiate between high- versus low-aptitude BMI users. High-aptitude BMI users showed higher MI accuracy as captured by subjective and behavioral measurements, pointing to a prominent role of kinesthetic rather than visual imagery. Additionally, for the first time, we applied mental chronometry, a measure quantifying the degree to which imagined and executed movements share a similar temporal profile. We also identified enhanced lateralized μ-band oscillations over sensorimotor cortices during MI in high- versus low-aptitude BMI users. These findings reveal that subjective, behavioral, and EEG measurements of MI are intimately linked to BMI control. We propose that poor BMI control cannot be ascribed only to intrinsic limitations of EEG recordings and that specific questionnaires and mental chronometry can be used as predictors of BMI performance (without the need to record EEG activity). PMID:27052520
A Bidirectional Brain-Machine Interface Algorithm That Approximates Arbitrary Force-Fields
Semprini, Marianna; Mussa-Ivaldi, Ferdinando A.; Panzeri, Stefano
2014-01-01
We examine bidirectional brain-machine interfaces that control external devices in a closed loop by decoding motor cortical activity to command the device and by encoding the state of the device by delivering electrical stimuli to sensory areas. Although it is possible to design this artificial sensory-motor interaction while maintaining two independent channels of communication, here we propose a rule that closes the loop between flows of sensory and motor information in a way that approximates a desired dynamical policy expressed as a field of forces acting upon the controlled external device. We previously developed a first implementation of this approach based on linear decoding of neural activity recorded from the motor cortex into a set of forces (a force field) applied to a point mass, and on encoding of position of the point mass into patterns of electrical stimuli delivered to somatosensory areas. However, this previous algorithm had the limitation that it only worked in situations when the position-to-force map to be implemented is invertible. Here we overcome this limitation by developing a new non-linear form of the bidirectional interface that can approximate a virtually unlimited family of continuous fields. The new algorithm bases both the encoding of position information and the decoding of motor cortical activity on an explicit map between spike trains and the state space of the device computed with Multi-Dimensional-Scaling. We present a detailed computational analysis of the performance of the interface and a validation of its robustness by using synthetic neural responses in a simulated sensory-motor loop. PMID:24626393
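As a minimal sketch of the core idea rather than the authors' algorithm, scikit-learn's MDS can embed a matrix of pairwise dissimilarities between neural response patterns into a low-dimensional device state space; the synthetic responses and the Euclidean placeholder metric below are assumptions standing in for a proper spike-train distance.

```python
# Embed pairwise response dissimilarities into a 2-D device state space with MDS.
import numpy as np
from sklearn.manifold import MDS

rng = np.random.default_rng(1)
n_patterns = 30
spike_counts = rng.poisson(5.0, size=(n_patterns, 16))   # hypothetical 16-unit responses

# Placeholder dissimilarity between response patterns (Euclidean on spike counts,
# standing in for a spike-train metric such as Victor-Purpura or van Rossum distance).
dists = np.linalg.norm(spike_counts[:, None, :] - spike_counts[None, :, :], axis=-1)

mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
device_states = mds.fit_transform(dists)   # each neural pattern mapped to a 2-D point
print(device_states.shape)                 # (30, 2)
```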
Optical HMI with biomechanical energy harvesters integrated in textile supports
NASA Astrophysics Data System (ADS)
De Pasquale, G.; Kim, SG; De Pasquale, D.
2015-12-01
This paper reports the design, prototyping and experimental validation of a human-machine interface (HMI), named GoldFinger, integrated into a glove with energy harvesting from finger motion. The device is aimed at medical applications, design tools, the virtual reality field, and industrial applications where interaction with machines is restricted by safety procedures. The HMI prototype includes four piezoelectric transducers applied to the backside of the fingers at the PIP (proximal inter-phalangeal) joints, electric wires embedded in the fabric connecting the transducers, an aluminum case for the electronics, a wearable switch made with conductive fabric to turn the communication channel on and off, and an LED. The electronic circuit used to manage the power and to control the light emitter includes a diode bridge, leveling capacitors, a storage battery and a switch made of conductive fabric. The communication with the machine is managed by dedicated software, which includes the user interface, the optical tracking, and the continuous updating of the machine microcontroller. The energetic benefit of the energy harvester on battery lifetime is inversely proportional to the activation time of the optical emitter. In most applications, the optical port is active for 1 to 5% of the time, corresponding to a battery lifetime increase of between about 14% and 70%.
Human-machine interface for a VR-based medical imaging environment
NASA Astrophysics Data System (ADS)
Krapichler, Christian; Haubner, Michael; Loesch, Andreas; Lang, Manfred K.; Englmeier, Karl-Hans
1997-05-01
Modern 3D scanning techniques like magnetic resonance imaging (MRI) or computed tomography (CT) produce high-quality images of the human anatomy. Virtual environments open new ways to display and to analyze those tomograms. Compared with today's inspection of 2D image sequences, physicians are empowered to recognize spatial coherencies and examine pathological regions more easily, and diagnosis and therapy planning can be accelerated. For that purpose a powerful human-machine interface is required, which offers a variety of tools and features to enable both exploration and manipulation of the 3D data. Man-machine communication has to be intuitive and efficacious to avoid long familiarization times and to enhance familiarity with and acceptance of the interface. Hence, interaction capabilities in virtual worlds should be comparable to those in the real world to allow utilization of our natural experiences. In this paper the integration of hand gestures and visual focus, two important aspects of modern human-computer interaction, into a medical imaging environment is shown. With the presented human-machine interface, including virtual reality displaying and interaction techniques, radiologists can be supported in their work. Further, virtual environments can even facilitate communication between specialists from different fields or in educational and training applications.
NASA Astrophysics Data System (ADS)
Lin, Chern-Sheng; Ho, Chien-Wa; Chang, Kai-Chieh; Hung, San-Shan; Shei, Hung-Jung; Yeh, Mau-Shiun
2006-06-01
This study describes the design and combination of an eye-controlled and a head-controlled human-machine interface system. This system is a highly effective human-machine interface, detecting head movement by changing positions and numbers of light sources on the head. When the users utilize the head-mounted display to browse a computer screen, the system will catch the images of the user's eyes with CCD cameras, which can also measure the angle and position of the light sources. In the eye-tracking system, the program in the computer will locate each center point of the pupils in the images, and record the information on moving traces and pupil diameters. In the head gesture measurement system, the user wears a double-source eyeglass frame, so the system catches images of the user's head by using a CCD camera in front of the user. The computer program will locate the center point of the head, transferring it to the screen coordinates, and then the user can control the cursor by head motions. We combine the eye-controlled and head-controlled human-machine interface system for the virtual reality applications.
Machine learning for neuroimaging with scikit-learn.
Abraham, Alexandre; Pedregosa, Fabian; Eickenberg, Michael; Gervais, Philippe; Mueller, Andreas; Kossaifi, Jean; Gramfort, Alexandre; Thirion, Bertrand; Varoquaux, Gaël
2014-01-01
Statistical machine learning methods are increasingly used for neuroimaging data analysis. Their main virtue is their ability to model high-dimensional datasets, e.g., multivariate analysis of activation images or resting-state time series. Supervised learning is typically used in decoding or encoding settings to relate brain images to behavioral or clinical observations, while unsupervised learning can uncover hidden structures in sets of images (e.g., resting state functional MRI) or find sub-populations in large cohorts. By considering different functional neuroimaging applications, we illustrate how scikit-learn, a Python machine learning library, can be used to perform some key analysis steps. Scikit-learn contains a very large set of statistical learning algorithms, both supervised and unsupervised, and its application to neuroimaging data provides a versatile tool to study the brain.
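As a hedged, minimal example of the decoding workflow described (not code from the paper), scikit-learn lets a linear classifier relate per-scan features to condition labels under cross-validation; the data below are synthetic placeholders for activation images or resting-state features.

```python
# Minimal decoding example in the spirit of the scikit-learn workflow described above.
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)
X = rng.standard_normal((80, 500))     # hypothetical: 80 scans x 500 voxel features
y = np.repeat([0, 1], 40)              # two experimental conditions
X[y == 1, :20] += 0.5                  # inject a weak condition effect

decoder = make_pipeline(StandardScaler(), SVC(kernel="linear"))
scores = cross_val_score(decoder, X, y, cv=5)
print("decoding accuracy: %.2f +/- %.2f" % (scores.mean(), scores.std()))
```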
Machine learning for neuroimaging with scikit-learn
Abraham, Alexandre; Pedregosa, Fabian; Eickenberg, Michael; Gervais, Philippe; Mueller, Andreas; Kossaifi, Jean; Gramfort, Alexandre; Thirion, Bertrand; Varoquaux, Gaël
2014-01-01
Statistical machine learning methods are increasingly used for neuroimaging data analysis. Their main virtue is their ability to model high-dimensional datasets, e.g., multivariate analysis of activation images or resting-state time series. Supervised learning is typically used in decoding or encoding settings to relate brain images to behavioral or clinical observations, while unsupervised learning can uncover hidden structures in sets of images (e.g., resting state functional MRI) or find sub-populations in large cohorts. By considering different functional neuroimaging applications, we illustrate how scikit-learn, a Python machine learning library, can be used to perform some key analysis steps. Scikit-learn contains a very large set of statistical learning algorithms, both supervised and unsupervised, and its application to neuroimaging data provides a versatile tool to study the brain. PMID:24600388
Concurrent Image Processing Executive (CIPE). Volume 1: Design overview
NASA Technical Reports Server (NTRS)
Lee, Meemong; Groom, Steven L.; Mazer, Alan S.; Williams, Winifred I.
1990-01-01
The design and implementation of a Concurrent Image Processing Executive (CIPE), which is intended to become the support system software for a prototype high performance science analysis workstation are described. The target machine for this software is a JPL/Caltech Mark 3fp Hypercube hosted by either a MASSCOMP 5600 or a Sun-3, Sun-4 workstation; however, the design will accommodate other concurrent machines of similar architecture, i.e., local memory, multiple-instruction-multiple-data (MIMD) machines. The CIPE system provides both a multimode user interface and an applications programmer interface, and has been designed around four loosely coupled modules: user interface, host-resident executive, hypercube-resident executive, and application functions. The loose coupling between modules allows modification of a particular module without significantly affecting the other modules in the system. In order to enhance hypercube memory utilization and to allow expansion of image processing capabilities, a specialized program management method, incremental loading, was devised. To minimize data transfer between host and hypercube, a data management method which distributes, redistributes, and tracks data set information was implemented. The data management also allows data sharing among application programs. The CIPE software architecture provides a flexible environment for scientific analysis of complex remote sensing image data, such as planetary data and imaging spectrometry, utilizing state-of-the-art concurrent computation capabilities.
NASA Technical Reports Server (NTRS)
Quealy, Angela; Cole, Gary L.; Blech, Richard A.
1993-01-01
The Application Portable Parallel Library (APPL) is a subroutine-based library of communication primitives that is callable from applications written in FORTRAN or C. APPL provides a consistent programmer interface to a variety of distributed and shared-memory multiprocessor MIMD machines. The objective of APPL is to minimize the effort required to move parallel applications from one machine to another, or to a network of homogeneous machines. APPL encompasses many of the message-passing primitives that are currently available on commercial multiprocessor systems. This paper describes APPL (version 2.3.1) and its usage, reports the status of the APPL project, and indicates possible directions for the future. Several applications using APPL are discussed, as well as performance and overhead results.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Johnson, J.R.; Netrologic, Inc., San Diego, CA
1988-01-01
Topics presented include integrating neural networks and expert systems, neural networks and signal processing, machine learning, cognition and avionics applications, artificial intelligence and man-machine interface issues, real time expert systems, artificial intelligence, and engineering applications. Also considered are advanced problem solving techniques, combinational optimization for scheduling and resource control, data fusion/sensor fusion, back propagation with momentum, shared weights and recurrency, automatic target recognition, cybernetics, optical neural networks.
The Two-Brains Hypothesis: Towards a guide for brain-brain and brain-machine interfaces.
Goodman, G; Poznanski, R R; Cacha, L; Bercovich, D
2015-09-01
Great advances have been made in signaling information on brain activity in individuals, or passing between an individual and a computer or robot. These include recording of natural activity using implants under the scalp or by external means, or the reverse feeding of such data into the brain. In one recent example, noninvasive transcranial magnetic stimulation (TMS) allowed feeding of digitized information into the central nervous system (CNS). Thus, noninvasive electroencephalography (EEG) recordings of motor signals at the scalp, representing the specific motor intention of hand movement in individual humans, were fed as repetitive transcranial magnetic stimulation (rTMS) at a maximum intensity of 2.0 T through a circular magnetic coil placed flush on each of the heads of subjects present at a different location. The TMS was said to induce an electric current influencing axons of the motor cortex, causing the intended hand movement: the first example of the transfer of motor intention and its expression between the brains of two remote humans. However, to date the mechanisms involved, not least that relating to the participation of magnetic induction, remain unclear. In general, in animal biology, magnetic fields are usually the poor relation of neuronal current: generally "unseen" and, if apparent, disregarded or just given a nod. Niels Bohr searched for a biological parallel to complementary phenomena of physics. Pertinently, the two-brains hypothesis (TBH) proposed recently that advanced animals, especially man, have two brains, i.e., the animal CNS evolved as two fundamentally different though interdependent, complementary organs: one electro-ionic (tangible, known and accessible), and the other electromagnetic (intangible and difficult to access) - a stable, structured and functional 3D compendium of variously induced interacting electromagnetic (EM) fields. Research on the CNS in health and disease progresses, including that on brain-brain, brain-computer and brain-robot engineering. As they grow even closer, these disciplines involve their own unique complexities, including direction by the laws of inductive physics. So the novel TBH has wide fundamental implications, including those related to TMS. These require rethinking and renewed research engaging the fully complementary equivalence of mutual magnetic and electric field induction in the CNS and, within this context, a new mathematics of the brain to decipher higher cognitive operations not possible with current brain-brain and brain-machine interfaces. Bohr may now rest.
Machine Learning and Radiology
Wang, Shijun; Summers, Ronald M.
2012-01-01
In this paper, we give a short introduction to machine learning and survey its applications in radiology. We focus on six categories of applications in radiology: medical image segmentation; registration; computer-aided detection and diagnosis; brain function or activity analysis and neurological disease diagnosis from fMR images; content-based image retrieval systems for CT or MRI images; and text analysis of radiology reports using natural language processing (NLP) and natural language understanding (NLU). This survey shows that machine learning plays a key role in many radiology applications. Machine learning identifies complex patterns automatically and helps radiologists make intelligent decisions on radiology data such as conventional radiographs, CT, MRI, and PET images and radiology reports. In many applications, the performance of machine learning-based automatic detection and diagnosis systems has been shown to be comparable to that of a well-trained and experienced radiologist. Technology development in machine learning and radiology will benefit from each other in the long run. Key contributions and common characteristics of machine learning techniques in radiology are discussed. We also discuss the problem of translating machine learning applications to the radiology clinical setting, including advantages and potential barriers. PMID:22465077
Intelligible machine learning with malibu.
Langlois, Robert E; Lu, Hui
2008-01-01
malibu is an open-source machine learning workbench developed in C/C++ for high-performance real-world applications, namely bioinformatics and medical informatics. It leverages third-party machine learning implementations for more robust, bug-free software. This workbench handles several well-studied supervised machine learning problems including classification, regression, importance-weighted classification and multiple-instance learning. The malibu interface was designed to create reproducible experiments ideally run in a remote and/or command-line environment. The software can be found at: http://proteomics.bioengr.uic.edu/malibu/index.html.
Quandt, F.; Reichert, C.; Hinrichs, H.; Heinze, H.J.; Knight, R.T.; Rieger, J.W.
2012-01-01
It is crucial to understand what brain signals can be decoded from single trials with different recording techniques for the development of Brain-Machine Interfaces. A specific challenge for non-invasive recording methods is activations confined to small spatial areas on the cortex, such as the finger representation of one hand. Here we study the information content of single-trial brain activity in non-invasive MEG and EEG recordings elicited by finger movements of one hand. We investigate the feasibility of decoding which of four fingers of one hand performed a slight button press. With MEG we demonstrate reliable discrimination of single button presses performed with the thumb, the index, the middle or the little finger (average over all subjects and fingers 57%, best subject 70%, empirical guessing level: 25.1%). EEG decoding performance was less robust (average over all subjects and fingers 43%, best subject 54%, empirical guessing level 25.1%). Spatiotemporal patterns of amplitude variations in the time series provided the best information for discriminating finger movements. Non-phase-locked changes of mu and beta oscillations were less predictive. Movement-related high-gamma oscillations were observed in average induced oscillation amplitudes in the MEG but did not provide sufficient information about the finger's identity in single trials. Importantly, pre-movement neuronal activity provided information about the preparation of the movement of a specific finger. Our study demonstrates the potential of non-invasive MEG to provide informative features for individual finger control in a Brain-Machine Interface neuroprosthesis. PMID:22155040
Augmenting intracortical brain-machine interface with neurally driven error detectors
NASA Astrophysics Data System (ADS)
Even-Chen, Nir; Stavisky, Sergey D.; Kao, Jonathan C.; Ryu, Stephen I.; Shenoy, Krishna V.
2017-12-01
Objective. Making mistakes is inevitable, but identifying them allows us to correct or adapt our behavior to improve future performance. Current brain-machine interfaces (BMIs) make errors that need to be explicitly corrected by the user, thereby consuming time and thus hindering performance. We hypothesized that neural correlates of the user perceiving the mistake could be used by the BMI to automatically correct errors. However, it was unknown whether intracortical outcome error signals were present in the premotor and primary motor cortices, brain regions successfully used for intracortical BMIs. Approach. We report here for the first time a putative outcome error signal in spiking activity within these cortices when rhesus macaques performed an intracortical BMI computer cursor task. Main results. We decoded BMI trial outcomes shortly after and even before a trial ended with 96% and 84% accuracy, respectively. This led us to develop and implement in real-time a first-of-its-kind intracortical BMI error ‘detect-and-act’ system that attempts to automatically ‘undo’ or ‘prevent’ mistakes. The detect-and-act system works independently and in parallel to a kinematic BMI decoder. In a challenging task that resulted in substantial errors, this approach improved the performance of a BMI employing two variants of the ubiquitous Kalman velocity filter, including a state-of-the-art decoder (ReFIT-KF). Significance. Detecting errors in real-time from the same brain regions that are commonly used to control BMIs should improve the clinical viability of BMIs aimed at restoring motor function to people with paralysis.
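A schematic sketch of the "detect-and-act" control flow described above, with a placeholder velocity decoder and a made-up error detector standing in for the Kalman-filter decoder and the spiking-activity error classifier; it illustrates only the parallel undo logic, not the authors' ReFIT-KF system.

```python
# Schematic 'detect-and-act' loop: a kinematic decoder moves the cursor while an
# independent outcome-error classifier can undo the last update in real time.
import numpy as np

rng = np.random.default_rng(3)

def decode_velocity(neural_bin):
    """Placeholder kinematic decoder (stands in for a Kalman velocity filter)."""
    return neural_bin[:2] * 0.1

def error_detected(neural_bin):
    """Placeholder outcome-error classifier on the same neural features."""
    return neural_bin.mean() > 1.5          # hypothetical decision threshold

cursor = np.zeros(2)
history = []
for step in range(100):
    bin_features = rng.standard_normal(20) + (2.0 if step % 10 == 0 else 0.0)
    history.append(cursor.copy())
    cursor = cursor + decode_velocity(bin_features)
    if error_detected(bin_features):
        cursor = history[-1]                # 'undo': roll the cursor state back
```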
A Symbiotic Brain-Machine Interface through Value-Based Decision Making
Mahmoudi, Babak; Sanchez, Justin C.
2011-01-01
Background In the development of Brain Machine Interfaces (BMIs), there is a great need to enable users to interact with changing environments during the activities of daily life. It is expected that the number and scope of the learning tasks encountered during interaction with the environment as well as the pattern of brain activity will vary over time. These conditions, in addition to neural reorganization, pose a challenge to decoding neural commands for BMIs. We have developed a new BMI framework in which a computational agent symbiotically decoded users' intended actions by utilizing both motor commands and goal information directly from the brain through a continuous Perception-Action-Reward Cycle (PARC). Methodology The control architecture designed was based on Actor-Critic learning, which is a PARC-based reinforcement learning method. Our neurophysiology studies in rat models suggested that Nucleus Accumbens (NAcc) contained a rich representation of goal information in terms of predicting the probability of earning reward and it could be translated into an evaluative feedback for adaptation of the decoder with high precision. Simulated neural control experiments showed that the system was able to maintain high performance in decoding neural motor commands during novel tasks or in the presence of reorganization in the neural input. We then implanted a dual micro-wire array in the primary motor cortex (M1) and the NAcc of rat brain and implemented a full closed-loop system in which robot actions were decoded from the single unit activity in M1 based on an evaluative feedback that was estimated from NAcc. Conclusions Our results suggest that adapting the BMI decoder with an evaluative feedback that is directly extracted from the brain is a possible solution to the problem of operating BMIs in changing environments with dynamic neural signals. During closed-loop control, the agent was able to solve a reaching task by capturing the action and reward interdependency in the brain. PMID:21423797
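A toy sketch of the actor-critic learning loop that the PARC framework builds on, with a synthetic reward signal standing in for the NAcc-derived evaluative feedback; the state encoding, dimensions, and reward rule are all assumptions made for illustration.

```python
# Toy actor-critic decoder update driven by an evaluative (reward-like) signal.
import numpy as np

rng = np.random.default_rng(0)
n_units, n_actions = 16, 4                 # hypothetical M1 units and robot actions
actor_w = np.zeros((n_actions, n_units))   # action-preference weights (actor)
critic_w = np.zeros(n_units)               # state-value weights (critic)
alpha, beta, gamma = 0.05, 0.05, 0.9

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

state = rng.random(n_units)
for trial in range(500):
    probs = softmax(actor_w @ state)
    action = rng.choice(n_actions, p=probs)
    next_state = rng.random(n_units)
    reward = 1.0 if action == 0 else 0.0             # stand-in for NAcc-derived feedback
    td_error = reward + gamma * critic_w @ next_state - critic_w @ state
    critic_w += beta * td_error * state              # critic update
    grad = -probs; grad[action] += 1.0               # d log pi / d preferences
    actor_w += alpha * td_error * np.outer(grad, state)
    state = next_state

print("learned action probabilities:", softmax(actor_w @ rng.random(n_units)).round(2))
```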
Mohanty, Rosaleena; Sinha, Anita M; Remsik, Alexander B; Dodd, Keith C; Young, Brittany M; Jacobson, Tyler; McMillan, Matthew; Thoma, Jaclyn; Advani, Hemali; Nair, Veena A; Kang, Theresa J; Caldera, Kristin; Edwards, Dorothy F; Williams, Justin C; Prabhakaran, Vivek
2018-01-01
Interventional therapy using brain-computer interface (BCI) technology has shown promise in facilitating motor recovery in stroke survivors; however, the impact of this form of intervention on functional networks outside of the motor network specifically is not well-understood. Here, we investigated resting-state functional connectivity (rs-FC) in stroke participants undergoing BCI therapy across stages, namely pre- and post-intervention, to identify discriminative functional changes using a machine learning classifier with the goal of categorizing participants into one of the two therapy stages. Twenty chronic stroke participants with persistent upper-extremity motor impairment received neuromodulatory training using a closed-loop neurofeedback BCI device, and rs-functional MRI (rs-fMRI) scans were collected at four time points: pre-, mid-, post-, and 1 month post-therapy. To evaluate the peak effects of this intervention, rs-FC was analyzed from two specific stages, namely pre- and post-therapy. In total, 236 seeds spanning both motor and non-motor regions of the brain were computed at each stage. A univariate feature selection was applied to reduce the number of features followed by a principal component-based data transformation used by a linear binary support vector machine (SVM) classifier to classify each participant into a therapy stage. The SVM classifier achieved a cross-validation accuracy of 92.5% using a leave-one-out method. Outside of the motor network, seeds from the fronto-parietal task control, default mode, subcortical, and visual networks emerged as important contributors to the classification. Furthermore, a higher number of functional changes were observed to be strengthening from the pre- to post-therapy stage than the ones weakening, both of which involved motor and non-motor regions of the brain. These findings may provide new evidence to support the potential clinical utility of BCI therapy as a form of stroke rehabilitation that not only benefits motor recovery but also facilitates recovery in other brain networks. Moreover, delineation of stronger and weaker changes may inform more optimal designs of BCI interventional therapy so as to facilitate strengthened and suppress weakened changes in the recovery process.
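A schematic version of the classification pipeline summarized above (univariate feature selection, a principal component transform, and a linear support vector machine evaluated with leave-one-out cross-validation), run on synthetic connectivity features rather than the study's rs-fMRI data; the feature counts and component numbers are assumptions.

```python
# Schematic pre/post-therapy classification: univariate selection -> PCA -> linear SVM,
# evaluated with leave-one-out cross-validation on synthetic connectivity features.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.decomposition import PCA
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score, LeaveOneOut

rng = np.random.default_rng(5)
n_samples = 40                                   # hypothetical: 20 participants x 2 stages
X = rng.standard_normal((n_samples, 1000))       # seed-based connectivity features
y = np.tile([0, 1], n_samples // 2)              # 0 = pre-therapy, 1 = post-therapy
X[y == 1, :30] += 0.8                            # inject a stage-related change

clf = make_pipeline(SelectKBest(f_classif, k=100), PCA(n_components=10), SVC(kernel="linear"))
acc = cross_val_score(clf, X, y, cv=LeaveOneOut()).mean()
print(f"leave-one-out accuracy: {acc:.2f}")
```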
CLIPS interface development tools and their application
NASA Technical Reports Server (NTRS)
Engel, Bernard A.; Rewerts, Chris C.; Srinivasan, Raghavan; Rogers, Joseph B.; Jones, Don D.
1990-01-01
A package of C-based PC user interface development functions has been developed and integrated into CLIPS. The primary function is ASK, which provides a means to ask the user questions via multiple choice menus or the keyboard and then returns the user response to CLIPS. A parameter-like structure supplies information for the interface. Another function, SHOW, provides a means to paginate and display text. A third function, TITLE, formats and displays title screens. A similar set of C-based functions, which are more general and thus will also run on UNIX and other machines, has been developed. Seven expert system applications were transformed from commercial development environments into CLIPS and utilize ASK, SHOW, and TITLE. Development of numerous new expert system applications using CLIPS and these interface functions has started. These functions greatly reduce the time required to build interfaces for CLIPS applications.
NASA Astrophysics Data System (ADS)
Zhang, Zhen; Jiao, Xuejun; Xu, Fengang; Jiang, Jin; Yang, Hanjun; Cao, Yong; Fu, Jiahao
2017-01-01
Functional near-infrared spectroscopy (fNIRS), which can measure cortical hemoglobin activity, has been widely adopted in brain-computer interfaces (BCIs). To explore the feasibility of recognizing motor imagery (MI) and motor execution (ME) within the same motion, we measured changes in oxygenated hemoglobin (HBO) and deoxygenated hemoglobin (HBR) over the prefrontal cortex (PFC) and motor cortex (MC) while 15 subjects performed hand extension and finger tapping tasks. The mean, slope, quadratic coefficient, and approximate entropy features were extracted from HBO as the input to a support vector machine (SVM). For the four-class fNIRS-BCI classifiers, we achieved 87.65% and 87.58% classification accuracy for the hand extension and finger tapping tasks, respectively. In conclusion, fNIRS-BCI can effectively recognize MI and ME in the same motion.
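A rough sketch of extracting the mean, slope, and quadratic-coefficient features named above from an HbO task window via a second-order polynomial fit (the approximate entropy feature is omitted); the signal, sampling rate, and window length are synthetic assumptions.

```python
# Extract mean, slope, and quadratic-coefficient features from a synthetic HbO window.
import numpy as np

fs = 10.0                                          # assumed fNIRS sampling rate (Hz)
t = np.arange(0, 10, 1 / fs)                       # assumed 10-s task window
rng = np.random.default_rng(2)
hbo = 0.02 * t - 0.001 * t**2 + 0.01 * rng.standard_normal(t.size)   # synthetic HbO trace

def hbo_features(signal, times):
    quad, slope, intercept = np.polyfit(times, signal, deg=2)   # c2*t^2 + c1*t + c0
    return {"mean": signal.mean(), "slope": slope, "quadratic": quad}

print(hbo_features(hbo, t))
```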
Modular, bluetooth enabled, wireless electroencephalograph (EEG) platform.
Lovelace, Joseph A; Witt, Tyler S; Beyette, Fred R
2013-01-01
A design for a modular, compact, and accurate wireless electroencephalograph (EEG) system is proposed. EEG is the only non-invasive measure for neuronal function of the brain. Using a number of digital signal processing (DSP) techniques, this neuronal function can be acquired and processed into meaningful representations of brain activity. The system described here utilizes Bluetooth to wirelessly transmit the digitized brain signal for an end application use. In this way, the system is portable, and modular in terms of the device to which it can interface. Brain Computer Interface (BCI) has become a popular extension of EEG systems in modern research. This design serves as a platform for applications using BCI capability.
Synofzik, M
2007-12-01
With the rapid progress in neuropharmacology, it seems to be becoming possible to effectively improve our cognitive capacities and emotional states by easily applicable means. Moreover, deep-brain stimulation may offer an effective therapeutic option for those neurological and psychiatric diseases that still cannot be sufficiently treated by pharmacological measures. So far, however, both the benefit and the harm of these techniques are insufficiently understood by neuroscience, and detailed ethical analyses are still missing. In this article, ethical criteria and the most recent empirical evidence are systematically brought together for the first time. This analysis shows that it is irrelevant for an ethical evaluation whether a drug or a brain-machine interface is categorized as "enhancement" or "treatment" or whether it changes "human nature". The only decisive criteria are whether the intervention (1) benefits the patient, (2) does not harm the patient and (3) is desired by the patient. However, current empirical data in both fields, neuropharmacology and deep-brain stimulation, are still too sparse to adequately evaluate these criteria. Moreover, the focus in both fields has been strongly misled by neglecting the distinction between "benefit" and "efficacy": in past years, research and clinical practice have focused only on physiological effects, not on the actual benefit to the patient.
The Brainarium: An Interactive Immersive Tool for Brain Education, Art, and Neurotherapy
2016-01-01
Recent theoretical and technological advances in neuroimaging techniques now allow brain electrical activity to be recorded using affordable and user-friendly equipment for nonscientist end-users. An increasing number of educators and artists have begun using electroencephalogram (EEG) to control multimedia and live artistic contents. In this paper, we introduce a new concept based on brain computer interface (BCI) technologies: the Brainarium. The Brainarium is a new pedagogical and artistic tool, which can deliver and illustrate scientific knowledge, as well as a new framework for scientific exploration. The Brainarium consists of a portable planetarium device that is being used as brain metaphor. This is done by projecting multimedia content on the planetarium dome and displaying EEG data recorded from a subject in real time using Brain Machine Interface (BMI) technologies. The system has been demonstrated through several performances involving an interaction between the subject controlling the BMI, a musician, and the audience during series of exhibitions and workshops in schools. We report here feedback from 134 participants who filled questionnaires to rate their experiences. Our results show improved subjective learning compared to conventional methods, improved entertainment value, improved absorption into the material being presented, and little discomfort. PMID:27698660
The Brainarium: An Interactive Immersive Tool for Brain Education, Art, and Neurotherapy.
Grandchamp, Romain; Delorme, Arnaud
2016-01-01
Recent theoretical and technological advances in neuroimaging techniques now allow brain electrical activity to be recorded using affordable and user-friendly equipment for nonscientist end-users. An increasing number of educators and artists have begun using electroencephalogram (EEG) to control multimedia and live artistic contents. In this paper, we introduce a new concept based on brain computer interface (BCI) technologies: the Brainarium. The Brainarium is a new pedagogical and artistic tool, which can deliver and illustrate scientific knowledge, as well as a new framework for scientific exploration. The Brainarium consists of a portable planetarium device that is being used as brain metaphor. This is done by projecting multimedia content on the planetarium dome and displaying EEG data recorded from a subject in real time using Brain Machine Interface (BMI) technologies. The system has been demonstrated through several performances involving an interaction between the subject controlling the BMI, a musician, and the audience during series of exhibitions and workshops in schools. We report here feedback from 134 participants who filled questionnaires to rate their experiences. Our results show improved subjective learning compared to conventional methods, improved entertainment value, improved absorption into the material being presented, and little discomfort.
Prediction of brain-computer interface aptitude from individual brain structure.
Halder, S; Varkuti, B; Bogdan, M; Kübler, A; Rosenstiel, W; Sitaram, R; Birbaumer, N
2013-01-01
Brain-computer interfaces (BCIs) provide a non-muscular communication channel for patients with impairments of the motor system. A significant number of BCI users are unable to obtain voluntary control of a BCI system within a reasonable time. This makes methods that can be used to determine the aptitude of a user necessary. We hypothesized that the integrity and connectivity of involved white matter connections may serve as a predictor of individual BCI performance. Therefore, we analyzed structural data from anatomical scans and DTI of motor imagery BCI users differentiated into high and low BCI-aptitude groups based on their overall performance. Using a machine learning classification method, we identified discriminating structural brain trait features and correlated the best features with a continuous measure of individual BCI performance. Prediction of the aptitude group of each participant was possible with near perfect accuracy (one error). Tissue volumetric analysis yielded only poor classification results. In contrast, the structural integrity and myelination quality of deep white matter structures such as the Corpus Callosum, Cingulum, and Superior Fronto-Occipital Fascicle were positively correlated with individual BCI performance. This confirms that structural brain traits contribute to individual performance in BCI use.
Prediction of brain-computer interface aptitude from individual brain structure
Halder, S.; Varkuti, B.; Bogdan, M.; Kübler, A.; Rosenstiel, W.; Sitaram, R.; Birbaumer, N.
2013-01-01
Objective: Brain-computer interfaces (BCIs) provide a non-muscular communication channel for patients with impairments of the motor system. A significant number of BCI users are unable to obtain voluntary control of a BCI system within a reasonable time. This makes methods that can be used to determine the aptitude of a user necessary. Methods: We hypothesized that the integrity and connectivity of involved white matter connections may serve as a predictor of individual BCI performance. Therefore, we analyzed structural data from anatomical scans and DTI of motor imagery BCI users differentiated into high and low BCI-aptitude groups based on their overall performance. Results: Using a machine learning classification method, we identified discriminating structural brain trait features and correlated the best features with a continuous measure of individual BCI performance. Prediction of the aptitude group of each participant was possible with near perfect accuracy (one error). Conclusions: Tissue volumetric analysis yielded only poor classification results. In contrast, the structural integrity and myelination quality of deep white matter structures such as the Corpus Callosum, Cingulum, and Superior Fronto-Occipital Fascicle were positively correlated with individual BCI performance. Significance: This confirms that structural brain traits contribute to individual performance in BCI use. PMID:23565083
Nurmikko, Arto V.; Donoghue, John P.; Hochberg, Leigh R.; Patterson, William R.; Song, Yoon-Kyu; Bull, Christopher W.; Borton, David A.; Laiwalla, Farah; Park, Sunmee; Ming, Yin; Aceros, Juan
2011-01-01
Acquiring neural signals at high spatial and temporal resolution directly from brain microcircuits and decoding their activity to interpret commands and/or prior planning activity, such as motion of an arm or a leg, is a prime goal of modern neurotechnology. Its practical aims include assistive devices for subjects whose normal neural information pathways are not functioning due to physical damage or disease. On the fundamental side, researchers are striving to decipher the code of multiple neural microcircuits which collectively make up nature’s amazing computing machine, the brain. By implanting biocompatible neural sensor probes directly into the brain, in the form of microelectrode arrays, it is now possible to extract information from interacting populations of neural cells with spatial and temporal resolution at the single cell level. With parallel advances in application of statistical and mathematical techniques tools for deciphering the neural code, extracted populations or correlated neurons, significant understanding has been achieved of those brain commands that control, e.g., the motion of an arm in a primate (monkey or a human subject). These developments are accelerating the work on neural prosthetics where brain derived signals may be employed to bypass, e.g., an injured spinal cord. One key element in achieving the goals for practical and versatile neural prostheses is the development of fully implantable wireless microelectronic “brain-interfaces” within the body, a point of special emphasis of this paper. PMID:21654935
Integration of an intelligent systems behavior simulator and a scalable soldier-machine interface
NASA Astrophysics Data System (ADS)
Johnson, Tony; Manteuffel, Chris; Brewster, Benjamin; Tierney, Terry
2007-04-01
As the Army's Future Combat Systems (FCS) introduce emerging technologies and new force structures to the battlefield, soldiers will increasingly face new challenges in workload management. The next generation warfighter will be responsible for effectively managing robotic assets in addition to performing other missions. Studies of future battlefield operational scenarios involving the use of automation, including the specification of existing and proposed technologies, will provide significant insight into potential problem areas regarding soldier workload. The US Army Tank Automotive Research, Development, and Engineering Center (TARDEC) is currently executing an Army technology objective program to analyze and evaluate the effect of automated technologies and their associated control devices with respect to soldier workload. The Human-Robotic Interface (HRI) Intelligent Systems Behavior Simulator (ISBS) is a human performance measurement simulation system that allows modelers to develop constructive simulations of military scenarios with various deployments of interface technologies in order to evaluate operator effectiveness. One such interface is TARDEC's Scalable Soldier-Machine Interface (SMI). The scalable SMI provides a configurable machine interface application that is capable of adapting to several hardware platforms by recognizing the physical space limitations of the display device. This paper describes the integration of the ISBS and Scalable SMI applications, which will ultimately benefit both systems. The ISBS will be able to use the Scalable SMI to visualize the behaviors of virtual soldiers performing HRI tasks, such as route planning, and the scalable SMI will benefit from stimuli provided by the ISBS simulation environment. The paper describes the background of each system and details of the system integration approach.
2017-01-01
Electroencephalogram (EEG)-based decoding of human brain activity is challenging, owing to the low spatial resolution of EEG. However, EEG is an important technique, especially for brain–computer interface applications. In this study, a novel algorithm is proposed to decode brain activity associated with different types of images. In this hybrid algorithm, a convolutional neural network is modified for the extraction of features, a t-test is used for the selection of significant features, and likelihood ratio-based score fusion is used for the prediction of brain activity. The proposed algorithm takes input data from multichannel EEG time-series, which is also known as multivariate pattern analysis. Comprehensive analysis was conducted using data from 30 participants. The results from the proposed method are compared with currently recognized feature extraction and classification/prediction techniques. The wavelet transform-support vector machine method is the most popular currently used feature extraction and prediction method; it showed an accuracy of 65.7%. The proposed method, however, predicts novel data with an improved accuracy of 79.9%. In conclusion, the proposed algorithm outperformed the current feature extraction and prediction method. PMID:28558002
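A simplified illustration of likelihood-ratio score fusion (not the authors' exact formulation): per-source classifier scores are modeled with class-conditional Gaussians fitted on training data, and the per-source log-likelihood ratios are summed to reach a decision; all data below are synthetic.

```python
# Simplified likelihood-ratio score fusion across several per-source classifier scores.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(9)
n_train, n_sources = 200, 4
# Synthetic per-source scores: class 1 scores shifted upward relative to class 0.
train_scores = rng.standard_normal((n_train, n_sources))
train_labels = rng.integers(0, 2, n_train)
train_scores[train_labels == 1] += 1.0

# Fit class-conditional Gaussians (mean, std) to each source's training scores.
params = [
    [(train_scores[train_labels == c, s].mean(), train_scores[train_labels == c, s].std())
     for c in (0, 1)]
    for s in range(n_sources)
]

def fused_decision(scores):
    llr = 0.0
    for s, x in enumerate(scores):
        (m0, s0), (m1, s1) = params[s]
        llr += norm.logpdf(x, m1, s1) - norm.logpdf(x, m0, s0)
    return int(llr > 0)                      # 1 if the fused evidence favors class 1

print(fused_decision(rng.standard_normal(n_sources) + 1.0))   # most likely -> 1
```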
2011-01-01
Background Machine learning has a vast range of applications. In particular, advanced machine learning methods are routinely and increasingly used in quantitative structure activity relationship (QSAR) modeling. QSAR data sets often encompass tens of thousands of compounds and the size of proprietary, as well as public data sets, is rapidly growing. Hence, there is a demand for computationally efficient machine learning algorithms, easily available to researchers without extensive machine learning knowledge. In granting the scientific principles of transparency and reproducibility, Open Source solutions are increasingly acknowledged by regulatory authorities. Thus, an Open Source state-of-the-art high performance machine learning platform, interfacing multiple, customized machine learning algorithms for both graphical programming and scripting, to be used for large scale development of QSAR models of regulatory quality, is of great value to the QSAR community. Results This paper describes the implementation of the Open Source machine learning package AZOrange. AZOrange is specially developed to support batch generation of QSAR models in providing the full work flow of QSAR modeling, from descriptor calculation to automated model building, validation and selection. The automated work flow relies upon the customization of the machine learning algorithms and a generalized, automated model hyper-parameter selection process. Several high performance machine learning algorithms are interfaced for efficient data set specific selection of the statistical method, promoting model accuracy. Using the high performance machine learning algorithms of AZOrange does not require programming knowledge as flexible applications can be created, not only at a scripting level, but also in a graphical programming environment. Conclusions AZOrange is a step towards meeting the needs for an Open Source high performance machine learning platform, supporting the efficient development of highly accurate QSAR models fulfilling regulatory requirements. PMID:21798025
Stålring, Jonna C; Carlsson, Lars A; Almeida, Pedro; Boyer, Scott
2011-07-28
Machine learning has a vast range of applications. In particular, advanced machine learning methods are routinely and increasingly used in quantitative structure activity relationship (QSAR) modeling. QSAR data sets often encompass tens of thousands of compounds and the size of proprietary, as well as public data sets, is rapidly growing. Hence, there is a demand for computationally efficient machine learning algorithms, easily available to researchers without extensive machine learning knowledge. In granting the scientific principles of transparency and reproducibility, Open Source solutions are increasingly acknowledged by regulatory authorities. Thus, an Open Source state-of-the-art high performance machine learning platform, interfacing multiple, customized machine learning algorithms for both graphical programming and scripting, to be used for large scale development of QSAR models of regulatory quality, is of great value to the QSAR community. This paper describes the implementation of the Open Source machine learning package AZOrange. AZOrange is specially developed to support batch generation of QSAR models in providing the full work flow of QSAR modeling, from descriptor calculation to automated model building, validation and selection. The automated work flow relies upon the customization of the machine learning algorithms and a generalized, automated model hyper-parameter selection process. Several high performance machine learning algorithms are interfaced for efficient data set specific selection of the statistical method, promoting model accuracy. Using the high performance machine learning algorithms of AZOrange does not require programming knowledge as flexible applications can be created, not only at a scripting level, but also in a graphical programming environment. AZOrange is a step towards meeting the needs for an Open Source high performance machine learning platform, supporting the efficient development of highly accurate QSAR models fulfilling regulatory requirements.
Illusory movement perception improves motor control for prosthetic hands
Marasco, Paul D.; Hebert, Jacqueline S.; Sensinger, Jon W.; Shell, Courtney E.; Schofield, Jonathon S.; Thumser, Zachary C.; Nataraj, Raviraj; Beckler, Dylan T.; Dawson, Michael R.; Blustein, Dan H.; Gill, Satinder; Mensh, Brett D.; Granja-Vazquez, Rafael; Newcomb, Madeline D.; Carey, Jason P.; Orzell, Beth M.
2018-01-01
To effortlessly complete an intentional movement, the brain needs feedback from the body regarding the movement’s progress. This largely non-conscious kinesthetic sense helps the brain to learn relationships between motor commands and outcomes to correct movement errors. Prosthetic systems for restoring function have predominantly focused on controlling motorized joint movement. Without the kinesthetic sense, however, these devices do not become intuitively controllable. Here we report a method for endowing human amputees with a kinesthetic perception of dexterous robotic hands. Vibrating the muscles used for prosthetic control via a neural-machine interface produced the illusory perception of complex grip movements. Within minutes, three amputees integrated this kinesthetic feedback and improved movement control. Combining intent, kinesthesia, and vision instilled participants with a sense of agency over the robotic movements. This feedback approach for closed-loop control opens a pathway to seamless integration of minds and machines. PMID:29540617
Third Conference on Artificial Intelligence for Space Applications, part 1
NASA Technical Reports Server (NTRS)
Denton, Judith S. (Compiler); Freeman, Michael S. (Compiler); Vereen, Mary (Compiler)
1987-01-01
The application of artificial intelligence to spacecraft and aerospace systems is discussed. Expert systems, robotics, space station automation, fault diagnostics, parallel processing, knowledge representation, scheduling, man-machine interfaces and neural nets are among the topics discussed.
A Brain-Computer Interface Project Applied in Computer Engineering
ERIC Educational Resources Information Center
Katona, Jozsef; Kovari, Attila
2016-01-01
Keeping up with novel methods and keeping abreast of new applications are crucial issues in engineering education. In brain research, one of the most significant research areas in recent decades, many developments have application in both modern engineering technology and education. New measurement methods in the observation of brain activity open…
Implanted Miniaturized Antenna for Brain Computer Interface Applications: Analysis and Design
Zhao, Yujuan; Rennaker, Robert L.; Hutchens, Chris; Ibrahim, Tamer S.
2014-01-01
Implantable Brain Computer Interfaces (BCIs) are designed to provide real-time control signals for prosthetic devices, study brain function, and/or restore sensory information lost as a result of injury or disease. Using Radio Frequency (RF) to wirelessly power a BCI could widely extend the number of applications and increase chronic in-vivo viability. However, due to the limited size and the electromagnetic loss of human brain tissues, implanted miniaturized antennas suffer low radiation efficiency. This work presents simulations, analysis and designs of implanted antennas for a wireless implantable RF-powered brain computer interface application. The results show that thin (on the order of 100 micrometers thickness) biocompatible insulating layers can significantly impact the antenna performance. The proper selection of the dielectric properties of the biocompatible insulating layers and the implantation position inside human brain tissues can facilitate efficient RF power reception by the implanted antenna. While the results show that the effects of the human head shape on implanted antenna performance are somewhat negligible, the constitutive properties of the brain tissues surrounding the implanted antenna can significantly impact the electrical characteristics (input impedance and operational frequency) of the implanted antenna. Three miniaturized antenna designs are simulated and demonstrate that maximum RF power of up to 1.8 milliwatts can be received at 2 GHz when the antenna is implanted around the dura, without violating the Specific Absorption Rate (SAR) limits. PMID:25079941
Preprocessing and meta-classification for brain-computer interfaces.
Hammon, Paul S; de Sa, Virginia R
2007-03-01
A brain-computer interface (BCI) is a system which allows direct translation of brain states into actions, bypassing the usual muscular pathways. A BCI system works by extracting user brain signals, applying machine learning algorithms to classify the user's brain state, and performing a computer-controlled action. Our goal is to improve brain state classification. Perhaps the most obvious way to improve classification performance is the selection of an advanced learning algorithm. However, it is now well known in the BCI community that careful selection of preprocessing steps is crucial to the success of any classification scheme. Furthermore, recent work indicates that combining the output of multiple classifiers (meta-classification) leads to improved classification rates relative to single classifiers (Dornhege et al., 2004). In this paper, we develop an automated approach which systematically analyzes the relative contributions of different preprocessing and meta-classification approaches. We apply this procedure to three data sets drawn from BCI Competition 2003 (Blankertz et al., 2004) and BCI Competition III (Blankertz et al., 2006), each of which exhibits very different characteristics. Our final classification results compare favorably with those from past BCI competitions. Additionally, we analyze the relative contributions of individual preprocessing and meta-classification choices and discuss which types of BCI data benefit most from specific algorithms.
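The sketch below illustrates the general idea of meta-classification (combining the outputs of several base classifiers with a higher-level classifier), which the abstract identifies as a source of improved BCI classification rates. It is not the authors' pipeline: the synthetic features, the LDA and SVM base learners, and the logistic-regression meta-classifier are assumptions for illustration.

```python
# Hedged sketch of meta-classification (classifier combination); features and
# models are placeholders, not the authors' preprocessing or classifiers.
from sklearn.datasets import make_classification
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.svm import SVC
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import StackingClassifier
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=400, n_features=30, n_informative=10, random_state=0)

base_learners = [
    ("lda", LinearDiscriminantAnalysis()),
    ("svm", SVC(kernel="rbf", probability=True, random_state=0)),
]
meta = StackingClassifier(estimators=base_learners,
                          final_estimator=LogisticRegression(),
                          cv=5)  # meta-classifier trained on base-learner outputs

print("meta-classification CV accuracy:", cross_val_score(meta, X, y, cv=5).mean())
```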
Cyber-workstation for computational neuroscience.
Digiovanna, Jack; Rattanatamrong, Prapaporn; Zhao, Ming; Mahmoudi, Babak; Hermer, Linda; Figueiredo, Renato; Principe, Jose C; Fortes, Jose; Sanchez, Justin C
2010-01-01
A Cyber-Workstation (CW) to study in vivo, real-time interactions between computational models and large-scale brain subsystems during behavioral experiments has been designed and implemented. The design philosophy seeks to directly link the in vivo neurophysiology laboratory with scalable computing resources to enable more sophisticated computational neuroscience investigation. The architecture designed here allows scientists to develop new models and integrate them with existing models (e.g. recursive least-squares regressor) by specifying appropriate connections in a block-diagram. Then, adaptive middleware transparently implements these user specifications using the full power of remote grid-computing hardware. In effect, the middleware deploys an on-demand and flexible neuroscience research test-bed to provide the neurophysiology laboratory extensive computational power from an outside source. The CW consolidates distributed software and hardware resources to support time-critical and/or resource-demanding computing during data collection from behaving animals. This power and flexibility is important as experimental and theoretical neuroscience evolves based on insights gained from data-intensive experiments, new technologies and engineering methodologies. This paper describes briefly the computational infrastructure and its most relevant components. Each component is discussed within a systematic process of setting up an in vivo, neuroscience experiment. Furthermore, a co-adaptive brain machine interface is implemented on the CW to illustrate how this integrated computational and experimental platform can be used to study systems neurophysiology and learning in a behavior task. We believe this implementation is also the first remote execution and adaptation of a brain-machine interface.
Improving brain-machine interface performance by decoding intended future movements
NASA Astrophysics Data System (ADS)
Willett, Francis R.; Suminski, Aaron J.; Fagg, Andrew H.; Hatsopoulos, Nicholas G.
2013-04-01
Objective. A brain-machine interface (BMI) records neural signals in real time from a subject's brain, interprets them as motor commands, and reroutes them to a device such as a robotic arm, so as to restore lost motor function. Our objective here is to improve BMI performance by minimizing the deleterious effects of delay in the BMI control loop. We mitigate the effects of delay by decoding the subject's intended movements at a short time lead into the future. Approach. We use the decoded, intended future movements of the subject as the control signal that drives the movement of our BMI. This should allow the user's intended trajectory to be implemented more quickly by the BMI, reducing the amount of delay in the system. In our experiment, a monkey (Macaca mulatta) uses a future prediction BMI to control a simulated arm to hit targets on a screen. Main Results. Results from experiments with BMIs possessing different system delays (100, 200 and 300 ms) show that the monkey can make significantly straighter, faster and smoother movements when the decoder predicts the user's future intent. We also characterize how BMI performance changes as a function of delay, and explore offline how the accuracy of future prediction decoders varies at different time leads. Significance. This study is the first to characterize the effects of control delays in a BMI and to show that decoding the user's future intent can compensate for the negative effect of control delay on BMI performance.
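A minimal sketch of the core idea of decoding intended future movements: train a decoder whose regression target is the movement state a fixed time lead ahead of the current neural observation, then compare decoding quality across leads. This is not the authors' decoder; the simulated data, the ridge-regression model, and the lead_bins/history parameters are placeholder assumptions.

```python
# Minimal sketch: predict the cursor state a fixed time lead ahead of the
# present from recent neural activity, using ridge regression on simulated
# data. `lead_bins` shifts the regression target into the future.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
T, n_units = 5000, 40
neural = rng.poisson(2.0, size=(T, n_units)).astype(float)     # binned spike counts
kinematics = np.cumsum(neural @ rng.normal(0, 0.01, n_units))  # toy 1-D trajectory

def decode_with_lead(lead_bins, history=5):
    # Stack a short history of neural bins as features (rows are time bins).
    feats = np.hstack([np.roll(neural, k, axis=0) for k in range(history)])
    X = feats[history:T - lead_bins]
    y = kinematics[history + lead_bins:]          # target shifted into the future
    split = int(0.8 * len(X))
    model = Ridge(alpha=1.0).fit(X[:split], y[:split])
    return model.score(X[split:], y[split:])

for lead in (0, 2, 4):   # e.g. 0, 200, 400 ms at 100 ms bins (illustrative)
    print(f"lead {lead} bins: R^2 = {decode_with_lead(lead):.3f}")
```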
Cyber-Workstation for Computational Neuroscience
DiGiovanna, Jack; Rattanatamrong, Prapaporn; Zhao, Ming; Mahmoudi, Babak; Hermer, Linda; Figueiredo, Renato; Principe, Jose C.; Fortes, Jose; Sanchez, Justin C.
2009-01-01
A Cyber-Workstation (CW) to study in vivo, real-time interactions between computational models and large-scale brain subsystems during behavioral experiments has been designed and implemented. The design philosophy seeks to directly link the in vivo neurophysiology laboratory with scalable computing resources to enable more sophisticated computational neuroscience investigation. The architecture designed here allows scientists to develop new models and integrate them with existing models (e.g. recursive least-squares regressor) by specifying appropriate connections in a block-diagram. Then, adaptive middleware transparently implements these user specifications using the full power of remote grid-computing hardware. In effect, the middleware deploys an on-demand and flexible neuroscience research test-bed to provide the neurophysiology laboratory extensive computational power from an outside source. The CW consolidates distributed software and hardware resources to support time-critical and/or resource-demanding computing during data collection from behaving animals. This power and flexibility is important as experimental and theoretical neuroscience evolves based on insights gained from data-intensive experiments, new technologies and engineering methodologies. This paper describes briefly the computational infrastructure and its most relevant components. Each component is discussed within a systematic process of setting up an in vivo, neuroscience experiment. Furthermore, a co-adaptive brain machine interface is implemented on the CW to illustrate how this integrated computational and experimental platform can be used to study systems neurophysiology and learning in a behavior task. We believe this implementation is also the first remote execution and adaptation of a brain-machine interface. PMID:20126436
NASA Astrophysics Data System (ADS)
Hajj-Hassan, Mohamad; Gonzalez, Timothy; Ghafer-Zadeh, Ebrahim; Chodavarapu, Vamsy; Musallam, Sam; Andrews, Mark
2009-02-01
Neural microelectrodes are an important component of neural prosthetic systems which assist paralyzed patients by allowing them to operate computers or robots using their neural activity. These microelectrodes are also used in clinical settings to localize the locus of seizure initiation in epilepsy or to stimulate sub-cortical structures in patients with Parkinson's disease. In neural prosthetic systems, implanted microelectrodes record the electrical potential generated by specific thoughts and relay the signals to algorithms trained to interpret these thoughts. In this paper, we describe novel elongated multi-site neural electrodes that can record electrical signals and specific neural biomarkers and that can reach depths greater than 8 mm in the sulcus of non-human primates (monkeys). We hypothesize that additional signals recorded by the multimodal probes will increase the information yield when compared to standard probes that record just electropotentials. We describe integration of optical biochemical sensors with neural microelectrodes. The sensors are made using sol-gel derived xerogel thin films that encapsulate specific biomarker responsive luminophores in their nanostructured pores. The desired neural biomarkers are O2, pH, K+, and Na+ ions. As a prototype, we demonstrate direct-write patterning to create oxygen-responsive xerogel waveguide structures on the neural microelectrodes. The recording of neural biomarkers along with electrical activity could help the development of intelligent and more user-friendly neural prostheses/brain-machine interfaces as well as aid in providing answers to complex brain diseases and disorders.
Spuler, Martin
2015-08-01
A Brain-Computer Interface (BCI) allows a user to control a computer using brain activity alone, without the need for muscle control. In this paper, we present an EEG-based BCI system based on code-modulated visual evoked potentials (c-VEPs) that enables the user to work with arbitrary Windows applications. Other BCI systems, like the P300 speller or BCI-based browsers, allow control of one dedicated application designed for use with a BCI. In contrast, the system presented in this paper does not consist of one dedicated application, but enables the user to control mouse cursor and keyboard input on the level of the operating system, thereby making it possible to use arbitrary applications. As the c-VEP BCI method was shown to enable very fast communication speeds (writing more than 20 error-free characters per minute), the presented system is the next step in replacing the traditional mouse and keyboard and enabling complete brain-based control of a computer.
Aricò, Pietro; Borghini, Gianluca; Di Flumeri, Gianluca; Colosimo, Alfredo; Bonelli, Stefano; Golfetti, Alessia; Pozzi, Simone; Imbert, Jean-Paul; Granger, Géraud; Benhacene, Raïlane; Babiloni, Fabio
2016-01-01
Adaptive Automation (AA) is a promising approach to keep the task workload demand within appropriate levels in order to avoid both the under- and over-load conditions, hence enhancing the overall performance and safety of the human-machine system. The main issue on the use of AA is how to trigger the AA solutions without affecting the operative task. In this regard, passive Brain-Computer Interface (pBCI) systems are a good candidate to activate automation, since they are able to gather information about the covert behavior (e.g., mental workload) of a subject by analyzing its neurophysiological signals (i.e., brain activity), and without interfering with the ongoing operational activity. We proposed a pBCI system able to trigger AA solutions integrated in a realistic Air Traffic Management (ATM) research simulator developed and hosted at ENAC (École Nationale de l'Aviation Civile of Toulouse, France). Twelve Air Traffic Controller (ATCO) students have been involved in the experiment and they have been asked to perform ATM scenarios with and without the support of the AA solutions. Results demonstrated the effectiveness of the proposed pBCI system, since it enabled the AA mostly during the high-demanding conditions (i.e., overload situations) inducing a reduction of the mental workload under which the ATCOs were operating. On the contrary, as desired, the AA was not activated when workload level was under the threshold, to prevent too low demanding conditions that could bring the operator's workload level toward potentially dangerous conditions of underload.
NASA Astrophysics Data System (ADS)
Shimoda, Kentaro; Nagasaka, Yasuo; Chao, Zenas C.; Fujii, Naotaka
2012-06-01
Brain-machine interface (BMI) technology captures brain signals to enable control of prosthetic or communication devices with the goal of assisting patients who have limited or no ability to perform voluntary movements. Decoding of inherent information in brain signals to interpret the user's intention is one of the main approaches for developing BMI technology. Subdural electrocorticography (sECoG)-based decoding provides good accuracy, but surgical complications are one of the major concerns for this approach to be applied in BMIs. In contrast, epidural electrocorticography (eECoG) is less invasive, thus it is theoretically more suitable for long-term implementation, although it is unclear whether eECoG signals carry sufficient information for decoding natural movements. We successfully decoded continuous three-dimensional hand trajectories from eECoG signals in Japanese macaques. A steady quantity of information of continuous hand movements could be acquired from the decoding system for at least several months, and a decoding model could be used for ~10 days without significant degradation in accuracy or recalibration. The correlation coefficients between observed and predicted trajectories were lower than those for sECoG-based decoding experiments we previously reported, owing to a greater degree of chewing artifacts in eECoG-based decoding than is found in sECoG-based decoding. As one of the safest invasive recording methods available, eECoG provides an acceptable level of performance. With the ease of replacement and upgrades, eECoG systems could become the first-choice interface for real-life BMI applications.
Aricò, Pietro; Borghini, Gianluca; Di Flumeri, Gianluca; Colosimo, Alfredo; Bonelli, Stefano; Golfetti, Alessia; Pozzi, Simone; Imbert, Jean-Paul; Granger, Géraud; Benhacene, Raïlane; Babiloni, Fabio
2016-01-01
Adaptive Automation (AA) is a promising approach to keep the task workload demand within appropriate levels in order to avoid both the under- and over-load conditions, hence enhancing the overall performance and safety of the human-machine system. The main issue on the use of AA is how to trigger the AA solutions without affecting the operative task. In this regard, passive Brain-Computer Interface (pBCI) systems are a good candidate to activate automation, since they are able to gather information about the covert behavior (e.g., mental workload) of a subject by analyzing its neurophysiological signals (i.e., brain activity), and without interfering with the ongoing operational activity. We proposed a pBCI system able to trigger AA solutions integrated in a realistic Air Traffic Management (ATM) research simulator developed and hosted at ENAC (École Nationale de l'Aviation Civile of Toulouse, France). Twelve Air Traffic Controller (ATCO) students have been involved in the experiment and they have been asked to perform ATM scenarios with and without the support of the AA solutions. Results demonstrated the effectiveness of the proposed pBCI system, since it enabled the AA mostly during the high-demanding conditions (i.e., overload situations) inducing a reduction of the mental workload under which the ATCOs were operating. On the contrary, as desired, the AA was not activated when workload level was under the threshold, to prevent too low demanding conditions that could bring the operator's workload level toward potentially dangerous conditions of underload. PMID:27833542
Machine learning and radiology.
Wang, Shijun; Summers, Ronald M
2012-07-01
In this paper, we give a short introduction to machine learning and survey its applications in radiology. We focused on six categories of applications in radiology: medical image segmentation, registration, computer aided detection and diagnosis, brain function or activity analysis and neurological disease diagnosis from fMR images, content-based image retrieval systems for CT or MRI images, and text analysis of radiology reports using natural language processing (NLP) and natural language understanding (NLU). This survey shows that machine learning plays a key role in many radiology applications. Machine learning identifies complex patterns automatically and helps radiologists make intelligent decisions on radiology data such as conventional radiographs, CT, MRI, and PET images and radiology reports. In many applications, the performance of machine learning-based automatic detection and diagnosis systems has been shown to be comparable to that of a well-trained and experienced radiologist. Technology development in machine learning and radiology will benefit from each other in the long run. Key contributions and common characteristics of machine learning techniques in radiology are discussed. We also discuss the problem of translating machine learning applications to the radiology clinical setting, including advantages and potential barriers. Copyright © 2012. Published by Elsevier B.V.
Ma, Jiaxin; Zhang, Yu; Cichocki, Andrzej; Matsuno, Fumitoshi
2015-03-01
This study presents a novel human-machine interface (HMI) based on both electrooculography (EOG) and electroencephalography (EEG). This hybrid interface works in two modes: an EOG mode recognizes eye movements such as blinks, and an EEG mode detects event related potentials (ERPs) like P300. While both eye movements and ERPs have been separately used for implementing assistive interfaces, which help patients with motor disabilities in performing daily tasks, the proposed hybrid interface integrates them. In this way, the eye movements and ERPs complement each other. Therefore, it can provide better efficiency and a wider scope of application. In this study, we design a threshold algorithm that can recognize four kinds of eye movements including blink, wink, gaze, and frown. In addition, an oddball paradigm with stimuli of inverted faces is used to evoke multiple ERP components including P300, N170, and VPP. To verify the effectiveness of the proposed system, two different online experiments are carried out. One is to control a multifunctional humanoid robot, and the other is to control four mobile robots. In both experiments, the subjects can complete tasks effectively by using the proposed interface, and the best completion times are relatively short and very close to those achieved by hand operation.
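As a hedged illustration of the threshold-based eye-movement recognition mentioned above, the sketch below detects a single blink-like deflection in a simulated EOG trace by amplitude thresholding. The sampling rate, threshold value, and injected waveform are assumptions; the actual system distinguishes blinks, winks, gaze, and frowns with a more elaborate rule set.

```python
# Minimal sketch of amplitude-threshold event detection on an EOG channel,
# in the spirit of the threshold algorithm described above (only a blink-like
# peak is handled here; the signal and threshold are placeholders).
import numpy as np

fs = 250                                                       # assumed sampling rate (Hz)
t = np.arange(0, 10, 1 / fs)
eog = 20 * np.random.default_rng(1).standard_normal(t.size)   # background noise (uV)
eog[2 * fs:2 * fs + 50] += 300 * np.hanning(50)                # injected "blink"

threshold_uv = 150.0
above = eog > threshold_uv
# An event onset is where the signal first crosses the threshold.
onsets = np.flatnonzero(above[1:] & ~above[:-1]) + 1
print("detected blink onsets (s):", onsets / fs)
```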
Formal verification of human-automation interaction
NASA Technical Reports Server (NTRS)
Degani, Asaf; Heymann, Michael
2002-01-01
This paper discusses a formal and rigorous approach to the analysis of operator interaction with machines. It addresses the acute problem of detecting design errors in human-machine interaction and focuses on verifying the correctness of the interaction in complex and automated control systems. The paper describes a systematic methodology for evaluating whether the interface provides the necessary information about the machine to enable the operator to perform a specified task successfully and unambiguously. It also addresses the adequacy of information provided to the user via training material (e.g., user manual) about the machine's behavior. The essentials of the methodology, which can be automated and applied to the verification of large systems, are illustrated by several examples and through a case study of pilot interaction with an autopilot aboard a modern commercial aircraft. The expected application of this methodology is an augmentation and enhancement, by formal verification, of human-automation interfaces.
A hybrid brain-computer interface-based mail client.
Yu, Tianyou; Li, Yuanqing; Long, Jinyi; Li, Feng
2013-01-01
Brain-computer interface-based communication plays an important role in brain-computer interface (BCI) applications; electronic mail is one of the most common communication tools. In this study, we propose a hybrid BCI-based mail client that implements electronic mail communication by means of real-time classification of multimodal features extracted from scalp electroencephalography (EEG). With this BCI mail client, users can receive, read, write, and attach files to their mail. Using a BCI mouse that utilizes hybrid brain signals, that is, motor imagery and P300 potential, the user can select and activate the function keys and links on the mail client graphical user interface (GUI). An adaptive P300 speller is employed for text input. The system has been tested with 6 subjects, and the experimental results validate the efficacy of the proposed method.
A Hybrid Brain-Computer Interface-Based Mail Client
Yu, Tianyou; Li, Yuanqing; Long, Jinyi; Li, Feng
2013-01-01
Brain-computer interface-based communication plays an important role in brain-computer interface (BCI) applications; electronic mail is one of the most common communication tools. In this study, we propose a hybrid BCI-based mail client that implements electronic mail communication by means of real-time classification of multimodal features extracted from scalp electroencephalography (EEG). With this BCI mail client, users can receive, read, write, and attach files to their mail. Using a BCI mouse that utilizes hybrid brain signals, that is, motor imagery and P300 potential, the user can select and activate the function keys and links on the mail client graphical user interface (GUI). An adaptive P300 speller is employed for text input. The system has been tested with 6 subjects, and the experimental results validate the efficacy of the proposed method. PMID:23690880
Alam, Monzurul; Chen, Xi; Zhang, Zicong; Li, Yan; He, Jufang
2014-01-01
A brain-machine interface (BMI) is a neuroprosthetic device that can restore motor function of individuals with paralysis. Although the feasibility of BMI control of upper-limb neuroprostheses has been demonstrated, a BMI for the restoration of lower-limb motor functions has not yet been developed. The objective of this study was to determine if gait-related information can be captured from neural activity recorded from the primary motor cortex of rats, and if this neural information can be used to stimulate paralysed hindlimb muscles after complete spinal cord transection. Neural activity was recorded from the hindlimb area of the primary motor cortex of six female Sprague Dawley rats during treadmill locomotion before and after mid-thoracic transection. Before spinal transection there was a strong association between neural activity and the step cycle. This association decreased after spinal transection. However, the locomotive state (standing vs. walking) could still be successfully decoded from neural recordings made after spinal transection. A novel BMI device was developed that processed this neural information in real-time and used it to control electrical stimulation of paralysed hindlimb muscles. This system was able to elicit hindlimb muscle contractions that mimicked forelimb stepping. We propose this lower-limb BMI as a future neuroprosthesis for human paraplegics. PMID:25084446
A four-dimensional virtual hand brain-machine interface using active dimension selection
NASA Astrophysics Data System (ADS)
Rouse, Adam G.
2016-06-01
Objective. Brain-machine interfaces (BMI) traditionally rely on a fixed, linear transformation from neural signals to an output state-space. In this study, the assumption that a BMI must control a fixed, orthogonal basis set was challenged and a novel active dimension selection (ADS) decoder was explored. Approach. ADS utilizes a two-stage decoder by using neural signals to both (i) select an active dimension being controlled and (ii) control the velocity along the selected dimension. ADS decoding was tested in a monkey using 16 single units from premotor and primary motor cortex to successfully control a virtual hand avatar to move to eight different postures. Main results. Following training with the ADS decoder to control 2, 3, and then 4 dimensions, each emulating a grasp shape of the hand, performance reached 93% correct with a bit rate of 2.4 bits/s for eight targets. Selection of eight targets using ADS control was more efficient, as measured by bit rate, than either full four-dimensional control or computer assisted one-dimensional control. Significance. ADS decoding allows a user to quickly and efficiently select different hand postures. This novel decoding scheme represents a potential method to reduce the complexity of high-dimension BMI control of the hand.
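For readers who want to relate the reported accuracy and throughput, the worked example below applies the standard Wolpaw bit-rate formula to eight targets at 93% accuracy. Whether this is exactly the measure the authors used is an assumption; under it, ~2.4 bits/s corresponds to roughly one selection per second.

```python
# Worked example of the standard Wolpaw bit-rate formula; assuming (this is an
# assumption, not stated in the abstract) that it is the measure used above.
import math

def wolpaw_bits_per_selection(n_targets: int, accuracy: float) -> float:
    p, n = accuracy, n_targets
    return (math.log2(n)
            + p * math.log2(p)
            + (1 - p) * math.log2((1 - p) / (n - 1)))

b = wolpaw_bits_per_selection(8, 0.93)
print(f"{b:.2f} bits per selection")             # ~2.44 bits
print(f"{b:.2f} bits/s at 1 selection per second")  # ~2.4 bits/s, consistent with the abstract
```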
A four-dimensional virtual hand brain-machine interface using active dimension selection
Rouse, Adam G.
2018-01-01
Objective Brain-machine interfaces (BMI) traditionally rely on a fixed, linear transformation from neural signals to an output state-space. In this study, the assumption that a BMI must control a fixed, orthogonal basis set was challenged and a novel active dimension selection (ADS) decoder was explored. Approach ADS utilizes a two stage decoder by using neural signals to both i) select an active dimension being controlled and ii) control the velocity along the selected dimension. ADS decoding was tested in a monkey using 16 single units from premotor and primary motor cortex to successfully control a virtual hand avatar to move to eight different postures. Main Results Following training with the ADS decoder to control 2, 3, and then 4 dimensions, each emulating a grasp shape of the hand, performance reached 93% correct with a bit rate of 2.4 bits/s for eight targets. Selection of eight targets using ADS control was more efficient, as measured by bit rate, than either full four-dimensional control or computer assisted one-dimensional control. Significance ADS decoding allows a user to quickly and efficiently select different hand postures. This novel decoding scheme represents a potential method to reduce the complexity of high-dimension BMI control of the hand. PMID:27171896
Donati, Ana R C; Shokur, Solaiman; Morya, Edgard; Campos, Debora S F; Moioli, Renan C; Gitti, Claudia M; Augusto, Patricia B; Tripodi, Sandra; Pires, Cristhiane G; Pereira, Gislaine A; Brasil, Fabricio L; Gallo, Simone; Lin, Anthony A; Takigami, Angelo K; Aratanha, Maria A; Joshi, Sanjay; Bleuler, Hannes; Cheng, Gordon; Rudolph, Alan; Nicolelis, Miguel A L
2016-08-11
Brain-machine interfaces (BMIs) provide a new assistive strategy aimed at restoring mobility in severely paralyzed patients. Yet, no study in animals or in human subjects has indicated that long-term BMI training could induce any type of clinical recovery. Eight chronic (3-13 years) spinal cord injury (SCI) paraplegics were subjected to long-term training (12 months) with a multi-stage BMI-based gait neurorehabilitation paradigm aimed at restoring locomotion. This paradigm combined intense immersive virtual reality training, enriched visual-tactile feedback, and walking with two EEG-controlled robotic actuators, including a custom-designed lower limb exoskeleton capable of delivering tactile feedback to subjects. Following 12 months of training with this paradigm, all eight patients experienced neurological improvements in somatic sensation (pain localization, fine/crude touch, and proprioceptive sensing) in multiple dermatomes. Patients also regained voluntary motor control in key muscles below the SCI level, as measured by EMGs, resulting in marked improvement in their walking index. As a result, 50% of these patients were upgraded to an incomplete paraplegia classification. Neurological recovery was paralleled by the reemergence of lower limb motor imagery at cortical level. We hypothesize that this unprecedented neurological recovery results from both cortical and spinal cord plasticity triggered by long-term BMI usage.
Donati, Ana R. C.; Shokur, Solaiman; Morya, Edgard; Campos, Debora S. F.; Moioli, Renan C.; Gitti, Claudia M.; Augusto, Patricia B.; Tripodi, Sandra; Pires, Cristhiane G.; Pereira, Gislaine A.; Brasil, Fabricio L.; Gallo, Simone; Lin, Anthony A.; Takigami, Angelo K.; Aratanha, Maria A.; Joshi, Sanjay; Bleuler, Hannes; Cheng, Gordon; Rudolph, Alan; Nicolelis, Miguel A. L.
2016-01-01
Brain-machine interfaces (BMIs) provide a new assistive strategy aimed at restoring mobility in severely paralyzed patients. Yet, no study in animals or in human subjects has indicated that long-term BMI training could induce any type of clinical recovery. Eight chronic (3–13 years) spinal cord injury (SCI) paraplegics were subjected to long-term training (12 months) with a multi-stage BMI-based gait neurorehabilitation paradigm aimed at restoring locomotion. This paradigm combined intense immersive virtual reality training, enriched visual-tactile feedback, and walking with two EEG-controlled robotic actuators, including a custom-designed lower limb exoskeleton capable of delivering tactile feedback to subjects. Following 12 months of training with this paradigm, all eight patients experienced neurological improvements in somatic sensation (pain localization, fine/crude touch, and proprioceptive sensing) in multiple dermatomes. Patients also regained voluntary motor control in key muscles below the SCI level, as measured by EMGs, resulting in marked improvement in their walking index. As a result, 50% of these patients were upgraded to an incomplete paraplegia classification. Neurological recovery was paralleled by the reemergence of lower limb motor imagery at cortical level. We hypothesize that this unprecedented neurological recovery results from both cortical and spinal cord plasticity triggered by long-term BMI usage. PMID:27513629
Virtual reality hardware and graphic display options for brain-machine interfaces
Marathe, Amar R.; Carey, Holle L.; Taylor, Dawn M.
2009-01-01
Virtual reality hardware and graphic displays are reviewed here as a development environment for brain-machine interfaces (BMIs). Two desktop stereoscopic monitors and one 2D monitor were compared in a visual depth discrimination task and in a 3D target-matching task where able-bodied individuals used actual hand movements to match a virtual hand to different target hands. Three graphic representations of the hand were compared: a plain sphere, a sphere attached to the fingertip of a realistic hand and arm, and a stylized pacman-like hand. Several subjects had great difficulty using either stereo monitor for depth perception when perspective size cues were removed. A mismatch in stereo and size cues generated inappropriate depth illusions. This phenomenon has implications for choosing target and virtual hand sizes in BMI experiments. Target matching accuracy was about as good with the 2D monitor as with either 3D monitor. However, users achieved this accuracy by exploring the boundaries of the hand in the target with carefully controlled movements. This method of determining relative depth may not be possible in BMI experiments if movement control is more limited. Intuitive depth cues, such as including a virtual arm, can significantly improve depth perception accuracy with or without stereo viewing. PMID:18006069
Multiscale decoding for reliable brain-machine interface performance over time.
Han-Lin Hsieh; Wong, Yan T; Pesaran, Bijan; Shanechi, Maryam M
2017-07-01
Recordings from invasive implants can degrade over time, resulting in a loss of spiking activity for some electrodes. For brain-machine interfaces (BMI), such a signal degradation lowers control performance. Achieving reliable performance over time is critical for BMI clinical viability. One approach to improve BMI longevity is to simultaneously use spikes and other recording modalities such as local field potentials (LFP), which are more robust to signal degradation over time. We have developed a multiscale decoder that can simultaneously model the different statistical profiles of multi-scale spike/LFP activity (discrete spikes vs. continuous LFP). This decoder can also run at multiple time-scales (millisecond for spikes vs. tens of milliseconds for LFP). Here, we validate the multiscale decoder for estimating the movement of 7 major upper-arm joint angles in a non-human primate (NHP) during a 3D reach-to-grasp task. The multiscale decoder uses motor cortical spike/LFP recordings as its input. We show that the multiscale decoder can improve decoding accuracy by adding information from LFP to spikes, while running at the fast millisecond time-scale of the spiking activity. Moreover, this improvement is achieved using relatively few LFP channels, demonstrating the robustness of the approach. These results suggest that using multiscale decoders has the potential to improve the reliability and longevity of BMIs.
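The sketch below conveys only the timing structure of a multiscale decoder: spike-based updates applied every millisecond and LFP-based updates applied every few tens of milliseconds, fused into one running estimate. It is deliberately not the authors' statistical filter; the decoding weights, the simple exponential smoothing, and the simulated data are placeholders.

```python
# Conceptual sketch only (not the authors' multiscale filter): a decode loop
# that fuses fast spike-based updates (every 1 ms) with slower LFP-based
# updates (every 20 ms) into one running estimate of a joint-angle state.
import numpy as np

rng = np.random.default_rng(0)
n_steps, n_spk, n_lfp = 1000, 30, 8
spikes = rng.poisson(0.02, size=(n_steps, n_spk))        # 1 ms binned spikes
lfp = rng.standard_normal((n_steps // 20, n_lfp))        # 20 ms LFP features

W_spk = rng.normal(0, 0.05, n_spk)   # placeholder decoding weights
W_lfp = rng.normal(0, 0.05, n_lfp)

estimate, alpha, history = 0.0, 0.98, []
for t in range(n_steps):
    estimate = alpha * estimate + spikes[t] @ W_spk       # spike update, every step
    if t % 20 == 0:
        estimate += lfp[t // 20] @ W_lfp                  # LFP update, every 20 steps
    history.append(estimate)

print("decoded trace length:", len(history))
```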
Wireless Cortical Brain-Machine Interface for Whole-Body Navigation in Primates
NASA Astrophysics Data System (ADS)
Rajangam, Sankaranarayani; Tseng, Po-He; Yin, Allen; Lehew, Gary; Schwarz, David; Lebedev, Mikhail A.; Nicolelis, Miguel A. L.
2016-03-01
Several groups have developed brain-machine-interfaces (BMIs) that allow primates to use cortical activity to control artificial limbs. Yet, it remains unknown whether cortical ensembles could represent the kinematics of whole-body navigation and be used to operate a BMI that moves a wheelchair continuously in space. Here we show that rhesus monkeys can learn to navigate a robotic wheelchair, using their cortical activity as the main control signal. Two monkeys were chronically implanted with multichannel microelectrode arrays that allowed wireless recordings from ensembles of premotor and sensorimotor cortical neurons. Initially, while monkeys remained seated in the robotic wheelchair, passive navigation was employed to train a linear decoder to extract 2D wheelchair kinematics from cortical activity. Next, monkeys employed the wireless BMI to translate their cortical activity into the robotic wheelchair’s translational and rotational velocities. Over time, monkeys improved their ability to navigate the wheelchair toward the location of a grape reward. The navigation was enacted by populations of cortical neurons tuned to whole-body displacement. During practice with the apparatus, we also noticed the presence of a cortical representation of the distance to reward location. These results demonstrate that intracranial BMIs could restore whole-body mobility to severely paralyzed patients in the future.
Unscented Kalman Filter for Brain-Machine Interfaces
Li, Zheng; O'Doherty, Joseph E.; Hanson, Timothy L.; Lebedev, Mikhail A.; Henriquez, Craig S.; Nicolelis, Miguel A. L.
2009-01-01
Brain machine interfaces (BMIs) are devices that convert neural signals into commands to directly control artificial actuators, such as limb prostheses. Previous real-time methods applied to decoding behavioral commands from the activity of populations of neurons have generally relied upon linear models of neural tuning and were limited in the way they used the abundant statistical information contained in the movement profiles of motor tasks. Here, we propose an n-th order unscented Kalman filter which implements two key features: (1) use of a non-linear (quadratic) model of neural tuning which describes neural activity significantly better than commonly-used linear tuning models, and (2) augmentation of the movement state variables with a history of n-1 recent states, which improves prediction of the desired command even before incorporating neural activity information and allows the tuning model to capture relationships between neural activity and movement at multiple time offsets simultaneously. This new filter was tested in BMI experiments in which rhesus monkeys used their cortical activity, recorded through chronically implanted multielectrode arrays, to directly control computer cursors. The 10th order unscented Kalman filter outperformed the standard Kalman filter and the Wiener filter in both off-line reconstruction of movement trajectories and real-time, closed-loop BMI operation. PMID:19603074
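The sketch below isolates the two ingredients highlighted in the abstract, a quadratic neural tuning model and augmentation of the movement state with recent history, without implementing the full unscented Kalman filter. The simulated cursor path, the least-squares fit of the tuning model, and the choice of ten history states are illustrative assumptions.

```python
# Sketch of the two ingredients above, not a full unscented Kalman filter:
# (1) a quadratic neural tuning model and (2) augmentation of the movement
# state with a short history of past states. Data are simulated.
import numpy as np

rng = np.random.default_rng(0)
T, n_units, n_hist = 2000, 25, 10                 # n_hist ~ "10th order"
pos = np.cumsum(rng.normal(0, 0.1, size=(T, 2)), axis=0)   # toy 2-D cursor path

def quadratic_features(x):
    # [x, y, x^2, y^2, x*y, 1] -- quadratic tuning regressors
    return np.column_stack([x[:, 0], x[:, 1],
                            x[:, 0] ** 2, x[:, 1] ** 2,
                            x[:, 0] * x[:, 1],
                            np.ones(len(x))])

H_true = rng.normal(0, 1, size=(6, n_units))
rates = quadratic_features(pos) @ H_true + rng.normal(0, 0.5, size=(T, n_units))

# Augmented state: current position plus the n_hist-1 previous positions.
aug = np.hstack([np.roll(pos, k, axis=0) for k in range(n_hist)])[n_hist:]

# Fit the quadratic observation model by least squares; a UKF would use this
# non-linear model in its measurement-update step.
H_hat, *_ = np.linalg.lstsq(quadratic_features(pos[n_hist:]), rates[n_hist:], rcond=None)
print("augmented state dim:", aug.shape[1], "| tuning-model fit error:",
      np.linalg.norm(H_hat - H_true) / np.linalg.norm(H_true))
```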
Alam, Monzurul; Chen, Xi; Zhang, Zicong; Li, Yan; He, Jufang
2014-01-01
A brain-machine interface (BMI) is a neuroprosthetic device that can restore motor function of individuals with paralysis. Although the feasibility of BMI control of upper-limb neuroprostheses has been demonstrated, a BMI for the restoration of lower-limb motor functions has not yet been developed. The objective of this study was to determine if gait-related information can be captured from neural activity recorded from the primary motor cortex of rats, and if this neural information can be used to stimulate paralysed hindlimb muscles after complete spinal cord transection. Neural activity was recorded from the hindlimb area of the primary motor cortex of six female Sprague Dawley rats during treadmill locomotion before and after mid-thoracic transection. Before spinal transection there was a strong association between neural activity and the step cycle. This association decreased after spinal transection. However, the locomotive state (standing vs. walking) could still be successfully decoded from neural recordings made after spinal transection. A novel BMI device was developed that processed this neural information in real-time and used it to control electrical stimulation of paralysed hindlimb muscles. This system was able to elicit hindlimb muscle contractions that mimicked forelimb stepping. We propose this lower-limb BMI as a future neuroprosthesis for human paraplegics.
Hoang, Tuan; Tran, Dat; Huang, Xu
2013-01-01
Common Spatial Pattern (CSP) is a state-of-the-art method for feature extraction in Brain-Computer Interface (BCI) systems. However, it is designed for 2-class BCI classification problems. Current extensions of this method to multiple classes based on subspace union and covariance matrix similarity do not provide a high performance. This paper presents a new approach to solving multi-class BCI classification problems by forming a subspace resembled from original subspaces and the proposed method for this approach is called Approximation-based Common Principal Component (ACPC). We perform experiments on Dataset 2a used in BCI Competition IV to evaluate the proposed method. This dataset was designed for motor imagery classification with 4 classes. Preliminary experiments show that the proposed ACPC feature extraction method, when combined with Support Vector Machines, outperforms CSP-based feature extraction methods on the experimental dataset.
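For context, the sketch below shows the standard two-class CSP baseline that ACPC extends (the ACPC subspace construction itself is not reproduced here): spatial filters are obtained from a generalized eigendecomposition of the class covariance matrices, and log-variance features are extracted per trial. The simulated trials and the choice of two filters per class are assumptions.

```python
# Sketch of standard two-class CSP using a generalized eigendecomposition of
# class covariance matrices on simulated trials (n_trials, n_channels, n_samples).
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(0)
n_trials, n_ch, n_samp = 40, 16, 250
X1 = rng.standard_normal((n_trials, n_ch, n_samp))
X2 = rng.standard_normal((n_trials, n_ch, n_samp))
X2[:, :4, :] *= 2.0          # class 2 has extra variance on a few channels

def mean_cov(trials):
    covs = [t @ t.T / np.trace(t @ t.T) for t in trials]   # normalized covariances
    return np.mean(covs, axis=0)

C1, C2 = mean_cov(X1), mean_cov(X2)
# Solve C1 w = lambda (C1 + C2) w; extreme eigenvalues give the CSP filters.
vals, vecs = eigh(C1, C1 + C2)
filters = np.column_stack([vecs[:, :2], vecs[:, -2:]])      # two filters per class

def log_var_features(trial):
    proj = filters.T @ trial
    var = proj.var(axis=1)
    return np.log(var / var.sum())

print("example feature vector:", log_var_features(X1[0]))
```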
Effect of Different Movement Speed Modes on Human Action Observation: An EEG Study.
Luo, Tian-Jian; Lv, Jitu; Chao, Fei; Zhou, Changle
2018-01-01
Action observation (AO) generates event-related desynchronization (ERD) suppressions in the human brain by activating partial regions of the human mirror neuron system (hMNS). The activation of the hMNS response to AO remains controversial for several reasons. Therefore, this study investigated the activation of the hMNS response to a speed factor of AO by controlling the movement speed modes of a humanoid robot's arm movements. Since hMNS activation is reflected by ERD suppressions, electroencephalography (EEG) and BCI analysis methods for ERD suppression were used as the recording and analysis modalities. Six healthy individuals were asked to participate in experiments comprising five different conditions. Four incremental-speed AO tasks and a motor imagery (MI) task involving imagining the same movement were presented to the individuals. Occipital and sensorimotor regions were selected for BCI analyses. The experimental results showed that hMNS activation was higher in the occipital region but more robust in the sensorimotor region. Since the attended information impacts the activations of the hMNS during AO, the pattern of hMNS activations first rises and subsequently falls to a stable level during incremental-speed modes of AO. The discipline curves suggested that a moderate speed within a decent inter-stimulus interval (ISI) range produced the highest hMNS activations. Since a brain-computer/machine interface (BCI) builds a pathway between human and computer/machine, the discipline curves will help to construct BCIs based on patterns of action observation (AO-BCIs). Furthermore, this work suggests a new method for constructing non-invasive brain-machine-brain interfaces (BMBIs) that combine a moderate-speed AO-BCI with a motor imagery BCI (MI-BCI).
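Because the study quantifies hMNS activation through ERD suppression, a minimal sketch of the usual band-power ERD computation may help: band power in a task window is compared with a baseline window as a percentage change. The mu/alpha band, window boundaries, and simulated signal below are placeholders, not the study's actual settings.

```python
# Minimal sketch of event-related desynchronization (ERD) quantification:
# percentage change in band power during a task window relative to a baseline
# window. Band, windows and simulated signal are placeholders.
import numpy as np
from scipy.signal import butter, filtfilt

fs = 250
t = np.arange(0, 6, 1 / fs)                       # 6 s trial, event at t = 3 s
rng = np.random.default_rng(0)
mu = np.sin(2 * np.pi * 10 * t) * np.where(t < 3, 1.0, 0.4)   # mu rhythm suppressed after event
eeg = mu + 0.5 * rng.standard_normal(t.size)

b, a = butter(4, [8, 13], btype="bandpass", fs=fs)
power = filtfilt(b, a, eeg) ** 2

baseline = power[(t >= 1) & (t < 3)].mean()       # reference period
task = power[(t >= 3.5) & (t < 5.5)].mean()       # observation period
erd_percent = (task - baseline) / baseline * 100
print(f"ERD = {erd_percent:.1f} %")               # negative value = suppression
```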
The Multimission Image Processing Laboratory's virtual frame buffer interface
NASA Technical Reports Server (NTRS)
Wolfe, T.
1984-01-01
Large image processing systems use multiple frame buffers with differing architectures and vendor supplied interfaces. This variety of architectures and interfaces creates software development, maintenance and portability problems for application programs. Several machine-dependent graphics standards such as ANSI Core and GKS are available, but none of them are adequate for image processing. Therefore, the Multimission Image Processing Laboratory project has implemented a programmer level virtual frame buffer interface. This interface makes all frame buffers appear as a generic frame buffer with a specified set of characteristics. This document defines the virtual frame buffer interface and provides information such as FORTRAN subroutine definitions, frame buffer characteristics, sample programs, etc. It is intended to be used by application programmers and system programmers who are adding new frame buffers to a system.
Cortical Spiking Network Interfaced with Virtual Musculoskeletal Arm and Robotic Arm
Dura-Bernal, Salvador; Zhou, Xianlian; Neymotin, Samuel A.; Przekwas, Andrzej; Francis, Joseph T.; Lytton, William W.
2015-01-01
Embedding computational models in the physical world is a critical step towards constraining their behavior and building practical applications. Here we aim to drive a realistic musculoskeletal arm model using a biomimetic cortical spiking model, and make a robot arm reproduce the same trajectories in real time. Our cortical model consisted of a 3-layered cortex, composed of several hundred spiking model-neurons, which display physiologically realistic dynamics. We interconnected the cortical model to a two-joint musculoskeletal model of a human arm, with realistic anatomical and biomechanical properties. The virtual arm received muscle excitations from the neuronal model, and fed back proprioceptive information, forming a closed-loop system. The cortical model was trained using spike timing-dependent reinforcement learning to drive the virtual arm in a 2D reaching task. Limb position was used to simultaneously control a robot arm using an improved network interface. Virtual arm muscle activations responded to motoneuron firing rates, with virtual arm muscle lengths encoded via population coding in the proprioceptive population. After training, the virtual arm performed reaching movements which were smoother and more realistic than those obtained using a simplistic arm model. This system provided access to both spiking network properties and to arm biophysical properties, including muscle forces. The use of a musculoskeletal virtual arm and the improved control system allowed the robot arm to perform movements which were smoother than those reported in our previous paper using a simplistic arm. This work provides a novel approach consisting of bidirectionally connecting a cortical model to a realistic virtual arm, and using the system output to drive a robotic arm in real time. Our techniques are applicable to the future development of brain neuroprosthetic control systems, and may enable enhanced brain-machine interfaces with the possibility for finer control of limb prosthetics. PMID:26635598
1992-10-01
Front matter and partial table of contents of a conference proceedings (Advanced Transport Operating Systems Program Office, NASA Langley Research Center, Hampton, VA). Recoverable entries include: a chapter by J.H. Lind and C.G. Burge; "Advanced Cockpit - Mission and Image Management" by J. Struck; "Aircrew Acceptance of Automation in the Cockpit" by M. Hicks and I...; and, under Design Concepts and Tools, "A Systems Approach to the Advanced Aircraft Man-Machine Interface" by F. Armogida and "Management of Avionics Data in the Cockpit".
Lin, Yi-Jung; Speedie, Stuart
2003-01-01
User interface design is one of the most important parts of developing applications. Nowadays, a quality user interface must not only accommodate interaction between machines and users, but also recognize the differences among users and provide functionality tailored from role to role, or even from individual to individual. With the web-based application of our Teledermatology consult system, the development environment provides us with highly useful opportunities to create dynamic user interfaces, which let us gain finer access control and have the potential to increase the efficiency of the system. We will describe the two models of user interfaces in our system: Role-based and Adaptive. PMID:14728419
Bulea, Thomas C; Kilicarslan, Atilla; Ozdemir, Recep; Paloski, William H; Contreras-Vidal, Jose L
2013-07-26
Recent studies support the involvement of supraspinal networks in control of bipedal human walking. Part of this evidence encompasses studies, including our previous work, demonstrating that gait kinematics and limb coordination during treadmill walking can be inferred from the scalp electroencephalogram (EEG) with reasonably high decoding accuracies. These results provide impetus for development of non-invasive brain-machine-interface (BMI) systems for use in restoration and/or augmentation of gait, a primary goal of rehabilitation research. To date, studies examining EEG decoding of activity during gait have been limited to treadmill walking in a controlled environment. However, to be practically viable a BMI system must be applicable for use in everyday locomotor tasks such as over ground walking and turning. Here, we present a novel protocol for non-invasive collection of brain activity (EEG), muscle activity (electromyography (EMG)), and whole-body kinematic data (head, torso, and limb trajectories) during both treadmill and over ground walking tasks. By collecting these data in an uncontrolled environment, insight can be gained regarding the feasibility of decoding unconstrained gait and surface EMG from scalp EEG.
Martin, Suzanne; Armstrong, Elaine; Thomson, Eileen; Vargiu, Eloisa; Solà, Marc; Dauwalder, Stefan; Miralles, Felip; Daly Lynn, Jean
2017-07-14
Cognitive rehabilitation is established as a core intervention within rehabilitation programs following a traumatic brain injury (TBI). Digitally enabled assistive technologies offer opportunities for clinicians to increase remote access to rehabilitation, supporting the transition into the home. Brain Computer Interface (BCI) systems can harness the residual abilities of individuals with limited function to gain control over computers through their brain waves. This paper presents an online cognitive rehabilitation application, developed with therapists, for working remotely with people who have TBI and who will use BCI at home to engage in the therapy. A qualitative research study was completed with community-dwelling people post brain injury (end users) and a cohort of therapists involved in cognitive rehabilitation. A user-centered approach was taken over three phases of development, design and feasibility testing of this cognitive rehabilitation application, which included two tasks (Find-a-Category and a Memory Card task). The therapist could remotely prescribe activity with different levels of difficulty. The service user had a home interface which presented the therapy activities. This novel work was achieved by an international consortium of academics, business partners and service users.
Advances in neuroprosthetic learning and control.
Carmena, Jose M
2013-01-01
Significant progress has occurred in the field of brain-machine interfaces (BMI) since the first demonstrations with rodents, monkeys, and humans controlling different prosthetic devices directly with neural activity. This technology holds great potential to aid large numbers of people with neurological disorders. However, despite this initial enthusiasm and the plethora of available robotic technologies, existing neural interfaces cannot as yet master the control of prosthetic, paralyzed, or otherwise disabled limbs. Here I briefly discuss recent advances from our laboratory into the neural basis of BMIs that should lead to better prosthetic control and clinically viable solutions, as well as new insights into the neurobiology of action.
Taherian, Sarvnaz; Selitskiy, Dmitry; Pau, James; Claire Davies, T
2017-02-01
Using a commercial electroencephalography (EEG)-based brain-computer interface (BCI), the training and testing protocol for six individuals with spastic quadriplegic cerebral palsy (GMFCS and MACS IV and V) was evaluated. A customised, gamified training paradigm was employed. Over three weeks, the participants spent two sessions exploring the system, and up to six sessions playing the game which focussed on EEG feedback of left and right arm motor imagery. The participants showed variable, inconclusive results in their ability to produce two distinct EEG patterns. Participant performance was influenced by physical illness, motivation, fatigue and concentration. The results from this case study highlight the infancy of BCIs as a form of assistive technology for people with cerebral palsy. Existing commercial BCIs are not designed according to the needs of end-users. Implications for Rehabilitation: Mood, fatigue, physical illness and motivation influence the usability of a brain-computer interface. Commercial brain-computer interfaces are not designed for practical assistive technology use for people with cerebral palsy. Practical brain-computer interface assistive technologies may need to be flexible to suit individual needs.
Mukaino, Masahiko; Ono, Takashi; Shindo, Keiichiro; Fujiwara, Toshiyuki; Ota, Tetsuo; Kimura, Akio; Liu, Meigen; Ushiba, Junichi
2014-04-01
Brain computer interface technology is of great interest to researchers as a potential therapeutic measure for people with severe neurological disorders. The aim of this study was to examine the efficacy of brain computer interface, by comparing conventional neuromuscular electrical stimulation and brain computer interface-driven neuromuscular electrical stimulation, using an A-B-A-B withdrawal single-subject design. A 38-year-old male with severe hemiplegia due to a putaminal haemorrhage participated in this study. The design involved 2 epochs. In epoch A, the patient attempted to open his fingers during the application of neuromuscular electrical stimulation, irrespective of his actual brain activity. In epoch B, neuromuscular electrical stimulation was applied only when a significant motor-related cortical potential was observed in the electroencephalogram. The subject initially showed diffuse functional magnetic resonance imaging activation and small electroencephalogram responses while attempting finger movement. Epoch A was associated with few neurological or clinical signs of improvement. Epoch B, with a brain computer interface, was associated with marked lateralization of electroencephalogram (EEG) and blood oxygenation level dependent responses. Voluntary electromyogram (EMG) activity, with significant EEG-EMG coherence, was also prompted. Clinical improvement in upper-extremity function and muscle tone was observed. These results indicate that self-directed training with a brain computer interface may induce activity-dependent cortical plasticity and promote functional recovery. This preliminary clinical investigation encourages further research using a controlled design.
Brain-Computer Interfaces: A Neuroscience Paradigm of Social Interaction? A Matter of Perspective
Mattout, Jérémie
2012-01-01
A number of recent studies have put human subjects in true social interactions, with the aim of better identifying the psychophysiological processes underlying social cognition. Interestingly, this emerging Neuroscience of Social Interactions (NSI) field brings up challenges which resemble important ones in the field of Brain-Computer Interfaces (BCI). Importantly, these challenges go beyond common objectives such as the eventual use of BCI and NSI protocols in the clinical domain or common interests pertaining to the use of online neurophysiological techniques and algorithms. Common fundamental challenges are now apparent and one can argue that a crucial one is to develop computational models of brain processes relevant to human interactions with an adaptive agent, whether human or artificial. Coupled with neuroimaging data, such models have proved promising in revealing the neural basis and mental processes behind social interactions. Similar models could help BCI to move from well-performing but offline static machines to reliable online adaptive agents. This emphasizes a social perspective to BCI, which is not limited to a computational challenge but extends to all questions that arise when studying the brain in interaction with its environment. PMID:22675291
Kawakami, Michiyuki; Fujiwara, Toshiyuki; Ushiba, Junichi; Nishimoto, Atsuko; Abe, Kaoru; Honaga, Kaoru; Nishimura, Atsuko; Mizuno, Katsuhiro; Kodama, Mitsuhiko; Masakado, Yoshihisa; Liu, Meigen
2016-09-21
Hybrid assistive neuromuscular dynamic stimulation (HANDS) therapy improved paretic upper extremity motor function in patients with severe to moderate hemiparesis. We hypothesized that brain machine interface (BMI) training would be able to increase paretic finger muscle activity enough to apply HANDS therapy in patients with severe hemiparesis whose finger extensor activity was absent. The aim of this study was to assess the efficacy of BMI training followed by HANDS therapy in patients with severe hemiparesis. Twenty-nine patients with chronic stroke who could not extend their paretic fingers participated in this study. We applied BMI training for 10 days at 40 min per day. The BMI detected the patients' motor imagery of paretic finger extension as event-related desynchronization (ERD) over the affected primary sensorimotor cortex, recorded with electroencephalography. Patients wore a motor-driven orthosis, which extended their paretic fingers and was triggered by ERD. When muscle activity in their paretic fingers was detected with surface electrodes after 10 days of BMI training, we applied HANDS therapy for the following 3 weeks. In HANDS therapy, participants received closed-loop, electromyogram-controlled, neuromuscular electrical stimulation (NMES) combined with a wrist-hand splint for 3 weeks at 8 hours a day. Before BMI training, after BMI training, after HANDS therapy, and 3 months after HANDS therapy, we assessed the Fugl-Meyer Assessment upper extremity motor score (FMA) and the Motor Activity Log-14 Amount of Use (MAL-AOU) score. After 10 days of BMI training, finger extensor activity had appeared in 21 patients. Eighteen of the 21 patients then participated in 3 weeks of HANDS therapy. We found a statistically significant improvement in the FMA and MAL-AOU scores after the BMI training, and further improvement was seen after the HANDS therapy. Combining BMI training with HANDS therapy could be an effective therapeutic strategy for severe upper extremity paralysis after stroke.
Prins, Noeline W.; Sanchez, Justin C.; Prasad, Abhishek
2014-01-01
Brain-Machine Interfaces (BMIs) can be used to restore function in people living with paralysis. Current BMIs require extensive calibration that increases set-up time, and the external inputs needed for decoder training may be difficult for paralyzed individuals to produce. Both of these factors have presented challenges in transitioning the technology from research environments to activities of daily living (ADL). For BMIs to be seamlessly used in ADL, these issues should be handled with minimal external input, thus reducing the need for a technician/caregiver to calibrate the system. Reinforcement Learning (RL) based BMIs are a good tool when there is no external training signal and can provide an adaptive modality to train BMI decoders. However, RL based BMIs are sensitive to the feedback provided to adapt the BMI. In actor-critic BMIs, this feedback is provided by the critic, and the overall system performance is limited by the critic's accuracy. In this work, we developed an adaptive BMI that could handle inaccuracies in the critic feedback in an effort to produce more accurate RL based BMIs. We developed a confidence measure, which indicated how appropriate the feedback is for updating the decoding parameters of the actor. The results show that with the new update formulation, the critic accuracy is no longer a limiting factor for the overall performance. We tested and validated the system on three different data sets: synthetic data generated by an Izhikevich neural spiking model, synthetic data with a Gaussian noise distribution, and data collected from a non-human primate engaged in a reaching task. All results indicated that the system with the critic confidence built in always outperformed the system without the critic confidence. Results of this study suggest the potential application of the technique in developing an autonomous BMI that does not need an external signal for training or extensive calibration. PMID:24904257
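The abstract does not spell out the update rule, so the following is only a minimal Python sketch of the general idea: a softmax actor over neural features whose REINFORCE-style update is scaled by a critic-confidence term. The feature dimension, policy form and weighting scheme are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

n_features, n_actions = 32, 2          # neural feature dimension, robot actions
W = np.zeros((n_actions, n_features))  # actor parameters (softmax policy)
lr = 0.05                              # learning rate

def policy(x):
    """Softmax action probabilities for a neural feature vector x."""
    z = W @ x
    p = np.exp(z - z.max())
    return p / p.sum()

def actor_update(x, action, feedback, confidence):
    """REINFORCE-style update scaled by how much we trust the critic.

    feedback:   +1 ("good action") or -1 ("bad action") from the critic
    confidence: value in [0, 1]; low confidence shrinks the update so an
                unreliable critic cannot drag the decoder off course.
    """
    global W
    p = policy(x)
    grad = -np.outer(p, x)      # d log pi / dW for every action row ...
    grad[action] += x           # ... plus the extra term for the chosen action
    W += lr * confidence * feedback * grad

# one illustrative trial with synthetic features
x = rng.standard_normal(n_features)
a = rng.choice(n_actions, p=policy(x))
actor_update(x, a, feedback=+1, confidence=0.8)
```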
VOTable JAVA Streaming Writer and Applications.
NASA Astrophysics Data System (ADS)
Kulkarni, P.; Kembhavi, A.; Kale, S.
2004-07-01
Virtual Observatory related tools use a new standard for data transfer called the VOTable format. This is a variant of the XML format that enables easy transfer of data over the web. We describe a streaming interface that can bridge the VOTable format, through a user friendly graphical interface, with the FITS and ASCII formats, which are commonly used by astronomers. A streaming interface is important for efficient use of memory because of the large size of catalogues. The tools are developed in JAVA to provide a platform independent interface. We have also developed a stand-alone version that can be used to convert data stored in ASCII or FITS format on a local machine. The streaming writer is successfully being used in VOPlot (see Kale et al 2004 for a description of VOPlot). We present the test results of converting large FITS and ASCII data sets into the VOTable format on machines that have only limited memory.
Davatzikos, Christos
2016-10-01
The past 20 years have seen a mushrooming growth of the field of computational neuroanatomy. Much of this work has been enabled by the development and refinement of powerful, high-dimensional image warping methods, which have enabled detailed brain parcellation, voxel-based morphometric analyses, and multivariate pattern analyses using machine learning approaches. The evolution of these 3 types of analyses over the years has overcome many challenges. We present the evolution of our work in these 3 directions, which largely follows the evolution of this field. We discuss the progression from single-atlas, single-registration brain parcellation work to current ensemble-based parcellation; from relatively basic mass-univariate t-tests to optimized regional pattern analyses combining deformations and residuals; and from basic application of support vector machines to generative-discriminative formulations of multivariate pattern analyses, and to methods dealing with heterogeneity of neuroanatomical patterns. We conclude with discussion of some of the future directions and challenges. Copyright © 2016. Published by Elsevier B.V.
Project Integration Architecture: Implementation of the CORBA-Served Application Infrastructure
NASA Technical Reports Server (NTRS)
Jones, William Henry
2005-01-01
The Project Integration Architecture (PIA) has been demonstrated in a single-machine C++ implementation prototype. The architecture is in the process of being migrated to a Common Object Request Broker Architecture (CORBA) implementation. The migration of the Foundation Layer interfaces is fundamentally complete. The implementation of the Application Layer infrastructure for that migration is reported. The Application Layer provides for distributed user identification and authentication, per-user/per-instance access controls, server administration, the formation of mutually-trusting application servers, a server locality protocol, and an ability to search for interface implementations through such trusted server networks.
Chang, G C; Kang, W J; Luh, J J; Cheng, C K; Lai, J S; Chen, J J; Kuo, T S
1996-10-01
The purpose of this study was to develop a real-time electromyogram (EMG) discrimination system to provide control commands for man-machine interface applications. A host computer with a plug-in data acquisition and processing board containing a TMS320C31 floating-point digital signal processor was used to attain real-time EMG classification. Two-channel EMG signals were collected by two pairs of surface electrodes located bilaterally between the sternocleidomastoid and the upper trapezius. Five motions of the neck and shoulders were discriminated for each subject. The zero-crossing rate was employed to detect the onset of muscle contraction. The cepstral coefficients, derived from autoregressive coefficients estimated by a recursive least squares algorithm, were used as the recognition features. These features were then discriminated using a modified maximum likelihood distance classifier. The total response time of the EMG discrimination system was about 0.17 s. Four able-bodied and two C5/6 quadriplegic subjects took part in the experiment and achieved a 95% mean recognition rate in discriminating between the five specific motions. The response time and the reliability of recognition indicate that this system has the potential to discriminate body motions for man-machine interface applications.
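As a note on the feature computation: the cepstral coefficients of an all-pole (AR) model can be obtained directly from the AR coefficients by a standard recursion, without computing a spectrum. A small sketch, assuming the sign convention A(z) = 1 + a[1] z^-1 + ... + a[p] z^-p (the paper does not state its convention, so the signs below are an assumption):

```python
import numpy as np

def ar_to_cepstrum(a, n_ceps):
    """Cepstral coefficients of an all-pole model 1 / A(z),
    with A(z) = 1 + a[1] z^-1 + ... + a[p] z^-p (so a[0] == 1)."""
    p = len(a) - 1
    c = np.zeros(n_ceps + 1)
    for n in range(1, n_ceps + 1):
        acc = -a[n] if n <= p else 0.0
        for k in range(max(1, n - p), n):
            acc -= (k / n) * c[k] * a[n - k]
        c[n] = acc
    return c[1:]

# illustrative use on a toy AR(4) model
a = np.array([1.0, -1.2, 0.6, -0.1, 0.05])
print(ar_to_cepstrum(a, n_ceps=8))
```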
NASA Astrophysics Data System (ADS)
Liang, J.; Sédillot, S.; Traverson, B.
1997-09-01
This paper addresses federation of a transactional object standard - the Object Management Group (OMG) object transaction service (OTS) - with the X/Open distributed transaction processing (DTP) model and the International Organization for Standardization (ISO) open systems interconnection (OSI) transaction processing (TP) communication protocol. The two-phase commit propagation rules within a distributed transaction tree are similar in the X/Open, ISO and OMG models. Building an OTS on an OSI TP protocol machine is possible because the two specifications are somewhat complementary: OTS defines a set of external interfaces without a specific internal protocol machine, while OSI TP specifies an internal protocol machine without any application programming interface. Given these observations, and having already implemented an X/Open two-phase commit transaction toolkit based on an OSI TP protocol machine, we analyse the feasibility of using this implementation as a transaction service provider for OMG interfaces. Based on the favourable result of this feasibility study, we are implementing an OTS compliant system which, by building on the extensibility and openness of OSI TP, is able to provide interoperability between the X/Open DTP and OMG OTS models.
Exploring differences between left and right hand motor imagery via spatio-temporal EEG microstate.
Liu, Weifeng; Liu, Xiaoming; Dai, Ruomeng; Tang, Xiaoying
2017-12-01
EEG-based motor imagery is very useful in brain-computer interfaces. How to identify the imagined movement is still being researched. Electroencephalography (EEG) microstates reflect the spatial configuration of quasi-stable electrical potential topographies, and different microstates represent different brain functions. In this paper, the microstate method was used to process motor imagery EEG, and differences in single-trial EEG microstate sequences between two motor imagery tasks - imagination of left and right hand movement - were investigated. The microstate parameters - duration, time coverage and occurrence per second - as well as the transition probabilities of the microstate sequences were obtained with spatio-temporal microstate analysis. The results showed significant differences (P < 0.05) between the two tasks with a paired t-test. These microstate parameters were then used as features, and a linear support vector machine (SVM) was utilized to classify the two tasks with a mean accuracy of 89.17%, superior to the other methods compared. These results indicate that microstates can be a promising feature for improving the performance of brain-computer interface classification.
A practical VEP-based brain-computer interface.
Wang, Yijun; Wang, Ruiping; Gao, Xiaorong; Hong, Bo; Gao, Shangkai
2006-06-01
This paper introduces the development of a practical brain-computer interface at Tsinghua University. The system uses frequency-coded steady-state visual evoked potentials to determine the gaze direction of the user. To ensure more universal applicability of the system, approaches for reducing the effect of user variation on system performance have been proposed. The information transfer rate (ITR) has been evaluated both in the laboratory and at the Rehabilitation Center of China. The system has proved applicable to more than 90% of people, with a high ITR, in living environments.
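For reference, the ITR figure quoted for systems like this is usually computed with the Wolpaw formula. A small helper, assuming equiprobable targets and uniformly distributed errors; the 6-target, 4-second example values are illustrative, not taken from the paper:

```python
import math

def wolpaw_itr(n_targets, accuracy, trial_time_s):
    """Information transfer rate in bits/min:
    bits per selection = log2 N + P log2 P + (1-P) log2((1-P)/(N-1))."""
    n, p = n_targets, accuracy
    bits = math.log2(n)
    if 0.0 < p < 1.0:
        bits += p * math.log2(p) + (1 - p) * math.log2((1 - p) / (n - 1))
    elif p == 0.0:
        bits += math.log2(1.0 / (n - 1))
    # p == 1.0 leaves bits = log2(n)
    return bits * 60.0 / trial_time_s

# e.g. a 6-target SSVEP speller at 90% accuracy and 4 s per selection
print(round(wolpaw_itr(6, 0.90, 4.0), 1), "bits/min")
```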
Renaud, Patrice; Joyal, Christian; Stoleru, Serge; Goyette, Mathieu; Weiskopf, Nikolaus; Birbaumer, Niels
2011-01-01
This chapter proposes a prospective view on using a real-time functional magnetic resonance imaging (rt-fMRI) brain-computer interface (BCI) application as a new treatment for pedophilia. Neurofeedback mediated by interactive virtual stimuli is presented as the key process in this new BCI application. Results on the diagnostic discriminant power of virtual characters depicting sexual stimuli relevant to pedophilia are given. Finally, practical and ethical implications are briefly addressed. Copyright © 2011 Elsevier B.V. All rights reserved.
Wang, Xue; Casadio, Maura; Weber, Kenneth A; Mussa-Ivaldi, Ferdinando A; Parrish, Todd B
2014-03-01
The purpose of this study is to identify white matter microstructure changes following bilateral upper extremity motor skill training to increase our understanding of learning-induced structural plasticity and enhance clinical strategies in physical rehabilitation. Eleven healthy subjects performed two visuo-spatial motor training tasks over 9 sessions (2-3 sessions per week). Subjects controlled a cursor with bilateral simultaneous movements of the shoulders and upper arms using a body machine interface. Before the start and within 2 days of the completion of training, whole brain diffusion tensor MR imaging data were acquired. Motor training increased fractional anisotropy (FA) values in the posterior and anterior limbs of the internal capsule, the corona radiata, and the body of the corpus callosum by 4.19% on average, indicating white matter microstructure changes induced by activity-dependent modulation of axon number, axon diameter, or myelin thickness. These changes may underlie the functional reorganization associated with motor skill learning. Copyright © 2013 Elsevier Inc. All rights reserved.
Analysis and asynchronous detection of gradually unfolding errors during monitoring tasks
NASA Astrophysics Data System (ADS)
Omedes, Jason; Iturrate, Iñaki; Minguez, Javier; Montesano, Luis
2015-10-01
Human studies on cognitive control processes rely on tasks involving sudden-onset stimuli, which allow the analysis of these neural imprints to be time-locked and relative to the stimuli onset. Human perceptual decisions, however, comprise continuous processes where evidence accumulates until reaching a boundary. Surpassing the boundary leads to a decision where measured brain responses are associated to an internal, unknown onset. The lack of this onset for gradual stimuli hinders both the analyses of brain activity and the training of detectors. This paper studies electroencephalographic (EEG)-measurable signatures of human processing for sudden and gradual cognitive processes represented as a trajectory mismatch under a monitoring task. Time-locked potentials and brain-source analysis of the EEG of sudden mismatches revealed the typical components of event-related potentials and the involvement of brain structures related to cognitive control processing. For gradual mismatch events, time-locked analyses did not show any discernible EEG scalp pattern, despite related brain areas being, to a lesser extent, activated. However, and thanks to the use of non-linear pattern recognition algorithms, it is possible to train an asynchronous detector on sudden events and use it to detect gradual mismatches, as well as obtaining an estimate of their unknown onset. Post-hoc time-locked scalp and brain-source analyses revealed that the EEG patterns of detected gradual mismatches originated in brain areas related to cognitive control processing. This indicates that gradual events induce latency in the evaluation process but that similar brain mechanisms are present in sudden and gradual mismatch events. Furthermore, the proposed asynchronous detection model widens the scope of applications of brain-machine interfaces to other gradual processes.
A Hybrid CMOS-Memristor Neuromorphic Synapse.
Azghadi, Mostafa Rahimi; Linares-Barranco, Bernabe; Abbott, Derek; Leong, Philip H W
2017-04-01
Although data processing technology continues to advance at an astonishing rate, computers with brain-like processing capabilities still elude us. It is envisioned that such computers may be achieved by the fusion of neuroscience and nano-electronics to realize a brain-inspired platform. This paper proposes a high-performance nano-scale Complementary Metal Oxide Semiconductor (CMOS)-memristive circuit, which mimics a number of essential learning properties of biological synapses. The proposed synaptic circuit, composed of memristors and CMOS transistors, alters its memristance in response to timing differences between its pre- and post-synaptic action potentials, giving rise to a family of Spike Timing Dependent Plasticity (STDP) behaviours. The presented design advances preceding memristive synapse designs with regard to its ability to replicate essential behaviours characterised in a number of electrophysiological experiments performed in the animal brain, which involve higher order spike interactions. Furthermore, the proposed hybrid device CMOS area is estimated as [Formula: see text] in a [Formula: see text] process - this represents a factor of ten reduction in area with respect to prior CMOS art. The new design is integrated with silicon neurons in a crossbar array structure amenable to large-scale neuromorphic architectures and may pave the way for future neuromorphic systems with spike timing-dependent learning features. These systems are emerging for deployment in various applications ranging from basic neuroscience research, to pattern recognition, to Brain-Machine-Interfaces.
Human factors in space telepresence
NASA Technical Reports Server (NTRS)
Akin, D. L.; Howard, R. D.; Oliveria, J. S.
1983-01-01
The problems of interfacing a human with a teleoperation system for work in space are discussed. Much of the information presented here is the result of experience gained by the M.I.T. Space Systems Laboratory during the past two years of work on the ARAMIS (Automation, Robotics, and Machine Intelligence Systems) project. Many factors impact the design of the man-machine interface for a teleoperator; the effects of each are described in turn. An annotated bibliography gives the key references that were used. No conclusions are presented as to a best design, since much depends on the particular application desired, and the relevant technology is swiftly changing.
Solazzi, Massimiliano; Loconsole, Claudio; Barsotti, Michele
2016-01-01
This paper illustrates the application of emerging technologies and human-machine interfaces to the neurorehabilitation and motor assistance fields. The contribution focuses on wearable technologies, and in particular on robotic exoskeletons, as tools for increasing freedom of movement and for performing Activities of Daily Living (ADLs). This would result in a marked improvement in quality of life, also in terms of improved function of internal organs and general health status. Furthermore, the integration of these robotic systems with advanced bio-signal driven human-machine interfaces can increase the degree of participation of the patient in robotic training, allowing the system to recognize the user's intention and assist the patient in rehabilitation tasks, thus representing a fundamental aspect in eliciting motor learning. PMID:28484314
NASA Technical Reports Server (NTRS)
Malone, T. B.
1972-01-01
Requirements were determined analytically for the man-machine interface for a teleoperator system performing on-orbit satellite retrieval and servicing. Requirements are basically of two types: mission/system requirements, and design requirements or design criteria. Two types of teleoperator systems were considered: a free-flying vehicle, and a shuttle-attached manipulator. No attempt was made to evaluate the relative effectiveness or efficiency of the two system concepts. The methodology used entailed an application of the Essex Man-Systems analysis technique as well as a complete familiarization with relevant work being performed at government agencies and by private industry.
Natural Language Processing: Toward Large-Scale, Robust Systems.
ERIC Educational Resources Information Center
Haas, Stephanie W.
1996-01-01
Natural language processing (NLP) is concerned with getting computers to do useful things with natural language. Major applications include machine translation, text generation, information retrieval, and natural language interfaces. Reviews important developments since 1987 that have led to advances in NLP; current NLP applications; and problems…
NASA Technical Reports Server (NTRS)
Malone, T. B.; Micocci, A.
1975-01-01
The alternative methods of conducting a man-machine interface evaluation are classified as static and dynamic, and are evaluated. A dynamic evaluation tool is presented to provide for a determination of the effectiveness of the man-machine interface in terms of the sequence of operations (tasks and task sequences) and in terms of the physical characteristics of the interface. This dynamic checklist approach is recommended for shuttle and shuttle payload man-machine interface evaluations based on reduced preparation time, reduced data, and increased sensitivity to critical problems.
Neural control of finger movement via intracortical brain-machine interface
NASA Astrophysics Data System (ADS)
Irwin, Z. T.; Schroeder, K. E.; Vu, P. P.; Bullard, A. J.; Tat, D. M.; Nu, C. S.; Vaskov, A.; Nason, S. R.; Thompson, D. E.; Bentley, J. N.; Patil, P. G.; Chestek, C. A.
2017-12-01
Objective. Intracortical brain-machine interfaces (BMIs) are a promising source of prosthesis control signals for individuals with severe motor disabilities. Previous BMI studies have primarily focused on predicting and controlling whole-arm movements; precise control of hand kinematics, however, has not been fully demonstrated. Here, we investigate the continuous decoding of precise finger movements in rhesus macaques. Approach. In order to elicit precise and repeatable finger movements, we have developed a novel behavioral task paradigm which requires the subject to acquire virtual fingertip position targets. In the physical control condition, four rhesus macaques performed this task by moving all four fingers together in order to acquire a single target. This movement was equivalent to controlling the aperture of a power grasp. During this task performance, we recorded neural spikes from intracortical electrode arrays in primary motor cortex. Main results. Using a standard Kalman filter, we could reconstruct continuous finger movement offline with an average correlation of ρ = 0.78 between actual and predicted position across four rhesus macaques. For two of the monkeys, this movement prediction was performed in real-time to enable direct brain control of the virtual hand. Compared to physical control, neural control performance was slightly degraded; however, the monkeys were still able to successfully perform the task with an average target acquisition rate of 83.1%. The monkeys’ ability to arbitrarily specify fingertip position was also quantified using an information throughput metric. During brain control task performance, the monkeys achieved an average 1.01 bits s-1 throughput, similar to that achieved in previous studies which decoded upper-arm movements to control computer cursors using a standard Kalman filter. Significance. This is, to our knowledge, the first demonstration of brain control of finger-level fine motor skills. We believe that these results represent an important step towards full and dexterous control of neural prosthetic devices.
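As background on the decoding step: the "standard Kalman filter" decoder referred to here is typically fit to training data by least squares and then run as a predict/correct recursion on binned firing rates. A minimal sketch; the state contents, bin width and noise handling are illustrative assumptions rather than the study's exact settings:

```python
import numpy as np

def fit_kalman(X, Y):
    """Least-squares fit of a linear-Gaussian decoder:
        x_t = A x_{t-1} + w,  w ~ N(0, W)    (kinematic state model)
        y_t = C x_t     + q,  q ~ N(0, Q)    (firing-rate observation model)
    X: (T, d) kinematics, e.g. fingertip position/velocity; Y: (T, n) binned rates."""
    X0, X1 = X[:-1], X[1:]
    A = np.linalg.lstsq(X0, X1, rcond=None)[0].T
    W = np.cov((X1 - X0 @ A.T).T)
    C = np.linalg.lstsq(X, Y, rcond=None)[0].T
    Q = np.cov((Y - X @ C.T).T)
    return A, W, C, Q

def kalman_decode(Y, A, W, C, Q, x0):
    """Standard Kalman recursion: predict from the state model, then
    correct the estimate with each new vector of firing rates."""
    x, P = x0.copy(), np.eye(len(x0))
    out = []
    for y in Y:
        x, P = A @ x, A @ P @ A.T + W                   # predict
        K = P @ C.T @ np.linalg.inv(C @ P @ C.T + Q)    # Kalman gain
        x = x + K @ (y - C @ x)                         # correct
        P = (np.eye(len(x)) - K @ C) @ P
        out.append(x.copy())
    return np.array(out)
```

In a setup like the one described, X and Y would come from physical-control training blocks, and the same filter would then be run causally for online brain control.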
Pohlmeyer, Eric A.; Mahmoudi, Babak; Geng, Shijia; Prins, Noeline W.; Sanchez, Justin C.
2014-01-01
Brain-machine interface (BMI) systems give users direct neural control of robotic, communication, or functional electrical stimulation systems. As BMI systems begin transitioning from laboratory settings into activities of daily living, an important goal is to develop neural decoding algorithms that can be calibrated with a minimal burden on the user, provide stable control for long periods of time, and can be responsive to fluctuations in the decoder’s neural input space (e.g. neurons appearing or being lost amongst electrode recordings). These are significant challenges for static neural decoding algorithms that assume stationary input/output relationships. Here we use an actor-critic reinforcement learning architecture to provide an adaptive BMI controller that can successfully adapt to dramatic neural reorganizations, can maintain its performance over long time periods, and which does not require the user to produce specific kinetic or kinematic activities to calibrate the BMI. Two marmoset monkeys used the Reinforcement Learning BMI (RLBMI) to successfully control a robotic arm during a two-target reaching task. The RLBMI was initialized using random initial conditions, and it quickly learned to control the robot from brain states using only a binary evaluative feedback regarding whether previously chosen robot actions were good or bad. The RLBMI was able to maintain control over the system throughout sessions spanning multiple weeks. Furthermore, the RLBMI was able to quickly adapt and maintain control of the robot despite dramatic perturbations to the neural inputs, including a series of tests in which the neuron input space was deliberately halved or doubled. PMID:24498055
Creating the brain and interacting with the brain: an integrated approach to understanding the brain
Morimoto, Jun; Kawato, Mitsuo
2015-01-01
In the past two decades, brain science and robotics have made gigantic advances in their own fields, and their interactions have generated several interdisciplinary research fields. First, in the ‘understanding the brain by creating the brain’ approach, computational neuroscience models have been applied to many robotics problems. Second, such brain-motivated fields as cognitive robotics and developmental robotics have emerged as interdisciplinary areas among robotics, neuroscience and cognitive science with special emphasis on humanoid robots. Third, in brain–machine interface research, a brain and a robot are mutually connected within a closed loop. In this paper, we review the theoretical backgrounds of these three interdisciplinary fields and their recent progress. Then, we introduce recent efforts to reintegrate these research fields into a coherent perspective and propose a new direction that integrates brain science and robotics where the decoding of information from the brain, robot control based on the decoded information and multimodal feedback to the brain from the robot are carried out in real time and in a closed loop. PMID:25589568
Learning a common dictionary for subject-transfer decoding with resting calibration.
Morioka, Hiroshi; Kanemura, Atsunori; Hirayama, Jun-ichiro; Shikauchi, Manabu; Ogawa, Takeshi; Ikeda, Shigeyuki; Kawanabe, Motoaki; Ishii, Shin
2015-05-01
Brain signals measured over a series of experiments have inherent variability because of different physical and mental conditions among multiple subjects and sessions. Such variability complicates the analysis of data from multiple subjects and sessions in a consistent way, and degrades the performance of subject-transfer decoding in a brain-machine interface (BMI). To accommodate the variability in brain signals, we propose 1) a method for extracting spatial bases (or a dictionary) shared by multiple subjects, by employing a signal-processing technique of dictionary learning modified to compensate for variations between subjects and sessions, and 2) an approach to subject-transfer decoding that uses the resting-state activity of a previously unseen target subject as calibration data for compensating for variations, eliminating the need for a standard calibration based on task sessions. Applying our methodology to a dataset of electroencephalography (EEG) recordings during a selective visual-spatial attention task from multiple subjects and sessions, where the variability compensation was essential for reducing the redundancy of the dictionary, we found that the extracted common brain activities were reasonable in the light of neuroscience knowledge. The applicability to subject-transfer decoding was confirmed by improved performance over existing decoding methods. These results suggest that analyzing multisubject brain activities on common bases by the proposed method enables information sharing across subjects with low-burden resting calibration, and is effective for practical use of BMI in variable environments. Copyright © 2015 Elsevier Inc. All rights reserved.
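The pipeline sketched in the abstract (a dictionary shared across subjects plus resting-state calibration of a new subject) can be illustrated with off-the-shelf dictionary learning. Everything below (the feature choice, dimensions, and the simple mean offset standing in for "resting calibration") is an illustrative simplification, not the authors' compensation scheme:

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning

rng = np.random.default_rng(0)

# synthetic stand-in for spatial features (e.g. band-power topographies)
# pooled over training subjects: rows = epochs, columns = EEG channels
pooled = rng.standard_normal((600, 32))

# learn a small set of spatial bases shared across subjects
dico = MiniBatchDictionaryLearning(n_components=8, alpha=1.0, random_state=0)
codes_train = dico.fit_transform(pooled)   # sparse codes of training epochs
shared_bases = dico.components_            # (8, 32) spatial dictionary

# a previously unseen subject: estimate a subject-specific offset from
# resting epochs, then encode task epochs on the same shared bases
new_subject = rng.standard_normal((100, 32))
rest_offset = new_subject[:20].mean(axis=0)     # "resting calibration"
codes_new = dico.transform(new_subject - rest_offset)
print(shared_bases.shape, codes_new.shape)
```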
ERIC Educational Resources Information Center
Chowdhury, Gobinda G.
2003-01-01
Discusses issues related to natural language processing, including theoretical developments; natural language understanding; tools and techniques; natural language text processing systems; abstracting; information extraction; information retrieval; interfaces; software; Internet, Web, and digital library applications; machine translation for…
Integration Telegram Bot on E-Complaint Applications in College
NASA Astrophysics Data System (ADS)
Rosid, M. A.; Rachmadany, A.; Multazam, M. T.; Nandiyanto, A. B. D.; Abdullah, A. G.; Widiaty, I.
2018-01-01
The Internet of Things (IoT) has influenced human life, extending internet connectivity from human-to-human to human-to-machine and machine-to-machine interaction. Within this research field, technologies and concepts are being created that allow humans to communicate with machines for specific purposes. This research aimed to integrate the Telegram messaging service with an e-complaint application at a college. With this integration, users do not need to visit the URL of the e-complaint application; they can submit a complaint simply via Telegram, and the complaint is then forwarded to the e-complaint application. The test results show that the e-complaint integration with the Telegram bot ran in accordance with the design. The Telegram bot makes it convenient for members of the academic community to submit complaints, and it provides interaction through an interface that people already use every day on their smartphones. With this system, the work unit concerned can immediately make improvements, since all complaints can be delivered rapidly.
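A minimal sketch of how such a bridge can be wired up against the public Telegram Bot API (getUpdates / sendMessage). The bot token placeholder, the e-complaint endpoint URL and the JSON payload fields are hypothetical; only the Telegram API calls themselves are real:

```python
import time
import requests

BOT_TOKEN = "<your-bot-token>"                     # placeholder
API = f"https://api.telegram.org/bot{BOT_TOKEN}"
ECOMPLAINT_URL = "https://example.edu/e-complaint/api/complaints"  # hypothetical

def forward_complaints():
    """Long-poll Telegram for new messages, forward each text message to
    the e-complaint back end, then acknowledge the sender."""
    offset = None
    while True:
        r = requests.get(f"{API}/getUpdates",
                         params={"timeout": 30, "offset": offset}, timeout=40)
        for update in r.json().get("result", []):
            offset = update["update_id"] + 1
            msg = update.get("message", {})
            text = msg.get("text")
            chat_id = msg.get("chat", {}).get("id")
            if not text or chat_id is None:
                continue
            # forward the complaint text to the campus system (hypothetical API)
            requests.post(ECOMPLAINT_URL,
                          json={"source": "telegram", "chat_id": chat_id, "text": text},
                          timeout=10)
            requests.get(f"{API}/sendMessage",
                         params={"chat_id": chat_id,
                                 "text": "Your complaint has been recorded."})
        time.sleep(1)

if __name__ == "__main__":
    forward_complaints()
```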
Decoding-Accuracy-Based Sequential Dimensionality Reduction of Spatio-Temporal Neural Activities
NASA Astrophysics Data System (ADS)
Funamizu, Akihiro; Kanzaki, Ryohei; Takahashi, Hirokazu
Performance of a brain machine interface (BMI) critically depends on the selection of input data, because the information embedded in the neural activities is highly redundant. In addition, properly selected input data with a reduced dimension lead to improved decoding generalization and reduced computational effort, both of which are significant advantages for clinical applications. In the present paper, we propose an algorithm of sequential dimensionality reduction (SDR) that effectively extracts motor/sensory related spatio-temporal neural activities. The algorithm gradually reduces the input data dimension by dropping neural data spatio-temporally while preserving the decoding accuracy as far as possible. A support vector machine (SVM) was used as the decoder, and tone-induced neural activities in rat auditory cortices were decoded into the test tone frequencies. SDR reduced the input data dimension to a quarter and significantly improved the accuracy of decoding of novel data. Moreover, spatio-temporal neural activity patterns selected by SDR resulted in significantly higher accuracy than high spike rate patterns or conventionally used spatial patterns. These results suggest that the proposed algorithm can improve the generalization ability and decrease the computational effort of decoding.
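The SDR procedure drops spatio-temporal bins while monitoring decoding accuracy; below is a greatly simplified greedy backward-elimination sketch of that idea, using cross-validated SVM accuracy as the criterion (the tolerance, cross-validation scheme and toy data are illustrative assumptions):

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def sequential_dimensionality_reduction(X, y, tol=0.02):
    """Repeatedly drop the single spatio-temporal feature whose removal
    hurts cross-validated decoding accuracy the least, as long as
    accuracy stays within `tol` of the full-feature baseline."""
    kept = list(range(X.shape[1]))
    baseline = cross_val_score(SVC(kernel="linear"), X, y, cv=5).mean()
    while len(kept) > 1:
        candidates = []
        for j in kept:
            cols = [k for k in kept if k != j]
            acc = cross_val_score(SVC(kernel="linear"), X[:, cols], y, cv=5).mean()
            candidates.append((acc, j))
        best_acc, drop = max(candidates)
        if best_acc < baseline - tol:
            break                      # any further removal costs too much accuracy
        kept.remove(drop)
    return kept

# toy example: 120 trials, 20 spatio-temporal features, 4 "tone" classes,
# with only the first 5 features carrying class information
rng = np.random.default_rng(0)
y = rng.integers(0, 4, 120)
X = rng.standard_normal((120, 20)) + np.outer(y, np.r_[np.ones(5), np.zeros(15)])
print("kept features:", sequential_dimensionality_reduction(X, y))
```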
CESAR research in intelligent machines
DOE Office of Scientific and Technical Information (OSTI.GOV)
Weisbin, C.R.
1986-01-01
The Center for Engineering Systems Advanced Research (CESAR) was established in 1983 as a national center for multidisciplinary, long-range research and development in machine intelligence and advanced control theory for energy-related applications. Intelligent machines of interest here are artificially created operational systems that are capable of autonomous decision making and action. The initial emphasis for research is remote operations, with specific application to dexterous manipulation in unstructured dangerous environments where explosives, toxic chemicals, or radioactivity may be present, or in other environments with significant risk such as coal mining or oceanographic missions. Potential benefits include reduced risk to man in hazardous situations, machine replication of scarce expertise, minimization of human error due to fear or fatigue, and enhanced capability using high resolution sensors and powerful computers. A CESAR goal is to explore the interface between the advanced teleoperation capability of today and the autonomous machines of the future.
A Discussion of Possibility of Reinforcement Learning Using Event-Related Potential in BCI
NASA Astrophysics Data System (ADS)
Yamagishi, Yuya; Tsubone, Tadashi; Wada, Yasuhiro
Recently, brain-computer interfaces (BCIs), which provide a direct pathway between a human brain and an external device such as a computer or a robot, have attracted a lot of attention. Since a BCI can control machines such as robots using brain activity without voluntary muscle activity, BCIs may become a useful communication tool for handicapped persons, for instance amyotrophic lateral sclerosis patients. However, in order to realize a BCI system which can perform precise tasks in various environments, it is necessary to design control rules that adapt to dynamic environments. Reinforcement learning is one approach to the design of such control rules. If this reinforcement learning can be driven by brain activity, it leads to the attainment of a BCI with general versatility. In this research, we paid attention to the P300 event-related potential as an alternative signal for the reward of reinforcement learning. We discriminated between success and failure trials from the single-trial P300 of the EEG using the proposed discrimination algorithm based on a support vector machine. The possibility of reinforcement learning was examined from the viewpoint of the number of correctly discriminated trials. The results showed that there is a possibility of learning in most subjects.
Illusory movement perception improves motor control for prosthetic hands.
Marasco, Paul D; Hebert, Jacqueline S; Sensinger, Jon W; Shell, Courtney E; Schofield, Jonathon S; Thumser, Zachary C; Nataraj, Raviraj; Beckler, Dylan T; Dawson, Michael R; Blustein, Dan H; Gill, Satinder; Mensh, Brett D; Granja-Vazquez, Rafael; Newcomb, Madeline D; Carey, Jason P; Orzell, Beth M
2018-03-14
To effortlessly complete an intentional movement, the brain needs feedback from the body regarding the movement's progress. This largely nonconscious kinesthetic sense helps the brain to learn relationships between motor commands and outcomes to correct movement errors. Prosthetic systems for restoring function have predominantly focused on controlling motorized joint movement. Without the kinesthetic sense, however, these devices do not become intuitively controllable. We report a method for endowing human amputees with a kinesthetic perception of dexterous robotic hands. Vibrating the muscles used for prosthetic control via a neural-machine interface produced the illusory perception of complex grip movements. Within minutes, three amputees integrated this kinesthetic feedback and improved movement control. Combining intent, kinesthesia, and vision instilled participants with a sense of agency over the robotic movements. This feedback approach for closed-loop control opens a pathway to seamless integration of minds and machines. Copyright © 2018 The Authors, some rights reserved; exclusive licensee American Association for the Advancement of Science. No claim to original U.S. Government Works.
Joint Spatial-Spectral Feature Space Clustering for Speech Activity Detection from ECoG Signals
Kanas, Vasileios G.; Mporas, Iosif; Benz, Heather L.; Sgarbas, Kyriakos N.; Bezerianos, Anastasios; Crone, Nathan E.
2014-01-01
Brain machine interfaces for speech restoration have been extensively studied for more than two decades. The success of such a system will depend in part on selecting the best brain recording sites and signal features corresponding to speech production. The purpose of this study was to detect speech activity automatically from electrocorticographic signals based on joint spatial-frequency clustering of the ECoG feature space. For this study, the ECoG signals were recorded while a subject performed two different syllable repetition tasks. We found that the optimal frequency resolution to detect speech activity from ECoG signals was 8 Hz, achieving 98.8% accuracy by employing support vector machines (SVM) as a classifier. We also defined the cortical areas that held the most information about the discrimination of speech and non-speech time intervals. Additionally, the results shed light on the distinct cortical areas associated with the two syllable repetition tasks and may contribute to the development of portable ECoG-based communication. PMID:24658248
The Berlin Brain-Computer Interface: Progress Beyond Communication and Control
Blankertz, Benjamin; Acqualagna, Laura; Dähne, Sven; Haufe, Stefan; Schultze-Kraft, Matthias; Sturm, Irene; Ušćumlic, Marija; Wenzel, Markus A.; Curio, Gabriel; Müller, Klaus-Robert
2016-01-01
The combined effect of fundamental results about neurocognitive processes and advancements in decoding mental states from ongoing brain signals has brought forth a whole range of potential neurotechnological applications. In this article, we review our developments in this area and put them into perspective. These examples cover a wide range of maturity levels with respect to their applicability. While we assume we are still a long way away from integrating Brain-Computer Interface (BCI) technology in general interaction with computers, or from implementing neurotechnological measures in safety-critical workplaces, results have already now been obtained involving a BCI as research tool. In this article, we discuss the reasons why, in some of the prospective application domains, considerable effort is still required to make the systems ready to deal with the full complexity of the real world. PMID:27917107
A square root ensemble Kalman filter application to a motor-imagery brain-computer interface.
Kamrunnahar, M; Schiff, S J
2011-01-01
We investigated a non-linear ensemble Kalman filter (SPKF) application to a motor imagery brain-computer interface (BCI). A square root central difference Kalman filter (SR-CDKF) was used as an approach for brain state estimation in motor imagery task performance, using scalp electroencephalography (EEG) signals. Healthy human subjects imagined left vs. right hand movements and tongue vs. bilateral toe movements while scalp EEG signals were recorded. Offline data analysis was conducted for training the model as well as for decoding the imagined movements. Preliminary results indicate the feasibility of this approach, with a decoding accuracy of 78%-90% for the hand movements and 70%-90% for the tongue-toes movements. Ongoing research includes online BCI applications of this approach as well as combined state and parameter estimation using this algorithm with different system dynamic models.
UIVerify: A Web-Based Tool for Verification and Automatic Generation of User Interfaces
NASA Technical Reports Server (NTRS)
Shiffman, Smadar; Degani, Asaf; Heymann, Michael
2004-01-01
In this poster, we describe a web-based tool for verification and automatic generation of user interfaces. The verification component of the tool accepts as input a model of a machine and a model of its interface, and checks that the interface is adequate (correct). The generation component of the tool accepts a model of a given machine and the user's task, and then generates a correct and succinct interface. This write-up will demonstrate the usefulness of the tool by verifying the correctness of a user interface to a flight-control system. The poster will include two more examples of using the tool: verification of the interface to an espresso machine, and automatic generation of a succinct interface to a large hypothetical machine.
LaFleur, Karl; Cassady, Kaitlin; Doud, Alexander; Shades, Kaleb; Rogin, Eitan; He, Bin
2013-01-01
Objective At the balanced intersection of human and machine adaptation is found the optimally functioning brain-computer interface (BCI). In this study, we report a novel experiment of BCI controlling a robotic quadcopter in three-dimensional physical space using noninvasive scalp EEG in human subjects. We then quantify the performance of this system using metrics suitable for asynchronous BCI. Lastly, we examine the impact that operation of a real world device has on subjects’ control with comparison to a two-dimensional virtual cursor task. Approach Five human subjects were trained to modulate their sensorimotor rhythms to control an AR Drone navigating a three-dimensional physical space. Visual feedback was provided via a forward facing camera on the hull of the drone. Individual subjects were able to accurately acquire up to 90.5% of all valid targets presented while travelling at an average straight-line speed of 0.69 m/s. Significance Freely exploring and interacting with the world around us is a crucial element of autonomy that is lost in the context of neurodegenerative disease. Brain-computer interfaces are systems that aim to restore or enhance a user’s ability to interact with the environment via a computer and through the use of only thought. We demonstrate for the first time the ability to control a flying robot in the three-dimensional physical space using noninvasive scalp recorded EEG in humans. Our work indicates the potential of noninvasive EEG based BCI systems to accomplish complex control in three-dimensional physical space. The present study may serve as a framework for the investigation of multidimensional non-invasive brain-computer interface control in a physical environment using telepresence robotics. PMID:23735712
Kim, Yong-Ku; Na, Kyoung-Sae
2018-01-03
Mood disorders are a highly prevalent group of mental disorders causing substantial socioeconomic burden. There are various methodological approaches for identifying the underlying mechanisms of the etiology, symptomatology, and therapeutics of mood disorders; however, neuroimaging studies have provided the most direct evidence for mood disorder neural substrates by visualizing the brains of living individuals. The prefrontal cortex, hippocampus, amygdala, thalamus, ventral striatum, and corpus callosum are associated with depression and bipolar disorder. Identifying the distinct and common contributions of these anatomical regions to depression and bipolar disorder have broadened and deepened our understanding of mood disorders. However, the extent to which neuroimaging research findings contribute to clinical practice in the real-world setting is unclear. As traditional or non-machine learning MRI studies have analyzed group-level differences, it is not possible to directly translate findings from research to clinical practice; the knowledge gained pertains to the disorder, but not to individuals. On the other hand, a machine learning approach makes it possible to provide individual-level classifications. For the past two decades, many studies have reported on the classification accuracy of machine learning-based neuroimaging studies from the perspective of diagnosis and treatment response. However, for the application of a machine learning-based brain MRI approach in real world clinical settings, several major issues should be considered. Secondary changes due to illness duration and medication, clinical subtypes and heterogeneity, comorbidities, and cost-effectiveness restrict the generalization of the current machine learning findings. Sophisticated classification of clinical and diagnostic subtypes is needed. Additionally, as the approach is inevitably limited by sample size, multi-site participation and data-sharing are needed in the future. Copyright © 2017 Elsevier Inc. All rights reserved.
Brain–computer interfaces: communication and restoration of movement in paralysis
Birbaumer, Niels; Cohen, Leonardo G
2007-01-01
The review describes the status of brain–computer or brain–machine interface research. We focus on non-invasive brain–computer interfaces (BCIs) and their clinical utility for direct brain communication in paralysis and motor restoration in stroke. A large gap between the promises of invasive animal and human BCI preparations and the clinical reality characterizes the literature: while intact monkeys learn to execute more or less complex upper limb movements with spike patterns from motor brain regions alone without concomitant peripheral motor activity usually after extensive training, clinical applications in human diseases such as amyotrophic lateral sclerosis and paralysis from stroke or spinal cord lesions show only limited success, with the exception of verbal communication in paralysed and locked-in patients. BCIs based on electroencephalographic potentials or oscillations are ready to undergo large clinical studies and commercial production as an adjunct or a major assisted communication device for paralysed and locked-in patients. However, attempts to train completely locked-in patients with BCI communication after entering the complete locked-in state with no remaining eye movement failed. We propose that a lack of contingencies between goal directed thoughts and intentions may be at the heart of this problem. Experiments with chronically curarized rats support our hypothesis; operant conditioning and voluntary control of autonomic physiological functions turned out to be impossible in this preparation. In addition to assisted communication, BCIs consisting of operant learning of EEG slow cortical potentials and sensorimotor rhythm were demonstrated to be successful in drug resistant focal epilepsy and attention deficit disorder. First studies of non-invasive BCIs using sensorimotor rhythm of the EEG and MEG in restoration of paralysed hand movements in chronic stroke and single cases of high spinal cord lesions show some promise, but need extensive evaluation in well-controlled experiments. Invasive BMIs based on neuronal spike patterns, local field potentials or electrocorticogram may constitute the strategy of choice in severe cases of stroke and spinal cord paralysis. Future directions of BCI research should include the regulation of brain metabolism and blood flow and electrical and magnetic stimulation of the human brain (invasive and non-invasive). A series of studies using BOLD response regulation with functional magnetic resonance imaging (fMRI) and near infrared spectroscopy demonstrated a tight correlation between voluntary changes in brain metabolism and behaviour. PMID:17234696
Toward a whole-body neuroprosthetic.
Lebedev, Mikhail A; Nicolelis, Miguel A L
2011-01-01
Brain-machine interfaces (BMIs) hold promise for the restoration of body mobility in patients suffering from devastating motor deficits caused by brain injury, neurological diseases, and limb loss. Considerable progress has been achieved in BMIs that enact arm movements, and initial work has been done on BMIs for lower limb and trunk control. These developments put Duke University Center for Neuroengineering in the position to develop the first BMI for whole-body control. This whole-body BMI will incorporate very large-scale brain recordings, advanced decoding algorithms, artificial sensory feedback based on electrical stimulation of somatosensory areas, virtual environment representations, and a whole-body exoskeleton. This system will be first tested in nonhuman primates and then transferred to clinical trials in humans. Copyright © 2011 Elsevier B.V. All rights reserved.
An online semi-supervised brain-computer interface.
Gu, Zhenghui; Yu, Zhuliang; Shen, Zhifang; Li, Yuanqing
2013-09-01
Practical brain-computer interface (BCI) systems should require only low training effort for the user, and the algorithms used to classify the intent of the user should be computationally efficient. However, due to inter- and intra-subject variations in the EEG signal, intermittent training/calibration is often unavoidable. In this paper, we present an online semi-supervised P300 BCI speller system. After a short initial training period (around or less than 1 min in our experiments), the system is switched to a mode where the user can input characters through selective attention. In this mode, a self-training least squares support vector machine (LS-SVM) classifier is gradually enhanced in the back end with the unlabeled EEG data collected online after every character input. In this way, the classifier is gradually improved. Even though the user may experience some input errors at the beginning due to the small initial training dataset, the accuracy approaches that of the fully supervised method within a few minutes. The algorithm based on the LS-SVM and its sequential update has low computational complexity; thus, it is suitable for online applications. The effectiveness of the algorithm has been validated through data analysis on BCI Competition III dataset II (P300 speller BCI data). The performance of the online system was evaluated through experimental results on eight healthy subjects, all of whom achieved a spelling accuracy of 85% or above within an average online semi-supervised learning time of around 3 min.
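The self-training idea can be sketched generically: retrain after each batch of unlabeled EEG, folding in the samples the current model labels most confidently. In the sketch below an ordinary SVC stands in for the LS-SVM, and the confidence measure, batch sizes and toy data are illustrative assumptions:

```python
import numpy as np
from sklearn.svm import SVC

def self_training(X_lab, y_lab, X_unlab, rounds=5, keep_frac=0.2):
    """Train on the labeled seed set, then repeatedly pseudo-label the most
    confidently classified unlabeled samples and retrain on the enlarged set."""
    Xl, yl, pool = X_lab.copy(), y_lab.copy(), X_unlab.copy()
    clf = SVC(kernel="rbf", gamma="scale").fit(Xl, yl)
    for _ in range(rounds):
        if len(pool) == 0:
            break
        conf = np.abs(clf.decision_function(pool))          # distance to boundary
        take = np.argsort(conf)[-max(1, int(keep_frac * len(pool))):]
        Xl = np.vstack([Xl, pool[take]])
        yl = np.concatenate([yl, clf.predict(pool[take])])  # pseudo-labels
        pool = np.delete(pool, take, axis=0)
        clf = SVC(kernel="rbf", gamma="scale").fit(Xl, yl)  # retrain
    return clf

# toy binary problem: a tiny labeled seed set plus a large unlabeled pool
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(-1, 1, (200, 10)), rng.normal(+1, 1, (200, 10))])
y = np.r_[np.zeros(200), np.ones(200)].astype(int)
lab = np.r_[0:10, 200:210]                      # 10 labeled trials per class
unlab = np.setdiff1d(np.arange(400), lab)
clf = self_training(X[lab], y[lab], X[unlab])
print("accuracy on all trials:", clf.score(X, y))
```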
Flexible and stretchable electronics for biointegrated devices.
Kim, Dae-Hyeong; Ghaffari, Roozbeh; Lu, Nanshu; Rogers, John A
2012-01-01
Advances in materials, mechanics, and manufacturing now allow construction of high-quality electronics and optoelectronics in forms that can readily integrate with the soft, curvilinear, and time-dynamic surfaces of the human body. The resulting capabilities create new opportunities for studying disease states, improving surgical procedures, monitoring health/wellness, establishing human-machine interfaces, and performing other functions. This review summarizes these technologies and illustrates their use in forms integrated with the brain, the heart, and the skin.
Craniux: A LabVIEW-Based Modular Software Framework for Brain-Machine Interface Research
2011-01-01
... open-source BMI software solutions are currently available, we feel that the Craniux software package fills a specific need in the realm of BMI ... data, such as cortical source imaging using EEG or MEG recordings. It is with these characteristics in mind that we feel the Craniux software package ...
Soekadar, Surjo R; Witkowski, Matthias; Mellinger, Jürgen; Ramos, Ander; Birbaumer, Niels; Cohen, Leonardo G
2011-10-01
Event-related desynchronization (ERD) of sensori-motor rhythms (SMR) can be used for online brain-machine interface (BMI) control, but yields challenges related to the stability of ERD and the feedback strategy used to optimize BMI learning. Here, we compared two approaches to this challenge in 20 right-handed healthy subjects (HS, five sessions each, S1-S5) and four stroke patients (SP, 15 sessions each, S1-S15). ERD was recorded from a 275-sensor MEG system. During daily training, motor imagery-induced ERD led to visual and proprioceptive feedback delivered through an orthotic device attached to the subjects' hand and fingers. Group A trained with a heterogeneous reference value (RV) for ERD detection with binary feedback, and Group B with a homogeneous RV and graded feedback (10 HS and 2 SP in each group). HS in Group B showed better BMI performance than Group A (p < 0.001) and improved BMI control from S1 to S5 (p = 0.012), while Group A did not. In spite of the small n, SP in Group B showed a trend towards higher BMI performance (p = 0.06), and learning was significantly better (p < 0.05). Using a homogeneous RV and graded feedback led to improved modulation of ipsilesional activity, resulting in superior BMI learning relative to the use of a heterogeneous RV and binary feedback.
Harris, Alexander R; Molino, Paul J; Kapsa, Robert M I; Clark, Graeme M; Paolini, Antonio G; Wallace, Gordon G
2015-05-07
Electrode impedance is used to assess the thermal noise and signal-to-noise ratio for brain-machine interfaces. An intermediate frequency of 1 kHz is typically measured, although other frequencies may be better predictors of device performance. PEDOT-PSS, PEDOT-DBSA and PEDOT-pTs conducting polymer modified electrodes have reduced impedance at 1 kHz compared to bare metal electrodes, but this impedance has no correlation with the effective electrode area. Analytical solutions for the impedance indicate that any low-to-intermediate frequency can be used to compare electrode areas for a series RC circuit, typical of an ideal metal electrode in a conductive solution. More complex equivalent circuits can be used for the modified electrodes, with a simplified Randles circuit applied to PEDOT-PSS and PEDOT-pTs and a Randles circuit including a Warburg impedance element for PEDOT-DBSA at 0 V. The impedance and phase angle at low frequencies in both equivalent circuit models are dependent on the electrode area. Low frequencies may therefore provide better predictions of the thermal noise and signal-to-noise ratio at modified electrodes. The coefficient of variation of the PEDOT-pTs impedance at low frequencies was lower than that of the other conducting polymers, consistent with linear and steady-state electroactive area measurements. There are poor correlations between the impedance and the charge density, as they are not ideal metal electrodes.
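For readers unfamiliar with the two equivalent circuits mentioned, the sketch below evaluates their impedance across frequency; the component values are purely illustrative and the Warburg element used for PEDOT-DBSA at 0 V is omitted:

```python
import numpy as np

def z_series_rc(f, Rs, C):
    """Series R-C circuit: idealised bare metal electrode in solution."""
    w = 2 * np.pi * f
    return Rs + 1.0 / (1j * w * C)

def z_randles_simplified(f, Rs, Rct, Cdl):
    """Simplified Randles circuit: solution resistance in series with a
    charge-transfer resistance in parallel with the double-layer capacitance."""
    w = 2 * np.pi * f
    return Rs + Rct / (1.0 + 1j * w * Rct * Cdl)

# illustrative parameter values only (not taken from the paper)
f = np.logspace(0, 5, 6)                        # 1 Hz .. 100 kHz
for fi, zi in zip(f, z_randles_simplified(f, Rs=10e3, Rct=1e6, Cdl=100e-9)):
    print(f"{fi:9.1f} Hz   |Z| = {abs(zi):.3e} ohm   "
          f"phase = {np.degrees(np.angle(zi)):6.1f} deg")
```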
Zhuang, Katie Z.; Lebedev, Mikhail A.
2014-01-01
Correlation between cortical activity and electromyographic (EMG) activity of limb muscles has long been a subject of neurophysiological studies, especially in terms of corticospinal connectivity. Interest in this issue has recently increased due to the development of brain-machine interfaces with output signals that mimic muscle force. For this study, three monkeys were implanted with multielectrode arrays in multiple cortical areas. One monkey performed self-timed touch pad presses, whereas the other two executed arm reaching movements. We analyzed the dynamic relationship between cortical neuronal activity and arm EMGs using a joint cross-correlation (JCC) analysis that evaluated trial-by-trial correlation as a function of time intervals within a trial. JCCs revealed transient correlations between the EMGs of multiple muscles and neural activity in motor, premotor and somatosensory cortical areas. Matching results were obtained using spike-triggered averages corrected by subtracting trial-shuffled data. Compared with spike-triggered averages, JCCs more readily revealed dynamic changes in cortico-EMG correlations. JCCs showed that correlation peaks often sharpened around movement times and broadened during delay intervals. Furthermore, JCC patterns were directionally selective for the arm-reaching task. We propose that such highly dynamic, task-dependent and distributed relationships between cortical activity and EMGs should be taken into consideration for future brain-machine interfaces that generate EMG-like signals. PMID:25210153
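One of the two analyses mentioned above, the spike-triggered average corrected by subtracting trial-shuffled data, can be sketched as follows; the array shapes, window length and toy data are assumptions.

```python
import numpy as np

def spike_triggered_average(spikes, emg, window):
    """Average EMG in a window around each spike. spikes/emg: (trials, samples) arrays."""
    half = window // 2
    segments = []
    for trial_spikes, trial_emg in zip(spikes, emg):
        for t in np.flatnonzero(trial_spikes):
            if half <= t < trial_emg.size - half:
                segments.append(trial_emg[t - half:t + half])
    return np.mean(segments, axis=0) if segments else np.zeros(2 * half)

def shuffle_corrected_sta(spikes, emg, window, n_shuffles=100, seed=0):
    """Subtract the STA obtained after shuffling trial pairing, removing task-locked covariation."""
    rng = np.random.default_rng(seed)
    raw = spike_triggered_average(spikes, emg, window)
    shuffled = np.zeros_like(raw)
    for _ in range(n_shuffles):
        order = rng.permutation(len(emg))
        shuffled += spike_triggered_average(spikes, emg[order], window)
    return raw - shuffled / n_shuffles

# Toy data: 20 trials, 1000 samples, spikes weakly coupled to rectified EMG.
rng = np.random.default_rng(1)
emg = np.abs(rng.standard_normal((20, 1000)))
spikes = (rng.random((20, 1000)) < 0.02 * (1 + emg / emg.max())).astype(int)
print(shuffle_corrected_sta(spikes, emg, window=100).shape)  # (100,)
```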
Qin, Zhen; Zhang, Bin; Hu, Liang; Zhuang, Liujing; Hu, Ning; Wang, Ping
2016-04-15
The animal gustatory system has been widely acknowledged as one of the most sensitive chemosensing systems, especially for its ability to detect bitterness. Since bitterness usually signals inedibility, the potential of using the rodent gustatory system to detect bitter compounds is investigated. In this work, the extracellular potentials of a group of neurons are recorded by chronically coupling a microelectrode array to the rat gustatory cortex with brain-machine interface (BMI) technology. Local field potentials (LFPs), which represent the electrophysiological activity of neural networks, are chosen as target signals due to their stable response patterns across trials and are further divided into different oscillations. As a result, different taste qualities yield quality-specific LFPs in the time domain, which suggests the selectivity of this in vivo bioelectronic tongue. Meanwhile, a more quantitative study in the frequency domain indicates that the post-stimulation power of beta and low gamma oscillations depends on the concentration of denatonium benzoate, a prototypical bitter compound, and the limit of detection is deduced to be 0.076 μM, which is two orders of magnitude lower than previous in vitro bioelectronic tongues and conventional electronic tongues. According to these results, this in vivo bioelectronic tongue in combination with BMI presents a promising method for highly sensitive bitterness detection and is expected to provide a new platform for measuring the degree of bitterness. Copyright © 2015 Elsevier B.V. All rights reserved.
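A minimal sketch of the frequency-domain step described above: post-stimulation LFP power in beta and low-gamma bands via a Welch estimate, checked against simulated concentrations. The band edges, sampling rate and synthetic dose-response are assumptions.

```python
import numpy as np
from scipy.signal import welch

FS = 1000  # LFP sampling rate in Hz (illustrative)
BANDS = {"beta": (13, 30), "low_gamma": (30, 50)}  # band edges are assumptions

def post_stimulus_band_power(lfp, fs=FS):
    """Welch power in each oscillation band for a post-stimulation LFP segment."""
    f, pxx = welch(lfp, fs=fs, nperseg=fs)
    return {name: pxx[(f >= lo) & (f <= hi)].sum()
            for name, (lo, hi) in BANDS.items()}

# Toy dose-response check: does band power grow with (simulated) bitter concentration?
rng = np.random.default_rng(2)
for c in (0.1, 1.0, 10.0):  # arbitrary concentration units
    t = np.arange(2 * FS) / FS
    lfp = rng.standard_normal(t.size) + 0.2 * np.log10(1 + c) * np.sin(2 * np.pi * 20 * t)
    print(c, post_stimulus_band_power(lfp))
```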
NASA Astrophysics Data System (ADS)
Xu, Kai; Wang, Yiwen; Wang, Yueming; Wang, Fang; Hao, Yaoyao; Zhang, Shaomin; Zhang, Qiaosheng; Chen, Weidong; Zheng, Xiaoxiang
2013-04-01
Objective. High-dimensional neural recordings bring computational challenges to movement decoding in motor brain machine interfaces (mBMI), especially for portable applications. However, not all recorded neural activities relate to the execution of a given movement task. This paper proposes a local-learning-based method to perform neuron selection for gesture prediction in a reaching and grasping task. Approach. Nonlinear neural activities are decomposed into a set of linear ones in a weighted feature space. A margin is defined to measure the distance between inter-class and intra-class neural patterns. The weights, reflecting the importance of neurons, are obtained by minimizing a margin-based exponential error function. To find the most dominant neurons in the task, 1-norm regularization is introduced into the objective function to produce sparse weights, where near-zero weights indicate irrelevant neurons. Main results. The signals of only 10 neurons out of 70 selected by the proposed method could achieve over 95% of the full recording's decoding accuracy for gesture prediction, regardless of which decoding method is used (support vector machine or K-nearest neighbor). The temporal activities of the selected neurons show visually distinguishable patterns associated with various hand states. Compared with other algorithms, the proposed method better eliminates the irrelevant neurons with near-zero weights and provides the important neuron subset with the statistically best decoding performance. The weights of important neurons usually converge within 10-20 iterations. In addition, we study the temporal and spatial variation of neuron importance over a period of one and a half months in the same task. A high decoding performance can be maintained by updating the neuron subset. Significance. The proposed algorithm effectively ascertains neuronal importance without assuming any coding model and provides high performance with different decoding models. It shows better robustness in identifying the important neurons when noisy signals are present. The low demand for computational resources, reflected by the fast convergence, indicates the feasibility of applying the method in portable BMI systems. The ascertainment of the important neurons helps to visually inspect neural patterns associated with the movement task. The elimination of irrelevant neurons greatly reduces the computational burden of mBMI systems and maintains performance with better robustness.
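The margin-based exponential error with 1-norm regularization is specific to the proposed method; as a rough stand-in that conveys the same idea of sparsity-driven neuron selection, the sketch below uses an L1-penalized linear SVM on per-neuron firing-rate features and keeps the neurons with the largest aggregate weights. The feature construction, regularization strength and toy data are assumptions.

```python
import numpy as np
from sklearn.svm import LinearSVC

# Toy data: 200 trials x 70 neurons (firing-rate features), 3 gesture classes,
# with only the first 10 neurons actually informative.
rng = np.random.default_rng(3)
n_trials, n_neurons, n_informative = 200, 70, 10
labels = rng.integers(0, 3, size=n_trials)
rates = rng.standard_normal((n_trials, n_neurons))
rates[:, :n_informative] += labels[:, None]  # informative neurons shift with the gesture

# The L1 penalty drives the weights of irrelevant neurons toward zero.
clf = LinearSVC(penalty="l1", dual=False, C=0.05, max_iter=5000).fit(rates, labels)
importance = np.abs(clf.coef_).sum(axis=0)          # aggregate weight per neuron
selected = np.argsort(importance)[::-1][:10]        # keep the 10 most important neurons
print("selected neurons:", np.sort(selected))
print("accuracy with selected subset:",
      LinearSVC(dual=False, max_iter=5000).fit(rates[:, selected], labels)
      .score(rates[:, selected], labels))
```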
Wilaiprasitporn, Theerawit; Yagi, Tohru
2015-01-01
This research demonstrates the orientation-modulated attention effect on visual evoked potential. We combined this finding with our previous findings about the motion-modulated attention effect and used the result to develop novel visual stimuli for a personal identification number (PIN) application based on a brain-computer interface (BCI) framework. An electroencephalography amplifier with a single electrode channel was sufficient for our application. A computationally inexpensive algorithm and small datasets were used in processing. Seven healthy volunteers participated in experiments to measure offline performance. Mean accuracy was 83.3% at 13.9 bits/min. Encouraged by these results, we plan to continue developing the BCI-based personal identification application toward real-time systems.
Man-systems integration and the man-machine interface
NASA Technical Reports Server (NTRS)
Hale, Joseph P.
1990-01-01
Viewgraphs on man-systems integration and the man-machine interface are presented. Man-systems integration applies the systems approach to the integration of the user and the machine to form an effective, symbiotic Man-Machine System (MMS). An MMS is a combination of one or more human beings and one or more physical components that are integrated through the common purpose of achieving some objective. The human operator interacts with the system through the Man-Machine Interface (MMI).
The Body-Machine Interface: A new perspective on an old theme
Casadio, Maura; Ranganathan, Rajiv; Mussa-Ivaldi, Ferdinando A.
2012-01-01
Body-machine interfaces establish a way to interact with a variety of devices, allowing their users to extend the limits of their performance. Recent advances in this field, ranging from computer-interfaces to bionic limbs, have had important consequences for people with movement disorders. In this article, we provide an overview of the basic concepts underlying the body-machine interface with special emphasis on their use for rehabilitation and for operating assistive devices. We outline the steps involved in building such an interface and we highlight the critical role of body-machine interfaces in addressing theoretical issues in motor control as well as their utility in movement rehabilitation. PMID:23237465
Deng, Li; Wang, Guohua; Yu, Suihuai
2016-01-01
To account for the psychological and cognitive characteristics affecting operating comfort and to realize automatic layout design, cognitive ergonomics and GA-ACA (genetic algorithm and ant colony algorithm) were introduced into the layout design of human-machine interaction interfaces. First, from the perspective of cognitive psychology and according to the information processing process, a cognitive model of the human-machine interaction interface was established. Then, human cognitive characteristics were analyzed, and the layout principles of human-machine interaction interfaces were summarized as constraints in the layout design. Next, the expression forms of the fitness function, pheromone, and heuristic information for the layout optimization of the cabin were studied, and a layout design model of the human-machine interaction interface was established based on GA-ACA. Finally, a layout design system was developed based on this model. For validation, the human-machine interaction interface layout design of a drilling rig control room was taken as an example, and the optimization result showed the feasibility and effectiveness of the proposed method.
The future of the provision process for mobility assistive technology: a survey of providers.
Dicianno, Brad E; Joseph, James; Eckstein, Stacy; Zigler, Christina K; Quinby, Eleanor J; Schmeler, Mark R; Schein, Richard M; Pearlman, Jon; Cooper, Rory A
2018-03-20
The purpose of this study was to evaluate the opinions of providers of mobility assistive technologies to help inform a research agenda and set priorities. This survey study was anonymous and gathered opinions of individuals who participate in the process to provide wheelchairs and other assistive technologies to clients. Participants were asked to rank the importance of developing various technologies and rank items against each other in terms of order of importance. Participants were also asked to respond to several open-ended questions or statements. A total of 161 providers from 35 states within the USA consented to participation and completed the survey. This survey revealed themes of advanced wheelchair design, assistive robotics and intelligent systems, human machine interfaces and smart device applications. It also outlined priorities for researchers to provide continuing education to clients and providers. These themes will be used to develop research and development priorities. Implications for Rehabilitation: • Research in advanced wheelchair design is needed to facilitate travel and environmental access with wheelchairs and to develop alternative power sources for wheelchairs. • New assistive robotics and intelligent systems are needed to help wheelchairs overcome obstacles or self-adjust, assist wheelchair navigation in the community, assist caregivers and transfers, and aid ambulation. • Innovations in human machine interfaces may help advance the control of mobility devices and robots with the brain, eye movements, facial gesture recognition or other systems. • Development of new smart devices is needed for better control of the environment, monitoring activity and promoting healthy behaviours.
Graphene-Based Interfaces Do Not Alter Target Nerve Cells.
Fabbro, Alessandra; Scaini, Denis; León, Verónica; Vázquez, Ester; Cellot, Giada; Privitera, Giulia; Lombardi, Lucia; Torrisi, Felice; Tomarchio, Flavia; Bonaccorso, Francesco; Bosi, Susanna; Ferrari, Andrea C; Ballerini, Laura; Prato, Maurizio
2016-01-26
Neural interfaces rely on the ability of electrodes to transduce stimuli into electrical patterns delivered to the brain. In addition to sensitivity to the stimuli, stability in the operating conditions and efficient charge transfer to neurons, the electrodes should not alter the physiological properties of the target tissue. Graphene is emerging as a promising material for neuro-interfacing applications, given its outstanding physico-chemical properties. Here, we use graphene-based substrates (GBSs) to interface neuronal growth. We test our GBSs on brain cell cultures by measuring the functional and synaptic integrity of the emerging neuronal networks. We show that GBSs are permissive interfaces, even when uncoated by cell adhesion layers, retaining unaltered neuronal signaling properties, and are thus suitable for carbon-based neural prosthetic devices.
New generation emerging technologies for neurorehabilitation and motor assistance.
Frisoli, Antonio; Solazzi, Massimiliano; Loconsole, Claudio; Barsotti, Michele
2016-12-01
This paper illustrates the application of emerging technologies and human-machine interfaces to the neurorehabilitation and motor assistance fields. The contribution focuses on wearable technologies, and in particular on robotic exoskeletons, as tools for increasing freedom to move and for performing Activities of Daily Living (ADLs). This would result in a substantial improvement in quality of life, also in terms of improved function of internal organs and general health status. Furthermore, the integration of these robotic systems with advanced bio-signal-driven human-machine interfaces can increase the degree of patient participation in robotic training, allowing the system to recognize the user's intention and to assist the patient in rehabilitation tasks, which represents a fundamental aspect for eliciting motor learning.
State of the art in nuclear telerobotics: focus on the man/machine connection
NASA Astrophysics Data System (ADS)
Greaves, Amna E.
1995-12-01
The interface between the human controller and remotely operated device is a crux of telerobotic investigation today. This human-to-machine connection is the means by which we communicate our commands to the device, as well as the medium for decision-critical feedback to the operator. The amount of information transferred through the user interface is growing. This can be seen as a direct result of our need to support added complexities, as well as a rapidly expanding domain of applications. A user interface, or UI, is therefore subject to increasing demands to present information in a meaningful manner to the user. Virtual reality, and multi degree-of-freedom input devices lend us the ability to augment the man/machine interface, and handle burgeoning amounts of data in a more intuitive and anthropomorphically correct manner. Along with the aid of 3-D input and output devices, there are several visual tools that can be employed as part of a graphical UI that enhance and accelerate our comprehension of the data being presented. Thus an advanced UI that features these improvements would reduce the amount of fatigue on the teleoperator, increase his level of safety, facilitate learning, augment his control, and potentially reduce task time. This paper investigates the cutting edge concepts and enhancements that lead to the next generation of telerobotic interface systems.
Aricò, P; Borghini, G; Di Flumeri, G; Colosimo, A; Pozzi, S; Babiloni, F
2016-01-01
Over the last decades, the passive brain-computer interface (p-BCI) has been a fast-growing concept in the neuroscience field. p-BCI systems allow the human-machine interaction (HMI) in operational environments to be improved by using the covert brain activity (e.g., mental workload) of the operator. However, p-BCI technology can suffer from practical issues when used outside the laboratory. In particular, one of the most important limitations is the need to recalibrate the p-BCI system before each use to avoid a significant reduction of its reliability in detecting the considered mental states. The objective of the proposed study was to provide an example of p-BCIs used to evaluate users' mental workload in a real operational environment. For this purpose, through the facilities provided by the École Nationale de l'Aviation Civile of Toulouse (France), the cerebral activity of 12 professional air traffic control officers (ATCOs) was recorded while they performed highly realistic air traffic management scenarios. By analyzing the ATCOs' brain activity (electroencephalographic signal, EEG) and the subjective workload perception (instantaneous self-assessment) provided by both the examined ATCOs and external air traffic control experts, it was possible to estimate and evaluate the variation of the mental workload under which the controllers were operating. The results showed (i) a highly significant correlation between the neurophysiological and the subjective workload assessments, and (ii) a high reliability over time (up to a month) of the proposed algorithm, which was also able to maintain high discrimination accuracies using a low number of EEG electrodes (~3 EEG channels). In conclusion, the proposed methodology demonstrated the suitability of p-BCI systems in operational environments and the advantages of neurophysiological measures with respect to subjective ones. © 2016 Elsevier B.V. All rights reserved.
A brain-machine interface for control of medically-induced coma.
Shanechi, Maryam M; Chemali, Jessica J; Liberman, Max; Solt, Ken; Brown, Emery N
2013-10-01
Medically-induced coma is a drug-induced state of profound brain inactivation and unconsciousness used to treat refractory intracranial hypertension and to manage treatment-resistant epilepsy. The state of coma is achieved by continually monitoring the patient's brain activity with an electroencephalogram (EEG) and manually titrating the anesthetic infusion rate to maintain a specified level of burst suppression, an EEG marker of profound brain inactivation in which bursts of electrical activity alternate with periods of quiescence or suppression. The medical coma is often required for several days. A more rational approach would be to implement a brain-machine interface (BMI) that monitors the EEG and adjusts the anesthetic infusion rate in real time to maintain the specified target level of burst suppression. We used a stochastic control framework to develop a BMI to control medically-induced coma in a rodent model. The BMI controlled an EEG-guided closed-loop infusion of the anesthetic propofol to maintain precisely specified dynamic target levels of burst suppression. We used as the control signal the burst suppression probability (BSP), the brain's instantaneous probability of being in the suppressed state. We characterized the EEG response to propofol using a two-dimensional linear compartment model and estimated the model parameters specific to each animal prior to initiating control. We derived a recursive Bayesian binary filter algorithm to compute the BSP from the EEG and controllers using a linear-quadratic-regulator and a model-predictive control strategy. Both controllers used the estimated BSP as feedback. The BMI accurately controlled burst suppression in individual rodents across dynamic target trajectories, and enabled prompt transitions between target levels while avoiding both undershoot and overshoot. The median performance error for the BMI was 3.6%, the median bias was -1.4% and the overall posterior probability of reliable control was 1 (95% Bayesian credibility interval of [0.87, 1.0]). A BMI can maintain reliable and accurate real-time control of medically-induced coma in a rodent model suggesting this strategy could be applied in patient care.
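A heavily simplified stand-in for the closed-loop idea above: the published system fits a two-compartment model per animal, estimates BSP with a recursive Bayesian binary filter, and uses LQR/MPC controllers, whereas the sketch below merely thresholds short EEG windows into suppression events, smooths them into a BSP-like estimate, and adjusts the infusion rate with a proportional-integral rule. Every constant is an assumption.

```python
import numpy as np

def suppression_events(eeg, fs, threshold_uv=5.0, win_s=0.1):
    """Binary suppression series: 1 where the short-window EEG amplitude is below threshold."""
    win = max(1, int(win_s * fs))
    n = eeg.size // win
    rms = np.sqrt((eeg[:n * win].reshape(n, win) ** 2).mean(axis=1))
    return (rms < threshold_uv).astype(float)

def bsp_estimate(events, alpha=0.05):
    """Exponentially smoothed burst-suppression probability (stand-in for the Bayesian filter)."""
    bsp, p = np.zeros_like(events), 0.0
    for i, s in enumerate(events):
        p = (1 - alpha) * p + alpha * s
        bsp[i] = p
    return bsp

def pi_infusion_controller(bsp, target, kp=2.0, ki=0.2, dt=0.1, baseline=1.0):
    """Proportional-integral adjustment of the anesthetic infusion rate toward a BSP target."""
    rates, integral = [], 0.0
    for p, tgt in zip(bsp, target):
        err = tgt - p
        integral += err * dt
        rates.append(max(0.0, baseline + kp * err + ki * integral))
    return np.array(rates)

# Toy run against a dynamic target trajectory.
rng = np.random.default_rng(4)
eeg = 20 * rng.standard_normal(60 * 250)          # 60 s of synthetic EEG at 250 Hz
bsp = bsp_estimate(suppression_events(eeg, fs=250))
target = np.where(np.arange(bsp.size) < bsp.size // 2, 0.4, 0.7)
print(pi_infusion_controller(bsp, target)[:5])
```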
Ji, Hongfei; Li, Jie; Lu, Rongrong; Gu, Rong; Cao, Lei; Gong, Xiaoliang
2016-01-01
Electroencephalogram- (EEG-) based brain-computer interface (BCI) systems usually utilize one type of changes in the dynamics of brain oscillations for control, such as event-related desynchronization/synchronization (ERD/ERS), steady state visual evoked potential (SSVEP), and P300 evoked potentials. There is a recent trend to detect more than one of these signals in one system to create a hybrid BCI. However, in this case, EEG data were always divided into groups and analyzed by the separate processing procedures. As a result, the interactive effects were ignored when different types of BCI tasks were executed simultaneously. In this work, we propose an improved tensor based multiclass multimodal scheme especially for hybrid BCI, in which EEG signals are denoted as multiway tensors, a nonredundant rank-one tensor decomposition model is proposed to obtain nonredundant tensor components, a weighted fisher criterion is designed to select multimodal discriminative patterns without ignoring the interactive effects, and support vector machine (SVM) is extended to multiclass classification. Experiment results suggest that the proposed scheme can not only identify the different changes in the dynamics of brain oscillations induced by different types of tasks but also capture the interactive effects of simultaneous tasks properly. Therefore, it has great potential use for hybrid BCI. PMID:26880873
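The nonredundant rank-one decomposition and weighted Fisher criterion above are specific to the proposed scheme; as a generic illustration of the first step, the sketch below extracts a single rank-one component from a (trial x channel x time) EEG tensor with alternating higher-order power iterations. The tensor dimensions and random data are assumptions.

```python
import numpy as np

def rank_one_component(tensor, n_iter=50, seed=0):
    """Best rank-one approximation a x b x c of a 3-way tensor via alternating updates."""
    rng = np.random.default_rng(seed)
    a = rng.standard_normal(tensor.shape[0])
    b = rng.standard_normal(tensor.shape[1])
    c = rng.standard_normal(tensor.shape[2])
    for _ in range(n_iter):
        a = np.einsum('ijk,j,k->i', tensor, b, c)
        a /= np.linalg.norm(a)
        b = np.einsum('ijk,i,k->j', tensor, a, c)
        b /= np.linalg.norm(b)
        c = np.einsum('ijk,i,j->k', tensor, a, b)
        c /= np.linalg.norm(c)
    weight = np.einsum('ijk,i,j,k->', tensor, a, b, c)
    return weight, a, b, c

# Toy EEG tensor: 40 trials x 16 channels x 200 time samples.
rng = np.random.default_rng(5)
eeg_tensor = rng.standard_normal((40, 16, 200))
w, trial_mode, channel_mode, time_mode = rank_one_component(eeg_tensor)
# The trial-mode scores could then be fed to a classifier (e.g., an SVM) as features.
print(w, trial_mode.shape, channel_mode.shape, time_mode.shape)
```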
The Strength of the Metal-Aluminum Oxide Interface
NASA Technical Reports Server (NTRS)
Pepper, S. V.
1984-01-01
The strength of the interface between metals and aluminum oxide is an important factor in the successful operation of devices found throughout modern technology. One finds the interface in machine tools, jet engines, and microelectronic integrated circuits. The interface, however, should be strong or weak depending on the application. These diverse technological demands have led to some general ideas concerning the origin of the interfacial strength and have stimulated fundamental research on the problem. The present status of our understanding of the source of the strength of the metal-aluminum oxide interface in terms of interatomic bonds is reviewed. Some future directions for research are suggested.
A square root ensemble Kalman filter application to a motor-imagery brain-computer interface
Kamrunnahar, M.; Schiff, S. J.
2017-01-01
We here investigated a non-linear ensemble Kalman filter (SPKF) application to a motor imagery brain computer interface (BCI). A square root central difference Kalman filter (SR-CDKF) was used as an approach for brain state estimation in motor imagery task performance, using scalp electroencephalography (EEG) signals. Healthy human subjects imagined left vs. right hand movements and tongue vs. bilateral toe movements while scalp EEG signals were recorded. Offline data analysis was conducted for training the model as well as for decoding the imagery movements. Preliminary results indicate the feasibility of this approach with a decoding accuracy of 78%–90% for the hand movements and 70%–90% for the tongue-toes movements. Ongoing research includes online BCI applications of this approach as well as combined state and parameter estimation using this algorithm with different system dynamic models. PMID:22255799
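The study uses a square root central difference Kalman filter; as a much simpler stand-in that shows the same predict/update structure for brain-state estimation, the sketch below runs a plain linear Kalman filter on simulated EEG-derived features. The state-space matrices, noise levels and data are assumptions.

```python
import numpy as np

def kalman_filter(observations, A, C, Q, R, x0, P0):
    """Plain linear Kalman filter returning the filtered state estimates."""
    x, P = x0.copy(), P0.copy()
    estimates = []
    for y in observations:
        # Predict step
        x = A @ x
        P = A @ P @ A.T + Q
        # Update step
        S = C @ P @ C.T + R
        K = P @ C.T @ np.linalg.inv(S)
        x = x + K @ (y - C @ x)
        P = (np.eye(P.shape[0]) - K @ C) @ P
        estimates.append(x.copy())
    return np.array(estimates)

# Toy setup: a 2-D latent "imagery state" observed through 8 noisy EEG-derived features.
rng = np.random.default_rng(6)
A = np.eye(2)                              # random-walk latent dynamics (assumption)
C = rng.standard_normal((8, 2))            # feature loading matrix (assumption)
Q, R = 0.01 * np.eye(2), 0.5 * np.eye(8)
true_x = np.array([1.0, -1.0])
obs = np.array([C @ true_x + rng.multivariate_normal(np.zeros(8), R) for _ in range(100)])
est = kalman_filter(obs, A, C, Q, R, x0=np.zeros(2), P0=np.eye(2))
print(est[-1])  # should be close to the latent state [1, -1]
```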
Nanostructures: a platform for brain repair and augmentation
Vidu, Ruxandra; Rahman, Masoud; Mahmoudi, Morteza; Enachescu, Marius; Poteca, Teodor D.; Opris, Ioan
2014-01-01
Nanoscale structures have been at the core of research efforts dealing with integration of nanotechnology into novel electronic devices for the last decade. Because the size of nanomaterials is of the same order of magnitude as biomolecules, these materials are valuable tools for nanoscale manipulation in a broad range of neurobiological systems. For instance, the unique electrical and optical properties of nanowires, nanotubes, and nanocables with vertical orientation, assembled in nanoscale arrays, have been used in many device applications such as sensors that hold the potential to augment brain functions. However, the challenge in creating nanowire/nanotube or nanocable array-based sensors lies in making individual electrical connections fitting both the features of the brain and of the nanostructures. This review discusses two of the most important applications of nanostructures in neuroscience. First, the current approaches to create nanowire and nanocable structures are reviewed to critically evaluate their potential for developing unique nanostructure-based sensors to improve recording and device performance, to reduce noise and the detrimental effect of the interface on the tissue. Second, the implementation of nanomaterials in neurobiological and medical applications will be considered from the brain augmentation perspective. Novel applications for diagnosis and treatment of brain diseases such as multiple sclerosis, meningitis, stroke, epilepsy, Alzheimer's disease, schizophrenia, and autism will be considered. Because the blood-brain barrier (BBB) has a defensive mechanism preventing nanomaterials from reaching the brain, various strategies to help them pass through the BBB will be discussed. Finally, the implementation of nanomaterials in neurobiological applications is addressed from the brain repair/augmentation perspective. These nanostructures at the interface between nanotechnology and neuroscience will play a pivotal role not only in addressing the multitude of brain disorders but also in repairing or augmenting brain functions. PMID:24999319
Implantable neurotechnologies: a review of integrated circuit neural amplifiers.
Ng, Kian Ann; Greenwald, Elliot; Xu, Yong Ping; Thakor, Nitish V
2016-01-01
Neural signal recording is critical in modern day neuroscience research and emerging neural prosthesis programs. Neural recording requires the use of precise, low-noise amplifier systems to acquire and condition the weak neural signals that are transduced through electrode interfaces. Neural amplifiers and amplifier-based systems are available commercially or can be designed in-house and fabricated using integrated circuit (IC) technologies, resulting in very large-scale integration or application-specific integrated circuit solutions. IC-based neural amplifiers are now used to acquire untethered/portable neural recordings, as they meet the requirements of a miniaturized form factor, light weight and low power consumption. Furthermore, such miniaturized and low-power IC neural amplifiers are now being used in emerging implantable neural prosthesis technologies. This review focuses on neural amplifier-based devices and is presented in two interrelated parts. First, neural signal recording is reviewed, and practical challenges are highlighted. Current amplifier designs with increased functionality and performance and without penalties in chip size and power are featured. Second, applications of IC-based neural amplifiers in basic science experiments (e.g., cortical studies using animal models), neural prostheses (e.g., brain/nerve machine interfaces) and treatment of neuronal diseases (e.g., DBS for treatment of epilepsy) are highlighted. The review concludes with future outlooks of this technology and important challenges with regard to neural signal amplification.
Combining Brain–Computer Interfaces and Assistive Technologies: State-of-the-Art and Challenges
Millán, J. d. R.; Rupp, R.; Müller-Putz, G. R.; Murray-Smith, R.; Giugliemma, C.; Tangermann, M.; Vidaurre, C.; Cincotti, F.; Kübler, A.; Leeb, R.; Neuper, C.; Müller, K.-R.; Mattia, D.
2010-01-01
In recent years, new research has brought the field of electroencephalogram (EEG)-based brain–computer interfacing (BCI) out of its infancy and into a phase of relative maturity through many demonstrated prototypes such as brain-controlled wheelchairs, keyboards, and computer games. With this proof-of-concept phase in the past, the time is now ripe to focus on the development of practical BCI technologies that can be brought out of the lab and into real-world applications. In particular, we focus on the prospect of improving the lives of countless disabled individuals through a combination of BCI technology with existing assistive technologies (AT). In pursuit of more practical BCIs for use outside of the lab, in this paper, we identify four application areas where disabled individuals could greatly benefit from advancements in BCI technology, namely, “Communication and Control”, “Motor Substitution”, “Entertainment”, and “Motor Recovery”. We review the current state of the art and possible future developments, while discussing the main research issues in these four areas. In particular, we expect the most progress in the development of technologies such as hybrid BCI architectures, user–machine adaptation algorithms, the exploitation of users’ mental states for BCI reliability and confidence measures, the incorporation of principles in human–computer interaction (HCI) to improve BCI usability, and the development of novel BCI technology including better EEG devices. PMID:20877434
Tsui, Chun Sing Louis; Gan, John Q; Roberts, Stephen J
2009-03-01
Due to the non-stationarity of EEG signals, online training and adaptation are essential to EEG-based brain-computer interface (BCI) systems. Self-paced BCIs offer more natural human-machine interaction than synchronous BCIs, but it is a great challenge to train and adapt a self-paced BCI online because the user's control intention and timing are usually unknown. This paper proposes a novel motor imagery based self-paced BCI paradigm for controlling a simulated robot in a specifically designed environment that provides the user's control intention and timing during online experiments, so that online training and adaptation of the motor imagery based self-paced BCI can be effectively investigated. We demonstrate the usefulness of the proposed paradigm with an extended Kalman filter based method to adapt the BCI classifier parameters, with experimental results of online self-paced BCI training with four subjects.
Brain-computer interface using P300 and virtual reality: a gaming approach for treating ADHD.
Rohani, Darius Adam; Sorensen, Helge B D; Puthusserypady, Sadasivan
2014-01-01
This paper presents a novel brain-computer interface (BCI) system aimed at the rehabilitation of attention-deficit/hyperactivity disorder in children. It uses the P300 potential in a series of feedback games to improve the subjects' attention. We applied a support vector machine (SVM) using temporal and template-based features to detect these P300 responses. In an experimental setup with five subjects, an average error below 30% was achieved. To make it more challenging, the BCI system has been embedded inside an immersive 3D virtual reality (VR) classroom with simulated distractions, created by combining a low-cost infrared camera and an "off-axis perspective projection" algorithm. The system is intended for children, operating with only four electrodes and a non-intrusive VR setting. With these promising results, and considering the simplicity of the scheme, we hope to encourage future studies to adapt the techniques presented in this study.
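A minimal sketch of the classification step described above: epochs around stimulus onsets are reduced to temporal features and fed to an SVM that separates target (P300-bearing) from non-target epochs. The epoch length, downsampling, kernel choice and synthetic data are assumptions, not the paper's exact features.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

FS = 256  # Hz, illustrative

def temporal_features(epochs, n_points=20):
    """Downsample each (channels x samples) epoch into a flat temporal feature vector."""
    n_epochs, n_channels, n_samples = epochs.shape
    idx = np.linspace(0, n_samples - 1, n_points).astype(int)
    return epochs[:, :, idx].reshape(n_epochs, -1)

# Toy dataset: 4 channels, 0.8 s epochs; "target" epochs carry a small P300-like bump.
rng = np.random.default_rng(7)
n_epochs = 400
epochs = rng.standard_normal((n_epochs, 4, int(0.8 * FS)))
labels = rng.integers(0, 2, n_epochs)
t = np.arange(int(0.8 * FS)) / FS
p300 = 0.8 * np.exp(-((t - 0.3) ** 2) / (2 * 0.05 ** 2))   # bump near 300 ms
epochs[labels == 1] += p300

clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
print(cross_val_score(clf, temporal_features(epochs), labels, cv=5).mean())
```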
fMRI Brain-Computer Interface: A Tool for Neuroscientific Research and Treatment
Sitaram, Ranganatha; Caria, Andrea; Veit, Ralf; Gaber, Tilman; Rota, Giuseppina; Kuebler, Andrea; Birbaumer, Niels
2007-01-01
Brain-computer interfaces based on functional magnetic resonance imaging (fMRI-BCI) allow volitional control of anatomically specific regions of the brain. Technological advancement in higher field MRI scanners, fast data acquisition sequences, preprocessing algorithms, and robust statistical analysis are anticipated to make fMRI-BCI more widely available and applicable. This noninvasive technique could potentially complement the traditional neuroscientific experimental methods by varying the activity of the neural substrates of a region of interest as an independent variable to study its effects on behavior. If the neurobiological basis of a disorder (e.g., chronic pain, motor diseases, psychopathy, social phobia, depression) is known in terms of abnormal activity in certain regions of the brain, fMRI-BCI can be targeted to modify activity in those regions with high specificity for treatment. In this paper, we review recent results of the application of fMRI-BCI to neuroscientific research and psychophysiological treatment. PMID:18274615
Zeng, Hong; Wang, Yanxin; Wu, Changcheng; Song, Aiguo; Liu, Jia; Ji, Peng; Xu, Baoguo; Zhu, Lifeng; Li, Huijun; Wen, Pengcheng
2017-01-01
A brain-machine interface (BMI) can be used to control a robotic arm to assist people with paralysis in performing activities of daily living. However, it is still a complex task for BMI users to control the process of grasping and lifting objects with the robotic arm, and it is hard to achieve high efficiency and accuracy even after extensive training. One important reason is the lack of sufficient feedback information for the user to perform closed-loop control. In this study, we proposed a method of augmented reality (AR) guiding assistance to provide enhanced visual feedback to the user for closed-loop control with a hybrid Gaze-BMI, which combines an electroencephalography (EEG) signal based BMI and eye tracking for intuitive and effective control of the robotic arm. Experiments on object manipulation tasks while avoiding an obstacle in the workspace were designed to evaluate the performance of our method for controlling the robotic arm. According to the experimental results obtained from eight subjects, the advantages of the proposed closed-loop system (with AR feedback) over the open-loop system (with visual inspection only) have been verified. The number of trigger commands used for controlling the robotic arm to grasp and lift the objects with AR feedback was reduced significantly, and the height gaps of the gripper in the lifting process decreased by more than 50% compared to trials with normal visual inspection only. The results reveal that the hybrid Gaze-BMI user can benefit from the information provided by the AR interface, improving efficiency and reducing cognitive load during the grasping and lifting processes. PMID:29163123
Assessment of brain-machine interfaces from the perspective of people with paralysis.
Blabe, Christine H; Gilja, Vikash; Chestek, Cindy A; Shenoy, Krishna V; Anderson, Kim D; Henderson, Jaimie M
2015-08-01
One of the main goals of brain-machine interface (BMI) research is to restore function to people with paralysis. Currently, multiple BMI design features are being investigated, based on various input modalities (externally applied and surgically implantable sensors) and output modalities (e.g. control of computer systems, prosthetic arms, and functional electrical stimulation systems). While these technologies may eventually provide some level of benefit, they each carry associated burdens for end-users. We sought to assess the attitudes of people with paralysis toward using various technologies to achieve particular benefits, given the burdens currently associated with the use of each system. We designed and distributed a technology survey to determine the level of benefit necessary for people with tetraplegia due to spinal cord injury to consider using different technologies, given the burdens currently associated with them. The survey queried user preferences for 8 BMI technologies including electroencephalography, electrocorticography, and intracortical microelectrode arrays, as well as a commercially available eye tracking system for comparison. Participants used a 5-point scale to rate their likelihood to adopt these technologies for 13 potential control capabilities. Survey respondents were most likely to adopt BMI technology to restore some of their natural upper extremity function, including restoration of hand grasp and/or some degree of natural arm movement. High speed typing and control of a fast robot arm were also of interest to this population. Surgically implanted wireless technologies were twice as 'likely' to be adopted as their wired equivalents. Assessing end-user preferences is an essential prerequisite to the design and implementation of any assistive technology. The results of this survey suggest that people with tetraplegia would adopt an unobtrusive, autonomous BMI system for both restoration of upper extremity function and control of external devices such as communication interfaces.
Kumar, Deepesh; Das, Abhijit; Lahiri, Uttama; Dutta, Anirban
2016-04-12
A stroke occurs when an artery carrying blood from the heart to an area of the brain bursts or a clot obstructs blood flow to the brain, thereby preventing delivery of oxygen and nutrients. About half of stroke survivors are left with some degree of disability. Innovative methodologies for restorative neurorehabilitation are urgently required to reduce long-term disability. The ability of the nervous system to reorganize its structure, function and connections in response to intrinsic or extrinsic stimuli is called neuroplasticity. Neuroplasticity is involved in post-stroke functional disturbances, but also in rehabilitation. Beneficial neuroplastic changes may be facilitated with non-invasive electrotherapy, such as neuromuscular electrical stimulation (NMES) and sensory electrical stimulation (SES). NMES involves coordinated electrical stimulation of motor nerves and muscles to activate them with continuous short pulses of electrical current, while SES involves stimulation of sensory nerves with electrical current, resulting in sensations that vary from barely perceivable to highly unpleasant. Here, active cortical participation in rehabilitation procedures may be facilitated by driving the non-invasive electrotherapy with biosignals (electromyogram (EMG), electroencephalogram (EEG), electrooculogram (EOG)) that represent simultaneous active perception and volitional effort. To achieve this in a resource-poor setting, e.g., in low- and middle-income countries, we present a low-cost human-machine interface (HMI) that leverages recent advances in off-the-shelf video game sensor technology. In this paper, we discuss the open-source software interface that integrates low-cost off-the-shelf sensors for visual-auditory biofeedback with non-invasive electrotherapy to assist postural control during balance rehabilitation. We demonstrate the proof of concept on healthy volunteers.
Brain-Computer Interfaces Using Sensorimotor Rhythms: Current State and Future Perspectives
Yuan, Han; He, Bin
2014-01-01
Many studies over the past two decades have shown that people can use brain signals to convey their intent to a computer using brain-computer interfaces (BCIs). BCI systems extract specific features of brain activity and translate them into control signals that drive an output. Recently, a category of BCIs that are built on the rhythmic activity recorded over the sensorimotor cortex, i.e. the sensorimotor rhythm (SMR), has attracted considerable attention among the BCIs that use noninvasive neural recordings, e.g. electroencephalography (EEG), and have demonstrated the capability of multi-dimensional prosthesis control. This article reviews the current state and future perspectives of SMR-based BCI and its clinical applications, in particular focusing on the EEG SMR. The characteristic features of SMR from the human brain are described and their underlying neural sources are discussed. The functional components of SMR-based BCI, together with its current clinical applications are reviewed. Lastly, limitations of SMR-BCIs and future outlooks are also discussed. PMID:24759276
A Novel Wearable Forehead EOG Measurement System for Human Computer Interfaces.
Heo, Jeong; Yoon, Heenam; Park, Kwang Suk
2017-06-23
Amyotrophic lateral sclerosis (ALS) patients whose voluntary muscles are paralyzed commonly communicate with the outside world using eye movement. There have been many efforts to support this method of communication by tracking or detecting eye movement. An electrooculogram (EOG), an electro-physiological signal, is generated by eye movements and can be measured with electrodes placed around the eye. In this study, we proposed a new practical electrode position on the forehead to measure EOG signals, and we developed a wearable forehead EOG measurement system for use in Human Computer/Machine interfaces (HCIs/HMIs). Four electrodes, including the ground electrode, were placed on the forehead. The two channels were arranged vertically and horizontally, sharing a positive electrode. Additionally, a real-time eye movement classification algorithm was developed based on the characteristics of the forehead EOG. Three applications were employed to evaluate the proposed system: a virtual keyboard using a modified Bremen BCI speller and an automatic sequential row-column scanner, and a drivable power wheelchair. The mean typing speeds of the modified Bremen brain-computer interface (BCI) speller and automatic row-column scanner were 10.81 and 7.74 letters per minute, and the mean classification accuracies were 91.25% and 95.12%, respectively. In the power wheelchair demonstration, the user drove the wheelchair through an 8-shape course without collision with obstacles.
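A minimal sketch of threshold-based eye-movement classification from the two forehead EOG channels (horizontal and vertical); the thresholds, sign conventions and blink handling are assumptions rather than the authors' algorithm.

```python
import numpy as np

def classify_eye_movement(h_eog, v_eog, threshold_uv=100.0):
    """Classify one EOG segment into left/right/up/down/blink/none from peak deflections."""
    h_peak = h_eog[np.argmax(np.abs(h_eog))]
    v_peak = v_eog[np.argmax(np.abs(v_eog))]
    if abs(h_peak) < threshold_uv and abs(v_peak) < threshold_uv:
        return "none"
    if abs(v_peak) > 3 * abs(h_peak) and v_peak > 0:
        return "blink_or_up"          # a real system would separate blinks by duration
    if abs(h_peak) >= abs(v_peak):
        return "right" if h_peak > 0 else "left"
    return "up" if v_peak > 0 else "down"

# Toy usage: a rightward saccade produces a positive horizontal deflection (sign is an assumption).
t = np.linspace(0, 1, 500)
h = 250 * np.exp(-((t - 0.5) ** 2) / 0.005)
v = 10 * np.random.default_rng(8).standard_normal(t.size)
print(classify_eye_movement(h, v))  # -> "right"
```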
DOE Office of Scientific and Technical Information (OSTI.GOV)
Price, Karl D., E-mail: karl.price@sickkids.ca
Purpose: Current treatment of intraventricular hemorrhage (IVH) involves cerebral shunt placement or an invasive brain surgery. Magnetic resonance-guided focused ultrasound (MRgFUS) applied to the brains of pediatric patients presents an opportunity to treat IVH in a noninvasive manner, termed “incision-less surgery.” Current clinical and research focused ultrasound systems lack the capability to perform neonatal transcranial surgeries due to either range of motion or dexterity requirements. A novel robotic system is proposed to position a focused ultrasound transducer accurately above the head of a neonatal patient inside an MRI machine to deliver the therapy. Methods: A clinical Philips Sonalleve MRgFUS system was expanded to perform transcranial treatment. A five degree-of-freedom MR-conditional robot was designed and manufactured using MR compatible materials. The robot electronics and control were integrated into existing Philips electronics and software interfaces. The user commands the position of the robot with a graphical user interface, and is presented with real-time MR imaging of the patient throughout the surgery. The robot is validated through a series of experiments that characterize accuracy, signal-to-noise ratio degeneration of an MR image as a result of the robot, MR imaging artifacts generated by the robot, and the robot’s ability to operate in a representative surgical environment inside an MR machine. Results: Experimental results show the robot responds reliably within an MR environment, has achieved 0.59 ± 0.25 mm accuracy, does not produce severe MR-imaging artifacts, has a workspace providing sufficient coverage of a neonatal brain, and can manipulate a 5 kg payload. A full system demonstration shows these characteristics apply in an application environment. Conclusions: This paper presents a comprehensive look at the process of designing and validating a new robot from concept to implementation for use in an MR environment. An MR conditional robot has been designed and manufactured to design specifications. The system has demonstrated its feasibility as a platform for MRgFUS interventions for neonatal patients. The success of the system in experimental trials suggests that it is ready to be used for validation of the transcranial intervention in animal studies.
Chang, Hochan; Kim, Sungwoong; Jin, Sumin; Lee, Seung-Woo; Yang, Gil-Tae; Lee, Ki-Young; Yi, Hyunjung
2018-01-10
Flexible piezoresistive sensors have huge potential for health monitoring, human-machine interfaces, prosthetic limbs, and intelligent robotics. A variety of nanomaterials and structural schemes have been proposed for realizing ultrasensitive flexible piezoresistive sensors. However, despite the success of recent efforts, high sensitivity confined to narrower pressure ranges and/or challenging adhesion and stability issues still potentially limit their broad application. Herein, we introduce a biomaterial-based scheme for the development of flexible pressure sensors that are ultrasensitive (resistance change of 5 orders of magnitude) over a broad pressure range of 0.1-100 kPa, promptly responsive (20 ms), and yet highly stable. We show that employing biomaterial-incorporated conductive networks of single-walled carbon nanotubes as interfacial layers of contact-based resistive pressure sensors significantly enhances the piezoresistive response via effective modulation of the interlayer resistance and provides stable interfaces for the pressure sensors. The developed flexible sensor is capable of real-time monitoring of wrist pulse waves under external medium pressure levels and of providing pressure profiles applied by a thumb and a forefinger during object manipulation at a low voltage (1 V) and power consumption (<12 μW). This work provides new insight into material candidates and approaches for the development of wearable health-monitoring and human-machine interfaces.
Computing Arm Movements with a Monkey Brainet.
Ramakrishnan, Arjun; Ifft, Peter J; Pais-Vieira, Miguel; Byun, Yoon Woo; Zhuang, Katie Z; Lebedev, Mikhail A; Nicolelis, Miguel A L
2015-07-09
Traditionally, brain-machine interfaces (BMIs) extract motor commands from a single brain to control the movements of artificial devices. Here, we introduce a Brainet that utilizes very-large-scale brain activity (VLSBA) from two (B2) or three (B3) nonhuman primates to engage in a common motor behaviour. A B2 generated 2D movements of an avatar arm where each monkey contributed equally to X and Y coordinates; or one monkey fully controlled the X-coordinate and the other controlled the Y-coordinate. A B3 produced arm movements in 3D space, while each monkey generated movements in 2D subspaces (X-Y, Y-Z, or X-Z). With long-term training we observed increased coordination of behavior, increased correlations in neuronal activity between different brains, and modifications to neuronal representation of the motor plan. Overall, performance of the Brainet improved owing to collective monkey behaviour. These results suggest that primate brains can be integrated into a Brainet, which self-adapts to achieve a common motor goal.
Downey, John E; Weiss, Jeffrey M; Muelling, Katharina; Venkatraman, Arun; Valois, Jean-Sebastien; Hebert, Martial; Bagnell, J Andrew; Schwartz, Andrew B; Collinger, Jennifer L
2016-03-18
Recent studies have shown that brain-machine interfaces (BMIs) offer great potential for restoring upper limb function. However, grasping objects is a complicated task and the signals extracted from the brain may not always be capable of driving these movements reliably. Vision-guided robotic assistance is one possible way to improve BMI performance. We describe a method of shared control where the user controls a prosthetic arm using a BMI and receives assistance with positioning the hand when it approaches an object. Two human subjects with tetraplegia used a robotic arm to complete object transport tasks with and without shared control. The shared control system was designed to provide a balance between BMI-derived intention and computer assistance. An autonomous robotic grasping system identified and tracked objects and defined stable grasp positions for these objects. The system identified when the user intended to interact with an object based on the BMI-controlled movements of the robotic arm. Using shared control, BMI controlled movements and autonomous grasping commands were blended to ensure secure grasps. Both subjects were more successful on object transfer tasks when using shared control compared to BMI control alone. Movements made using shared control were more accurate, more efficient, and less difficult. One participant attempted a task with multiple objects and successfully lifted one of two closely spaced objects in 92% of trials, demonstrating the potential for users to accurately execute their intention while using shared control. Integration of BMI control with vision-guided robotic assistance led to improved performance on object transfer tasks. Providing assistance while maintaining generalizability will make BMI systems more attractive to potential users. NCT01364480 and NCT01894802.
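A compact sketch of the blending idea described above: when the BMI-controlled hand comes within reach of an object with a known stable grasp pose, the user's velocity command is mixed with an autonomous approach vector, with assistance growing as the hand gets closer. The proximity-based weighting and thresholds are assumptions.

```python
import numpy as np

def blend_command(bmi_velocity, hand_pos, grasp_pos, assist_radius=0.15):
    """Blend a BMI velocity command with an autonomous approach command near the object."""
    to_grasp = grasp_pos - hand_pos
    dist = np.linalg.norm(to_grasp)
    if dist > assist_radius:
        return bmi_velocity                       # far from the object: pure BMI control
    auto_velocity = to_grasp / (dist + 1e-9) * np.linalg.norm(bmi_velocity)
    w = 1.0 - dist / assist_radius                # assistance grows as the hand gets closer
    return (1 - w) * bmi_velocity + w * auto_velocity

# Toy usage: a noisy user command near a grasp target gets pulled toward it.
hand = np.array([0.02, 0.05, 0.00])
grasp = np.zeros(3)
user_cmd = np.array([0.05, -0.01, 0.02])
print(blend_command(user_cmd, hand, grasp))
```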
Bulea, Thomas C.; Kilicarslan, Atilla; Ozdemir, Recep; Paloski, William H.; Contreras-Vidal, Jose L.
2013-01-01
Recent studies support the involvement of supraspinal networks in the control of bipedal human walking. Part of this evidence encompasses studies, including our previous work, demonstrating that gait kinematics and limb coordination during treadmill walking can be inferred from the scalp electroencephalogram (EEG) with reasonably high decoding accuracies. These results provide impetus for the development of non-invasive brain-machine interface (BMI) systems for use in restoration and/or augmentation of gait, a primary goal of rehabilitation research. To date, studies examining EEG decoding of activity during gait have been limited to treadmill walking in a controlled environment. However, to be practically viable, a BMI system must be applicable to everyday locomotor tasks such as over ground walking and turning. Here, we present a novel protocol for non-invasive collection of brain activity (EEG), muscle activity (electromyography (EMG)), and whole-body kinematic data (head, torso, and limb trajectories) during both treadmill and over ground walking tasks. By collecting these data in an uncontrolled environment, insight can be gained regarding the feasibility of decoding unconstrained gait and surface EMG from scalp EEG. PMID:23912203
Integrating robotic action with biologic perception: A brain-machine symbiosis theory
NASA Astrophysics Data System (ADS)
Mahmoudi, Babak
In patients with motor disability, the natural cyclic flow of information between the brain and the external environment is disrupted by their limb impairment. Brain-Machine Interfaces (BMIs) aim to provide new communication channels between the brain and the environment by direct translation of the brain's internal states into actions. For enabling the user in a wide range of daily life activities, the challenge is designing neural decoders that autonomously adapt to different tasks, environments, and changes in the pattern of neural activity. In this dissertation, a novel decoding framework for BMIs is developed in which a computational agent autonomously learns how to translate neural states into action based on maximization of a measure of shared goal between the user and the agent. Since the agent and the brain share the same goal, a symbiotic relationship between them evolves; this decoding paradigm is therefore called a Brain-Machine Symbiosis (BMS) framework. A decoding agent was implemented within the BMS framework based on the Actor-Critic method of Reinforcement Learning. The role of the Actor as a neural decoder was to find a mapping between the neural representation of motor states in the primary motor cortex (MI) and robot actions in order to solve reaching tasks. The Actor learned the optimal control policy using an evaluative feedback that was estimated by the Critic directly from the user's neural activity in the Nucleus Accumbens (NAcc). Through a series of computational neuroscience studies in a cohort of rats, it was demonstrated that NAcc could provide a useful evaluative feedback by predicting the increase or decrease in the probability of earning reward based on the environmental conditions. Using a closed-loop BMI simulator, it was demonstrated that the Actor-Critic decoding architecture was able to adapt to different tasks as well as to changes in the pattern of neural activity. The custom design of a dual micro-wire array enabled simultaneous implantation of MI and NAcc for the development of a full closed-loop system. The Actor-Critic decoding architecture was able to solve the brain-controlled reaching task using a robotic arm by capturing the interdependency between the simultaneous action representation in MI and reward expectation in NAcc.
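The Actor-Critic arrangement can be sketched in a few lines: the Actor maps an M1 state vector to a discrete robot action, and its action preferences are nudged by an evaluative signal standing in for the NAcc-derived Critic. The sketch below (Python) is a generic actor-critic update under those assumptions, not the dissertation's implementation; the feature dimension, learning rates and reward proxy are hypothetical:

import numpy as np

rng = np.random.default_rng(1)
n_features, n_actions = 16, 4            # M1 state features, discrete robot actions
W_actor = np.zeros((n_actions, n_features))
w_critic = np.zeros(n_features)
alpha_actor, alpha_critic = 0.05, 0.1

def softmax(z):
    z = z - z.max()
    p = np.exp(z)
    return p / p.sum()

for trial in range(1000):
    s = rng.standard_normal(n_features)      # neural state (stand-in for M1 firing rates)
    probs = softmax(W_actor @ s)
    a = rng.choice(n_actions, p=probs)       # Actor picks a robot action
    reward = 1.0 if a == 0 else 0.0          # stand-in for NAcc-derived evaluative feedback
    td_error = reward - w_critic @ s         # Critic's prediction error (single-step episode)
    w_critic += alpha_critic * td_error * s  # Critic update
    grad = -probs[:, None] * s[None, :]      # gradient of log softmax policy w.r.t. W_actor
    grad[a] += s
    W_actor += alpha_actor * td_error * grad # Actor update scaled by the Critic's evaluation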
[A novel serial port auto trigger system for MOSFET dose acquisition].
Luo, Guangwen; Qi, Zhenyu
2013-01-01
To synchronize the radiation of the microSelectron-HDR (Nucletron afterloading machine) with measurement by the MOSFET dose system, a trigger system based on an interface circuit was designed, and the corresponding monitor and trigger program was developed on the Qt platform. The interface and control system was tested and showed stable and reliable operation. The serial-port detection technique adopted here can be extended to trigger other medical devices.
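A serial-port trigger of this kind typically amounts to polling the port for the afterloader's status message and starting dose acquisition when it appears. A minimal sketch with pyserial (Python); the port name, baud rate and trigger token are assumptions for illustration, since the abstract does not specify them:

import serial  # pyserial

def wait_for_trigger(port="/dev/ttyUSB0", baud=9600, token=b"BEAM_ON"):
    # Poll the serial line and return once the (assumed) trigger token is seen.
    with serial.Serial(port, baud, timeout=1) as ser:
        while True:
            line = ser.readline()
            if token in line:
                return line

def start_mosfet_acquisition():
    print("MOSFET dose acquisition started")  # placeholder for the real acquisition call

if __name__ == "__main__":
    wait_for_trigger()
    start_mosfet_acquisition()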
Signal detection using support vector machines in the presence of ultrasonic speckle
NASA Astrophysics Data System (ADS)
Kotropoulos, Constantine L.; Pitas, Ioannis
2002-04-01
Support Vector Machines are a general algorithm based on guaranteed risk bounds of statistical learning theory. They have found numerous applications, such as in classification of brain PET images, optical character recognition, object detection, face verification, text categorization and so on. In this paper we propose the use of support vector machines to segment lesions in ultrasound images and we assess thoroughly their lesion detection ability. We demonstrate that trained support vector machines with a Radial Basis Function kernel segment satisfactorily (unseen) ultrasound B-mode images as well as clinical ultrasonic images.
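Pixel- or patch-level lesion segmentation with an RBF-kernel support vector machine reduces to training a binary classifier on feature vectors extracted around each pixel. A minimal sketch with scikit-learn (Python); the synthetic two-dimensional features stand in for the speckle statistics used in the paper:

import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Synthetic stand-in features, e.g. local mean and variance of B-mode intensity per pixel.
X_lesion = rng.normal(loc=[0.3, 0.05], scale=0.05, size=(500, 2))
X_background = rng.normal(loc=[0.6, 0.10], scale=0.05, size=(500, 2))
X = np.vstack([X_lesion, X_background])
y = np.r_[np.ones(500), np.zeros(500)]

clf = SVC(kernel="rbf", C=1.0, gamma="scale").fit(X, y)

# Classify pixels of an unseen image from their feature vectors.
X_new = rng.normal(loc=[0.45, 0.08], scale=0.1, size=(4, 2))
print(clf.predict(X_new))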
Application of machine learning on brain cancer multiclass classification
NASA Astrophysics Data System (ADS)
Panca, V.; Rustam, Z.
2017-07-01
Classification of brain cancer is a problem of multiclass classification. One approach to solve this problem is by first transforming it into several binary problems. The microarray gene expression dataset has the two main characteristics of medical data: extremely many features (genes) and only a small number of samples. The application of machine learning on microarray gene expression datasets mainly consists of two steps: feature selection and classification. In this paper, the features are selected using a method based on the support vector machine recursive feature elimination (SVM-RFE) principle, which is improved to solve multiclass classification and called multiple multiclass SVM-RFE. Instead of using only the selected features on a single classifier, this method combines the results of multiple classifiers. The features are divided into subsets and SVM-RFE is used on each subset. Then, the selected features from each subset are put on separate classifiers. This method enhances the feature selection ability of each single SVM-RFE. Twin support vector machine (TWSVM) is used as the classifier to reduce computational complexity. While ordinary SVM finds a single optimum hyperplane, the main objective of Twin SVM is to find two non-parallel optimum hyperplanes. The experiment on the brain cancer microarray gene expression dataset shows this method could classify 71.4% of the overall test data correctly, using 100 and 1000 genes selected by the multiple multiclass SVM-RFE feature selection method. Furthermore, the per-class results show that this method could classify data of the normal and MD classes with 100% accuracy.
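The feature-selection step can be illustrated with scikit-learn's RFE wrapper around a linear SVM; the split into gene subsets, the multiple-classifier combination and the Twin SVM classifier from the paper are not reproduced, so this is only a sketch of the core SVM-RFE idea on synthetic data (Python):

import numpy as np
from sklearn.svm import LinearSVC
from sklearn.feature_selection import RFE

rng = np.random.default_rng(0)
n_samples, n_genes = 60, 2000                 # few samples, many features, as in microarray data
X = rng.standard_normal((n_samples, n_genes))
y = rng.integers(0, 3, size=n_samples)        # three tumour classes (synthetic labels)

# Recursively eliminate genes, keeping the 100 with the largest linear-SVM weights.
selector = RFE(LinearSVC(C=1.0, dual=False, max_iter=5000),
               n_features_to_select=100, step=0.1)
selector.fit(X, y)
selected_genes = np.flatnonzero(selector.support_)
print(selected_genes[:10])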
Post-acute stroke patients use brain-computer interface to activate electrical stimulation.
Tan, H G; Kong, K H; Shee, C Y; Wang, C C; Guan, C T; Ang, W T
2010-01-01
Through certain mental actions, our electroencephalogram (EEG) can be regulated to operate a brain-computer interface (BCI), which translates the EEG patterns into commands that can be used to operate devices such as prostheses. This allows paralyzed persons to gain direct brain control of the paretic limb, which could open up many possibilities for rehabilitative and assistive applications. When using a BCI neuroprosthesis in stroke, one question that has surfaced is whether stroke patients are able to produce a sufficient change in EEG that can be used as a control signal to operate a prosthesis.
Fromherz, Peter
2006-12-01
We consider the direct electrical interfacing of semiconductor chips with individual nerve cells and brain tissue. At first, the structure of the cell-chip contact is studied. Then we characterize the electrical coupling of ion channels--the electrical elements of nerve cells--with transistors and capacitors in silicon chips. On that basis it is possible to implement signal transmission between microelectronics and the microionics of nerve cells in both directions. Simple hybrid neuroelectronic systems are assembled with neuron pairs and with small neuronal networks. Finally, the interfacing with capacitors and transistors is extended to brain tissue cultured on silicon chips. The application of highly integrated silicon chips allows an imaging of neuronal activity with high spatiotemporal resolution. The goal of the work is an integration of neuronal network dynamics with digital electronics on a microscopic level with respect to experiments in brain research, medical prosthetics, and information technology.
The human role in space (THURIS) applications study. Final briefing
NASA Technical Reports Server (NTRS)
Maybee, George W.
1987-01-01
The THURIS (The Human Role in Space) application is an iterative process involving successive assessments of man/machine mixes in terms of performance, cost and technology to arrive at an optimum man/machine mode for the mission application. The process begins with user inputs which define the mission in terms of an event sequence and performance time requirements. The desired initial operational capability date is also an input requirement. THURIS terms and definitions (e.g., generic activities) are applied to the input data converting it into a form which can be analyzed using the THURIS cost model outputs. The cost model produces tabular and graphical outputs for determining the relative cost-effectiveness of a given man/machine mode and generic activity. A technology database is provided to enable assessment of support equipment availability for selected man/machine modes. If technology gaps exist for an application, the database contains information supportive of further investigation into the relevant technologies. The present study concentrated on testing and enhancing the THURIS cost model and subordinate data files and developing a technology database which interfaces directly with the user via technology readiness displays. This effort has resulted in a more powerful, easy-to-use applications system for optimization of man/machine roles. Volume 1 is an executive summary.
Current trends in hardware and software for brain-computer interfaces (BCIs)
NASA Astrophysics Data System (ADS)
Brunner, P.; Bianchi, L.; Guger, C.; Cincotti, F.; Schalk, G.
2011-04-01
A brain-computer interface (BCI) provides a non-muscular communication channel to people with and without disabilities. BCI devices consist of hardware and software. BCI hardware records signals from the brain, either invasively or non-invasively, using a series of device components. BCI software then translates these signals into device output commands and provides feedback. One may categorize different types of BCI applications into the following four categories: basic research, clinical/translational research, consumer products, and emerging applications. These four categories use BCI hardware and software, but have different sets of requirements. For example, while basic research needs to explore a wide range of system configurations, and thus requires a wide range of hardware and software capabilities, applications in the other three categories may be designed for relatively narrow purposes and thus may only need a very limited subset of capabilities. This paper summarizes technical aspects for each of these four categories of BCI applications. The results indicate that BCI technology is in transition from isolated demonstrations to systematic research and commercial development. This process requires several multidisciplinary efforts, including the development of better integrated and more robust BCI hardware and software, the definition of standardized interfaces, and the development of certification, dissemination and reimbursement procedures.
Bascil, M Serdar; Tesneli, Ahmet Y; Temurtas, Feyzullah
2016-09-01
Brain computer interface (BCI) is a new means of communication between man and machine. It identifies mental task patterns in the electroencephalogram (EEG): it extracts brain electrical activity recorded by EEG and transforms it into machine control commands. The main goal of BCI is to make assistive devices, such as computers, available to paralyzed people and thereby make their lives easier. This study deals with feature extraction and mental task pattern recognition for 2-D cursor control from EEG as an offline analysis approach. The hemispherical power density changes are computed and compared in the alpha-beta frequency bands using only mental imagination of cursor movements. First, power spectral density (PSD) features of the EEG signals are extracted, and the high-dimensional data are reduced by principal component analysis (PCA) and independent component analysis (ICA), which are statistical algorithms. In the last stage, all features are classified with two types of support vector machine (SVM), linear and least squares (LS-SVM), and three different artificial neural network (ANN) structures, learning vector quantization (LVQ), multilayer neural network (MLNN) and probabilistic neural network (PNN), and the mental task patterns are successfully identified via the k-fold cross-validation technique.
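The processing chain described (PSD features, dimensionality reduction, SVM classification with k-fold cross-validation) can be sketched compactly; the sampling rate, band limits and synthetic signals below are placeholders, not the study's parameters (Python):

import numpy as np
from scipy.signal import welch
from sklearn.decomposition import PCA
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
fs = 128                                          # assumed sampling rate (Hz)
n_trials, n_channels, n_samples = 80, 8, 2 * fs
eeg = rng.standard_normal((n_trials, n_channels, n_samples))
labels = rng.integers(0, 2, size=n_trials)        # imagined left/right cursor movement

def psd_features(trials, band=(8, 30)):
    feats = []
    for trial in trials:
        f, pxx = welch(trial, fs=fs, nperseg=fs)  # per-channel power spectral density
        mask = (f >= band[0]) & (f <= band[1])    # keep the alpha-beta band
        feats.append(pxx[:, mask].ravel())
    return np.array(feats)

X = psd_features(eeg)
clf = make_pipeline(PCA(n_components=10), SVC(kernel="linear"))
print(cross_val_score(clf, X, labels, cv=5).mean())   # k-fold cross-validation score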
Temporary-tattoo for long-term high fidelity biopotential recordings
NASA Astrophysics Data System (ADS)
Bareket, Lilach; Inzelberg, Lilah; Rand, David; David-Pur, Moshe; Rabinovich, David; Brandes, Barak; Hanein, Yael
2016-05-01
Electromyography is a non-invasive method widely used to map muscle activation. For decades, it was commonly accepted that dry metallic electrodes establish poor electrode-skin contact, making them impractical for skin electromyography applications. Gelled electrodes are therefore the standard in electromyography, with their use confined, almost entirely, to laboratory settings. Here we present novel dry electrodes, exhibiting outstanding electromyography recording along with excellent user comfort. The electrodes were realized using screen-printing of carbon ink on a soft support. The conformity of the electrodes helps establish direct contact with the skin, making the use of a gel superfluous. Plasma-polymerized 3,4-ethylenedioxythiophene was used to reduce the impedance of the electrodes. Cyclic voltammetry measurements revealed an increase in electrode capacitance by a factor of up to 100 in wet conditions. Impedance measurements show a tenfold reduction in electrode impedance on human skin. The suitability of the electrodes for long-term electromyography recordings from the hand and from the face is demonstrated. The presented electrodes are ideally suited for many applications, such as brain-machine interfacing, muscle diagnostics, post-injury rehabilitation, and gaming.
A Component-Based FPGA Design Framework for Neuronal Ion Channel Dynamics Simulations
Mak, Terrence S. T.; Rachmuth, Guy; Lam, Kai-Pui; Poon, Chi-Sang
2008-01-01
Neuron-machine interfaces such as dynamic clamp and brain-implantable neuroprosthetic devices require real-time simulations of neuronal ion channel dynamics. Field Programmable Gate Array (FPGA) has emerged as a high-speed digital platform ideal for such application-specific computations. We propose an efficient and flexible component-based FPGA design framework for neuronal ion channel dynamics simulations, which overcomes certain limitations of the recently proposed memory-based approach. A parallel processing strategy is used to minimize computational delay, and a hardware-efficient factoring approach for calculating exponential and division functions in neuronal ion channel models is used to conserve resource consumption. Performances of the various FPGA design approaches are compared theoretically and experimentally in corresponding implementations of the AMPA and NMDA synaptic ion channel models. Our results suggest that the component-based design framework provides a more memory economic solution as well as more efficient logic utilization for large word lengths, whereas the memory-based approach may be suitable for time-critical applications where a higher throughput rate is desired. PMID:17190033
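The kind of kinetic model these FPGA designs evaluate can be written as a first-order gating equation; a floating-point software reference (Python) helps clarify what the hardware must reproduce each time step. The two-state AMPA kinetics and the rate constants below follow the standard Destexhe-style formulation and are illustrative, not the parameters used in the paper:

import numpy as np

def simulate_ampa(dt=1e-5, t_end=0.02, g_max=1e-9, E_rev=0.0, V=-0.065,
                  alpha=1.1e6, beta=190.0):
    # Two-state AMPA receptor: ds/dt = alpha*[T]*(1 - s) - beta*s, with [T] the
    # transmitter concentration (a 1 mM, 1 ms pulse at t = 1 ms here) and
    # synaptic current I = g_max * s * (V - E_rev).
    n = int(t_end / dt)
    s = 0.0
    current = np.zeros(n)
    for i in range(n):
        t = i * dt
        T = 1e-3 if 0.001 <= t < 0.002 else 0.0       # transmitter pulse (M)
        s += dt * (alpha * T * (1.0 - s) - beta * s)  # forward-Euler gating update
        current[i] = g_max * s * (V - E_rev)
    return current

print(simulate_ampa().min())   # peak (most negative) synaptic current, in amperes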
Inferring imagined speech using EEG signals: a new approach using Riemannian manifold features
NASA Astrophysics Data System (ADS)
Nguyen, Chuong H.; Karavas, George K.; Artemiadis, Panagiotis
2018-02-01
Objective. In this paper, we investigate the suitability of imagined speech for brain-computer interface (BCI) applications. Approach. A novel method based on covariance matrix descriptors, which lie in a Riemannian manifold, and the relevance vector machines classifier is proposed. The method is applied to electroencephalographic (EEG) signals and tested in multiple subjects. Main results. The method is shown to outperform other approaches in the field with respect to accuracy and robustness. The algorithm is validated on various categories of speech, such as imagined pronunciation of vowels, short words and long words. The classification accuracy of our methodology is in all cases significantly above chance level, reaching a maximum of 70% for cases where we classify three words and 95% for cases of two words. Significance. The results reveal certain aspects that may affect the success of speech imagery classification from EEG signals, such as sound, meaning and word complexity. This can potentially extend the capability of utilizing speech imagery in future BCI applications. The dataset of speech imagery collected from a total of 15 subjects is also published.
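The covariance-descriptor pipeline can be illustrated directly with numpy/scipy: per-trial spatial covariance matrices are mapped to a common tangent space via the matrix logarithm and then fed to a vector-space classifier. The sketch below substitutes a linear SVM for the relevance vector machine used in the paper, uses the arithmetic mean as the reference point, and runs on synthetic signals, so it is only illustrative (Python):

import numpy as np
from scipy.linalg import logm, sqrtm, inv
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_channels, n_samples = 60, 8, 256
eeg = rng.standard_normal((n_trials, n_channels, n_samples))
labels = rng.integers(0, 2, size=n_trials)                    # e.g. two imagined words

covs = np.array([np.cov(trial) for trial in eeg])             # per-trial spatial covariance
C_ref = covs.mean(axis=0)                                     # reference point (arithmetic mean)
C_ref_isqrt = inv(sqrtm(C_ref))

def tangent_vector(C):
    # Project a covariance matrix onto the tangent space at C_ref and vectorize it.
    S = logm(C_ref_isqrt @ C @ C_ref_isqrt)
    return S[np.triu_indices_from(S)].real

X = np.array([tangent_vector(C) for C in covs])
print(cross_val_score(SVC(kernel="linear"), X, labels, cv=5).mean())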
Temporary-tattoo for long-term high fidelity biopotential recordings
Bareket, Lilach; Inzelberg, Lilah; Rand, David; David-Pur, Moshe; Rabinovich, David; Brandes, Barak; Hanein, Yael
2016-01-01
Electromyography is a non-invasive method widely used to map muscle activation. For decades, it was commonly accepted that dry metallic electrodes establish poor electrode-skin contact, making them impractical for skin electromyography applications. Gelled electrodes are therefore the standard in electromyography, with their use confined, almost entirely, to laboratory settings. Here we present novel dry electrodes, exhibiting outstanding electromyography recording along with excellent user comfort. The electrodes were realized using screen-printing of carbon ink on a soft support. The conformity of the electrodes helps establish direct contact with the skin, making the use of a gel superfluous. Plasma-polymerized 3,4-ethylenedioxythiophene was used to reduce the impedance of the electrodes. Cyclic voltammetry measurements revealed an increase in electrode capacitance by a factor of up to 100 in wet conditions. Impedance measurements show a tenfold reduction in electrode impedance on human skin. The suitability of the electrodes for long-term electromyography recordings from the hand and from the face is demonstrated. The presented electrodes are ideally suited for many applications, such as brain-machine interfacing, muscle diagnostics, post-injury rehabilitation, and gaming. PMID:27169387
Kashihara, Koji
2014-01-01
Unlike assistive technology for verbal communication, the brain-machine or brain-computer interface (BMI/BCI) has not been established as a non-verbal communication tool for amyotrophic lateral sclerosis (ALS) patients. Face-to-face communication enables access to rich emotional information, but individuals suffering from neurological disorders, such as ALS and autism, may not express their emotions or communicate their negative feelings. Although emotions may be inferred by looking at facial expressions, emotional prediction for neutral faces necessitates advanced judgment. The process that underlies brain neuronal responses to neutral faces and causes emotional changes remains unknown. To address this problem, therefore, this study attempted to decode conditioned emotional reactions to neutral face stimuli. This direction was motivated by the assumption that if electroencephalogram (EEG) signals can be used to detect patients' emotional responses to specific inexpressive faces, the results could be incorporated into the design and development of BMI/BCI-based non-verbal communication tools. To these ends, this study investigated how a neutral face associated with a negative emotion modulates rapid central responses in face processing and then identified cortical activities. The conditioned neutral face-triggered event-related potentials that originated from the posterior temporal lobe statistically significantly changed during late face processing (600–700 ms) after stimulus, rather than in early face processing activities, such as P1 and N170 responses. Source localization revealed that the conditioned neutral faces increased activity in the right fusiform gyrus (FG). This study also developed an efficient method for detecting implicit negative emotional responses to specific faces by using EEG signals. A classification method based on a support vector machine enables the easy classification of neutral faces that trigger specific individual emotions. In accordance with this classification, a face on a computer morphs into a sad or displeased countenance. The proposed method could be incorporated as a part of non-verbal communication tools to enable emotional expression. PMID:25206321
Costa, Álvaro; Hortal, Enrique; Iáñez, Eduardo; Azorín, José M
2014-01-01
Non-invasive Brain-Machine Interfaces (BMIs) are being used more and more these days to design systems focused on helping people with motor disabilities. Spontaneous BMIs translate user's brain signals into commands to control devices. On these systems, by and large, 2 different mental tasks can be detected with enough accuracy. However, a large training time is required and the system needs to be adjusted on each session. This paper presents a supplementary system that employs BMI sensors, allowing the use of 2 systems (the BMI system and the supplementary system) with the same data acquisition device. This supplementary system is designed to control a robotic arm in two dimensions using electromyographical (EMG) signals extracted from the electroencephalographical (EEG) recordings. These signals are voluntarily produced by users clenching their jaws. EEG signals (with EMG contributions) were registered and analyzed to obtain the electrodes and the range of frequencies which provide the best classification results for 5 different clenching tasks. A training stage, based on the 2-dimensional control of a cursor, was designed and used by the volunteers to get used to this control. Afterwards, the control was extrapolated to a robotic arm in a 2-dimensional workspace. Although the training performed by volunteers requires 70 minutes, the final results suggest that in a shorter period of time (45 min), users should be able to control the robotic arm in 2 dimensions with their jaws. The designed system is compared with a similar 2-dimensional system based on spontaneous BMIs, and our system shows faster and more accurate performance. This is due to the nature of the control signals. Brain potentials are much more difficult to control than the electromyographical signals produced by jaw clenches. Additionally, the presented system also shows an improvement in the results compared with an electrooculographic system in a similar environment.
Bhagat, Nikunj A.; Venkatakrishnan, Anusha; Abibullaev, Berdakh; Artz, Edward J.; Yozbatiran, Nuray; Blank, Amy A.; French, James; Karmonik, Christof; Grossman, Robert G.; O'Malley, Marcia K.; Francisco, Gerard E.; Contreras-Vidal, Jose L.
2016-01-01
This study demonstrates the feasibility of detecting motor intent from brain activity of chronic stroke patients using an asynchronous electroencephalography (EEG)-based brain machine interface (BMI). Intent was inferred from movement related cortical potentials (MRCPs) measured over an optimized set of EEG electrodes. Successful intent detection triggered the motion of an upper-limb exoskeleton (MAHI Exo-II), to guide movement and to encourage active user participation by providing instantaneous sensory feedback. Several BMI design features were optimized to increase system performance in the presence of single-trial variability of MRCPs in the injured brain: (1) an adaptive time window was used for extracting features during BMI calibration; (2) training data from two consecutive days were pooled for BMI calibration to increase robustness to the day-to-day variations typical of EEG; and (3) BMI predictions were gated by residual electromyography (EMG) activity from the impaired arm, to reduce the number of false positives. This patient-specific BMI calibration approach can accommodate a broad spectrum of stroke patients with diverse motor capabilities. Following BMI optimization on day 3, testing of the closed-loop BMI-MAHI exoskeleton, on the 4th and 5th days of the study, showed consistent BMI performance with overall mean true positive rate (TPR) = 62.7 ± 21.4% on day 4 and 67.1 ± 14.6% on day 5. The overall false positive rate (FPR) across subjects was 27.74 ± 37.46% on day 4 and 27.5 ± 35.64% on day 5; however, for two subjects who had residual motor function and could benefit from the EMG-gated BMI, the mean FPR was quite low (< 10%). On average, motor intent was detected 367 ± 328 ms before movement onset during closed-loop operation. These findings provide evidence that a closed-loop EEG-based BMI for stroke patients can be designed and optimized to perform well across multiple days without system recalibration. PMID:27065787
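The EMG gating in design feature (3) can be sketched as a simple conjunction: the BMI's intent prediction only triggers the exoskeleton when residual EMG from the impaired arm crosses a threshold in the same window. The window length, threshold rule and signals below are illustrative assumptions, not the study's calibrated values (Python):

import numpy as np

def emg_gate(bmi_intent, emg_window, baseline_rms, k=2.5):
    # Return True only if the BMI predicts intent AND residual EMG confirms it.
    # emg_window: recent residual EMG samples; baseline_rms: resting EMG RMS;
    # k scales the (assumed) detection threshold.
    emg_rms = np.sqrt(np.mean(np.square(emg_window)))
    return bool(bmi_intent) and (emg_rms > k * baseline_rms)

rng = np.random.default_rng(0)
baseline = 0.01
rest_emg = rng.normal(0, baseline, 200)
active_emg = rng.normal(0, 6 * baseline, 200)

print(emg_gate(True, rest_emg, baseline))    # BMI fires but EMG silent -> gated out (False)
print(emg_gate(True, active_emg, baseline))  # BMI fires and EMG active -> triggers exoskeleton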
Costa, Álvaro; Hortal, Enrique; Iáñez, Eduardo; Azorín, José M.
2014-01-01
Non-invasive Brain-Machine Interfaces (BMIs) are being used more and more these days to design systems focused on helping people with motor disabilities. Spontaneous BMIs translate user's brain signals into commands to control devices. On these systems, by and large, 2 different mental tasks can be detected with enough accuracy. However, a large training time is required and the system needs to be adjusted on each session. This paper presents a supplementary system that employs BMI sensors, allowing the use of 2 systems (the BMI system and the supplementary system) with the same data acquisition device. This supplementary system is designed to control a robotic arm in two dimensions using electromyographical (EMG) signals extracted from the electroencephalographical (EEG) recordings. These signals are voluntarily produced by users clenching their jaws. EEG signals (with EMG contributions) were registered and analyzed to obtain the electrodes and the range of frequencies which provide the best classification results for 5 different clenching tasks. A training stage, based on the 2-dimensional control of a cursor, was designed and used by the volunteers to get used to this control. Afterwards, the control was extrapolated to a robotic arm in a 2-dimensional workspace. Although the training performed by volunteers requires 70 minutes, the final results suggest that in a shorter period of time (45 min), users should be able to control the robotic arm in 2 dimensions with their jaws. The designed system is compared with a similar 2-dimensional system based on spontaneous BMIs, and our system shows faster and more accurate performance. This is due to the nature of the control signals. Brain potentials are much more difficult to control than the electromyographical signals produced by jaw clenches. Additionally, the presented system also shows an improvement in the results compared with an electrooculographic system in a similar environment. PMID:25390372
Kashihara, Koji
2014-01-01
Unlike assistive technology for verbal communication, the brain-machine or brain-computer interface (BMI/BCI) has not been established as a non-verbal communication tool for amyotrophic lateral sclerosis (ALS) patients. Face-to-face communication enables access to rich emotional information, but individuals suffering from neurological disorders, such as ALS and autism, may not express their emotions or communicate their negative feelings. Although emotions may be inferred by looking at facial expressions, emotional prediction for neutral faces necessitates advanced judgment. The process that underlies brain neuronal responses to neutral faces and causes emotional changes remains unknown. To address this problem, therefore, this study attempted to decode conditioned emotional reactions to neutral face stimuli. This direction was motivated by the assumption that if electroencephalogram (EEG) signals can be used to detect patients' emotional responses to specific inexpressive faces, the results could be incorporated into the design and development of BMI/BCI-based non-verbal communication tools. To these ends, this study investigated how a neutral face associated with a negative emotion modulates rapid central responses in face processing and then identified cortical activities. The conditioned neutral face-triggered event-related potentials that originated from the posterior temporal lobe statistically significantly changed during late face processing (600-700 ms) after stimulus, rather than in early face processing activities, such as P1 and N170 responses. Source localization revealed that the conditioned neutral faces increased activity in the right fusiform gyrus (FG). This study also developed an efficient method for detecting implicit negative emotional responses to specific faces by using EEG signals. A classification method based on a support vector machine enables the easy classification of neutral faces that trigger specific individual emotions. In accordance with this classification, a face on a computer morphs into a sad or displeased countenance. The proposed method could be incorporated as a part of non-verbal communication tools to enable emotional expression.
Godshall, N.A.; Koehler, D.R.; Liang, A.Y.; Smith, B.K.
1993-03-30
A micro-machined resonator, typically quartz, is enclosed between upper and lower micro-machinable support members, or covers, having etched wells which may be lined with conductive electrode material. The quartz resonator has an energy-trapping quartz mesa capacitively coupled to the electrode through a diaphragm, and is supported either by micro-machined cantilever springs or by thin layers extending over the surfaces of the support. If the diaphragm is rigid, clock applications are available; if the diaphragm is resilient, transducer applications can be achieved. Either the thin support layers or the conductive electrode material can be integral with the diaphragm. In any event, the covers are bonded to form a hermetic seal, and the interior volume may be filled with a gas or may be evacuated. In addition, one or both of the covers may include oscillator and interface circuitry for the resonator.
Godshall, Ned A.; Koehler, Dale R.; Liang, Alan Y.; Smith, Bradley K.
1993-01-01
A micro-machined resonator, typically quartz, is enclosed between upper and lower micro-machinable support members, or covers, having etched wells which may be lined with conductive electrode material. The quartz resonator has an energy-trapping quartz mesa capacitively coupled to the electrode through a diaphragm, and is supported either by micro-machined cantilever springs or by thin layers extending over the surfaces of the support. If the diaphragm is rigid, clock applications are available; if the diaphragm is resilient, transducer applications can be achieved. Either the thin support layers or the conductive electrode material can be integral with the diaphragm. In any event, the covers are bonded to form a hermetic seal, and the interior volume may be filled with a gas or may be evacuated. In addition, one or both of the covers may include oscillator and interface circuitry for the resonator.
PyMVPA: A Unifying Approach to the Analysis of Neuroscientific Data
Hanke, Michael; Halchenko, Yaroslav O.; Sederberg, Per B.; Olivetti, Emanuele; Fründ, Ingo; Rieger, Jochem W.; Herrmann, Christoph S.; Haxby, James V.; Hanson, Stephen José; Pollmann, Stefan
2008-01-01
The Python programming language is steadily increasing in popularity as the language of choice for scientific computing. The ability of this scripting environment to access a huge code base in various languages, combined with its syntactical simplicity, make it the ideal tool for implementing and sharing ideas among scientists from numerous fields and with heterogeneous methodological backgrounds. The recent rise of reciprocal interest between the machine learning (ML) and neuroscience communities is an example of the desire for an inter-disciplinary transfer of computational methods that can benefit from a Python-based framework. For many years, a large fraction of both research communities have addressed, almost independently, very high-dimensional problems with almost completely non-overlapping methods. However, a number of recently published studies that applied ML methods to neuroscience research questions attracted a lot of attention from researchers from both fields, as well as the general public, and showed that this approach can provide novel and fruitful insights into the functioning of the brain. In this article we show how PyMVPA, a specialized Python framework for machine learning based data analysis, can help to facilitate this inter-disciplinary technology transfer by providing a single interface to a wide array of machine learning libraries and neural data-processing methods. We demonstrate the general applicability and power of PyMVPA via analyses of a number of neural data modalities, including fMRI, EEG, MEG, and extracellular recordings. PMID:19212459
NASA Astrophysics Data System (ADS)
Sadeghi, Saman; MacKay, William A.; van Dam, R. Michael; Thompson, Michael
2011-02-01
Real-time analysis of multi-channel spatio-temporal sensor data presents a considerable technical challenge for a number of applications. For example, in brain-computer interfaces, signal patterns originating on a time-dependent basis from an array of electrodes on the scalp (i.e. electroencephalography) must be analyzed in real time to recognize mental states and translate these to commands which control operations in a machine. In this paper we describe a new technique for recognition of spatio-temporal patterns based on performing online discrimination of time-resolved events through the use of correlation of phase dynamics between various channels in a multi-channel system. The algorithm extracts unique sensor signature patterns associated with each event during a training period and ranks importance of sensor pairs in order to distinguish between time-resolved stimuli to which the system may be exposed during real-time operation. We apply the algorithm to electroencephalographic signals obtained from subjects tested in the neurophysiology laboratories at the University of Toronto. The extension of this algorithm for rapid detection of patterns in other sensing applications, including chemical identification via chemical or bio-chemical sensor arrays, is also discussed.
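The core quantity in this kind of phase-dynamics analysis is the consistency of the phase difference between channel pairs over time, often summarized as a phase-locking value. A minimal sketch of computing pairwise phase locking with the Hilbert transform (Python); the narrow-band assumption and the synthetic data are placeholders, not the authors' algorithm:

import numpy as np
from scipy.signal import hilbert

def phase_locking_matrix(data):
    # data: (n_channels, n_samples) narrow-band signals. Returns the matrix of
    # pairwise phase-locking values PLV_ij = | mean_t exp(i*(phi_i(t) - phi_j(t))) |.
    phases = np.angle(hilbert(data, axis=-1))
    n = data.shape[0]
    plv = np.ones((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            plv[i, j] = plv[j, i] = np.abs(np.mean(np.exp(1j * (phases[i] - phases[j]))))
    return plv

rng = np.random.default_rng(0)
t = np.arange(0, 2, 1 / 256)
common = np.sin(2 * np.pi * 10 * t)                       # shared 10 Hz rhythm
data = np.vstack([common + 0.1 * rng.standard_normal(t.size) for _ in range(4)])
print(phase_locking_matrix(data).round(2))                # close to 1 for strongly locked pairs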
Wang, Fang; Han, Yong; Wang, Bingyu; Peng, Qian; Huang, Xiaoqun; Miller, Karol; Wittek, Adam
2018-05-12
In this study, we investigate the effects of modelling choices for the brain-skull interface (the layers of tissues between the brain and skull that determine boundary conditions for the brain) and the constitutive model of brain parenchyma on the brain responses under violent impact as predicted using a computational biomechanics model. We used the head/brain model from the Total HUman Model for Safety (THUMS), an extensively validated finite element model of the human body that has been applied in numerous injury biomechanics studies. The computations were conducted using the well-established nonlinear explicit dynamics finite element code LS-DYNA. We employed four approaches for modelling the brain-skull interface and four constitutive models for the brain tissue in the numerical simulations of the experiments on post-mortem human subjects exposed to violent impacts reported in the literature. The brain-skull interface models included direct representation of the brain meninges and cerebrospinal fluid, outer brain surface rigidly attached to the skull, frictionless sliding contact between the brain and skull, and a layer of spring-type cohesive elements between the brain and skull. We considered Ogden hyperviscoelastic, Mooney-Rivlin hyperviscoelastic, neo-Hookean hyperviscoelastic and linear viscoelastic constitutive models of the brain tissue. Our study indicates that the predicted deformations within the brain and related brain injury criteria are strongly affected by both the approach to modelling the brain-skull interface and the constitutive model of the brain parenchyma tissues. The results suggest that accurate prediction of deformations within the brain and of the risk of brain injury due to violent impact using computational biomechanics models may require representation of the meninges and the subarachnoidal space with cerebrospinal fluid in the model and application of a hyperviscoelastic (preferably Ogden-type) constitutive model for the brain tissue.
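For reference, the Ogden-type hyperelastic strain-energy function referred to here has, in one common convention, the form

W(\lambda_1,\lambda_2,\lambda_3) \;=\; \sum_{p=1}^{N} \frac{\mu_p}{\alpha_p}\left(\lambda_1^{\alpha_p} + \lambda_2^{\alpha_p} + \lambda_3^{\alpha_p} - 3\right),

where \lambda_i are the principal stretches and \mu_p, \alpha_p are material parameters; viscoelasticity is typically added through a relaxation (e.g. Prony-series) treatment of the moduli. The specific constants used in the study are not reproduced here.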
Kraus, Dominic; Naros, Georgios; Guggenberger, Robert; Leão, Maria Teresa; Ziemann, Ulf; Gharabaghi, Alireza
2018-02-07
Standard brain stimulation protocols modify human motor cortex excitability by modulating the gain of the activated corticospinal pathways. However, the restoration of motor function following lesions of the corticospinal tract requires also the recruitment of additional neurons to increase the net corticospinal output. For this purpose, we investigated a novel protocol based on brain state-dependent paired associative stimulation.Motor imagery (MI)-related electroencephalography was recorded in healthy males and females for brain state-dependent control of both cortical and peripheral stimulation in a brain-machine interface environment. State-dependency was investigated with concurrent, delayed, and independent stimulation relative to the MI task. Specifically, sensorimotor event-related desynchronization (ERD) in the β-band (16-22 Hz) triggered peripheral stimulation through passive hand opening by a robotic orthosis and transcranial magnetic stimulation to the respective cortical motor representation, either synchronously or subsequently. These MI-related paradigms were compared with paired cortical and peripheral input applied independent of the brain state. Cortical stimulation resulted in a significant increase in corticospinal excitability only when applied brain state-dependently and synchronously to peripheral input. These gains were resistant to a depotentiation task, revealed a nonlinear evolution of plasticity, and were mediated via the recruitment of additional corticospinal neurons rather than via synchronization of neuronal firing. Recruitment of additional corticospinal pathways may be achieved when cortical and peripheral inputs are applied concurrently, and during β-ERD. These findings resemble a gating mechanism and are potentially important for developing closed-loop brain stimulation for the treatment of hand paralysis following lesions of the corticospinal tract. SIGNIFICANCE STATEMENT The activity state of the motor system influences the excitability of corticospinal pathways to external input. State-dependent interventions harness this property to increase the connectivity between motor cortex and muscles. These stimulation protocols modulate the gain of the activated pathways, but not the overall corticospinal recruitment. In this study, a brain-machine interface paired peripheral stimulation through passive hand opening with transcranial magnetic stimulation to the respective cortical motor representation during volitional β-band desynchronization. Cortical stimulation resulted in the recruitment of additional corticospinal pathways, but only when applied brain state-dependently and synchronously to peripheral input. These effects resemble a gating mechanism and may be important for the restoration of motor function following lesions of the corticospinal tract. Copyright © 2018 the authors 0270-6474/18/381397-12$15.00/0.
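State-dependent triggering of this kind reduces to monitoring β-band power against a rest baseline and firing the paired stimulation when the power drops below an ERD threshold. A minimal sketch of such a trigger condition (Python); the band edges, window length and threshold fraction are illustrative, not the study's settings:

import numpy as np
from scipy.signal import butter, filtfilt

fs = 500                                  # assumed EEG sampling rate (Hz)
b, a = butter(4, [16, 22], btype="bandpass", fs=fs)

def beta_power(window):
    # Band-limited power of a short EEG window over sensorimotor cortex.
    return np.mean(filtfilt(b, a, window) ** 2)

def erd_trigger(window, baseline_power, erd_threshold=0.5):
    # Fire when beta power falls below 50% of the resting baseline (event-related desynchronization).
    return beta_power(window) < erd_threshold * baseline_power

rng = np.random.default_rng(0)
rest = rng.standard_normal(fs)            # 1 s of resting EEG (synthetic)
baseline = beta_power(rest)
imagery = 0.5 * rng.standard_normal(fs)   # attenuated activity during motor imagery (synthetic)
if erd_trigger(imagery, baseline):
    print("trigger TMS pulse + robotic hand opening")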
Dissolvable films of silk fibroin for ultrathin conformal bio-integrated electronics.
Kim, Dae-Hyeong; Viventi, Jonathan; Amsden, Jason J; Xiao, Jianliang; Vigeland, Leif; Kim, Yun-Soung; Blanco, Justin A; Panilaitis, Bruce; Frechette, Eric S; Contreras, Diego; Kaplan, David L; Omenetto, Fiorenzo G; Huang, Yonggang; Hwang, Keh-Chih; Zakin, Mitchell R; Litt, Brian; Rogers, John A
2010-06-01
Electronics that are capable of intimate, non-invasive integration with the soft, curvilinear surfaces of biological tissues offer important opportunities for diagnosing and treating disease and for improving brain/machine interfaces. This article describes a material strategy for a type of bio-interfaced system that relies on ultrathin electronics supported by bioresorbable substrates of silk fibroin. Mounting such devices on tissue and then allowing the silk to dissolve and resorb initiates a spontaneous, conformal wrapping process driven by capillary forces at the biotic/abiotic interface. Specialized mesh designs and ultrathin forms for the electronics ensure minimal stresses on the tissue and highly conformal coverage, even for complex curvilinear surfaces, as confirmed by experimental and theoretical studies. In vivo, neural mapping experiments on feline animal models illustrate one mode of use for this class of technology. These concepts provide new capabilities for implantable and surgical devices.
Dissolvable Films of Silk Fibroin for Ultrathin, Conformal Bio-Integrated Electronics
Kim, Dae-Hyeong; Viventi, Jonathan; Amsden, Jason J.; Xiao, Jianliang; Vigeland, Leif; Kim, Yun-Soung; Blanco, Justin A.; Panilaitis, Bruce; Frechette, Eric S.; Contreras, Diego; Kaplan, David L.; Omenetto, Fiorenzo G.; Huang, Yonggang; Hwang, Keh-Chih; Zakin, Mitchell R.; Litt, Brian; Rogers, John A.
2011-01-01
Electronics that are capable of intimate, non-invasive integration with the soft, curvilinear surfaces of biological tissues offer important opportunities for diagnosing and treating disease and for improving brain-machine interfaces. This paper describes a material strategy for a type of bio-interfaced system that relies on ultrathin electronics supported by bioresorbable substrates of silk fibroin. Mounting such devices on tissue and then allowing the silk to dissolve and resorb initiates a spontaneous, conformal wrapping process driven by capillary forces at the biotic/abiotic interface. Specialized mesh designs and ultrathin forms for the electronics ensure minimal stresses on the tissue and highly conformal coverage, even for complex curvilinear surfaces, as confirmed by experimental and theoretical studies. In vivo, neural mapping experiments on feline animal models illustrate one mode of use for this class of technology. These concepts provide new capabilities for implantable or surgical devices. PMID:20400953
Dissolvable films of silk fibroin for ultrathin conformal bio-integrated electronics
NASA Astrophysics Data System (ADS)
Kim, Dae-Hyeong; Viventi, Jonathan; Amsden, Jason J.; Xiao, Jianliang; Vigeland, Leif; Kim, Yun-Soung; Blanco, Justin A.; Panilaitis, Bruce; Frechette, Eric S.; Contreras, Diego; Kaplan, David L.; Omenetto, Fiorenzo G.; Huang, Yonggang; Hwang, Keh-Chih; Zakin, Mitchell R.; Litt, Brian; Rogers, John A.
2010-06-01
Electronics that are capable of intimate, non-invasive integration with the soft, curvilinear surfaces of biological tissues offer important opportunities for diagnosing and treating disease and for improving brain/machine interfaces. This article describes a material strategy for a type of bio-interfaced system that relies on ultrathin electronics supported by bioresorbable substrates of silk fibroin. Mounting such devices on tissue and then allowing the silk to dissolve and resorb initiates a spontaneous, conformal wrapping process driven by capillary forces at the biotic/abiotic interface. Specialized mesh designs and ultrathin forms for the electronics ensure minimal stresses on the tissue and highly conformal coverage, even for complex curvilinear surfaces, as confirmed by experimental and theoretical studies. In vivo, neural mapping experiments on feline animal models illustrate one mode of use for this class of technology. These concepts provide new capabilities for implantable and surgical devices.
Quantum information, cognition, and music.
Dalla Chiara, Maria L; Giuntini, Roberto; Leporini, Roberto; Negri, Eleonora; Sergioli, Giuseppe
2015-01-01
Parallelism represents an essential aspect of human mind/brain activities. One can recognize some common features between psychological parallelism and the characteristic parallel structures that arise in quantum theory and in quantum computation. The article is devoted to a discussion of the following questions: a comparison between classical probabilistic Turing machines and quantum Turing machines; possible applications of the quantum computational semantics to cognitive problems; and parallelism in music.
Quantum information, cognition, and music
Dalla Chiara, Maria L.; Giuntini, Roberto; Leporini, Roberto; Negri, Eleonora; Sergioli, Giuseppe
2015-01-01
Parallelism represents an essential aspect of human mind/brain activities. One can recognize some common features between psychological parallelism and the characteristic parallel structures that arise in quantum theory and in quantum computation. The article is devoted to a discussion of the following questions: a comparison between classical probabilistic Turing machines and quantum Turing machines; possible applications of the quantum computational semantics to cognitive problems; and parallelism in music. PMID:26539139
Applications of artificial intelligence to rotorcraft
NASA Technical Reports Server (NTRS)
Abbott, Kathy H.
1987-01-01
The application of AI technology may have significant potential payoff for rotorcraft. In the near term, the status of the technology will limit its applicability to decision aids rather than total automation. The specific application areas are categorized into onboard and nonflight aids. The onboard applications include: fault monitoring, diagnosis, and reconfiguration; mission and tactics planning; situation assessment; navigation aids, especially in nap-of-the-earth flight; and adaptive man-machine interfaces. The nonflight applications include training and maintenance diagnostics.
Gesture-controlled interfaces for self-service machines and other applications
NASA Technical Reports Server (NTRS)
Cohen, Charles J. (Inventor); Jacobus, Charles J. (Inventor); Paul, George (Inventor); Beach, Glenn (Inventor); Foulk, Gene (Inventor); Obermark, Jay (Inventor); Cavell, Brook (Inventor)
2004-01-01
A gesture recognition interface for use in controlling self-service machines and other devices is disclosed. A gesture is defined as motions and kinematic poses generated by humans, animals, or machines. Specific body features are tracked, and static and motion gestures are interpreted. Motion gestures are defined as a family of parametrically delimited oscillatory motions, modeled as a linear-in-parameters dynamic system with added geometric constraints to allow for real-time recognition using a small amount of memory and processing time. A linear least squares method is preferably used to determine the parameters which represent each gesture. The feature position measurement is used in conjunction with a bank of predictor bins seeded with the gesture parameters, and the system determines which bin best fits the observed motion. Recognizing static pose gestures is preferably performed by localizing the body/object from the rest of the image, describing that object, and identifying that description. The disclosure details methods for gesture recognition, as well as the overall architecture for using gesture recognition to control devices, including self-service machines.
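The "linear-in-parameters dynamic system" formulation lends itself to a short illustration: given an observed oscillatory trajectory, gesture parameters can be recovered by ordinary least squares, and recognition then amounts to choosing the predictor bin whose stored parameters best reproduce the motion. The model form and numbers below are illustrative assumptions, not the patent's exact formulation (Python):

import numpy as np

# Model each motion gesture as x_ddot = theta1 * x + theta2 * x_dot (linear in parameters).
def fit_gesture_params(x, dt):
    x_dot = np.gradient(x, dt)
    x_ddot = np.gradient(x_dot, dt)
    A = np.column_stack([x, x_dot])
    theta, *_ = np.linalg.lstsq(A, x_ddot, rcond=None)   # linear least squares fit
    return theta

def classify(x, dt, gesture_bins):
    # Pick the predictor bin whose stored parameters best explain the observed motion.
    x_dot = np.gradient(x, dt)
    x_ddot = np.gradient(x_dot, dt)
    A = np.column_stack([x, x_dot])
    residuals = {name: np.sum((x_ddot - A @ th) ** 2) for name, th in gesture_bins.items()}
    return min(residuals, key=residuals.get)

dt = 0.01
t = np.arange(0, 2, dt)
slow_wave = np.sin(2 * np.pi * 1.0 * t)           # 1 Hz oscillatory gesture
fast_wave = np.sin(2 * np.pi * 3.0 * t)           # 3 Hz oscillatory gesture
bins = {"slow": fit_gesture_params(slow_wave, dt), "fast": fit_gesture_params(fast_wave, dt)}
print(classify(np.sin(2 * np.pi * 2.9 * t), dt, bins))   # -> "fast"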
TAE+ 5.1 - TRANSPORTABLE APPLICATIONS ENVIRONMENT PLUS, VERSION 5.1 (DEC VAX ULTRIX VERSION)
NASA Technical Reports Server (NTRS)
TAE SUPPORT OFFICE
1994-01-01
TAE (Transportable Applications Environment) Plus is an integrated, portable environment for developing and running interactive window, text, and graphical object-based application systems. The program allows both programmers and non-programmers to easily construct their own custom application interface and to move that interface and application to different machine environments. TAE Plus makes both the application and the machine environment transparent, with noticeable improvements in the learning curve. The main components of TAE Plus are as follows: (1) the WorkBench, a What You See Is What You Get (WYSIWYG) tool for the design and layout of a user interface; (2) the Window Programming Tools Package (WPT), a set of callable subroutines that control an application's user interface; and (3) TAE Command Language (TCL), an easy-to-learn command language that provides an easy way to develop an executable application prototype with a run-time interpreted language. The WorkBench tool allows the application developer to interactively construct the layout of an application's display screen by manipulating a set of interaction objects including input items such as buttons, icons, and scrolling text lists. Data-driven graphical objects such as dials, thermometers, and strip charts are also included. TAE Plus updates the strip chart as the data values change. The WorkBench user specifies the windows and interaction objects that will make up the user interface, then specifies the sequence of the user interface dialogue. The description of the designed user interface is then saved into resource files. For those who desire to develop the designed user interface into an operational application, the WorkBench tool also generates source code (C, Ada, and TCL) which fully controls the application's user interface through function calls to the WPTs. The WPTs are the runtime services used by application programs to display and control the user interfaces. Since the WPTs access the workbench-generated resource files during each execution, details such as color, font, location, and object type remain independent from the application code, allowing changes to the user interface without recompiling and relinking. The Silicon Graphics version of TAE Plus now has a font caching scheme and a color caching scheme to make color allocation more efficient. In addition to WPTs, TAE Plus can control interaction of objects from the interpreted TAE Command Language. TCL provides an extremely powerful means for the more experienced developer to quickly prototype an application's use of TAE Plus interaction objects and add programming logic without the overhead of compiling or linking. TAE Plus requires MIT's X Window System, Version 11 Release 4, and the Open Software Foundation's Motif Toolkit 1.1 or 1.1.1. The Workbench and WPTs are written in C++ and the remaining code is written in C. TAE Plus is available by license for an unlimited time period. The licensed program product includes the TAE Plus source code and one set of supporting documentation. Additional documentation may be purchased separately at the price indicated below. The amount of disk space required to load the TAE Plus tar format tape is between 35Mb and 67Mb depending on the machine version. The recommended minimum memory is 12Mb. Each TAE Plus platform delivery tape includes pre-built libraries and executable binary code for that particular machine, as well as source code, so users do not have to do an installation. 
Users wishing to recompile the source will need both a C compiler and either GNU's C++ Version 1.39 or later, or a C++ compiler based on AT&T 2.0 cfront. TAE Plus comes with InterViews and idraw, two software packages developed by Stanford University and integrated in TAE Plus. TAE Plus was developed in 1989 and version 5.1 was released in 1991. TAE Plus is currently available on media suitable for eight different machine platforms: 1) DEC VAX computers running VMS 5.3 or higher (TK50 cartridge in VAX BACKUP format), 2) DEC VAXstations running ULTRIX 4.1 or later (TK50 cartridge in UNIX tar format), 3) DEC RISC workstations running ULTRIX 4.1 or later (TK50 cartridge in UNIX tar format), 4) HP9000 Series 300/400 computers running HP-UX 8.0 (.25 inch HP-preformatted tape cartridge in UNIX tar format), 5) HP9000 Series 700 computers running HP-UX 8.05 (HP 4mm DDS DAT tape cartridge in UNIX tar format), 6) Sun3 series computers running SunOS 4.1.1 (.25 inch tape cartridge in UNIX tar format), 7) Sun4 (SPARC) series computers running SunOS 4.1.1 (.25 inch tape cartridge in UNIX tar format), and 8) SGI Indigo computers running IRIX 4.0.1 and IRIX/Motif 1.0.1 (.25 inch IRIS tape cartridge in UNIX tar format). An optional Motif Object Code License is available for either Sun version. TAE is a trademark of the National Aeronautics and Space Administration. X Window System is a trademark of the Massachusetts Institute of Technology. Motif is a trademark of the Open Software Foundation. DEC, VAX, VMS, TK50 and ULTRIX are trademarks of Digital Equipment Corporation. HP9000 and HP-UX are trademarks of Hewlett-Packard Co. Sun3, Sun4, SunOS, and SPARC are trademarks of Sun Microsystems, Inc. SGI and IRIS are registered trademarks of Silicon Graphics, Inc.
TAE+ 5.1 - TRANSPORTABLE APPLICATIONS ENVIRONMENT PLUS, VERSION 5.1 (SUN3 VERSION)
NASA Technical Reports Server (NTRS)
TAE SUPPORT OFFICE
1994-01-01
TAE (Transportable Applications Environment) Plus is an integrated, portable environment for developing and running interactive window, text, and graphical object-based application systems. The program allows both programmers and non-programmers to easily construct their own custom application interface and to move that interface and application to different machine environments. TAE Plus makes both the application and the machine environment transparent, with noticeable improvements in the learning curve. The main components of TAE Plus are as follows: (1) the WorkBench, a What You See Is What You Get (WYSIWYG) tool for the design and layout of a user interface; (2) the Window Programming Tools Package (WPT), a set of callable subroutines that control an application's user interface; and (3) TAE Command Language (TCL), an easy-to-learn command language that provides an easy way to develop an executable application prototype with a run-time interpreted language. The WorkBench tool allows the application developer to interactively construct the layout of an application's display screen by manipulating a set of interaction objects including input items such as buttons, icons, and scrolling text lists. Data-driven graphical objects such as dials, thermometers, and strip charts are also included. TAE Plus updates the strip chart as the data values change. The WorkBench user specifies the windows and interaction objects that will make up the user interface, then specifies the sequence of the user interface dialogue. The description of the designed user interface is then saved into resource files. For those who desire to develop the designed user interface into an operational application, the WorkBench tool also generates source code (C, Ada, and TCL) which fully controls the application's user interface through function calls to the WPTs. The WPTs are the runtime services used by application programs to display and control the user interfaces. Since the WPTs access the workbench-generated resource files during each execution, details such as color, font, location, and object type remain independent from the application code, allowing changes to the user interface without recompiling and relinking. The Silicon Graphics version of TAE Plus now has a font caching scheme and a color caching scheme to make color allocation more efficient. In addition to WPTs, TAE Plus can control interaction of objects from the interpreted TAE Command Language. TCL provides an extremely powerful means for the more experienced developer to quickly prototype an application's use of TAE Plus interaction objects and add programming logic without the overhead of compiling or linking. TAE Plus requires MIT's X Window System, Version 11 Release 4, and the Open Software Foundation's Motif Toolkit 1.1 or 1.1.1. The Workbench and WPTs are written in C++ and the remaining code is written in C. TAE Plus is available by license for an unlimited time period. The licensed program product includes the TAE Plus source code and one set of supporting documentation. Additional documentation may be purchased separately at the price indicated below. The amount of disk space required to load the TAE Plus tar format tape is between 35Mb and 67Mb depending on the machine version. The recommended minimum memory is 12Mb. Each TAE Plus platform delivery tape includes pre-built libraries and executable binary code for that particular machine, as well as source code, so users do not have to do an installation. 
Users wishing to recompile the source will need both a C compiler and either GNU's C++ Version 1.39 or later, or a C++ compiler based on AT&T 2.0 cfront. TAE Plus comes with InterViews and idraw, two software packages developed by Stanford University and integrated in TAE Plus. TAE Plus was developed in 1989 and version 5.1 was released in 1991. TAE Plus is currently available on media suitable for eight different machine platforms: 1) DEC VAX computers running VMS 5.3 or higher (TK50 cartridge in VAX BACKUP format), 2) DEC VAXstations running ULTRIX 4.1 or later (TK50 cartridge in UNIX tar format), 3) DEC RISC workstations running ULTRIX 4.1 or later (TK50 cartridge in UNIX tar format), 4) HP9000 Series 300/400 computers running HP-UX 8.0 (.25 inch HP-preformatted tape cartridge in UNIX tar format), 5) HP9000 Series 700 computers running HP-UX 8.05 (HP 4mm DDS DAT tape cartridge in UNIX tar format), 6) Sun3 series computers running SunOS 4.1.1 (.25 inch tape cartridge in UNIX tar format), 7) Sun4 (SPARC) series computers running SunOS 4.1.1 (.25 inch tape cartridge in UNIX tar format), and 8) SGI Indigo computers running IRIX 4.0.1 and IRIX/Motif 1.0.1 (.25 inch IRIS tape cartridge in UNIX tar format). An optional Motif Object Code License is available for either Sun version. TAE is a trademark of the National Aeronautics and Space Administration. X Window System is a trademark of the Massachusetts Institute of Technology. Motif is a trademark of the Open Software Foundation. DEC, VAX, VMS, TK50 and ULTRIX are trademarks of Digital Equipment Corporation. HP9000 and HP-UX are trademarks of Hewlett-Packard Co. Sun3, Sun4, SunOS, and SPARC are trademarks of Sun Microsystems, Inc. SGI and IRIS are registered trademarks of Silicon Graphics, Inc.
TAE+ 5.1 - TRANSPORTABLE APPLICATIONS ENVIRONMENT PLUS, VERSION 5.1 (SUN3 VERSION WITH MOTIF)
NASA Technical Reports Server (NTRS)
TAE SUPPORT OFFICE
1994-01-01
TAE (Transportable Applications Environment) Plus is an integrated, portable environment for developing and running interactive window, text, and graphical object-based application systems. The program allows both programmers and non-programmers to easily construct their own custom application interface and to move that interface and application to different machine environments. TAE Plus makes both the application and the machine environment transparent, with noticeable improvements in the learning curve. The main components of TAE Plus are as follows: (1) the WorkBench, a What You See Is What You Get (WYSIWYG) tool for the design and layout of a user interface; (2) the Window Programming Tools Package (WPT), a set of callable subroutines that control an application's user interface; and (3) TAE Command Language (TCL), an easy-to-learn command language that provides an easy way to develop an executable application prototype with a run-time interpreted language. The WorkBench tool allows the application developer to interactively construct the layout of an application's display screen by manipulating a set of interaction objects including input items such as buttons, icons, and scrolling text lists. Data-driven graphical objects such as dials, thermometers, and strip charts are also included. TAE Plus updates the strip chart as the data values change. The WorkBench user specifies the windows and interaction objects that will make up the user interface, then specifies the sequence of the user interface dialogue. The description of the designed user interface is then saved into resource files. For those who desire to develop the designed user interface into an operational application, the WorkBench tool also generates source code (C, Ada, and TCL) which fully controls the application's user interface through function calls to the WPTs. The WPTs are the runtime services used by application programs to display and control the user interfaces. Since the WPTs access the workbench-generated resource files during each execution, details such as color, font, location, and object type remain independent from the application code, allowing changes to the user interface without recompiling and relinking. The Silicon Graphics version of TAE Plus now has a font caching scheme and a color caching scheme to make color allocation more efficient. In addition to WPTs, TAE Plus can control interaction of objects from the interpreted TAE Command Language. TCL provides an extremely powerful means for the more experienced developer to quickly prototype an application's use of TAE Plus interaction objects and add programming logic without the overhead of compiling or linking. TAE Plus requires MIT's X Window System, Version 11 Release 4, and the Open Software Foundation's Motif Toolkit 1.1 or 1.1.1. The Workbench and WPTs are written in C++ and the remaining code is written in C. TAE Plus is available by license for an unlimited time period. The licensed program product includes the TAE Plus source code and one set of supporting documentation. Additional documentation may be purchased separately at the price indicated below. The amount of disk space required to load the TAE Plus tar format tape is between 35Mb and 67Mb depending on the machine version. The recommended minimum memory is 12Mb. Each TAE Plus platform delivery tape includes pre-built libraries and executable binary code for that particular machine, as well as source code, so users do not have to do an installation. 
Users wishing to recompile the source will need both a C compiler and either GNU's C++ Version 1.39 or later, or a C++ compiler based on AT&T 2.0 cfront. TAE Plus comes with InterViews and idraw, two software packages developed by Stanford University and integrated in TAE Plus. TAE Plus was developed in 1989 and version 5.1 was released in 1991. TAE Plus is currently available on media suitable for eight different machine platforms: 1) DEC VAX computers running VMS 5.3 or higher (TK50 cartridge in VAX BACKUP format), 2) DEC VAXstations running ULTRIX 4.1 or later (TK50 cartridge in UNIX tar format), 3) DEC RISC workstations running ULTRIX 4.1 or later (TK50 cartridge in UNIX tar format), 4) HP9000 Series 300/400 computers running HP-UX 8.0 (.25 inch HP-preformatted tape cartridge in UNIX tar format), 5) HP9000 Series 700 computers running HP-UX 8.05 (HP 4mm DDS DAT tape cartridge in UNIX tar format), 6) Sun3 series computers running SunOS 4.1.1 (.25 inch tape cartridge in UNIX tar format), 7) Sun4 (SPARC) series computers running SunOS 4.1.1 (.25 inch tape cartridge in UNIX tar format), and 8) SGI Indigo computers running IRIX 4.0.1 and IRIX/Motif 1.0.1 (.25 inch IRIS tape cartridge in UNIX tar format). An optional Motif Object Code License is available for either Sun version. TAE is a trademark of the National Aeronautics and Space Administration. X Window System is a trademark of the Massachusetts Institute of Technology. Motif is a trademark of the Open Software Foundation. DEC, VAX, VMS, TK50 and ULTRIX are trademarks of Digital Equipment Corporation. HP9000 and HP-UX are trademarks of Hewlett-Packard Co. Sun3, Sun4, SunOS, and SPARC are trademarks of Sun Microsystems, Inc. SGI and IRIS are registered trademarks of Silicon Graphics, Inc.
Near infrared spectroscopy based brain-computer interface
NASA Astrophysics Data System (ADS)
Ranganatha, Sitaram; Hoshi, Yoko; Guan, Cuntai
2005-04-01
A brain-computer interface (BCI) provides users with an output channel other than the brain's normal output pathways. BCI has recently received much attention as an alternative mode of communication and control for people with disabilities, such as patients suffering from Amyotrophic Lateral Sclerosis (ALS) or locked-in syndrome. BCI may also find applications in military, education, and entertainment settings. Most existing BCI systems that rely on the brain's electrical activity use scalp EEG signals. The scalp EEG is an inherently noisy and non-linear signal and is detrimentally affected by artifacts such as the EOG, EMG, and ECG. EEG is also cumbersome to use in practice because of the need to apply conductive gel and the need for the subject to remain immobile. There is an urgent need for a more accessible interface that uses a more direct measure of cognitive function to control an output device. The optical response measured by Near Infrared Spectroscopy (NIRS), which denotes brain activation, can be used as an alternative to electrical signals, with the intention of developing a more practical and user-friendly BCI. In this paper, a new method of brain-computer interface (BCI) based on NIRS is proposed. Preliminary results of our experiments towards developing this system are reported.
Kranzfelder, Michael; Schneider, Armin; Fiolka, Adam; Koller, Sebastian; Wilhelm, Dirk; Reiser, Silvano; Meining, Alexander; Feussner, Hubertus
2015-08-01
To investigate why natural orifice translumenal endoscopic surgery (NOTES) has not yet become widely accepted and to determine whether the main reason is still the lack of appropriate platforms due to the deficiency of applicable interfaces. To assess expectations of a suitable interface design, we performed a survey on human-machine interfaces for NOTES mechatronic support systems among surgeons, gastroenterologists, and medical engineers. Of 120 distributed questionnaires, each consisting of 14 distinct questions, 100 (83%) were eligible for analysis. A mechatronic platform for NOTES was considered "important" by 71% of surgeons, 83% of gastroenterologists, and 56% of medical engineers. "Intuitivity" and "simple to use" were the most favored aspects (33% to 51%). Haptic feedback was considered "important" by 70% of participants. In all, 53% of surgeons, 50% of gastroenterologists, and 33% of medical engineers already had experience with NOTES platforms or other surgical robots; however, current interfaces met their expectations in just over 50% of cases. Whereas surgeons did not favor a particular working posture, gastroenterologists and medical engineers preferred a sitting position. Three-dimensional visualization was generally considered "nice to have" (67% to 72%); however, for 26% of surgeons, 17% of gastroenterologists, and 7% of medical engineers it did not matter (P = 0.018). Requests and expectations of human-machine interfaces for NOTES seem to be generally similar for surgeons, gastroenterologists, and medical engineers. Consensus exists on the importance of developing interfaces that are both intuitive and simple to use, are similar to preexisting familiar instruments, and exceed currently available systems. © The Author(s) 2014.
Brain-Machine Interface Enables Bimanual Arm Movements in Monkeys
Ifft, Peter J.; Shokur, Solaiman; Li, Zheng; Lebedev, Mikhail A.; Nicolelis, Miguel A. L.
2014-01-01
Brain-machine interfaces (BMIs) are artificial systems that aim to restore sensation and movement to severely paralyzed patients. However, previous BMIs enabled only single arm functionality, and control of bimanual movements was a major challenge. Here, we developed and tested a bimanual BMI that enabled rhesus monkeys to control two avatar arms simultaneously. The bimanual BMI was based on the extracellular activity of 374–497 neurons recorded from several frontal and parietal cortical areas of both cerebral hemispheres. Cortical activity was transformed into movements of the two arms with a decoding algorithm called a 5th order unscented Kalman filter (UKF). The UKF is well-suited for BMI decoding because it accounts for both characteristics of reaching movements and their representation by cortical neurons. The UKF was trained either during a manual task performed with two joysticks or by having the monkeys passively observe the movements of avatar arms. Most cortical neurons changed their modulation patterns when both arms were engaged simultaneously. Representing the two arms jointly in a single UKF decoder resulted in improved decoding performance compared with using separate decoders for each arm. As the animals’ performance in bimanual BMI control improved over time, we observed widespread plasticity in frontal and parietal cortical areas. Neuronal representation of the avatar and reach targets was enhanced with learning, whereas pairwise correlations between neurons initially increased and then decreased. These results suggest that cortical networks may assimilate the two avatar arms through BMI control. PMID:24197735
A brain-machine interface enables bimanual arm movements in monkeys.
Ifft, Peter J; Shokur, Solaiman; Li, Zheng; Lebedev, Mikhail A; Nicolelis, Miguel A L
2013-11-06
Brain-machine interfaces (BMIs) are artificial systems that aim to restore sensation and movement to paralyzed patients. So far, BMIs have enabled only one arm to be moved at a time. Control of bimanual arm movements remains a major challenge. We have developed and tested a bimanual BMI that enables rhesus monkeys to control two avatar arms simultaneously. The bimanual BMI was based on the extracellular activity of 374 to 497 neurons recorded from several frontal and parietal cortical areas of both cerebral hemispheres. Cortical activity was transformed into movements of the two arms with a decoding algorithm called a fifth-order unscented Kalman filter (UKF). The UKF was trained either during a manual task performed with two joysticks or by having the monkeys passively observe the movements of avatar arms. Most cortical neurons changed their modulation patterns when both arms were engaged simultaneously. Representing the two arms jointly in a single UKF decoder resulted in improved decoding performance compared with using separate decoders for each arm. As the animals' performance in bimanual BMI control improved over time, we observed widespread plasticity in frontal and parietal cortical areas. Neuronal representation of the avatar and reach targets was enhanced with learning, whereas pairwise correlations between neurons initially increased and then decreased. These results suggest that cortical networks may assimilate the two avatar arms through BMI control. These findings should help in the design of more sophisticated BMIs capable of enabling bimanual motor control in human patients.
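The decoder in the study above is a fifth-order unscented Kalman filter; as a much simpler illustration of the underlying decode-from-firing-rates idea, the following is a minimal linear Kalman-filter sketch in Python that estimates a 2D velocity from simulated neural rates. All tuning parameters and data are hypothetical, and the sketch omits the nonlinear observation model and movement history that make the paper's UKF fifth-order.

```python
import numpy as np

# Minimal linear Kalman-filter decoder: state = [vx, vy] of one arm,
# observations = firing rates of n_neurons with an assumed linear tuning model.
rng = np.random.default_rng(0)
n_neurons, T = 50, 200

A = np.eye(2)                        # random-walk velocity dynamics
W = 0.01 * np.eye(2)                 # process noise covariance
H = rng.normal(size=(n_neurons, 2))  # hypothetical tuning (observation) matrix
Q = 0.5 * np.eye(n_neurons)          # observation noise covariance

# Simulate a smooth velocity trajectory and noisy rates (stand-ins for real data)
true_v = np.cumsum(rng.normal(scale=0.05, size=(T, 2)), axis=0)
rates = true_v @ H.T + rng.normal(scale=0.5, size=(T, n_neurons))

x, P = np.zeros(2), np.eye(2)        # initial state estimate and covariance
decoded = np.zeros((T, 2))
for t in range(T):
    # Predict
    x, P = A @ x, A @ P @ A.T + W
    # Update with this time bin's firing rates
    S = H @ P @ H.T + Q
    K = P @ H.T @ np.linalg.solve(S, np.eye(n_neurons))
    x = x + K @ (rates[t] - H @ x)
    P = (np.eye(2) - K @ H) @ P
    decoded[t] = x

print("correlation (vx):", np.corrcoef(true_v[:, 0], decoded[:, 0])[0, 1])
```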
Ultimate computing. Biomolecular consciousness and nano Technology
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hameroff, S.R.
1987-01-01
The book advances the premise that the cytoskeleton is the cell's nervous system, the biological controller/computer. If indeed cytoskeletal dynamics at the nanoscale (billionth of a meter, billionth of a second) are the texture of intracellular information processing, emerging "NanoTechnologies" (scanning tunneling microscopy, Feynman machines, von Neumann replicators, etc.) should enable direct monitoring, decoding and interfacing between biological and technological information devices. This in turn could result in important biomedical applications and perhaps a merger of mind and machine: Ultimate Computing.
Distinct neural patterns enable grasp types decoding in monkey dorsal premotor cortex.
Hao, Yaoyao; Zhang, Qiaosheng; Controzzi, Marco; Cipriani, Christian; Li, Yue; Li, Juncheng; Zhang, Shaomin; Wang, Yiwen; Chen, Weidong; Chiara Carrozza, Maria; Zheng, Xiaoxiang
2014-12-01
Recent studies have shown that dorsal premotor cortex (PMd), a cortical area in the dorsomedial grasp pathway, is involved in grasp movements. However, the neural ensemble firing properties of PMd during grasp movements, and the extent to which they can be used for grasp decoding, are still unclear. To address these issues, we used multielectrode arrays to record both spike and local field potential (LFP) signals in PMd in macaque monkeys performing reaching and grasping of one of four differently shaped objects. Single and population neuronal activity showed distinct patterns during execution of different grip types. Cluster analysis of neural ensemble signals indicated that the grasp-related patterns emerged soon (200-300 ms) after the go cue signal and faded away during the hold period. The timing and duration of the patterns varied depending on the behavior of the individual monkeys. Application of a support vector machine model to the stable activity patterns revealed classification accuracies of 94% and 89% for the two monkeys, indicating a robust, decodable grasp pattern encoded in the PMd. Grasp decoding using LFPs, especially the high-frequency bands, also produced high decoding accuracies. This study is the first to characterize the neuronal population encoding of grasp over the time course of the grasp movement. We demonstrate high grasp decoding performance in PMd. These findings, combined with previous evidence for reach-related modulation, suggest that PMd may play an important role in generation and maintenance of grasp action and may be a suitable locus for brain-machine interface applications.
The RACE (Research and Development in Advanced Technologies for Europe) Program: A 1989 Update
1989-12-15
The report covers High Definition TV (HDTV) experimental usage and RACE projects 1081 and 1082: project 1081 addresses the Broadband User Network Interface (BUNI), and project 1082 develops man/machine interfaces that are consistent across a wide range of applications, provides a traffic analyzer and generator, supplies usage reference models for the different user types, and defines IBC quality-of-service requirements by usage.
Computer interface for mechanical arm
NASA Technical Reports Server (NTRS)
Derocher, W. L.; Zermuehlen, R. O.
1978-01-01
Man/machine interface commands a computer-controlled mechanical arm. The remotely controlled arm has six degrees of freedom and is operated through a "supervisory-control" mode, in which all motions of the arm follow a set of preprogrammed sequences. For simplicity, only a few prescribed commands are required to accomplish an entire operation. Applications include operating the computer-controlled arm to handle radioactive or explosive materials, or commanding the arm to perform functions in hostile environments. A modified version using displays may be applied in medicine.
1990-03-01
decided to have three kinds of sessions: invited-paper sessions, panel discussions, and poster sessions. The invited papers were divided into papers...soon followed. Applications in medicine, involving exploration and operation within the human body, are now receiving increased attention. Early...attention toward issues that may be important for the design of auditory interfaces. The importance of appropriate auditory inputs to observers with normal
Wavelet multiresolution complex network for decoding brain fatigued behavior from P300 signals
NASA Astrophysics Data System (ADS)
Gao, Zhong-Ke; Wang, Zi-Bo; Yang, Yu-Xuan; Li, Shan; Dang, Wei-Dong; Mao, Xiao-Qian
2018-09-01
A brain-computer interface (BCI) enables users to interact with the environment without relying on the brain's normal output pathways of peripheral nerves and muscles. P300-based BCI systems have been extensively used to achieve human-machine interaction. However, the appearance of fatigue symptoms during operation leads to a decline in P300 classification accuracy. Characterizing the brain's cognitive processes under normal and fatigued conditions is therefore a problem of vital importance in brain science. In this paper we propose a novel wavelet-decomposition-based complex network method to efficiently analyze P300 signals recorded in an image stimulus test based on the classical 'oddball' paradigm. Initially, multichannel EEG signals are decomposed into wavelet coefficient series. We then construct a complex network by treating electrodes as nodes and determining the connections according to the 2-norm distances between wavelet coefficient series. Analysis of the topological structure and statistical indices indicates that the properties of the brain network differ significantly between the normal and fatigued states. More specifically, the brain network reconfiguration in response to the cognitive task under fatigue is reflected in enhanced small-worldness.
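A rough sketch of that pipeline is given below, assuming hypothetical multichannel EEG data: decompose each channel into wavelet coefficients, use 2-norm distances between coefficient series as edge weights, threshold into a binary network, and inspect a small-world-related property. The library choices (PyWavelets, NetworkX), the 'db4' wavelet, and the threshold are illustrative assumptions, not the authors' exact settings.

```python
import numpy as np
import pywt                      # PyWavelets
import networkx as nx

rng = np.random.default_rng(1)
n_channels, n_samples = 16, 1024
eeg = rng.normal(size=(n_channels, n_samples))   # stand-in for P300-epoch EEG

# 1) Wavelet decomposition of each channel into one coefficient vector
def wavelet_coeffs(sig, wavelet="db4", level=4):
    return np.concatenate(pywt.wavedec(sig, wavelet, level=level))

coeffs = np.array([wavelet_coeffs(ch) for ch in eeg])

# 2) Pairwise 2-norm distances between the channels' coefficient series
dist = np.linalg.norm(coeffs[:, None, :] - coeffs[None, :, :], axis=-1)

# 3) Threshold distances into a binary adjacency matrix (smaller distance = edge)
threshold = np.percentile(dist[np.triu_indices(n_channels, k=1)], 30)
adj = (dist <= threshold).astype(int)
np.fill_diagonal(adj, 0)

# 4) Build the network and compute a topological index
G = nx.from_numpy_array(adj)
print("mean clustering coefficient:", nx.average_clustering(G))
```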
Halder, Sebastian; Bensch, Michael; Mellinger, Jürgen; Bogdan, Martin; Kübler, Andrea; Birbaumer, Niels; Rosenstiel, Wolfgang
2007-01-01
We propose a combination of blind source separation (BSS) and independent component analysis (ICA) (signal decomposition into artifacts and nonartifacts) with support vector machines (SVMs) (automatic classification) that are designed for online usage. In order to select a suitable BSS/ICA method, three ICA algorithms (JADE, Infomax, and FastICA) and one BSS algorithm (AMUSE) are evaluated to determine their ability to isolate electromyographic (EMG) and electrooculographic (EOG) artifacts into individual components. An implementation of the selected BSS/ICA method with SVMs trained to classify EMG and EOG artifacts, which enables the usage of the method as a filter in measurements with online feedback, is described. This filter is evaluated on three BCI datasets as a proof-of-concept of the method. PMID:18288259
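A compressed sketch of such a pipeline is shown below, using scikit-learn's FastICA and an SVM: decompose the EEG into independent components, describe each component with a couple of simple features, classify components as artifact or brain activity, and reconstruct the cleaned signal. The features, labels, and data here are placeholders; the study itself evaluates several BSS/ICA algorithms (JADE, Infomax, FastICA, AMUSE) and trains the SVM on labeled EMG/EOG components.

```python
import numpy as np
from scipy.stats import kurtosis
from sklearn.decomposition import FastICA
from sklearn.svm import SVC

rng = np.random.default_rng(2)
n_samples, n_channels = 2000, 8
X = rng.normal(size=(n_samples, n_channels))   # stand-in for raw EEG (samples x channels)

# 1) Blind source separation into independent components
ica = FastICA(n_components=n_channels, random_state=0)
S = ica.fit_transform(X)                        # sources: samples x components

# 2) Simple per-component features (placeholders for the paper's feature set)
feats = np.column_stack([kurtosis(S, axis=0), S.var(axis=0)])

# 3) SVM trained to flag artifact components (labels here are made up)
labels = np.array([1, 0, 1, 0, 0, 0, 1, 0])     # 1 = artifact, 0 = brain activity
clf = SVC(kernel="rbf").fit(feats, labels)
is_artifact = clf.predict(feats).astype(bool)

# 4) Zero the artifact components and reconstruct the cleaned EEG
S_clean = S.copy()
S_clean[:, is_artifact] = 0.0
X_clean = S_clean @ ica.mixing_.T + ica.mean_
print("components removed:", int(is_artifact.sum()))
```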
Decoding semantic information from human electrocorticographic (ECoG) signals.
Wang, Wei; Degenhart, Alan D; Sudre, Gustavo P; Pomerleau, Dean A; Tyler-Kabara, Elizabeth C
2011-01-01
This study examined the feasibility of decoding semantic information from human cortical activity. Four human subjects undergoing presurgical brain mapping and seizure foci localization participated in this study. Electrocorticographic (ECoG) signals were recorded while the subjects performed simple language tasks involving semantic information processing, such as a picture naming task where subjects named pictures of objects belonging to different semantic categories. Robust high-gamma band (60-120 Hz) activation was observed at the left inferior frontal gyrus (LIFG) and the posterior portion of the superior temporal gyrus (pSTG) with a temporal sequence corresponding to speech production and perception. Furthermore, Gaussian Naïve Bayes and Support Vector Machine classifiers, two commonly used machine learning algorithms for pattern recognition, were able to predict the semantic category of an object using cortical activity captured by ECoG electrodes covering the frontal, temporal and parietal cortices. These findings have implications for both basic neuroscience research and development of semantic-based brain-computer interface systems (BCI) that can help individuals with severe motor or communication disorders to express their intention and thoughts.
Yuan, Yaxia; Zheng, Fang; Zhan, Chang-Guo
2018-03-21
Blood-brain barrier (BBB) permeability of a compound determines whether the compound can effectively enter the brain. It is an essential property which must be accounted for in drug discovery with a target in the brain. Several computational methods have been used to predict the BBB permeability. In particular, support vector machine (SVM), which is a kernel-based machine learning method, has been used popularly in this field. For SVM training and prediction, the compounds are characterized by molecular descriptors. Some SVM models were based on the use of molecular property-based descriptors (including 1D, 2D, and 3D descriptors) or fragment-based descriptors (known as the fingerprints of a molecule). The selection of descriptors is critical for the performance of a SVM model. In this study, we aimed to develop a generally applicable new SVM model by combining all of the features of the molecular property-based descriptors and fingerprints to improve the accuracy for the BBB permeability prediction. The results indicate that our SVM model has improved accuracy compared to the currently available models of the BBB permeability prediction.
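A bare-bones sketch of this idea follows, combining a few molecular property descriptors with a Morgan fingerprint as the SVM feature vector. The SMILES strings, labels, and descriptor choices are illustrative assumptions only; the study uses a much larger descriptor/fingerprint set and training dataset.

```python
import numpy as np
from rdkit import Chem
from rdkit.Chem import Descriptors, AllChem
from sklearn.svm import SVC

# Hypothetical training molecules with BBB permeability labels (1 = permeable)
smiles = ["CCO", "CC(=O)Oc1ccccc1C(=O)O", "CN1CCC[C@H]1c1cccnc1", "C1CCCCC1"]
labels = np.array([1, 0, 1, 1])

def featurize(smi, n_bits=256):
    """Combine property-based descriptors with a Morgan fingerprint."""
    mol = Chem.MolFromSmiles(smi)
    props = [Descriptors.MolWt(mol), Descriptors.MolLogP(mol), Descriptors.TPSA(mol)]
    fp = AllChem.GetMorganFingerprintAsBitVect(mol, radius=2, nBits=n_bits)
    return np.concatenate([props, np.array(list(fp))])

X = np.array([featurize(s) for s in smiles])
clf = SVC(kernel="rbf", C=1.0).fit(X, labels)
print(clf.predict([featurize("CCN(CC)CC")]))   # prediction for a new compound
```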
BIRD: A general interface for sparse distributed memory simulators
NASA Technical Reports Server (NTRS)
Rogers, David
1990-01-01
Kanerva's sparse distributed memory (SDM) has now been implemented for at least six different computers, including SUN3 workstations, the Apple Macintosh, and the Connection Machine. A common interface for input of commands would both aid testing of programs on a broad range of computer architectures and assist users in transferring results from research environments to applications. A common interface also allows secondary programs to generate command sequences for a sparse distributed memory, which may then be executed on the appropriate hardware. The BIRD program is an attempt to create such an interface. Simplifying access to different simulators should assist developers in finding appropriate uses for SDM.
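For readers unfamiliar with Kanerva's model, a toy sparse distributed memory is sketched below: a fixed set of random hard-location addresses, writes that increment counters at all locations within a Hamming radius, and reads that sum counters over the activated locations and threshold. The sizes and radius are arbitrary toy values and are unrelated to BIRD's actual command syntax.

```python
import numpy as np

rng = np.random.default_rng(3)
N, M, radius = 256, 1000, 111      # address length, number of hard locations, Hamming radius

hard_addresses = rng.integers(0, 2, size=(M, N))   # fixed random hard locations
counters = np.zeros((M, N), dtype=int)             # one counter vector per location

def activated(addr):
    """Indices of hard locations within the Hamming radius of addr."""
    return np.where((hard_addresses != addr).sum(axis=1) <= radius)[0]

def write(addr, data):
    """Store a binary word by incrementing/decrementing counters (+1 for 1, -1 for 0)."""
    counters[activated(addr)] += 2 * data - 1

def read(addr):
    """Recall by summing counters over activated locations and thresholding at zero."""
    return (counters[activated(addr)].sum(axis=0) > 0).astype(int)

word = rng.integers(0, 2, size=N)
write(word, word)                      # autoassociative store
noisy = word.copy()
noisy[:20] ^= 1                        # corrupt the cue by flipping 20 bits
print("bits recovered correctly:", (read(noisy) == word).sum(), "/", N)
```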
A fresh look at functional link neural network for motor imagery-based brain-computer interface.
Hettiarachchi, Imali T; Babaei, Toktam; Nguyen, Thanh; Lim, Chee P; Nahavandi, Saeid
2018-05-04
Artificial neural networks (ANNs) are among the most widely used classifiers in brain-computer interface (BCI) systems based on noninvasive electroencephalography (EEG) signals. Among the different ANN architectures, the most commonly applied BCI classifier is the multilayer perceptron (MLP). When appropriately designed, with an optimal number of neuron layers and number of neurons per layer, an ANN can act as a universal approximator. However, due to the low signal-to-noise ratio of EEG data, overtraining can become an inherent issue, causing these universal approximators to fail in real-time applications. In this study we introduce a higher-order neural network, namely the functional link neural network (FLNN), as a classifier for motor imagery (MI)-based BCI systems, to remedy the drawbacks of the MLP. We compare the proposed method with competing classifiers such as linear decomposition analysis, naïve Bayes, k-nearest neighbours, support vector machine, and three MLP architectures. Two multi-class benchmark datasets from the BCI competitions are used. The common spatial pattern algorithm is utilized for feature extraction to build classification models. FLNN reports the highest average Kappa value over multiple subjects for both BCI competition datasets, under similarly preprocessed data and extracted features. Further, statistical comparisons over multiple subjects show that the proposed FLNN classification method yields the best performance among the competing classifiers. Findings from this study imply that the proposed method, which has lower computational complexity than the MLP, can be implemented effectively in practical MI-based BCI systems. Copyright © 2018 Elsevier B.V. All rights reserved.
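To make the classifier idea concrete, a minimal functional link network sketch is given below: each input feature is expanded with trigonometric basis functions and a single linear output layer is trained on the expanded features (logistic regression stands in for the paper's training scheme). The data are random placeholders, and common spatial pattern feature extraction is omitted.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def functional_expansion(X, n_harmonics=2):
    """Trigonometric functional link expansion: [x, sin(k*pi*x), cos(k*pi*x), ...]."""
    parts = [X]
    for k in range(1, n_harmonics + 1):
        parts += [np.sin(k * np.pi * X), np.cos(k * np.pi * X)]
    return np.hstack(parts)

rng = np.random.default_rng(5)
X_train = rng.normal(size=(60, 6))        # placeholder for CSP features (trials x features)
y_train = rng.integers(0, 2, size=60)     # placeholder motor-imagery labels
X_test = rng.normal(size=(20, 6))

# Single trainable output layer on the expanded features (no hidden layer)
flnn = LogisticRegression(max_iter=1000).fit(functional_expansion(X_train), y_train)
print(flnn.predict(functional_expansion(X_test)))
```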
ODISEES: A New Paradigm in Data Access
NASA Astrophysics Data System (ADS)
Huffer, E.; Little, M. M.; Kusterer, J.
2013-12-01
As part of its ongoing efforts to improve access to data, the Atmospheric Science Data Center has developed a high-precision Earth Science domain ontology (the 'ES Ontology') implemented in a graph database ('the Semantic Metadata Repository') that is used to store detailed, semantically-enhanced, parameter-level metadata for ASDC data products. The ES Ontology provides the semantic infrastructure needed to drive the ASDC's Ontology-Driven Interactive Search Environment for Earth Science ('ODISEES'), a data discovery and access tool, and will support additional data services such as analytics and visualization. The ES ontology is designed on the premise that naming conventions alone are not adequate to provide the information needed by prospective data consumers to assess the suitability of a given dataset for their research requirements; nor are current metadata conventions adequate to support seamless machine-to-machine interactions between file servers and end-user applications. Data consumers need information not only about what two data elements have in common, but also about how they are different. End-user applications need consistent, detailed metadata to support real-time data interoperability. The ES ontology is a highly precise, bottom-up, queriable model of the Earth Science domain that focuses on critical details about the measurable phenomena, instrument techniques, data processing methods, and data file structures. Earth Science parameters are described in detail in the ES Ontology and mapped to the corresponding variables that occur in ASDC datasets. Variables are in turn mapped to well-annotated representations of the datasets that they occur in, the instrument(s) used to create them, the instrument platforms, the processing methods, etc., creating a linked-data structure that allows both human and machine users to access a wealth of information critical to understanding and manipulating the data. The mappings are recorded in the Semantic Metadata Repository as RDF-triples. An off-the-shelf Ontology Development Environment and a custom Metadata Conversion Tool comprise a human-machine/machine-machine hybrid tool that partially automates the creation of metadata as RDF-triples by interfacing with existing metadata repositories and providing a user interface that solicits input from a human user, when needed. RDF-triples are pushed to the Ontology Development Environment, where a reasoning engine executes a series of inference rules whose antecedent conditions can be satisfied by the initial set of RDF-triples, thereby generating the additional detailed metadata that is missing in existing repositories. A SPARQL Endpoint, a web-based query service and a Graphical User Interface allow prospective data consumers - even those with no familiarity with NASA data products - to search the metadata repository to find and order data products that meet their exact specifications. A web-based API will provide an interface for machine-to-machine transactions.
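A toy illustration of this metadata pattern, using rdflib, is shown below: record a parameter-to-variable-to-dataset mapping as RDF triples and retrieve matching datasets with a SPARQL query. The namespace and property names are invented for the example and are not ODISEES's actual vocabulary.

```python
from rdflib import Graph, Literal, Namespace

EX = Namespace("http://example.org/odisees-sketch#")   # hypothetical vocabulary
g = Graph()

# Map a measurable parameter -> a variable -> the dataset the variable occurs in
g.add((EX.AerosolOpticalDepth, EX.hasVariable, EX.AOD_550nm))
g.add((EX.AOD_550nm, EX.occursIn, EX.Dataset_A))
g.add((EX.Dataset_A, EX.instrument, Literal("hypothetical radiometer")))

# SPARQL query: which datasets contain a variable for a given parameter?
q = """
SELECT ?dataset ?instrument WHERE {
    ex:AerosolOpticalDepth ex:hasVariable ?var .
    ?var ex:occursIn ?dataset .
    ?dataset ex:instrument ?instrument .
}
"""
for dataset, instrument in g.query(q, initNs={"ex": EX}):
    print(dataset, instrument)
```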
Creating Web-Based Scientific Applications Using Java Servlets
NASA Technical Reports Server (NTRS)
Palmer, Grant; Arnold, James O. (Technical Monitor)
2001-01-01
There are many advantages to developing web-based scientific applications. Any number of people can access the application concurrently, the application can be accessed from a remote location, and the application becomes essentially platform-independent because it can be run from any machine that has internet access and a web browser. Maintenance and upgrades are also simplified, since only one copy of the application exists in a centralized location. This paper details the creation of web-based applications using Java servlets. Java is a powerful, versatile programming language that is well suited to developing web-based programs. A Java servlet provides the interface between the central server and the remote client machines: the servlet accepts input data from the client, runs the application on the server, and sends the output back to the client machine. The type of servlet that supports the HTTP protocol is discussed in depth. Among the topics the paper discusses are how to write an HTTP servlet, how the servlet can run applications written in Java and other languages, and how to set up a Java web server. The entire process is demonstrated by building a web-based application to compute stagnation point heat transfer.
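The paper's examples are Java servlets; purely as an illustration of the same accept-input, run-on-server, return-output pattern, here is a minimal sketch using Python's standard-library HTTP server. The endpoint behavior and computation are made up and do not reflect the paper's application.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class ComputeHandler(BaseHTTPRequestHandler):
    """Accept input from the client, run a calculation on the server, return the result."""
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        params = json.loads(self.rfile.read(length) or b"{}")
        # Stand-in for the scientific computation run server-side
        result = {"sum": sum(params.get("values", []))}
        body = json.dumps(result).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("", 8000), ComputeHandler).serve_forever()
```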
Visual Feedback Dominates the Sense of Agency for Brain-Machine Actions
Evans, Nathan; Gale, Steven; Schurger, Aaron; Blanke, Olaf
2015-01-01
Recent advances in neuroscience and engineering have led to the development of technologies that permit the control of external devices through real-time decoding of brain activity (brain-machine interfaces; BMI). Though the feeling of controlling bodily movements (sense of agency; SOA) has been well studied and a number of well-defined sensorimotor and cognitive mechanisms have been put forth, very little is known about the SOA for BMI-actions. Using an on-line BMI, and verifying that our subjects achieved a reasonable level of control, we sought to describe the SOA for BMI-mediated actions. Our results demonstrate that discrepancies between decoded neural activity and its resultant real-time sensory feedback are associated with a decrease in the SOA, similar to SOA mechanisms proposed for bodily actions. However, if the feedback discrepancy serves to correct a poorly controlled BMI-action, then the SOA can be high and can increase with increasing discrepancy, demonstrating the dominance of visual feedback on the SOA. Taken together, our results suggest that bodily and BMI-actions rely on common mechanisms of sensorimotor integration for agency judgments, but that visual feedback dominates the SOA in the absence of overt bodily movements or proprioceptive feedback, however erroneous the visual feedback may be. PMID:26066840
NASA Astrophysics Data System (ADS)
Riccio, A.; Leotta, F.; Bianchi, L.; Aloise, F.; Zickler, C.; Hoogerwerf, E.-J.; Kübler, A.; Mattia, D.; Cincotti, F.
2011-04-01
Advancing the brain-computer interface (BCI) towards practical applications in technology-based assistive solutions for people with disabilities requires coping with problems of accessibility and usability to increase user acceptance and satisfaction. The main objective of this study was to introduce a usability-oriented approach in the assessment of BCI technology development by focusing on evaluation of the user's subjective workload and satisfaction. The secondary aim was to compare two applications for a P300-based BCI. Eight healthy subjects were asked to use an assistive technology solution that integrates the P300-based BCI with commercially available software under two conditions: the visual stimuli needed to evoke the P300 response were either overlaid onto the application's graphical user interface or presented on a separate screen. The two conditions were compared for effectiveness (level of performance), efficiency (subjective workload measured by means of the NASA-TLX), and user satisfaction. Although no significant difference in usability could be detected between the two conditions, the methodology proved to be an effective tool for highlighting weaknesses in the technical solution.
Towards Zero Training for Brain-Computer Interfacing
Krauledat, Matthias; Tangermann, Michael; Blankertz, Benjamin; Müller, Klaus-Robert
2008-01-01
Electroencephalogram (EEG) signals are highly subject-specific and vary considerably even between recording sessions of the same user within the same experimental paradigm. This challenges stable operation of Brain-Computer Interface (BCI) systems. The classical approach is to train users by neurofeedback to produce fixed stereotypical patterns of brain activity. In the machine learning approach, a widely adopted method for dealing with these variations is to record a so-called calibration measurement at the beginning of each session in order to optimize spatial filters and classifiers specifically for each subject and each day. This adaptation of the system to the individual brain signature of each user removes the need for extensive user training. In this paper we suggest a new method that overcomes the requirement for these time-consuming calibration recordings for long-term BCI users. The method takes advantage of knowledge collected in previous sessions: by a novel technique, prototypical spatial filters are determined which have better generalization properties than single-session filters. In particular, they can be used in follow-up sessions without the need to recalibrate the system. This way the calibration periods can be dramatically shortened or even completely omitted for these 'experienced' BCI users. The feasibility of our novel approach is demonstrated with a series of online BCI experiments. Although performed without any calibration measurement at all, no loss of classification performance was observed. PMID:18698427
A brain-spine interface alleviating gait deficits after spinal cord injury in primates.
Capogrosso, Marco; Milekovic, Tomislav; Borton, David; Wagner, Fabien; Moraud, Eduardo Martin; Mignardot, Jean-Baptiste; Buse, Nicolas; Gandar, Jerome; Barraud, Quentin; Xing, David; Rey, Elodie; Duis, Simone; Jianzhong, Yang; Ko, Wai Kin D; Li, Qin; Detemple, Peter; Denison, Tim; Micera, Silvestro; Bezard, Erwan; Bloch, Jocelyne; Courtine, Grégoire
2016-11-10
Spinal cord injury disrupts the communication between the brain and the spinal circuits that orchestrate movement. To bypass the lesion, brain-computer interfaces have directly linked cortical activity to electrical stimulation of muscles, and have thus restored grasping abilities after hand paralysis. Theoretically, this strategy could also restore control over leg muscle activity for walking. However, replicating the complex sequence of individual muscle activation patterns underlying natural and adaptive locomotor movements poses formidable conceptual and technological challenges. Recently, it was shown in rats that epidural electrical stimulation of the lumbar spinal cord can reproduce the natural activation of synergistic muscle groups producing locomotion. Here we interface leg motor cortex activity with epidural electrical stimulation protocols to establish a brain-spine interface that alleviated gait deficits after a spinal cord injury in non-human primates. Rhesus monkeys (Macaca mulatta) were implanted with an intracortical microelectrode array in the leg area of the motor cortex and with a spinal cord stimulation system composed of a spatially selective epidural implant and a pulse generator with real-time triggering capabilities. We designed and implemented wireless control systems that linked online neural decoding of extension and flexion motor states with stimulation protocols promoting these movements. These systems allowed the monkeys to behave freely without any restrictions or constraining tethered electronics. After validation of the brain-spine interface in intact (uninjured) monkeys, we performed a unilateral corticospinal tract lesion at the thoracic level. As early as six days post-injury and without prior training of the monkeys, the brain-spine interface restored weight-bearing locomotion of the paralysed leg on a treadmill and overground. The implantable components integrated in the brain-spine interface have all been approved for investigational applications in similar human research, suggesting a practical translational pathway for proof-of-concept studies in people with spinal cord injury.
Large-scale recording of neuronal ensembles.
Buzsáki, György
2004-05-01
How does the brain orchestrate perceptions, thoughts and actions from the spiking activity of its neurons? Early single-neuron recording research treated spike pattern variability as noise that needed to be averaged out to reveal the brain's representation of invariant input. Another view is that variability of spikes is centrally coordinated and that this brain-generated ensemble pattern in cortical structures is itself a potential source of cognition. Large-scale recordings from neuronal ensembles now offer the opportunity to test these competing theoretical frameworks. Currently, wire and micro-machined silicon electrode arrays can record from large numbers of neurons and monitor local neural circuits at work. Achieving the full potential of massively parallel neuronal recordings, however, will require further development of the neuron-electrode interface, automated and efficient spike-sorting algorithms for effective isolation and identification of single neurons, and new mathematical insights for the analysis of network properties.
Machine intelligence and autonomy for aerospace systems
NASA Technical Reports Server (NTRS)
Heer, Ewald (Editor); Lum, Henry (Editor)
1988-01-01
The present volume discusses progress toward intelligent robot systems in aerospace applications, NASA Space Program automation and robotics efforts, the supervisory control of telerobotics in space, machine intelligence and crew/vehicle interfaces, expert-system terms and building tools, and knowledge-acquisition for autonomous systems. Also discussed are methods for validation of knowledge-based systems, a design methodology for knowledge-based management systems, knowledge-based simulation for aerospace systems, knowledge-based diagnosis, planning and scheduling methods in AI, the treatment of uncertainty in AI, vision-sensing techniques in aerospace applications, image-understanding techniques, tactile sensing for robots, distributed sensor integration, and the control of articulated and deformable space structures.
The Virtual Brain: a simulator of primate brain network dynamics.
Sanz Leon, Paula; Knock, Stuart A; Woodman, M Marmaduke; Domide, Lia; Mersmann, Jochen; McIntosh, Anthony R; Jirsa, Viktor
2013-01-01
We present The Virtual Brain (TVB), a neuroinformatics platform for full brain network simulations using biologically realistic connectivity. This simulation environment enables the model-based inference of neurophysiological mechanisms across different brain scales that underlie the generation of macroscopic neuroimaging signals including functional MRI (fMRI), EEG and MEG. Researchers from different backgrounds can benefit from an integrative software platform including a supporting framework for data management (generation, organization, storage, integration and sharing) and a simulation core written in Python. TVB allows the reproduction and evaluation of personalized configurations of the brain by using individual subject data. This personalization facilitates an exploration of the consequences of pathological changes in the system, permitting to investigate potential ways to counteract such unfavorable processes. The architecture of TVB supports interaction with MATLAB packages, for example, the well known Brain Connectivity Toolbox. TVB can be used in a client-server configuration, such that it can be remotely accessed through the Internet thanks to its web-based HTML5, JS, and WebGL graphical user interface. TVB is also accessible as a standalone cross-platform Python library and application, and users can interact with the scientific core through the scripting interface IDLE, enabling easy modeling, development and debugging of the scientific kernel. This second interface makes TVB extensible by combining it with other libraries and modules developed by the Python scientific community. In this article, we describe the theoretical background and foundations that led to the development of TVB, the architecture and features of its major software components as well as potential neuroscience applications.
Whole brain white matter connectivity analysis using machine learning: An application to autism.
Zhang, Fan; Savadjiev, Peter; Cai, Weidong; Song, Yang; Rathi, Yogesh; Tunç, Birkan; Parker, Drew; Kapur, Tina; Schultz, Robert T; Makris, Nikos; Verma, Ragini; O'Donnell, Lauren J
2018-05-15
In this paper, we propose an automated white matter connectivity analysis method for machine learning classification and characterization of white matter abnormality via identification of discriminative fiber tracts. The proposed method uses diffusion MRI tractography and a data-driven approach to find fiber clusters corresponding to subdivisions of the white matter anatomy. Features extracted from each fiber cluster describe its diffusion properties and are used for machine learning. The method is demonstrated by application to a pediatric neuroimaging dataset from 149 individuals, including 70 children with autism spectrum disorder (ASD) and 79 typically developing controls (TDC). A classification accuracy of 78.33% is achieved in this cross-validation study. We investigate the discriminative diffusion features based on a two-tensor fiber tracking model. We observe that the mean fractional anisotropy from the second tensor (associated with crossing fibers) is most affected in ASD. We also find that local along-tract (central cores and endpoint regions) differences between ASD and TDC are helpful in differentiating the two groups. These altered diffusion properties in ASD are associated with multiple robustly discriminative fiber clusters, which belong to several major white matter tracts including the corpus callosum, arcuate fasciculus, uncinate fasciculus and aslant tract; and the white matter structures related to the cerebellum, brain stem, and ventral diencephalon. These discriminative fiber clusters, a small part of the whole brain tractography, represent the white matter connections that could be most affected in ASD. Our results indicate the potential of a machine learning pipeline based on white matter fiber clustering. Copyright © 2017 Elsevier Inc. All rights reserved.
Bisenius, Sandrine; Mueller, Karsten; Diehl-Schmid, Janine; Fassbender, Klaus; Grimmer, Timo; Jessen, Frank; Kassubek, Jan; Kornhuber, Johannes; Landwehrmeyer, Bernhard; Ludolph, Albert; Schneider, Anja; Anderl-Straub, Sarah; Stuke, Katharina; Danek, Adrian; Otto, Markus; Schroeter, Matthias L
2017-01-01
Primary progressive aphasia (PPA) encompasses the three subtypes nonfluent/agrammatic variant PPA, semantic variant PPA, and logopenic variant PPA, which are characterized by distinct patterns of language difficulties and regional brain atrophy. To validate the potential of structural magnetic resonance imaging data for early individual diagnosis, we used support vector machine classification on grey matter density maps obtained by voxel-based morphometry analysis to discriminate PPA subtypes (44 patients: 16 nonfluent/agrammatic variant PPA, 17 semantic variant PPA, 11 logopenic variant PPA) from 20 healthy controls (matched for sample size, age, and gender) in the cohort of the multi-center study of the German consortium for frontotemporal lobar degeneration. Here, we compared a whole-brain approach with a meta-analysis-based, disease-specific regions-of-interest approach for support vector machine classification. We also used support vector machine classification to discriminate the three PPA subtypes from each other. Whole-brain support vector machine classification enabled a very high accuracy between 91 and 97% for identifying specific PPA subtypes vs. healthy controls, and 78/95% for the discrimination between the semantic variant and the nonfluent/agrammatic or logopenic PPA variants. Only for the discrimination between the nonfluent/agrammatic and logopenic PPA variants was accuracy low, at 55%. Interestingly, the regions that contributed the most to the support vector machine classification of patients corresponded largely to the regions that were atrophic in these patients as revealed by group comparisons. Although the whole-brain approach also took into account regions that were not covered in the regions-of-interest approach, both approaches showed similar accuracies due to the disease-specificity of the selected networks. In conclusion, support vector machine classification of multi-center structural magnetic resonance imaging data enables prediction of PPA subtypes with a very high accuracy, paving the way for its application in clinical settings.
Automation and robotics technology for intelligent mining systems
NASA Technical Reports Server (NTRS)
Welsh, Jeffrey H.
1989-01-01
The U.S. Bureau of Mines is approaching the problems of accidents and efficiency in the mining industry through the application of automation and robotics to mining systems. This technology can increase safety by removing workers from hazardous areas of the mines or from performing hazardous tasks. The short-term goal of the Automation and Robotics program is to develop technology that can be implemented in the form of an autonomous mining machine using current continuous mining machine equipment. In the longer term, the goal is to conduct research that will lead to new intelligent mining systems that capitalize on the capabilities of robotics. The Bureau of Mines Automation and Robotics program has been structured to produce the technology required for the short- and long-term goals. The short-term goal of application of automation and robotics to an existing mining machine, resulting in autonomous operation, is expected to be accomplished within five years. Key technology elements required for an autonomous continuous mining machine are well underway and include machine navigation systems, coal-rock interface detectors, machine condition monitoring, and intelligent computer systems. The Bureau of Mines program is described, including status of key technology elements for an autonomous continuous mining machine, the program schedule, and future work. Although the program is directed toward underground mining, much of the technology being developed may have applications for space systems or mining on the Moon or other planets.
Visual gate for brain-computer interfaces.
Dias, N S; Jacinto, L R; Mendes, P M; Correia, J H
2009-01-01
Brain-Computer Interfaces (BCI) based on event related potentials (ERP) have been successfully developed for applications like virtual spellers and navigation systems. This study tests the use of visual stimuli unbalanced in the subject's field of view to simultaneously cue mental imagery tasks (left vs. right hand movement) and detect subject attention. The responses to unbalanced cues were compared with the responses to balanced cues in terms of classification accuracy. Subject specific ERP spatial filters were calculated for optimal group separation. The unbalanced cues appear to enhance early ERPs related to cue visuospatial processing that improved the classification accuracy (as low as 6%) of ERPs in response to left vs. right cues soon (150-200 ms) after the cue presentation. This work suggests that such visual interface may be of interest in BCI applications as a gate mechanism for attention estimation and validation of control decisions.
A COTS-MQS shipborne EO/IR imaging system
NASA Astrophysics Data System (ADS)
Hutchinson, Mark A.; Miller, John L.; Weaver, James
2005-05-01
The Sea Star SAFIRE is a commercially developed, off the shelf, military qualified system (COTS-MQS) consisting of a 640 by 480 InSb infrared imager, laser rangefinder and visible imager in a gyro-stabilized platform designed for shipborne applications. These applications include search and rescue, surveillance, fire control, fisheries patrol, harbor security, and own-vessel perimeter security and self protection. Particularly challenging considerations unique to shipborne systems include the demanding environment conditions, man-machine interfaces, and effects of atmospheric conditions on sensor performance. Shipborne environmental conditions requiring special attention include electromagnetic fields, as well as resistance to rain, ice and snow, shock, vibration, and salt. Features have been implemented to withstand exposure to water and high humidity; anti-ice/de-ice capability for exposure to snow and ice; wash/wipe of external windows; corrosion resistance for exposure to water and salt spray. A variety of system controller configurations provide man-machine interfaces suitable for operation on ships. EO sensor developments that address areas of haze penetration, glint, and scintillation will be presented.
Optimization of SSVEP brain responses with application to eight-command Brain-Computer Interface.
Bakardjian, Hovagim; Tanaka, Toshihisa; Cichocki, Andrzej
2010-01-18
This study pursues the optimization of the brain responses to small reversing patterns in a Steady-State Visual Evoked Potentials (SSVEP) paradigm, which could be used to maximize the efficiency of applications such as Brain-Computer Interfaces (BCI). We investigated the SSVEP frequency response for 32 frequencies (5-84 Hz), and the time dynamics of the brain response at 8, 14 and 28 Hz, to aid the definition of the optimal neurophysiological parameters and to outline the onset-delay and other limitations of SSVEP stimuli in applications such as our previously described four-command BCI system. Our results showed that the 5.6-15.3 Hz pattern reversal stimulation evoked the strongest responses, peaking at 12 Hz, and exhibiting weaker local maxima at 28 and 42 Hz. After stimulation onset, the long-term SSVEP response was highly non-stationary and the dynamics, including the first peak, was frequency-dependent. The evaluation of the performance of a frequency-optimized eight-command BCI system with dynamic neurofeedback showed a mean success rate of 98%, and a time delay of 3.4s. Robust BCI performance was achieved by all subjects even when using numerous small patterns clustered very close to each other and moving rapidly in 2D space. These results emphasize the need for SSVEP applications to optimize not only the analysis algorithms but also the stimuli in order to maximize the brain responses they rely on. (c) 2009 Elsevier Ireland Ltd. All rights reserved.
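A skeletal sketch of how an SSVEP command might be read out from EEG follows: estimate spectral power at each candidate stimulation frequency and pick the strongest. The sampling rate, window, and candidate frequencies are illustrative only; the study's BCI additionally uses dynamic neurofeedback and an optimized stimulus layout, which the sketch ignores.

```python
import numpy as np

fs = 256                       # sampling rate (Hz), illustrative
candidates = [5.6, 8.0, 10.0, 12.0, 14.0, 15.3, 28.0, 42.0]   # stimulation frequencies (Hz)

def ssvep_command(eeg_window):
    """Pick the candidate frequency with the largest power in an occipital EEG window."""
    spectrum = np.abs(np.fft.rfft(eeg_window * np.hanning(len(eeg_window)))) ** 2
    freqs = np.fft.rfftfreq(len(eeg_window), d=1.0 / fs)
    powers = [spectrum[np.argmin(np.abs(freqs - f))] for f in candidates]
    return candidates[int(np.argmax(powers))]

# Synthetic test: 3 s of noise plus a 12 Hz SSVEP response
t = np.arange(0, 3, 1.0 / fs)
window = 0.5 * np.sin(2 * np.pi * 12.0 * t) + np.random.default_rng(4).normal(size=t.size)
print("detected command frequency:", ssvep_command(window))
```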
Eugster, Manuel J. A.; Ruotsalo, Tuukka; Spapé, Michiel M.; Barral, Oswald; Ravaja, Niklas; Jacucci, Giulio; Kaski, Samuel
2016-01-01
Finding relevant information from large document collections such as the World Wide Web is a common task in our daily lives. Estimation of a user’s interest or search intention is necessary to recommend and retrieve relevant information from these collections. We introduce a brain-information interface used for recommending information by relevance inferred directly from brain signals. In experiments, participants were asked to read Wikipedia documents about a selection of topics while their EEG was recorded. Based on the prediction of word relevance, the individual’s search intent was modeled and successfully used for retrieving new relevant documents from the whole English Wikipedia corpus. The results show that the users’ interests toward digital content can be modeled from the brain signals evoked by reading. The introduced brain-relevance paradigm enables the recommendation of information without any explicit user interaction and may be applied across diverse information-intensive applications. PMID:27929077
NASA Astrophysics Data System (ADS)
Eugster, Manuel J. A.; Ruotsalo, Tuukka; Spapé, Michiel M.; Barral, Oswald; Ravaja, Niklas; Jacucci, Giulio; Kaski, Samuel
2016-12-01
Finding relevant information from large document collections such as the World Wide Web is a common task in our daily lives. Estimation of a user’s interest or search intention is necessary to recommend and retrieve relevant information from these collections. We introduce a brain-information interface used for recommending information by relevance inferred directly from brain signals. In experiments, participants were asked to read Wikipedia documents about a selection of topics while their EEG was recorded. Based on the prediction of word relevance, the individual’s search intent was modeled and successfully used for retrieving new relevant documents from the whole English Wikipedia corpus. The results show that the users’ interests toward digital content can be modeled from the brain signals evoked by reading. The introduced brain-relevance paradigm enables the recommendation of information without any explicit user interaction and may be applied across diverse information-intensive applications.
The Formal Specification of a Visual display Device: Design and Implementation.
1985-06-01
The use of these data structures, with their defined operations, gives the programmer a very powerful instruction set. Like the DPU code generator in...which any AM hosted machine could faithfully display. In general, most applications have no need to create images from a data structure representing...formation of standard functional interfaces to these resources. OS's generally do not provide a functional interface to either the processor or the display.
Machine learning for Big Data analytics in plants.
Ma, Chuang; Zhang, Hao Helen; Wang, Xiangfeng
2014-12-01
Rapid advances in high-throughput genomic technology have enabled biology to enter the era of 'Big Data' (large datasets). The plant science community not only needs to build its own Big-Data-compatible parallel computing and data management infrastructures, but also to seek novel analytical paradigms to extract information from the overwhelming amounts of data. Machine learning offers promising computational and analytical solutions for the integrative analysis of large, heterogeneous and unstructured datasets on the Big-Data scale, and is gradually gaining popularity in biology. This review introduces the basic concepts and procedures of machine-learning applications and envisages how machine learning could interface with Big Data technology to facilitate basic research and biotechnology in the plant sciences. Copyright © 2014 Elsevier Ltd. All rights reserved.
NASA Technical Reports Server (NTRS)
Heer, E.
1973-01-01
Free-flying teleoperator systems are discussed, giving attention to earth-orbit mission considerations and Space Tug requirements, free-flying teleoperator requirements and conceptual design, system requirements for a free-flying teleoperator to despin, and the experimental evaluation of remote manipulator systems. Shuttle-Attached Manipulator Systems are considered, together with remote surface vehicle systems, manipulator systems technology, remote sensor and display technology, the man-machine interface, and control and machine intelligence. Nonspace applications are also explored, taking into account implications of nonspace applications, naval applications of remote manipulators, and hand tools and mechanical accessories for a deep submersible. Individual items are announced in this issue.
Brain-computer interfaces for EEG neurofeedback: peculiarities and solutions.
Huster, René J; Mokom, Zacharais N; Enriquez-Geppert, Stefanie; Herrmann, Christoph S
2014-01-01
Neurofeedback training procedures designed to alter a person's brain activity have been in use for nearly four decades now and represent one of the earliest applications of brain-computer interfaces (BCI). The majority of studies using neurofeedback technology relies on recordings of the electroencephalogram (EEG) and applies neurofeedback in clinical contexts, exploring its potential as treatment for psychopathological syndromes. This clinical focus significantly affects the technology behind neurofeedback BCIs. For example, in contrast to other BCI applications, neurofeedback BCIs usually rely on EEG-derived features with only a minimum of additional processing steps being employed. Here, we highlight the peculiarities of EEG-based neurofeedback BCIs and consider their relevance for software implementations. Having reviewed already existing packages for the implementation of BCIs, we introduce our own solution which specifically considers the relevance of multi-subject handling for experimental and clinical trials, for example by implementing ready-to-use solutions for pseudo-/sham-neurofeedback. © 2013.
Intravascular Neural Interface with Nanowire Electrode
Watanabe, Hirobumi; Takahashi, Hirokazu; Nakao, Masayuki; Walton, Kerry; Llinás, Rodolfo R.
2010-01-01
A minimally invasive electrical recording and stimulating technique capable of simultaneously monitoring the activity of a significant number (e.g., 10³ to 10⁴) of neurons is an absolute prerequisite in developing an effective brain–machine interface. Although there are many excellent methodologies for recording single or multiple neurons, there has been no methodology for accessing large numbers of cells in a behaving experimental animal or human individual. Brain vascular parenchyma is a promising candidate for addressing this problem. It has been proposed [1, 2] that a multitude of nanowire electrodes introduced into the central nervous system through the vascular system, able to address any brain area, may be a possible solution. In this study we implement a design for such a microcatheter for ex vivo experiments. Using Wollaston platinum wire, we design a submicron-scale electrode and develop a fabrication method. We then evaluate the mechanical properties of the electrode in flow when passing through the intricacies of the capillary bed in ex vivo Xenopus laevis experiments. Furthermore, we demonstrate the feasibility of intravascular recording in the spinal cord of Xenopus laevis. PMID:21572940
Exploiting co-adaptation for the design of symbiotic neuroprosthetic assistants.
Sanchez, Justin C; Mahmoudi, Babak; DiGiovanna, Jack; Principe, Jose C
2009-04-01
The success of brain-machine interfaces (BMI) is enabled by the remarkable ability of the brain to incorporate the artificial neuroprosthetic 'tool' into its own cognitive space and use it as an extension of the user's body. Unlike other tools, neuroprosthetics create a shared space that seamlessly spans the user's internal goal representation of the world and the external physical environment enabling a much deeper human-tool symbiosis. A key factor in the transformation of 'simple tools' into 'intelligent tools' is the concept of co-adaptation where the tool becomes functionally involved in the extraction and definition of the user's goals. Recent advancements in the neuroscience and engineering of neuroprosthetics are providing a blueprint for how new co-adaptive designs based on reinforcement learning change the nature of a user's ability to accomplish tasks that were not possible using conventional methodologies. By designing adaptive controls and artificial intelligence into the neural interface, tools can become active assistants in goal-directed behavior and further enhance human performance in particular for the disabled population. This paper presents recent advances in computational and neural systems supporting the development of symbiotic neuroprosthetic assistants.
New KF-PP-SVM classification method for EEG in brain-computer interfaces.
Yang, Banghua; Han, Zhijun; Zan, Peng; Wang, Qian
2014-01-01
Classification methods are a crucial direction in the current study of brain-computer interfaces (BCIs). To improve the classification accuracy for electroencephalogram (EEG) signals, a novel KF-PP-SVM (kernel Fisher, posterior probability, and support vector machine) classification method is developed. Its detailed process entails the use of common spatial patterns to obtain features, based on which the within-class scatter is calculated. The scatter is then added into the kernel function of a radial basis function to construct a new kernel function. This new kernel is integrated into the SVM to obtain a new classification model. Finally, the output of the SVM is calculated based on posterior probability and the final recognition result is obtained. To evaluate the effectiveness of the proposed KF-PP-SVM method, EEG data collected in the laboratory are processed with four different classification schemes (KF-PP-SVM, KF-SVM, PP-SVM, and SVM). The results showed that the overall average improvements arising from the use of the KF-PP-SVM scheme, as opposed to the KF-SVM, PP-SVM, and SVM schemes, are 2.49%, 5.83%, and 6.49%, respectively.
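The overall pipeline the abstract describes (CSP-style features, a scatter-modified RBF kernel, an SVM, and posterior-probability output) can be approximated with scikit-learn. The sketch below is a loose interpretation: it ties the RBF kernel width to the within-class scatter and uses Platt scaling for posteriors. It is not the published KF-PP-SVM formulation, and all data and parameters are placeholders.

```python
# Simplified sketch: SVM with a scatter-modified RBF kernel and posterior output.
# Approximates the idea in the abstract; not the exact KF-PP-SVM definition.
import numpy as np
from sklearn.svm import SVC

def within_class_scatter(X, y):
    """Mean within-class variance of the features, used to set the kernel width."""
    return np.mean([X[y == c].var(axis=0).sum() for c in np.unique(y)])

def make_kernel(scatter):
    def kernel(A, B):
        # RBF kernel whose width is tied to the within-class scatter.
        gamma = 1.0 / (2.0 * scatter + 1e-12)
        sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * sq)
    return kernel

# Placeholder CSP-style features (e.g. log-variance of spatially filtered EEG).
X = np.random.randn(120, 6)
y = np.random.randint(0, 2, 120)

clf = SVC(kernel=make_kernel(within_class_scatter(X, y)), probability=True).fit(X, y)
print(clf.predict_proba(X[:5]))              # class posterior probabilities
```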
Tuning Up the Old Brain with New Tricks: Attention Training via Neurofeedback
Jiang, Yang; Abiri, Reza; Zhao, Xiaopeng
2017-01-01
Neurofeedback (NF) is a form of biofeedback that uses real-time (RT) modulation of brain activity to enhance brain function and behavioral performance. Recent advances in Brain-Computer Interfaces (BCI) and cognitive training (CT) have provided new tools and evidence that NF improves cognitive functions, such as attention and working memory (WM), beyond what is provided by traditional CT. To date, more published studies have demonstrated the efficacy of NF for treating attention deficit hyperactivity disorder (ADHD) in children; in contrast, fewer studies have been done in older adults with or without cognitive impairment, with some notable exceptions. The focus of this review is to summarize current successes in RT NF training aimed at bringing the attention/WM performance of older brains closer to that of younger brains. We also outline potential future advances in RT brainwave-based NF for improving attention training in older populations. The rapid growth in wireless recording of brain activity, machine learning classification, and brain network analysis provides new tools for combating cognitive decline and brain aging in older adults. We optimistically conclude that NF, combined with new neuro-markers (event-related potentials and connectivity) and traditional features, promises new hope for brain and cognitive training in the growing older population. PMID:28348527
Huggins, Jane E.; Guger, Christoph; Ziat, Mounia; Zander, Thorsten O.; Taylor, Denise; Tangermann, Michael; Soria-Frisch, Aureli; Simeral, John; Scherer, Reinhold; Rupp, Rüdiger; Ruffini, Giulio; Robinson, Douglas K. R.; Ramsey, Nick F.; Nijholt, Anton; Müller-Putz, Gernot; McFarland, Dennis J.; Mattia, Donatella; Lance, Brent J.; Kindermans, Pieter-Jan; Iturrate, Iñaki; Herff, Christian; Gupta, Disha; Do, An H.; Collinger, Jennifer L.; Chavarriaga, Ricardo; Chase, Steven M.; Bleichner, Martin G.; Batista, Aaron; Anderson, Charles W.; Aarnoutse, Erik J.
2017-01-01
The Sixth International Brain–Computer Interface (BCI) Meeting was held 30 May–3 June 2016 at the Asilomar Conference Grounds, Pacific Grove, California, USA. The conference included 28 workshops covering topics in BCI and brain–machine interface research. Topics included BCI for specific populations or applications, advancing BCI research through use of specific signals or technological advances, and translational and commercial issues to bring both implanted and non-invasive BCIs to market. BCI research is growing and expanding in the breadth of its applications, the depth of knowledge it can produce, and the practical benefit it can provide both for those with physical impairments and the general public. Here we provide summaries of each workshop, illustrating the breadth and depth of BCI research and highlighting important issues and calls for action to support future research and development. PMID:29152523
Spatial co-adaptation of cortical control columns in a micro-ECoG brain-computer interface
NASA Astrophysics Data System (ADS)
Rouse, A. G.; Williams, J. J.; Wheeler, J. J.; Moran, D. W.
2016-10-01
Objective. Electrocorticography (ECoG) has been used for a range of applications including electrophysiological mapping, epilepsy monitoring, and more recently as a recording modality for brain-computer interfaces (BCIs). Studies that examine ECoG electrodes designed and implanted chronically solely for BCI applications remain limited. The present study explored how two key factors influence chronic, closed-loop ECoG BCI: (i) the effect of inter-electrode distance on BCI performance and (ii) the differences in neural adaptation and performance when fixed versus adaptive BCI decoding weights are used. Approach. The amplitudes of epidural micro-ECoG signals between 75 and 105 Hz with 300 μm diameter electrodes were used for one-dimensional and two-dimensional BCI tasks. The effect of inter-electrode distance on BCI control was tested between 3 and 15 mm. Additionally, the performance and cortical modulation differences between constant, fixed decoding using a small subset of channels versus adaptive decoding weights using the entire array were explored. Main results. Successful BCI control was possible with two electrodes separated by 9 and 15 mm. Performance decreased and the signals became more correlated when the electrodes were only 3 mm apart. BCI performance in a 2D BCI task improved significantly when using adaptive decoding weights (80%-90%) compared to using constant, fixed weights (50%-60%). Additionally, modulation increased for channels previously unavailable for BCI control under the fixed decoding scheme upon switching to the adaptive, all-channel scheme. Significance. Our results clearly show that neural activity under a BCI recording electrode (which we define as a ‘cortical control column’) readily adapts to generate an appropriate control signal. These results show that the practical minimal spatial resolution of these control columns with micro-ECoG BCI is likely on the order of 3 mm. Additionally, they show that the combination and interaction between neural adaptation and machine learning are critical to optimizing ECoG BCI performance.
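The fixed-versus-adaptive decoding comparison at the heart of this study can be illustrated with a toy linear decoder whose weights are either frozen after calibration or updated on every sample with a small LMS-style gradient step. This is a generic sketch of the adaptive-decoding idea, not the authors' decoder; dimensions, learning rate, and data are illustrative assumptions.

```python
# Minimal fixed-vs-adaptive linear decoder sketch (LMS-style weight update).
# Dimensions, learning rate, and data are illustrative only.
import numpy as np

class LinearDecoder:
    def __init__(self, n_features, adaptive=False, lr=0.01):
        self.w = np.zeros(n_features)
        self.adaptive = adaptive
        self.lr = lr

    def predict(self, x):
        return self.w @ x

    def update(self, x, target):
        # Only the adaptive decoder keeps learning after calibration.
        if self.adaptive:
            self.w += self.lr * (target - self.predict(x)) * x

rng = np.random.default_rng(0)
true_w = rng.normal(size=16)                  # stand-in for the cortical mapping
fixed = LinearDecoder(16, adaptive=False)
adapt = LinearDecoder(16, adaptive=True)

for trial in range(500):
    x = rng.normal(size=16)                   # e.g. high-gamma band-power features
    target = true_w @ x                       # intended cursor velocity
    for dec in (fixed, adapt):
        dec.update(x, target)

print("adaptive error:", abs(true_w - adapt.w).mean())
print("fixed error:   ", abs(true_w - fixed.w).mean())
```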
FwWebViewPlus: integration of web technologies into WinCC OA based Human-Machine Interfaces at CERN
NASA Astrophysics Data System (ADS)
Golonka, Piotr; Fabian, Wojciech; Gonzalez-Berges, Manuel; Jasiun, Piotr; Varela-Rodriguez, Fernando
2014-06-01
The rapid growth in popularity of web applications gives rise to a plethora of reusable graphical components, such as Google Chart Tools and jQuery Sparklines, implemented in JavaScript and run inside a web browser. In the paper we describe the tool that allows for seamless integration of web-based widgets into WinCC Open Architecture, the SCADA system used commonly at CERN to build complex Human-Machine Interfaces. Reuse of widely available widget libraries and pushing the development efforts to a higher abstraction layer based on a scripting language allow for a significant reduction in code maintenance in multi-platform environments compared to the C++ visualization plugins currently used. Adequately designed interfaces allow for rapid integration of new web widgets into WinCC OA. At the same time, the mechanisms familiar to HMI developers are preserved, making the use of new widgets "native". Perspectives for further integration between the realms of WinCC OA and Web development are also discussed.
Neural Coding for Effective Rehabilitation
2014-01-01
Successful neurological rehabilitation depends on accurate diagnosis, effective treatment, and quantitative evaluation. Neural coding, a technology for interpreting functional and structural information of the nervous system, has contributed to advances in neuroimaging, brain-machine interfaces (BMI), and the design of training devices for rehabilitation purposes. In this review, we summarize the latest breakthroughs in neuroimaging from the microscale to the macroscale, with potential diagnostic applications for rehabilitation. We also review achievements in electrocorticography (ECoG) coding in both animal models and humans for BMI design, electromyography (EMG) interpretation for interaction with external robotic systems, and robot-assisted quantitative evaluation of the progress of rehabilitation programs. Future rehabilitation will likely be more home-based, automated, and self-administered by patients. Further investigation is mainly needed to improve computational efficiency in neuroimaging and multichannel ECoG through selection of localized neuroinformatics, to validate the effectiveness of BMI-guided rehabilitation programs, and to simplify system operation in training devices. PMID:25258708
Williams, Alex H; Kim, Tony Hyun; Wang, Forea; Vyas, Saurabh; Ryu, Stephen I; Shenoy, Krishna V; Schnitzer, Mark; Kolda, Tamara G; Ganguli, Surya
2018-06-27
Perceptions, thoughts, and actions unfold over millisecond timescales, while learned behaviors can require many days to mature. While recent experimental advances enable large-scale and long-term neural recordings with high temporal fidelity, it remains a formidable challenge to extract unbiased and interpretable descriptions of how rapid single-trial circuit dynamics change slowly over many trials to mediate learning. We demonstrate that a simple tensor component analysis (TCA) can meet this challenge by extracting three interconnected, low-dimensional descriptions of neural data: neuron factors, reflecting cell assemblies; temporal factors, reflecting rapid circuit dynamics mediating perceptions, thoughts, and actions within each trial; and trial factors, describing both long-term learning and trial-to-trial changes in cognitive state. We demonstrate the broad applicability of TCA by revealing insights into diverse datasets derived from artificial neural networks, large-scale calcium imaging of rodent prefrontal cortex during maze navigation, and multielectrode recordings of macaque motor cortex during brain-machine interface learning. Copyright © 2018 Elsevier Inc. All rights reserved.
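TCA as described here is the CP/PARAFAC decomposition of a neurons × time × trials data tensor into neuron, temporal, and trial factors. A short sketch follows, assuming a recent version of the tensorly package is available; the data are synthetic and the rank is an arbitrary choice, not values from the paper.

```python
# Sketch of tensor component analysis (CP/PARAFAC) on a neurons x time x trials
# tensor, assuming the tensorly package is installed; data are synthetic.
import numpy as np
import tensorly as tl
from tensorly.decomposition import parafac

n_neurons, n_time, n_trials, rank = 50, 100, 80, 3
data = tl.tensor(np.random.rand(n_neurons, n_time, n_trials))

weights, factors = parafac(data, rank=rank)
neuron_factors, temporal_factors, trial_factors = factors
print(neuron_factors.shape, temporal_factors.shape, trial_factors.shape)
# (50, 3) (100, 3) (80, 3): one column per low-dimensional component
```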
Kernel Temporal Differences for Neural Decoding
Bae, Jihye; Sanchez Giraldo, Luis G.; Pohlmeyer, Eric A.; Francis, Joseph T.; Sanchez, Justin C.; Príncipe, José C.
2015-01-01
We study the feasibility and capability of the kernel temporal difference KTD(λ) algorithm for neural decoding. KTD(λ) is an online, kernel-based learning algorithm introduced to estimate value functions in reinforcement learning. This algorithm combines kernel-based representations with the temporal difference approach to learning. One of our key observations is that, by using strictly positive definite kernels, the algorithm's convergence can be guaranteed for policy evaluation. The algorithm's nonlinear functional approximation capabilities are shown in both simulations of policy evaluation and neural decoding problems (policy improvement). KTD can handle high-dimensional neural states containing spatiotemporal information at a reasonable computational complexity, allowing real-time applications. When the algorithm seeks a proper mapping between a monkey's neural states and desired positions of a computer cursor or a robot arm, in both open-loop and closed-loop experiments, it can effectively learn the neural-state-to-action mapping. Finally, a visualization of the coadaptation process between the decoder and the subject shows the algorithm's capabilities in reinforcement learning brain-machine interfaces. PMID:25866504
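A compact way to see the idea is a KTD(0)-style value estimator: the value function is a kernel expansion over previously visited states, and each temporal-difference error adds a new kernel center. The sketch below is a simplified illustration of this family of algorithms under assumed parameters, not the published KTD(λ) implementation.

```python
# Simplified kernel temporal-difference (KTD(0)) value estimator with a
# Gaussian kernel; an illustration of the idea, not the published KTD(lambda).
import numpy as np

class KTD0:
    def __init__(self, sigma=1.0, lr=0.1, gamma=0.9):
        self.sigma, self.lr, self.gamma = sigma, lr, gamma
        self.centers, self.alphas = [], []   # kernel expansion of the value function

    def _k(self, x, c):
        return np.exp(-np.sum((x - c) ** 2) / (2 * self.sigma ** 2))

    def value(self, x):
        return sum(a * self._k(x, c) for a, c in zip(self.alphas, self.centers))

    def update(self, x, reward, x_next):
        td_error = reward + self.gamma * self.value(x_next) - self.value(x)
        # Functional gradient step: the current state becomes a new kernel center.
        self.centers.append(np.asarray(x, dtype=float))
        self.alphas.append(self.lr * td_error)

# Toy usage: states could be neural feature vectors in a BMI setting.
ktd = KTD0()
rng = np.random.default_rng(1)
x = rng.normal(size=4)
for _ in range(50):
    x_next = rng.normal(size=4)
    ktd.update(x, reward=float(x.sum() > 0), x_next=x_next)
    x = x_next
print(ktd.value(x))
```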
Bayesian decoding using unsorted spikes in the rat hippocampus
Layton, Stuart P.; Chen, Zhe; Wilson, Matthew A.
2013-01-01
A fundamental task in neuroscience is to understand how neural ensembles represent information. Population decoding is a useful tool to extract information from neuronal populations based on the ensemble spiking activity. We propose a novel Bayesian decoding paradigm to decode unsorted spikes in the rat hippocampus. Our approach uses a direct mapping between spike waveform features and covariates of interest and avoids accumulation of spike sorting errors. Our decoding paradigm is nonparametric, encoding model-free for representing stimuli, and extracts information from all available spikes and their waveform features. We apply the proposed Bayesian decoding algorithm to a position reconstruction task for freely behaving rats based on tetrode recordings of rat hippocampal neuronal activity. Our detailed decoding analyses demonstrate that our approach is efficient and better utilizes the available information in the nonsortable hash than the standard sorting-based decoding algorithm. Our approach can be adapted to an online encoding/decoding framework for applications that require real-time decoding, such as brain-machine interfaces. PMID:24089403
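The core of clusterless decoding is a direct density model linking spike waveform features to the covariate of interest. The toy sketch below builds a joint kernel density over (amplitude feature, position) and decodes position by combining per-spike likelihoods on a grid. It is heavily simplified relative to the paper (1D feature, 1D position, no occupancy or Poisson rate terms), and all data are synthetic.

```python
# Toy clusterless decoding sketch: joint KDE over (waveform feature, position),
# then position decoding by per-spike likelihoods on a grid. Simplified:
# 1D feature, 1D position, no occupancy/rate terms.
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(2)

# Training data: each spike has an amplitude-like feature that depends on position.
train_pos = rng.uniform(0, 1, 2000)
train_amp = 2.0 * train_pos + 0.1 * rng.normal(size=2000)
joint = gaussian_kde(np.vstack([train_amp, train_pos]))      # p(amplitude, position)

grid = np.linspace(0, 1, 100)

def decode(spike_amps):
    """Combine per-spike log-likelihoods over the position grid."""
    log_post = np.zeros_like(grid)
    for a in spike_amps:
        log_post += np.log(joint(np.vstack([np.full_like(grid, a), grid])) + 1e-12)
    return grid[np.argmax(log_post)]

# Spikes emitted near position 0.7 should decode to roughly 0.7.
test_amps = 2.0 * 0.7 + 0.1 * rng.normal(size=20)
print(decode(test_amps))
```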
Decoding Saccadic Directions Using Epidural ECoG in Non-Human Primates
2017-01-01
A brain-computer interface (BCI) can be used as an alternative interface to restore some communication for patients suffering from locked-in syndrome. However, most BCI systems are based on SSVEP, P300, or motor imagery, and a diversity of BCI protocols is needed for various types of patients. In this paper, we trained two non-human primates on a choice saccade (CS) task and recorded brain signals using an epidural electrocorticogram (eECoG) to predict eye movement direction. We successfully predicted the direction of the upcoming eye movement using a support vector machine (SVM) applied to the brain signals after the directional cue onset and before saccade execution. The mean accuracies were 80% for 2 directions and 43% for 4 directions. We also quantified the spatial-spectro-temporal contribution ratio using SVM recursive feature elimination (RFE). The channels over the frontal eye field (FEF), supplementary eye field (SEF), and superior parietal lobule (SPL) contributed most to classification. In the spectral domain the α-band was most informative, and in the temporal domain the bins just after the directional cue onset and just before saccade execution were most useful for prediction. A saccade-based BCI paradigm can be projected into 2D space and will hopefully provide an intuitive and convenient communication platform for users. PMID:28665058
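The classification-plus-feature-ranking strategy described here maps directly onto scikit-learn's linear SVM with recursive feature elimination, since a linear kernel exposes the coefficients RFE needs. The sketch below uses synthetic band-power features and labels as placeholders; it shows the pattern, not the study's actual feature set.

```python
# Sketch of saccade-direction classification with a linear SVM plus recursive
# feature elimination; features and labels are synthetic placeholders.
import numpy as np
from sklearn.svm import SVC
from sklearn.feature_selection import RFE

rng = np.random.default_rng(3)
# Trials x (channels x bands x time bins) band-power features, 4 saccade directions.
X = rng.normal(size=(200, 64))
y = rng.integers(0, 4, size=200)

svm = SVC(kernel="linear")                    # linear kernel exposes coef_ for RFE
selector = RFE(svm, n_features_to_select=16).fit(X, y)

print("selected feature indices:", np.flatnonzero(selector.support_))
print("training accuracy:", selector.score(X, y))
```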