Neuromodulation as a Robot Controller: A Brain Inspired Strategy for Controlling Autonomous Robots
2009-09-01
To appear in IEEE Robotics and Automation Magazine (preprint). We present a strategy for controlling autonomous robots that is based on principles of neuromodulation in the mammalian brain... object, ignore irrelevant distractions, and respond quickly and appropriately to the event [1]. There are separate neuromodulators that alter responses to...
My thoughts through a robot's eyes: an augmented reality-brain-machine interface.
Kansaku, Kenji; Hata, Naoki; Takano, Kouji
2010-02-01
A brain-machine interface (BMI) uses neurophysiological signals from the brain to control external devices, such as robot arms or computer cursors. Combining augmented reality with a BMI, we show that the user's brain signals successfully controlled an agent robot and operated devices in the robot's environment. The user's thoughts became reality through the robot's eyes, enabling the augmentation of real environments outside the anatomy of the human body.
SSVEP-based Experimental Procedure for Brain-Robot Interaction with Humanoid Robots.
Zhao, Jing; Li, Wei; Mao, Xiaoqian; Li, Mengfan
2015-11-24
Brain-Robot Interaction (BRI), which provides an innovative communication pathway between humans and robotic devices via brain signals, holds promise for assisting the disabled in their daily lives. The overall goal of our method is to establish an SSVEP-based experimental procedure, by integrating multiple software programs such as OpenViBE, Choregraph, and Central, as well as user-developed programs written in C++ and MATLAB, to enable the study of brain-robot interaction with humanoid robots. This is achieved by first placing EEG electrodes on a human subject to measure the brain responses through an EEG data acquisition system. A user interface is used to elicit SSVEP responses and to display video feedback in the closed-loop control experiments. The second step is to record the EEG signals of first-time subjects, to analyze their SSVEP features offline, and to train the classifier for each subject. Next, the Online Signal Processor and the Robot Controller are configured for the online control of a humanoid robot. As the final step, the subject completes three specific closed-loop control experiments within different environments to evaluate the brain-robot interaction performance. The advantage of this approach is its reliability and flexibility, because it is developed by integrating multiple software programs. The results show that, using this approach, the subject is capable of interacting with the humanoid robot via brain signals. This allows the mind-controlled humanoid robot to perform typical tasks that are popular in robotic research and are helpful in assisting the disabled.
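The frequency-detection step at the heart of an SSVEP classifier can be sketched as follows. This is an illustrative stand-in, not the authors' code: it projects a single-channel EEG trace onto sine/cosine references at each candidate flicker frequency and picks the strongest response. The sampling rate, candidate frequencies, and the synthetic trace are assumptions for demonstration.

```python
import math

def ssvep_score(signal, freq, fs):
    """Energy of the signal projected onto sin/cos references at freq (Hz)."""
    s = sum(x * math.sin(2 * math.pi * freq * i / fs) for i, x in enumerate(signal))
    c = sum(x * math.cos(2 * math.pi * freq * i / fs) for i, x in enumerate(signal))
    return (s * s + c * c) / len(signal)

def classify_ssvep(signal, candidate_freqs, fs):
    """Return the candidate flicker frequency with the strongest response."""
    return max(candidate_freqs, key=lambda f: ssvep_score(signal, f, fs))

fs = 256  # assumed sampling rate, Hz
# Synthetic 1 s trace: the subject attends a 10 Hz flicker.
trace = [math.sin(2 * math.pi * 10 * i / fs) for i in range(fs)]
print(classify_ssvep(trace, [8, 10, 13], fs))  # → 10
```

In practice, methods such as canonical correlation analysis over multiple channels are commonly used instead of this single-channel projection.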
Electroencephalography(EEG)-based instinctive brain-control of a quadruped locomotion robot.
Jia, Wenchuan; Huang, Dandan; Luo, Xin; Pu, Huayan; Chen, Xuedong; Bai, Ou
2012-01-01
Artificial intelligence and bionic control have been applied in EEG-based robot systems to execute complex brain-control tasks. Nevertheless, due to technical limitations of EEG decoding, the brain-computer interface (BCI) protocol is often complex, and the mapping between EEG signals and practical instructions lacks a logical association, which restricts actual use. This paper presents a strategy for controlling a quadruped locomotion robot through the user's instinctive actions, based on five kinds of movement-related neurophysiological signals. In actual use, the user performs or imagines limb/wrist actions to generate EEG signals that adjust the real movement of the robot according to his/her own motor reflex to the robot's locomotion. This method is easy to use in practice, as the user generates the brain-control signal through instinctive reactions. By adopting behavioral control based on learning and evolution on top of the proposed strategy, complex movement tasks may be realized through instinctive brain-control.
Robotic and Virtual Reality BCIs Using Spatial Tactile and Auditory Oddball Paradigms.
Rutkowski, Tomasz M
2016-01-01
The paper reviews nine robotic and virtual reality (VR) brain-computer interface (BCI) projects developed by the author, in collaboration with his graduate students, within the BCI-lab research group during its association with the University of Tsukuba, Japan. The nine novel approaches are discussed as applications of direct brain-robot and brain-virtual-reality-agent control interfaces using tactile and auditory BCI technologies. The BCI user's intentions are decoded from brainwaves in real time using non-invasive electroencephalography (EEG) and translated into thought-based control of a symbiotic robot or virtual reality agent. A communication protocol between the BCI output and the robot or virtual environment is realized in a symbiotic communication scenario using a user datagram protocol (UDP), which constitutes an internet of things (IoT) control scenario. Results obtained from healthy users reproducing simple brain-robot and brain-virtual-agent control tasks in online experiments support the research goal of interacting with robotic devices and virtual reality agents using symbiotic thought-based BCI technologies. An offline BCI classification accuracy boosting method, using a previously proposed information-geometry-derived approach, is also discussed to further support the reviewed robotic and virtual reality thought-based control paradigms.
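A UDP link of the kind described above is simple to set up with standard sockets. The sketch below (command vocabulary and loopback setup are illustrative assumptions, not the author's protocol) sends a decoded BCI command string to a listener standing in for the robot controller:

```python
import socket

def send_command(sock, cmd, addr):
    """Transmit a decoded BCI command as a UTF-8 datagram."""
    sock.sendto(cmd.encode("utf-8"), addr)

# Loopback demo: the receiver plays the role of the robot/VR-agent controller.
rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
rx.bind(("127.0.0.1", 0))          # OS-assigned free port
addr = rx.getsockname()

tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
send_command(tx, "TURN_LEFT", addr)

data, _ = rx.recvfrom(1024)
received = data.decode("utf-8")
print(received)  # → TURN_LEFT
rx.close()
tx.close()
```

UDP's connectionless, low-latency delivery suits this streaming-command scenario; lost datagrams are simply superseded by the next decoded command.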
Wireless brain-machine interface using EEG and EOG: brain wave classification and robot control
NASA Astrophysics Data System (ADS)
Oh, Sechang; Kumar, Prashanth S.; Kwon, Hyeokjun; Varadan, Vijay K.
2012-04-01
A brain-machine interface (BMI) links a user's brain activity directly to an external device. It enables a person to control devices using only thought. Hence, it has gained significant interest in the design of assistive devices and systems for people with disabilities. In addition, BMI has also been proposed to replace humans with robots in the performance of dangerous tasks such as explosives handling/defusing, hazardous materials handling, firefighting, etc. There are mainly two types of BMI, based on the measurement method of brain activity: invasive and non-invasive. Invasive BMI can provide pristine signals, but it is expensive and surgery may lead to undesirable side effects. Recent advances in non-invasive BMI have opened the possibility of generating robust control signals from noisy brain activity signals like EEG and EOG. A practical implementation of a non-invasive BMI such as robot control requires: acquisition of brain signals with a robust wearable unit, noise filtering and signal processing, identification and extraction of relevant brain wave features and, finally, an algorithm to determine control signals based on the wave features. In this work, we developed a wireless brain-machine interface with a small platform and established a BMI that can be used to control the movement of a robot by using the extracted features of the EEG and EOG signals. The system records and classifies EEG as alpha, beta, delta, and theta waves. The classified brain waves are then used to define the level of attention. The acceleration, deceleration, or stopping of the robot is controlled based on the attention level of the wearer. In addition, left and right eyeball movements control the direction of the robot.
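The band-power-to-attention-to-speed mapping described above can be sketched as follows. The band powers, attention index, and thresholds here are illustrative assumptions, not the authors' algorithm:

```python
def attention_level(band_power):
    """Relative beta power as a crude attention index in [0, 1]."""
    total = sum(band_power.values())
    return band_power["beta"] / total if total else 0.0

def speed_command(attention, high=0.4, low=0.2):
    """Map the attention index to a robot speed command."""
    if attention >= high:
        return "accelerate"
    elif attention > low:
        return "hold"
    return "stop"

# Synthetic band powers for one EEG epoch (arbitrary units).
powers = {"delta": 1.0, "theta": 1.5, "alpha": 2.0, "beta": 3.5}
level = attention_level(powers)  # 3.5 / 8.0 = 0.4375
print(speed_command(level))      # → accelerate
```

A real system would first estimate the band powers per epoch with a band-pass filter or spectral estimate before applying such a mapping.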
Creating the brain and interacting with the brain: an integrated approach to understanding the brain
Morimoto, Jun; Kawato, Mitsuo
2015-01-01
In the past two decades, brain science and robotics have made gigantic advances in their own fields, and their interactions have generated several interdisciplinary research fields. First, in the ‘understanding the brain by creating the brain’ approach, computational neuroscience models have been applied to many robotics problems. Second, such brain-motivated fields as cognitive robotics and developmental robotics have emerged as interdisciplinary areas among robotics, neuroscience and cognitive science with special emphasis on humanoid robots. Third, in brain–machine interface research, a brain and a robot are mutually connected within a closed loop. In this paper, we review the theoretical backgrounds of these three interdisciplinary fields and their recent progress. Then, we introduce recent efforts to reintegrate these research fields into a coherent perspective and propose a new direction that integrates brain science and robotics where the decoding of information from the brain, robot control based on the decoded information and multimodal feedback to the brain from the robot are carried out in real time and in a closed loop. PMID:25589568
Noninvasive Electroencephalogram Based Control of a Robotic Arm for Reach and Grasp Tasks
NASA Astrophysics Data System (ADS)
Meng, Jianjun; Zhang, Shuying; Bekyo, Angeliki; Olsoe, Jaron; Baxter, Bryan; He, Bin
2016-12-01
Brain-computer interface (BCI) technologies aim to provide a bridge between the human brain and external devices. Prior research using non-invasive BCI to control virtual objects, such as computer cursors and virtual helicopters, and real-world objects, such as wheelchairs and quadcopters, has demonstrated the promise of BCI technologies. However, controlling a robotic arm to complete reach-and-grasp tasks efficiently using non-invasive BCI has yet to be shown. In this study, we found that a group of 13 human subjects could willingly modulate brain activity to control a robotic arm with high accuracy for performing tasks requiring multiple degrees of freedom by combination of two sequential low dimensional controls. Subjects were able to effectively control reaching of the robotic arm through modulation of their brain rhythms within the span of only a few training sessions and maintained the ability to control the robotic arm over multiple months. Our results demonstrate the viability of human operation of prosthetic limbs using non-invasive BCI technology.
Progress in EEG-Based Brain Robot Interaction Systems
Li, Mengfan; Niu, Linwei; Xian, Bin; Zeng, Ming; Chen, Genshe
2017-01-01
The most popular noninvasive Brain Robot Interaction (BRI) technology uses the electroencephalogram (EEG)-based Brain Computer Interface (BCI) as an additional communication channel for robot control via brainwaves. This technology is promising for assisting elderly or disabled patients in daily life. The key issue of a BRI system is to identify human mental activities by decoding brainwaves acquired with an EEG device. Compared with other BCI applications, such as word spellers, the development of these applications may be more challenging, since control of robot systems via brainwaves must consider surrounding environment feedback in real time, robot mechanical kinematics and dynamics, as well as robot control architecture and behavior. This article reviews the major techniques needed for developing BRI systems. In this review article, we first briefly introduce the background and development of mind-controlled robot technologies. Second, we discuss the EEG-based brain signal models with respect to generating principles, evoking mechanisms, and experimental paradigms. Subsequently, we review in detail commonly used methods for decoding brain signals, namely, preprocessing, feature extraction, and feature classification, and summarize several typical application examples. Next, we describe a few BRI applications, including wheelchairs, manipulators, drones, and humanoid robots with respect to synchronous and asynchronous BCI-based techniques. Finally, we address some existing problems and challenges with future BRI techniques. PMID:28484488
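The preprocessing → feature extraction → classification pipeline that the review describes can be sketched as three composable stages. All three are toy stand-ins (DC removal, variance feature, threshold classifier) chosen for illustration, not methods from the article:

```python
def bandpass(signal):
    """Preprocessing stand-in: remove the DC offset from one EEG epoch."""
    mean = sum(signal) / len(signal)
    return [x - mean for x in signal]

def extract_features(signal):
    """Feature stand-in: signal variance as a single-element feature vector."""
    return [sum(x * x for x in signal) / len(signal)]

def classify(features, threshold=0.5):
    """Classifier stand-in: threshold on the variance feature."""
    return "active" if features[0] > threshold else "rest"

trial = [0.1, 1.2, -0.9, 1.1, -1.0, 0.8]          # synthetic epoch
print(classify(extract_features(bandpass(trial))))  # → active
```

Real systems substitute each stage with stronger components (e.g., band-pass filtering, CSP or spectral features, and an LDA/SVM classifier) while keeping this same staged structure.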
Developments in brain-machine interfaces from the perspective of robotics.
Kim, Hyun K; Park, Shinsuk; Srinivasan, Mandayam A
2009-04-01
Many patients suffer from the loss of motor skills, resulting from traumatic brain and spinal cord injuries, stroke, and many other disabling conditions. Thanks to technological advances in measuring and decoding the electrical activity of cortical neurons, brain-machine interfaces (BMI) have become a promising technology that can aid paralyzed individuals. In recent studies on BMI, robotic manipulators have demonstrated their potential as neuroprostheses. Restoring motor skills through robot manipulators controlled by brain signals may improve the quality of life of people with disability. This article reviews current robotic technologies that are relevant to BMI and suggests strategies that could improve the effectiveness of a brain-operated neuroprosthesis through robotics.
Soft brain-machine interfaces for assistive robotics: A novel control approach.
Schiatti, Lucia; Tessadori, Jacopo; Barresi, Giacinto; Mattos, Leonardo S; Ajoudani, Arash
2017-07-01
Robotic systems offer the possibility of improving the life quality of people with severe motor disabilities, enhancing the individual's degree of independence and interaction with the external environment. In this direction, the operator's residual functions must be exploited for the control of the robot movements and the underlying dynamic interaction through intuitive and effective human-robot interfaces. Towards this end, this work aims at exploring the potential of a novel Soft Brain-Machine Interface (BMI), suitable for dynamic execution of remote manipulation tasks for a wide range of patients. The interface is composed of an eye-tracking system, for an intuitive and reliable control of a robotic arm system's trajectories, and a Brain-Computer Interface (BCI) unit, for the control of the robot Cartesian stiffness, which determines the interaction forces between the robot and environment. The latter control is achieved by estimating in real-time a unidimensional index from user's electroencephalographic (EEG) signals, which provides the probability of a neutral or active state. This estimated state is then translated into a stiffness value for the robotic arm, allowing a reliable modulation of the robot's impedance. A preliminary evaluation of this hybrid interface concept provided evidence on the effective execution of tasks with dynamic uncertainties, demonstrating the great potential of this control method in BMI applications for self-service and clinical care.
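The stiffness-modulation idea above can be sketched as a linear map from the EEG-derived probability of an "active" mental state onto a Cartesian stiffness range. The bounds and the linear mapping are assumptions for illustration, not the authors' values:

```python
K_MIN, K_MAX = 100.0, 1000.0  # assumed stiffness bounds, N/m

def stiffness_from_probability(p_active):
    """Map P(active) in [0, 1] to a stiffness set-point for the robot arm."""
    p = min(max(p_active, 0.0), 1.0)  # clamp noisy decoder output
    return K_MIN + p * (K_MAX - K_MIN)

print(stiffness_from_probability(0.5))  # → 550.0
```

Clamping the decoder output before mapping keeps a misbehaving classifier from commanding stiffness values outside the safe range of the impedance controller.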
Brain-controlled telepresence robot by motor-disabled people.
Tonin, Luca; Carlson, Tom; Leeb, Robert; del R Millán, José
2011-01-01
In this paper we present the first results of users with disabilities mentally controlling a telepresence robot, a rather complex task as the robot is continuously moving and the user must control it for a long period of time (over 6 minutes) to traverse the whole path. These two users drove the telepresence robot from their clinic, more than 100 km away. Remarkably, although the patients had never visited the location where the telepresence robot was operating, they achieved performance similar to a group of four healthy users who were familiar with the environment. In particular, the experimental results reported in this paper demonstrate the benefits of shared control for brain-controlled telepresence robots. It allows all subjects (including novice BMI subjects, such as our users with disabilities) to complete a complex task in a similar time and with a similar number of commands to those required by manual control.
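A toy sketch of the shared-control principle above: the robot blends the user's sparse BCI command with an obstacle-avoidance term computed from proximity sensing. The blending weight and repulsion rule are illustrative assumptions, not the paper's controller:

```python
def shared_control(user_turn, obstacle_dist, safe_dist=1.0, w_user=0.5):
    """Blend a user turn command in [-1, 1] with a repulsive turn away
    from a nearby obstacle (positive = turn away)."""
    if obstacle_dist >= safe_dist:
        return user_turn  # free space: the user is in full control
    repulsion = 1.0 - obstacle_dist / safe_dist  # stronger when closer
    return w_user * user_turn + (1 - w_user) * repulsion

print(shared_control(0.0, 2.0))  # → 0.0  (free space, command passes through)
print(shared_control(0.0, 0.5))  # → 0.25 (nudged away from the obstacle)
```

Because the avoidance term only activates near obstacles, the user retains the sense of driving while the controller absorbs the high-rate corrections that a low-bandwidth BCI cannot deliver.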
Brain computer interface for operating a robot
NASA Astrophysics Data System (ADS)
Nisar, Humaira; Balasubramaniam, Hari Chand; Malik, Aamir Saeed
2013-10-01
A Brain-Computer Interface (BCI) is a hardware/software based system that translates the electroencephalogram (EEG) signals produced by brain activity to control computers and other external devices. In this paper, we present a non-invasive BCI system that reads the EEG signals from trained brain activity using a neuro-signal acquisition headset and translates them into computer-readable form to control the motion of a robot. The robot performs the actions that are instructed to it in real time. We have used cognitive states such as Push and Pull to control the motion of the robot. The sensitivity and specificity of the system are above 90 percent. Subjective results show a mixed trend in the difficulty level of the training activities. The quantitative EEG data analysis complements the subjective results. This technology may become very useful for the rehabilitation of disabled and elderly people.
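As a reminder of how the reported sensitivity and specificity figures are computed from a confusion matrix (the counts below are made up for illustration, not the paper's data):

```python
def sensitivity(tp, fn):
    """True positive rate: correctly detected target states."""
    return tp / (tp + fn)

def specificity(tn, fp):
    """True negative rate: correctly rejected non-target states."""
    return tn / (tn + fp)

# Hypothetical confusion counts for a Push-vs-rest classifier.
tp, fn, tn, fp = 92, 8, 95, 5
print(sensitivity(tp, fn))  # → 0.92
print(specificity(tn, fp))  # → 0.95
```

Reporting both quantities matters for a self-paced system: high sensitivity alone says nothing about how often the robot moves when the user issued no command.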
A brain-controlled lower-limb exoskeleton for human gait training.
Liu, Dong; Chen, Weihai; Pei, Zhongcai; Wang, Jianhua
2017-10-01
Brain-computer interfaces have been a novel approach to translating human intentions into movement commands in robotic systems. This paper describes an electroencephalogram-based brain-controlled lower-limb exoskeleton for gait training, as a proof of concept towards rehabilitation with human-in-the-loop. Instead of using conventional single electroencephalography correlates, e.g., evoked P300 or spontaneous motor imagery, we propose a novel framework integrating two asynchronous signal modalities, i.e., sensorimotor rhythms (SMRs) and movement-related cortical potentials (MRCPs). We executed experiments in a biologically inspired and customized lower-limb exoskeleton where subjects (N = 6) actively controlled the robot using their brain signals. Each subject performed three consecutive sessions composed of offline training, online visual feedback testing, and online robot-control recordings. Post hoc evaluations were conducted, including mental workload assessment, feature analysis, and statistical tests. An average robot-control accuracy of 80.16% ± 5.44% was obtained with the SMR-based method, while estimation using the MRCP-based method yielded an average performance of 68.62% ± 8.55%. The experimental results showed the feasibility of the proposed framework, with all subjects successfully controlling the exoskeleton. The current paradigm could be further extended to paraplegic patients in clinical trials.
Towards Rehabilitation Robotics: Off-the-Shelf BCI Control of Anthropomorphic Robotic Arms.
Athanasiou, Alkinoos; Xygonakis, Ioannis; Pandria, Niki; Kartsidis, Panagiotis; Arfaras, George; Kavazidi, Kyriaki Rafailia; Foroglou, Nicolas; Astaras, Alexander; Bamidis, Panagiotis D
2017-01-01
Advances in neural interfaces have demonstrated remarkable results in the direction of replacing and restoring lost sensorimotor function in human patients. Noninvasive brain-computer interfaces (BCIs) are popular due to considerable advantages including simplicity, safety, and low cost, while recent advances aim at improving past technological and neurophysiological limitations. Taking into account the neurophysiological alterations of disabled individuals, investigating brain connectivity features for implementation of BCI control holds special importance. Off-the-shelf BCI systems are based on fast, reproducible detection of mental activity and can be implemented in neurorobotic applications. Moreover, social Human-Robot Interaction (HRI) is increasingly important in rehabilitation robotics development. In this paper, we present our progress and goals towards developing off-the-shelf BCI-controlled anthropomorphic robotic arms for assistive technologies and rehabilitation applications. We account for robotics development, BCI implementation, and qualitative assessment of HRI characteristics of the system. Furthermore, we present two illustrative experimental applications of the BCI-controlled arms: a study of motor imagery modalities on healthy individuals' BCI performance, and a pilot investigation of spinal cord injured patients' BCI control and brain connectivity. We discuss strengths and limitations of our design and propose further steps on development and neurophysiological study, including implementation of connectivity features as a BCI modality.
Brain Computer Interfaces for Enhanced Interaction with Mobile Robot Agents
2016-07-27
Final Report (performance period 17-Sep-2013 to 16-Sep-2014; report date 27-07-2016; Distribution Unlimited). ...synergistic and complementary way. This project focused on acquiring a mobile robotic agent platform that can be used to explore these interfaces... providing a test environment where the human control of a robot agent can be experimentally validated...
Neural-Network Control Of Prosthetic And Robotic Hands
NASA Technical Reports Server (NTRS)
Buckley, Theresa M.
1991-01-01
Electronic neural networks proposed for use in controlling robotic and prosthetic hands and exoskeletal or glovelike electromechanical devices aiding intact but nonfunctional hands. Specific to patient, who activates grasping motion by voice command, by mechanical switch, or by myoelectric impulse. Patient retains higher-level control, while lower-level control provided by neural network analogous to that of miniature brain. During training, patient teaches miniature brain to perform specialized, anthropomorphic movements unique to himself or herself.
Towards the development of a spring-based continuum robot for neurosurgery
NASA Astrophysics Data System (ADS)
Kim, Yeongjin; Cheng, Shing Shin; Desai, Jaydev P.
2015-03-01
Brain tumors are usually life-threatening due to the uncontrolled growth of abnormal cells native to the brain or the spread of tumor cells from outside the central nervous system to the brain. The risks involved in carrying out surgery within such a complex organ can cause severe anxiety in cancer patients. However, neurosurgery, which remains one of the more effective ways of treating brain tumors confined to a localized volume, can have a tremendously increased success rate if the appropriate imaging modality is used for complete tumor removal. Magnetic resonance imaging (MRI) provides excellent soft-tissue contrast and is the imaging modality of choice for brain tumor imaging. MRI combined with continuum soft robotics has immense potential to be a revolutionary treatment technique in the field of brain cancer. It eliminates the concern of hand tremor and guarantees a more precise procedure. A prototype of the Minimally Invasive Neurosurgical Intracranial Robot (MINIR-II), which can be classified as a continuum soft robot, consists of a snake-like body made of three segments of rapid-prototyped plastic springs. It provides improved dexterity with higher degrees of freedom and independent joint control. It is MRI-compatible, allowing surgeons to track and determine the real-time location of the robot relative to the brain tumor target. The robot was manufactured in a single piece using rapid prototyping technology at low cost, allowing it to be disposed of after each use. MINIR-II has two DOFs at each segment, with both joints controlled by two pairs of MRI-compatible SMA spring actuators. Preliminary motion tests have been carried out using a vision-tracking method, and the robot was able to move to different positions based on user commands.
Alimardani, Maryam; Nishio, Shuichi; Ishiguro, Hiroshi
2016-09-22
Body ownership illusions provide evidence that our sense of self is not coherent and can be extended to non-body objects. Studying these illusions gives us practical tools to understand the brain mechanisms that underlie body recognition and the experience of self. We previously introduced an illusion of body ownership transfer (BOT) for operators of a very humanlike robot. This sensation of owning the robot's body was confirmed when operators controlled the robot either by performing the desired motion with their body (motion-control) or by employing a brain-computer interface (BCI) that translated motor imagery commands to robot movement (BCI-control). The interesting observation during BCI-control was that the illusion could be induced even with a noticeable delay in the BCI system. Temporal discrepancy has always shown critical weakening effects on body ownership illusions. However, the delay-robustness of BOT during BCI-control raised a question about the interaction between proprioceptive inputs and delayed visual feedback in agency-driven illusions. In this work, we compared the intensity of the BOT illusion for operators in two conditions: motion-control and BCI-control. Our results revealed a significantly stronger BOT illusion for the case of BCI-control. This finding highlights BCI's potential for inducing stronger agency-driven illusions by building a direct communication channel between the brain and the controlled body, thereby removing awareness from the subject's own body.
Towards a real-time interface between a biomimetic model of sensorimotor cortex and a robotic arm
Dura-Bernal, Salvador; Chadderdon, George L; Neymotin, Samuel A; Francis, Joseph T; Lytton, William W
2015-01-01
Brain-machine interfaces (BMIs) can greatly improve the performance of prosthetics. Utilizing biomimetic neuronal modeling in BMIs offers the possibility of providing naturalistic motor-control algorithms for control of a robotic limb. This will allow finer control of a robot, while also giving us new tools to better understand the brain's use of electrical signals. However, the biomimetic approach presents challenges in integrating technologies across multiple hardware and software platforms, so that the different components can communicate in real time. We present the first steps in an ongoing effort to integrate a biomimetic spiking neuronal model of motor learning with a robotic arm. The biomimetic model (BMM) was used to drive a simple kinematic two-joint virtual arm in a motor task requiring trial-and-error convergence on a single target. We utilized the output of this model in real time to drive mirroring motion of a Barrett Technology WAM robotic arm through a user datagram protocol (UDP) interface. The robotic arm sent back information on its joint positions, which was then used by a visualization tool on the remote computer to display a realistic 3D virtual model of the moving robotic arm in real time. This work paves the way towards a full closed-loop biomimetic brain-effector system that can be incorporated in a neural decoder for prosthetic control, to be used as a platform for developing biomimetic learning algorithms for controlling real-time devices. PMID:26709323
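The UDP exchange described above can be sketched as follows; the two-float packet layout, function names, and loopback demo are illustrative assumptions, not the actual WAM interface.

```python
import socket
import struct

# Hypothetical 2-joint command packet: two little-endian float32 angles (radians).
PACKET_FMT = "<2f"

def encode_joint_command(shoulder, elbow):
    """Pack two joint angles (radians) into a UDP payload."""
    return struct.pack(PACKET_FMT, shoulder, elbow)

def decode_joint_state(payload):
    """Unpack the joint positions reported back by the robot."""
    return struct.unpack(PACKET_FMT, payload)

def send_command(sock, addr, shoulder, elbow):
    """Fire one command datagram at the robot-side endpoint."""
    sock.sendto(encode_joint_command(shoulder, elbow), addr)

if __name__ == "__main__":
    # Loopback round trip: a stand-in for the model-to-robot link.
    receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    receiver.bind(("127.0.0.1", 0))  # ephemeral port
    sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    send_command(sender, receiver.getsockname(), 0.5, -1.25)
    data, _ = receiver.recvfrom(64)
    print(decode_joint_state(data))
```

UDP is a natural fit here because a stale joint command is worthless: dropping a packet is preferable to the head-of-line blocking a TCP retransmission would impose on a real-time control loop.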
A self-paced motor imagery based brain-computer interface for robotic wheelchair control.
Tsui, Chun Sing Louis; Gan, John Q; Hu, Huosheng
2011-10-01
This paper presents a simple self-paced motor imagery based brain-computer interface (BCI) to control a robotic wheelchair. An innovative control protocol is proposed to enable a 2-class self-paced BCI for wheelchair control, in which the user performs path planning and fully controls the wheelchair, except for automatic obstacle avoidance based on a laser range finder when necessary. In order for users to train their motor imagery control online safely and easily, simulated robot navigation in a specially designed environment was developed. This allowed users to practice motor imagery control with the core self-paced BCI system in a simulated scenario before controlling the wheelchair. The self-paced BCI can then be applied to control a real robotic wheelchair using a protocol similar to the one controlling the simulated robot. Our emphasis is on allowing more potential users to use the BCI-controlled wheelchair with minimal training; a simple 2-class self-paced system is adequate with the novel control protocol, resulting in a better transition from offline training to online control. Experimental results have demonstrated the usefulness of the online practice under the simulated scenario, and the effectiveness of the proposed self-paced BCI for robotic wheelchair control.
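A minimal sketch of how a 2-class self-paced protocol with an obstacle-avoidance override might map decoder output to wheelchair commands; the class labels, confidence threshold, and command names below are hypothetical, not the paper's actual protocol.

```python
# Illustrative control rule for a 2-class self-paced wheelchair BCI.
# "Self-paced" means the decoder must also recognize when the user is NOT
# issuing a command; here that is modeled with a confidence threshold.
def wheelchair_command(mi_class, confidence, obstacle_ahead, threshold=0.7):
    """Map one self-paced motor-imagery decision to a wheelchair command."""
    if obstacle_ahead:             # laser-range-finder safety override
        return "stop_and_avoid"
    if confidence < threshold:     # no intentional control detected
        return "keep_going"
    return "turn_left" if mi_class == "left_hand" else "turn_right"
```

The same rule can drive the simulated robot and the real wheelchair, which mirrors the paper's training-to-deployment transition.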
Control of a 7-DOF Robotic Arm System With an SSVEP-Based BCI.
Chen, Xiaogang; Zhao, Bing; Wang, Yijun; Xu, Shengpu; Gao, Xiaorong
2018-04-12
Although robot technology has been successfully used to empower people who suffer from motor disabilities to increase their interaction with their physical environment, it remains a challenge for individuals with severe motor impairment, who lack the motor control ability to move robots or prosthetic devices by manual control. In this study, to mitigate this issue, a noninvasive brain-computer interface (BCI)-based robotic arm control system using gaze-based steady-state visual evoked potentials (SSVEP) was designed and implemented using a portable wireless electroencephalogram (EEG) system. A 15-target SSVEP-based BCI using a filter bank canonical correlation analysis (FBCCA) method allowed users to directly control the robotic arm without system calibration. The online results from 12 healthy subjects indicated that a command for the proposed brain-controlled robot system could be selected from 15 possible choices in 4 s (i.e., 2 s for visual stimulation and 2 s for gaze shifting) with an average accuracy of 92.78%, resulting in a transfer rate of 15 commands/min. Furthermore, all subjects (even naive users) were able to successfully complete the entire move-grasp-lift task without user training. These results demonstrate that an SSVEP-based BCI can provide accurate and efficient high-level control of a robotic arm, showing the feasibility of a BCI-based robotic arm control system for hand assistance.
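The correlation step at the core of an SSVEP decoder can be sketched with plain canonical correlation analysis (CCA); the filter-bank stage of FBCCA, which combines correlations from several sub-band-filtered copies of the EEG, is omitted here, and the sampling rate and candidate frequencies are made up for illustration.

```python
import numpy as np

def max_canon_corr(X, Y):
    """Largest canonical correlation between the column spaces of X and Y."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    Qx, _ = np.linalg.qr(X)
    Qy, _ = np.linalg.qr(Y)
    return np.linalg.svd(Qx.T @ Qy, compute_uv=False)[0]

def ssvep_references(freq, fs, n_samples, n_harmonics=3):
    """Sine/cosine reference signals at the stimulus frequency and harmonics."""
    t = np.arange(n_samples) / fs
    return np.column_stack([f(2 * np.pi * h * freq * t)
                            for h in range(1, n_harmonics + 1)
                            for f in (np.sin, np.cos)])

def classify_ssvep(eeg, fs, candidate_freqs):
    """Pick the stimulus frequency whose references best match the EEG
    (eeg: samples x channels array)."""
    scores = [max_canon_corr(eeg, ssvep_references(f, fs, len(eeg)))
              for f in candidate_freqs]
    return candidate_freqs[int(np.argmax(scores))]
```

Because the reference templates are fixed sinusoids rather than learned from data, this style of decoder needs no per-subject calibration, which is what enables the "no training" property reported above.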
Downey, John E; Weiss, Jeffrey M; Muelling, Katharina; Venkatraman, Arun; Valois, Jean-Sebastien; Hebert, Martial; Bagnell, J Andrew; Schwartz, Andrew B; Collinger, Jennifer L
2016-03-18
Recent studies have shown that brain-machine interfaces (BMIs) offer great potential for restoring upper limb function. However, grasping objects is a complicated task and the signals extracted from the brain may not always be capable of driving these movements reliably. Vision-guided robotic assistance is one possible way to improve BMI performance. We describe a method of shared control where the user controls a prosthetic arm using a BMI and receives assistance with positioning the hand when it approaches an object. Two human subjects with tetraplegia used a robotic arm to complete object transport tasks with and without shared control. The shared control system was designed to provide a balance between BMI-derived intention and computer assistance. An autonomous robotic grasping system identified and tracked objects and defined stable grasp positions for these objects. The system identified when the user intended to interact with an object based on the BMI-controlled movements of the robotic arm. Using shared control, BMI-controlled movements and autonomous grasping commands were blended to ensure secure grasps. Both subjects were more successful on object transfer tasks when using shared control compared to BMI control alone. Movements made using shared control were more accurate, more efficient, and less difficult. One participant attempted a task with multiple objects and successfully lifted one of two closely spaced objects in 92% of trials, demonstrating the potential for users to accurately execute their intention while using shared control. Integration of BMI control with vision-guided robotic assistance led to improved performance on object transfer tasks. Providing assistance while maintaining generalizability will make BMI systems more attractive to potential users. Trial registration: NCT01364480 and NCT01894802.
Robot Control Through Brain Computer Interface For Patterns Generation
NASA Astrophysics Data System (ADS)
Belluomo, P.; Bucolo, M.; Fortuna, L.; Frasca, M.
2011-09-01
A Brain Computer Interface (BCI) system processes and translates neuronal signals, which mainly come from EEG instruments, into commands for controlling electronic devices. Such a system can allow people with motor disabilities to control external devices through the real-time modulation of their brain waves. In this context, an EEG-based BCI system that allows creative luminous artistic representations is presented here. The system, designed and realized in our laboratory, interfaces the BCI2000 platform, which performs real-time analysis of EEG signals, with a pair of moving luminescent twin robots. Experiments are also presented.
Thought-Controlled Nanoscale Robots in a Living Host.
Arnon, Shachar; Dahan, Nir; Koren, Amir; Radiano, Oz; Ronen, Matan; Yannay, Tal; Giron, Jonathan; Ben-Ami, Lee; Amir, Yaniv; Hel-Or, Yacov; Friedman, Doron; Bachelet, Ido
2016-01-01
We report a new type of brain-machine interface enabling a human operator to control nanometer-size robots inside a living animal by brain activity. Recorded EEG patterns are recognized online by an algorithm, which in turn controls the state of an electromagnetic field. The field induces the local heating of billions of mechanically-actuating DNA origami robots tethered to metal nanoparticles, leading to their reversible activation and subsequent exposure of a bioactive payload. As a proof of principle we demonstrate activation of DNA robots to cause a cellular effect inside the insect Blaberus discoidalis in response to a cognitively straining task. This technology enables the online switching of a bioactive molecule on and off in response to a subject's cognitive state, with potential implications for therapeutic control in disorders such as schizophrenia, depression, and attention deficits, which are among the most challenging conditions to diagnose and treat.
Control of a 2 DoF robot using a brain-machine interface.
Hortal, Enrique; Ubeda, Andrés; Iáñez, Eduardo; Azorín, José M
2014-09-01
In this paper, a non-invasive spontaneous Brain-Machine Interface (BMI) is used to control the movement of a planar robot. To that end, two mental tasks are used to manage the visual interface that controls the robot. The robot used is a PupArm, a force-controlled planar robot designed by the nBio research group at the Miguel Hernández University of Elche (Spain). Two control strategies are compared: hierarchical and directional control. The experimental test (performed by four users) consists of reaching four targets. The errors and time taken during the performance of the tests are compared for both control strategies (hierarchical and directional control). The advantages and disadvantages of each method are discussed after the analysis of the results. The hierarchical control allows an accurate approach to the goals but is slower than the directional control, which, in contrast, is less precise. The results show both strategies are useful to control this planar robot. In the future, by adding an extra device like a gripper, this BMI could be used in assistive applications such as grasping daily objects in a realistic environment. In order to compare the behavior of the system taking into account the opinion of the users, a NASA Task Load Index (TLX) questionnaire is filled out after the two sessions are completed.
Gao, Qiang; Dou, Lixiang; Belkacem, Abdelkader Nasreddine; Chen, Chao
2017-01-01
A novel hybrid brain-computer interface (BCI) based on the electroencephalogram (EEG) signal, consisting of a motor imagery- (MI-) based online interactive brain-controlled switch, a "teeth clenching" state detector, and a steady-state visual evoked potential- (SSVEP-) based BCI, was proposed to provide multidimensional BCI control. The MI-based BCI was used as a single-pole double-throw brain switch (SPDTBS). By combining the SPDTBS with the 4-class SSVEP-based BCI, movement of a robotic arm was controlled in three-dimensional (3D) space. In addition, the muscle artifact (EMG) of the "teeth clenching" condition recorded in the EEG signal was detected and employed as an interrupter, which can reset the state of the SPDTBS. A real-time writing task was implemented to verify the reliability of the proposed noninvasive hybrid EEG-EMG-BCI. Eight subjects participated in this study and succeeded in manipulating a robotic arm in 3D space to write some English letters. The mean decoding accuracy of the writing task was 0.93 ± 0.03. Four subjects achieved the optimal criterion of writing the word "HI" with the minimum number of robotic arm movements (15 steps). The other subjects needed 2 to 4 additional steps to finish the whole process. These results suggest that the proposed hybrid noninvasive EEG-EMG-BCI is robust and efficient for real-time multidimensional robotic arm control.
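One plausible reading of the switch logic, sketched as a small state machine; the two "poles" (which movement plane the SSVEP commands are routed to) and the event names are assumptions based on the abstract, not confirmed details of the system.

```python
# Illustrative state machine for the hybrid EEG-EMG switch described above.
class HybridBrainSwitch:
    """Motor imagery toggles which pole the SSVEP commands are routed to
    (single-pole double-throw); a teeth-clench EMG event acts as an
    interrupter that re-initializes the switch."""

    POLES = ("horizontal_plane", "vertical_plane")  # hypothetical 3D split

    def __init__(self):
        self.pole = 0  # initial state

    def on_event(self, event):
        if event == "teeth_clench":      # EMG interrupter: reset
            self.pole = 0
        elif event == "motor_imagery":   # SPDTBS throw: toggle poles
            self.pole = 1 - self.pole

    def route(self, ssvep_command):
        """Tag the SSVEP command with the active pole so the downstream
        controller knows which movement plane to apply it in."""
        return (self.POLES[self.pole], ssvep_command)
```

Splitting 3D control into two 2D planes selected by a brain switch is one way a 4-class SSVEP decoder can cover more movement directions than it has classes.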
Cognitive Robotics, Embodied Cognition and Human-Robot Interaction
2010-11-03
…architecture is a specification of the structure of the brain at a level of abstraction that explains how it achieves the function of the mind (Anderson…), yielding predictions about brain regions (fMRI). Embodied cognitive modeling: we use an MDS robot (Trafton et al., 2010)… Participants passed memory and/or reality control questions (e.g., "Where did Maxi put the chocolate?" or "Where is the chocolate now?"). Our reasoning was that age…
Batula, Alyssa M; Kim, Youngmoo E; Ayaz, Hasan
2017-01-01
Motor-imagery tasks are a popular input method for controlling brain-computer interfaces (BCIs), partially due to their similarities to naturally produced motor signals. The use of functional near-infrared spectroscopy (fNIRS) in BCIs is still emerging and has shown potential as a supplement or replacement for electroencephalography. However, studies often use only two or three motor-imagery tasks, limiting the number of available commands. In this work, we present the results of the first four-class motor-imagery-based online fNIRS-BCI for robot control. Thirteen participants utilized upper- and lower-limb motor-imagery tasks (left hand, right hand, left foot, and right foot) that were mapped to four high-level commands (turn left, turn right, move forward, and move backward) to control the navigation of a simulated or real robot. A significant improvement in classification accuracy was found between the virtual-robot-based BCI (control of a virtual robot) and the physical-robot BCI (control of the DARwIn-OP humanoid robot). Differences were also found in the oxygenated hemoglobin activation patterns of the four tasks between the first and second BCI. These results corroborate previous findings that motor imagery can be improved with feedback and imply that a four-class motor-imagery-based fNIRS-BCI could be feasible with sufficient subject training.
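The task-to-command assignment can be captured in a lookup table; the pairing below follows the parallel ordering in the abstract (left hand with turn left, and so on) and is an assumption, since the actual experimental mapping may differ.

```python
# Hypothesized motor-imagery class -> navigation command mapping.
MI_COMMANDS = {
    "left_hand":  "turn_left",
    "right_hand": "turn_right",
    "left_foot":  "move_forward",
    "right_foot": "move_backward",
}

def command_for(task):
    """Translate a decoded motor-imagery class into a navigation command."""
    return MI_COMMANDS.get(task, "no_op")  # unrecognized class: do nothing
```

Keeping the decoder output at this high level (four discrete commands) is what lets the same BCI drive either the simulated robot or the DARwIn-OP without changes.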
Brain-Machine Interface control of a robot arm using actor-critic reinforcement learning.
Pohlmeyer, Eric A; Mahmoudi, Babak; Geng, Shijia; Prins, Noeline; Sanchez, Justin C
2012-01-01
Here we demonstrate how a marmoset monkey can use a reinforcement learning (RL) Brain-Machine Interface (BMI) to effectively control the movements of a robot arm for a reaching task. In this work, an actor-critic RL algorithm used neural ensemble activity in the monkey's motor cortex to control the robot movements during a two-target decision task. This novel approach to decoding offers unique advantages for BMI control applications. Compared to supervised learning decoding methods, the actor-critic RL algorithm does not require an explicit set of training data to create a static control model; rather, it incrementally adapts the model parameters according to its current performance, in this case requiring only a very basic feedback signal. We show how this algorithm achieved high performance (94%) in mapping the monkey's neural states to robot actions, and needed to experience only a few trials before obtaining accurate real-time control of the robot arm. Since RL methods responsively adapt and adjust their parameters, they provide a way to create BMIs that are robust against perturbations caused by changes in either the neural input space or the output actions they generate under different task requirements or goals.
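A minimal policy-gradient sketch of learning from binary evaluative feedback, in the spirit of the approach above; the full RLBMI uses an actor-critic architecture, whereas here the critic is collapsed into the raw ±1 reward, and the state encoding is a toy one-hot stand-in for neural ensemble activity.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

class BinaryFeedbackActor:
    """Actor trained only from a +1/-1 evaluative signal: no supervised
    training set, no static model -- the weights adapt trial by trial."""

    def __init__(self, n_states, n_actions, lr=0.5, seed=0):
        self.W = np.zeros((n_states, n_actions))  # state -> action logits
        self.lr = lr
        self.rng = np.random.default_rng(seed)

    def act(self, state):
        """Sample an action from the softmax policy for this neural state."""
        return int(self.rng.choice(self.W.shape[1],
                                   p=softmax(self.W.T @ state)))

    def update(self, state, action, reward):
        """REINFORCE-style step: scale the log-policy gradient by the
        binary reward, pushing toward rewarded and away from punished actions."""
        p = softmax(self.W.T @ state)
        grad = -p
        grad[action] += 1.0              # d log pi(a|s) / d logits
        self.W += self.lr * reward * np.outer(state, grad)
```

Each update raises the chosen action's logit when the feedback is +1 and lowers it when the feedback is -1, so the mapping from states to actions converges without any explicit kinematic training data.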
Humanlike robot hands controlled by brain activity arouse illusion of ownership in operators
Alimardani, Maryam; Nishio, Shuichi; Ishiguro, Hiroshi
2013-01-01
Operators of a pair of robotic hands report ownership of those hands when they hold an image of a grasp motion in mind and watch the robot perform it. We present a novel body ownership illusion that is induced by merely watching and controlling the robot's motions through a brain-machine interface. In past studies, body ownership illusions were induced by the correlation of sensory inputs such as vision, touch, and proprioception. In the illusion presented here, however, no sensations other than vision are integrated. Our results show that during BMI operation of robotic hands, the interaction between motor commands and visual feedback of the intended motions is adequate to incorporate the non-body limbs into one's own body. Our discussion focuses on the role of proprioceptive information in the mechanism of agency-driven illusions. We believe that our findings will contribute to the improvement of tele-presence systems in which operators incorporate BMI-operated robots into their body representations. PMID:23928891
Pohlmeyer, Eric A.; Mahmoudi, Babak; Geng, Shijia; Prins, Noeline W.; Sanchez, Justin C.
2014-01-01
Brain-machine interface (BMI) systems give users direct neural control of robotic, communication, or functional electrical stimulation systems. As BMI systems begin transitioning from laboratory settings into activities of daily living, an important goal is to develop neural decoding algorithms that can be calibrated with a minimal burden on the user, provide stable control for long periods of time, and can be responsive to fluctuations in the decoder’s neural input space (e.g. neurons appearing or being lost amongst electrode recordings). These are significant challenges for static neural decoding algorithms that assume stationary input/output relationships. Here we use an actor-critic reinforcement learning architecture to provide an adaptive BMI controller that can successfully adapt to dramatic neural reorganizations, can maintain its performance over long time periods, and which does not require the user to produce specific kinetic or kinematic activities to calibrate the BMI. Two marmoset monkeys used the Reinforcement Learning BMI (RLBMI) to successfully control a robotic arm during a two-target reaching task. The RLBMI was initialized using random initial conditions, and it quickly learned to control the robot from brain states using only a binary evaluative feedback regarding whether previously chosen robot actions were good or bad. The RLBMI was able to maintain control over the system throughout sessions spanning multiple weeks. Furthermore, the RLBMI was able to quickly adapt and maintain control of the robot despite dramatic perturbations to the neural inputs, including a series of tests in which the neuron input space was deliberately halved or doubled. PMID:24498055
Audio-visual feedback improves the BCI performance in the navigational control of a humanoid robot
Tidoni, Emmanuele; Gergondet, Pierre; Kheddar, Abderrahmane; Aglioti, Salvatore M.
2014-01-01
Advancement in brain computer interface (BCI) technology allows people to actively interact with the world through surrogates. Controlling real humanoid robots using a BCI as intuitively as we control our body represents a challenge for current research in robotics and neuroscience. In order to successfully interact with the environment the brain integrates multiple sensory cues to form a coherent representation of the world. Cognitive neuroscience studies demonstrate that multisensory integration may imply a gain with respect to a single modality and ultimately improve the overall sensorimotor performance. For example, reactivity to simultaneous visual and auditory stimuli may be higher than to the sum of the same stimuli delivered in isolation or in temporal sequence. Yet, knowledge about whether audio-visual integration may improve the control of a surrogate is meager. To explore this issue, we provided human footstep sounds as audio feedback to BCI users while they controlled a humanoid robot. Participants were asked to steer their robot surrogate and perform a pick-and-place task through BCI-SSVEPs. We found that audio-visual synchrony between the footstep sounds and the humanoid's actual walk reduced the time required for steering the robot. Thus, auditory feedback congruent with the humanoid's actions may improve the motor decisions of the BCI user and reinforce the feeling of control over the robot. Our results shed light on the possibility of improving control of a robot by combining multisensory feedback for the BCI user. PMID:24987350
McWhinney, S R; Tremblay, A; Boe, S G; Bardouille, T
2018-02-01
Neurofeedback training teaches individuals to modulate brain activity by providing real-time feedback and can be used for brain-computer interface control. The present study aimed to optimize training by maximizing engagement through goal-oriented task design. Participants were shown either a visual display or a robot, where each was manipulated using motor imagery (MI)-related electroencephalography signals. Those with the robot were instructed to quickly navigate grid spaces, as the potential for goal-oriented design to strengthen learning was central to our investigation. Both groups were hypothesized to show increased magnitude of these signals across 10 sessions, with the greatest gains being seen in those navigating the robot due to increased engagement. Participants demonstrated the predicted increase in magnitude, with no differentiation between hemispheres. Participants navigating the robot showed stronger left-hand MI increases than those with the computer display. This is likely due to success being reliant on maintaining strong MI-related signals. While older participants showed stronger signals in early sessions, this trend later reversed, suggesting greater natural proficiency but reduced flexibility. These results demonstrate capacity for modulating neurofeedback using MI over a series of training sessions, using tasks of varied design. Importantly, the more goal-oriented robot control task resulted in greater improvements.
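Motor-imagery neurofeedback commonly quantifies signal magnitude as band power in the mu band and expresses training effects as event-related desynchronization (ERD); the periodogram estimate below is a generic sketch of that computation, not the study's actual analysis pipeline.

```python
import numpy as np

def band_power(signal, fs, band=(8.0, 13.0)):
    """Mean periodogram power of a 1-D signal in a frequency band
    (the mu band, 8-13 Hz, by default)."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return psd[mask].mean()

def erd_percent(baseline_power, task_power):
    """Event-related desynchronization: the relative drop in band power
    during motor imagery compared to a rest baseline."""
    return 100.0 * (baseline_power - task_power) / baseline_power
```

A neurofeedback display (or the robot's speed) can then be driven by the ERD value computed over a sliding window, which is the quantity participants learn to modulate across sessions.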
Falotico, Egidio; Vannucci, Lorenzo; Ambrosano, Alessandro; Albanese, Ugo; Ulbrich, Stefan; Vasquez Tieck, Juan Camilo; Hinkel, Georg; Kaiser, Jacques; Peric, Igor; Denninger, Oliver; Cauli, Nino; Kirtay, Murat; Roennau, Arne; Klinker, Gudrun; Von Arnim, Axel; Guyot, Luc; Peppicelli, Daniel; Martínez-Cañada, Pablo; Ros, Eduardo; Maier, Patrick; Weber, Sandro; Huber, Manuel; Plecher, David; Röhrbein, Florian; Deser, Stefan; Roitberg, Alina; van der Smagt, Patrick; Dillman, Rüdiger; Levi, Paul; Laschi, Cecilia; Knoll, Alois C; Gewaltig, Marc-Oliver
2017-01-01
Combined efforts in the fields of neuroscience, computer science, and biology have made it possible to design biologically realistic models of the brain based on spiking neural networks. For a proper validation of these models, an embodiment in a dynamic and rich sensory environment, where the model is exposed to a realistic sensory-motor task, is needed. Because these brain models are, at the current stage, too complex to meet real-time constraints, they cannot be embedded in a real-world task; the embodiment has to be simulated as well. While adequate tools exist to simulate either complex neural networks or robots and their environments, there is so far no tool that makes it easy to establish communication between brain and body models. The Neurorobotics Platform is a new web-based environment that aims to fill this gap by offering scientists and technology developers a software infrastructure that allows them to connect brain models to detailed simulations of robot bodies and environments and to use the resulting neurorobotic systems for in silico experimentation. In order to simplify the workflow and reduce the level of required programming skills, the platform provides editors for the specification of experimental sequences and conditions, environments, robots, and brain-body connectors. In addition, a variety of existing robots and environments are provided. This work presents the architecture of the first release of the Neurorobotics Platform developed in subproject 10 "Neurorobotics" of the Human Brain Project (HBP). At its current state, the Neurorobotics Platform allows researchers to design and run basic experiments in neurorobotics using simulated robots and simulated environments linked to simplified versions of brain models.
We illustrate the capabilities of the platform with three example experiments: a Braitenberg task implemented on a mobile robot, a sensory-motor learning task based on a robotic controller, and a visual tracking task embedding a retina model on the iCub humanoid robot. These use cases allow an assessment of the applicability of the Neurorobotics Platform to robotic tasks as well as to neuroscientific experiments.
Bio-robots automatic navigation with electrical reward stimulation.
Sun, Chao; Zhang, Xinlu; Zheng, Nenggan; Chen, Weidong; Zheng, Xiaoxiang
2012-01-01
Bio-robots that are controlled by external stimulation through a brain-computer interface (BCI) suffer from dependence on the real-time guidance of human operators. Current automatic navigation methods for bio-robots focus on controlling rules that force animals to obey man-made commands, ignoring the animals' own intelligence. This paper proposes a new method to realize automatic navigation for bio-robots, with electrical micro-stimulation as real-time rewards. Owing to the reward-seeking instinct and trial-and-error capability, a bio-robot can be steered to keep walking along the right route with rewards and to correct its direction spontaneously when rewards are deprived. In navigation experiments, rat-robots learned the control method in a short time. The results show that our method simplifies the controlling logic and successfully realizes automatic navigation for rat-robots. Our work may have significant implications for the further development of bio-robots with hybrid intelligence.
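The reward scheme can be sketched as follows; the angular tolerance and the toy correction loop are invented parameters illustrating the reward-while-on-route / deprive-when-straying idea, not the paper's implementation.

```python
# Hypothetical reward rule: stimulate only while the rat stays on route.
def reward_signal(heading_error_deg, tolerance_deg=15.0):
    """True -> deliver the electrical reward; False -> deprive it."""
    return abs(heading_error_deg) <= tolerance_deg

def simulate_correction(initial_error_deg, step_deg=5.0, max_steps=50):
    """Toy trial-and-error model: once the reward stops, the rat turns
    back toward the route in small steps until the reward resumes.
    Returns the final heading error and the number of corrective steps."""
    error, steps = initial_error_deg, 0
    while not reward_signal(error) and steps < max_steps:
        error -= step_deg if error > 0 else -step_deg
        steps += 1
    return error, steps
```

The controller only decides when to reward; the corrective turning itself comes from the animal, which is exactly how the scheme leans on the rat's own intelligence rather than on forced commands.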
Neuromechanics: an integrative approach for understanding motor control.
Nishikawa, Kiisa; Biewener, Andrew A; Aerts, Peter; Ahn, Anna N; Chiel, Hillel J; Daley, Monica A; Daniel, Thomas L; Full, Robert J; Hale, Melina E; Hedrick, Tyson L; Lappin, A Kristopher; Nichols, T Richard; Quinn, Roger D; Satterlie, Richard A; Szymik, Brett
2007-07-01
Neuromechanics seeks to understand how muscles, sense organs, motor pattern generators, and brain interact to produce coordinated movement, not only in complex terrain but also when confronted with unexpected perturbations. Applications of neuromechanics include ameliorating human health problems (including prosthesis design and restoration of movement following brain or spinal cord injury), as well as the design, actuation, and control of mobile robots. In animals, coordinated movement emerges from the interplay among descending output from the central nervous system, sensory input from body and environment, muscle dynamics, and the emergent dynamics of the whole animal. The inevitable coupling between neural information processing and the emergent mechanical behavior of animals is a central theme of neuromechanics. Fundamentally, motor control involves a series of transformations of information, from brain and spinal cord to muscles to body, and back to brain. The control problem revolves around the specific transfer functions that describe each transformation. The transfer functions depend on the rules of organization and operation that determine the dynamic behavior of each subsystem (i.e., central processing, force generation, emergent dynamics, and sensory processing). In this review, we consider the contributions of (1) muscles, (2) sensory processing, and (3) central networks to motor control; (4) provide examples to illustrate the interplay among brain, muscles, sense organs, and the environment in the control of movement; and (5) describe advances in both robotics and neuromechanics that have emerged from the application of biological principles in robotic design.
Taken together, these studies demonstrate that (1) intrinsic properties of muscle contribute to dynamic stability and control of movement, particularly immediately after perturbations; (2) proprioceptive feedback reinforces these intrinsic self-stabilizing properties of muscle; (3) control systems must contend with inevitable time delays that can simplify or complicate control; and (4) like most animals under a variety of circumstances, some robots use a trial and error process to tune central feedforward control to emergent body dynamics.
Kim, Geon Ha; Jeon, Seun; Im, Kiho; Kwon, Hunki; Lee, Byung Hwa; Kim, Ga Young; Jeong, Hana; Han, Noh Eul; Seo, Sang Won; Cho, Hanna; Noh, Young; Park, Sang Eon; Kim, Hojeong; Hwang, Jung Won; Yoon, Cindy W.; Kim, Hee Jin; Ye, Byoung Seok; Chin, Ju Hee; Kim, Jung-Hyun; Suh, Mee Kyung; Lee, Jong Min; Kim, Sung Tae; Choi, Mun-Taek; Kim, Mun Sang; Heilman, Kenneth M; Jeong, Jee Hyang; Na, Duk L.
2015-01-01
The purpose of this study was to investigate whether multi-domain cognitive training, especially robot-assisted training, alters cortical thickness in the brains of elderly participants. A controlled trial was conducted with 85 volunteers without cognitive impairment who were 60 years of age or older. Participants were first randomized into two groups: 48 participants who would receive cognitive training and 37 who would not. The cognitive training group was then randomly divided into two subgroups: 24 who received traditional cognitive training and 24 who received robot-assisted cognitive training. The training for both groups consisted of daily 90-minute sessions, five days a week, for a total of 12 weeks. The primary outcome was the change in cortical thickness. When compared to the control group, both groups that underwent cognitive training demonstrated attenuation of age-related cortical thinning in the frontotemporal association cortices. When the robot and traditional interventions were directly compared, the robot group showed less cortical thinning in the anterior cingulate cortices. Our results suggest that cognitive training can mitigate age-associated structural brain changes in the elderly. Trial Registration: ClinicalTrials.gov NCT01596205 PMID:25898367
Alimardani, Maryam; Nishio, Shuichi; Ishiguro, Hiroshi
2016-01-01
Body ownership illusions provide evidence that our sense of self is not coherent and can be extended to non-body objects. Studying these illusions gives us practical tools to understand the brain mechanisms that underlie body recognition and the experience of self. We previously introduced an illusion of body ownership transfer (BOT) for operators of a very humanlike robot. This sensation of owning the robot's body was confirmed when operators controlled the robot either by performing the desired motion with their body (motion-control) or by employing a brain-computer interface (BCI) that translated motor imagery commands into robot movement (BCI-control). The interesting observation during BCI-control was that the illusion could be induced even with a noticeable delay in the BCI system. Temporal discrepancy has always shown critical weakening effects on body ownership illusions. However, the delay-robustness of BOT during BCI-control raised a question about the interaction between proprioceptive inputs and delayed visual feedback in agency-driven illusions. In this work, we compared the intensity of the BOT illusion for operators in two conditions: motion-control and BCI-control. Our results revealed a significantly stronger BOT illusion for BCI-control. This finding highlights BCI's potential for inducing stronger agency-driven illusions by building a direct communication channel between the brain and the controlled body, thereby removing awareness from the subject's own body. PMID:27654174
Spataro, Rossella; Chella, Antonio; Allison, Brendan; Giardina, Marcello; Sorbello, Rosario; Tramonte, Salvatore; Guger, Christoph; La Bella, Vincenzo
2017-01-01
Locked-in Amyotrophic Lateral Sclerosis (ALS) patients are fully dependent on caregivers for any daily need. At this stage, basic communication and environmental control may not be possible even with commonly used augmentative and alternative communication devices. Brain-Computer Interface (BCI) technology allows users to modulate brain activity for communication and control of machines and devices without requiring motor control. In the last several years, numerous articles have described how persons with ALS could effectively use BCIs for different goals, usually spelling. In the present study, locked-in ALS patients used a BCI system to directly control the humanoid robot NAO (Aldebaran Robotics, France) with the aim of reaching and grasping a glass of water. Four ALS patients and four healthy controls were recruited and trained to operate this humanoid robot through a P300-based BCI. A few minutes of training was sufficient to operate the system efficiently in different environments. Three of the four ALS patients and all controls successfully performed the task with a high level of accuracy. These results suggest that BCI-operated robots can be used by locked-in ALS patients as an artificial alter ego, the machine being able to move, speak, and act in their place. PMID:28298888
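The P300 selection step at the heart of such a BCI can be sketched as epoch averaging followed by picking the stimulus with the largest deflection in the P300 window. This is an illustrative reconstruction with synthetic data; the stimulus labels, window, and bump amplitude are assumptions, not values from the paper.

```python
import random

def average_epochs(epochs):
    """Average stimulus-locked EEG epochs sample by sample; averaging
    suppresses ongoing background EEG so the event-related P300
    deflection emerges."""
    n = len(epochs)
    return [sum(e[i] for e in epochs) / n for i in range(len(epochs[0]))]

def pick_target(averages, window):
    """Choose the stimulus whose averaged epoch has the largest mean
    amplitude inside the P300 window (samples window[0]:window[1])."""
    lo, hi = window
    return max(averages, key=lambda s: sum(averages[s][lo:hi]) / (hi - lo))

# Synthetic demonstration: the attended command carries a positive bump
# around sample 30; the unattended one is noise only.
random.seed(1)
def epoch(has_p300):
    return [random.gauss(0, 1) + (5.0 if has_p300 and 25 <= i < 35 else 0.0)
            for i in range(60)]

averages = {"grasp_glass": average_epochs([epoch(True) for _ in range(20)]),
            "rest": average_epochs([epoch(False) for _ in range(20)])}
```

Averaging 20 epochs shrinks the noise by roughly a factor of 4.5, which is why even a modest event-related deflection becomes separable.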
Robotic devices and brain-machine interfaces for hand rehabilitation post-stroke.
McConnell, Alistair C; Moioli, Renan C; Brasil, Fabricio L; Vallejo, Marta; Corne, David W; Vargas, Patricia A; Stokes, Adam A
2017-06-28
To review the state of the art of robot-aided hand physiotherapy for post-stroke rehabilitation, including the use of brain-machine interfaces. Each patient has a unique clinical history and, in response to personalized treatment needs, research into individualized and at-home treatment options has expanded rapidly in recent years. This has resulted in the development of many devices and design strategies for use in stroke rehabilitation. The development progression of robot-aided hand physiotherapy devices and brain-machine interface systems is outlined, focussing on those with mechanisms and control strategies designed to improve recovery outcomes of the hand post-stroke. A total of 110 commercial and non-commercial hand and wrist devices, spanning the two major core designs (end-effector and exoskeleton), are reviewed. The growing body of evidence on the efficacy and relevance of incorporating brain-machine interfaces in stroke rehabilitation is summarized. The challenges involved in integrating robotic rehabilitation into the healthcare system are discussed. This review provides novel insights into the use of robotics in physiotherapy practice and may help system designers to develop new devices.
Miura, Satoshi; Kobayashi, Yo; Kawamura, Kazuya; Seki, Masatoshi; Nakashima, Yasutaka; Noguchi, Takehiko; Kasuya, Masahiro; Yokoo, Yuki; Fujie, Masakatsu G
2012-01-01
Surgical robots have improved considerably in recent years, but intuitive operability, which represents user inter-operability, has not been quantitatively evaluated. Therefore, for design of a robot with intuitive operability, we propose a method to measure brain activity to determine intuitive operability. The objective of this paper is to determine the master configuration against the monitor that allows users to perceive the manipulator as part of their own body. We assume that the master configuration produces an immersive reality experience for the user of putting his own arm into the monitor. In our experiments, as subjects controlled the hand controller to position the tip of the virtual slave manipulator on a target in a surgical simulator, we measured brain activity through brain-imaging devices. We performed our experiments for a variety of master manipulator configurations with the monitor position fixed. For all test subjects, we found that brain activity was stimulated significantly when the master manipulator was located behind the monitor. We conclude that this master configuration produces immersive reality through the body image, which is related to visual and somatic sense feedback.
Brain-Emulating Cognition and Control Architecture (BECCA) v. 0.2 beta
DOE Office of Scientific and Technical Information (OSTI.GOV)
ROHRER, BRANDON; & MORROW, JAMES
2009-06-16
BECCA is a learning and control method based on the function of the human brain. The goal behind its creation is to learn to control robots in unfamiliar environments in a way that is very robust, similar to the way an infant learns to interact with her environment by trial and error. As of this release, the software contains an application for controlling robot hardware through a socket. The code was created to be extensible to new applications. It is modular, object-oriented code in which the portions specific to one robot are easily separable from those that are constant between implementations. BECCA makes very few assumptions about the robot and environment it is learning, and so is applicable to a wide range of learning and control problems.
Soekadar, Surjo R; Witkowski, Matthias; Vitiello, Nicola; Birbaumer, Niels
2015-06-01
The loss of hand function can result in severe physical and psychosocial impairment. Thus, compensation of a lost hand function using assistive robotics that can be operated in daily life is very desirable. However, versatile, intuitive, and reliable control of assistive robotics is still an unsolved challenge. Here, we introduce a novel brain/neural-computer interaction (BNCI) system that integrates electroencephalography (EEG) and electrooculography (EOG) to improve control of assistive robotics in daily life environments. To evaluate the applicability and performance of this hybrid approach, five healthy volunteers (HV) (four men, average age 26.5 ± 3.8 years) and a 34-year-old patient with complete finger paralysis due to a brachial plexus injury (BPI) used EEG (condition 1) and EEG/EOG (condition 2) to control grasping motions of a hand exoskeleton. All participants were able to control the BNCI system (BNCI control performance HV: 70.24 ± 16.71%, BPI: 65.93 ± 24.27%), but inclusion of EOG significantly improved performance across all participants (HV: 80.65 ± 11.28, BPI: 76.03 ± 18.32%). This suggests that hybrid BNCI systems can achieve substantially better control over assistive devices, e.g., a hand exoskeleton, than systems using brain signals alone and thus may increase applicability of brain-controlled assistive devices in daily life environments.
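The benefit of the hybrid EEG/EOG approach can be pictured as a gating rule: eye movements confirm or veto what the EEG classifier proposes. The sketch below is purely illustrative; the function name, command labels, and threshold are assumptions, not the authors' algorithm.

```python
def fuse_commands(eeg_prob, eog_confirm, threshold=0.6):
    """Hypothetical hybrid rule for a hand exoskeleton: close the grasp
    only when the EEG motor-imagery classifier is confident AND a
    deliberate eye movement (EOG) confirms the intent; a confirmed
    low-probability reading instead opens/stops the grasp. Requiring
    the EOG confirmation suppresses false positives from EEG alone."""
    if eog_confirm and eeg_prob >= threshold:
        return "close_grasp"
    if eog_confirm and eeg_prob <= 1.0 - threshold:
        return "open_grasp"
    return "idle"
```

Under this rule an uncertain EEG reading, or a confident one without eye confirmation, produces no movement at all, which is the safety property that matters in daily-life use.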
Falotico, Egidio; Vannucci, Lorenzo; Ambrosano, Alessandro; Albanese, Ugo; Ulbrich, Stefan; Vasquez Tieck, Juan Camilo; Hinkel, Georg; Kaiser, Jacques; Peric, Igor; Denninger, Oliver; Cauli, Nino; Kirtay, Murat; Roennau, Arne; Klinker, Gudrun; Von Arnim, Axel; Guyot, Luc; Peppicelli, Daniel; Martínez-Cañada, Pablo; Ros, Eduardo; Maier, Patrick; Weber, Sandro; Huber, Manuel; Plecher, David; Röhrbein, Florian; Deser, Stefan; Roitberg, Alina; van der Smagt, Patrick; Dillman, Rüdiger; Levi, Paul; Laschi, Cecilia; Knoll, Alois C.; Gewaltig, Marc-Oliver
2017-01-01
Combined efforts in the fields of neuroscience, computer science, and biology have allowed the design of biologically realistic models of the brain based on spiking neural networks. For proper validation of these models, an embodiment in a dynamic and rich sensory environment, where the model is exposed to a realistic sensory-motor task, is needed. Because of the complexity of these brain models, which at the current stage cannot deal with real-time constraints, it is not possible to embed them in a real-world task; rather, the embodiment has to be simulated as well. While adequate tools exist to simulate either complex neural networks or robots and their environments, so far no tool allows one to easily establish communication between brain and body models. The Neurorobotics Platform is a new web-based environment that aims to fill this gap by offering scientists and technology developers a software infrastructure that allows them to connect brain models to detailed simulations of robot bodies and environments, and to use the resulting neurorobotic systems for in silico experimentation. To simplify the workflow and reduce the level of programming skill required, the platform provides editors for the specification of experimental sequences and conditions, environments, robots, and brain-body connectors. In addition, a variety of existing robots and environments are provided. This work presents the architecture of the first release of the Neurorobotics Platform, developed in subproject 10 "Neurorobotics" of the Human Brain Project (HBP). At its current state, the Neurorobotics Platform allows researchers to design and run basic experiments in neurorobotics using simulated robots and simulated environments linked to simplified versions of brain models.
We illustrate the capabilities of the platform with three example experiments: a Braitenberg task implemented on a mobile robot, a sensory-motor learning task based on a robotic controller, and a visual tracking task embedding a retina model on the iCub humanoid robot. These use cases allow us to assess the applicability of the Neurorobotics Platform to robotic tasks as well as to neuroscientific experiments. PMID:28179882
Choi, Bongjae; Jo, Sungho
2013-01-01
This paper describes a hybrid brain-computer interface (BCI) technique that combines the P300 potential, the steady-state visually evoked potential (SSVEP), and event-related desynchronization (ERD) to solve a complicated multi-task problem consisting of humanoid robot navigation and control along with object recognition, using a low-cost BCI system. Our approach enables subjects to control the navigation and exploration of a humanoid robot and to recognize a desired object among candidates. This study aims to demonstrate the possibility of a hybrid BCI based on a low-cost system for a realistic and complex task. It also shows that a simple image-processing technique, combined with BCI, can further help make these complex tasks simpler. An experimental scenario is proposed in which a subject remotely controls a humanoid robot in a properly sized maze. The subject sees what the surrogate robot sees through visual feedback and can navigate the surrogate robot. While navigating, the robot encounters objects located in the maze and determines whether the encountered object is of interest to the subject. The subject communicates with the robot through SSVEP- and ERD-based BCIs to navigate and explore with the robot, and through a P300-based BCI to allow the surrogate robot to recognize the subject's favorites. On several evaluation metrics, the performance of five subjects navigating the robot was quite comparable to manual keyboard control. During object-recognition mode, favorite objects were successfully selected from two to four choices. Subjects conducted the humanoid navigation and recognition tasks as if they embodied the robot. Analysis of the data supports the potential usefulness of the proposed hybrid BCI system for extended applications. This work presents an important implication for future work: a hybridization of simple BCI protocols provides extended controllability to carry out complicated tasks, even with a low-cost system. PMID:24023953
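The SSVEP component of such a hybrid BCI amounts to scoring the EEG for power at each candidate flicker frequency and picking the strongest. A minimal sketch (illustrative only, not the paper's classifier) using the Goertzel algorithm, which evaluates a single spectral bin without a full FFT:

```python
import math

def goertzel_power(samples, fs, freq):
    """Signal power at one frequency via the Goertzel algorithm: a
    cheap way to score a candidate flicker frequency."""
    n = len(samples)
    k = round(n * freq / fs)               # nearest DFT bin
    coeff = 2.0 * math.cos(2.0 * math.pi * k / n)
    s_prev = s_prev2 = 0.0
    for x in samples:
        s = x + coeff * s_prev - s_prev2
        s_prev2, s_prev = s_prev, s
    return s_prev ** 2 + s_prev2 ** 2 - coeff * s_prev * s_prev2

def classify_ssvep(samples, fs, stimulus_freqs):
    """Pick the flicker frequency with the strongest response in the
    recorded EEG window; each frequency maps to one robot command."""
    return max(stimulus_freqs, key=lambda f: goertzel_power(samples, fs, f))

# Synthetic check: a clean 13 Hz oscillation maps to the 13 Hz stimulus.
fs = 256
eeg = [math.sin(2 * math.pi * 13 * t / fs) for t in range(2 * fs)]
```

In a real system each stimulus frequency would be bound to a command (e.g. walk forward, turn), and the window length trades decision speed against reliability.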
A two-class self-paced BCI to control a robot in four directions.
Ron-Angevin, Ricardo; Velasco-Alvarez, Francisco; Sancha-Ros, Salvador; da Silva-Sauer, Leandro
2011-01-01
In this work, an electroencephalographic-analysis-based, self-paced (asynchronous) brain-computer interface (BCI) is proposed to control a mobile robot using four different navigation commands: turn right, turn left, move forward, and move back. In order to reduce the probability of misclassification, the BCI is controlled with only two mental tasks (relaxed state versus imagination of right-hand movements), using an audio-cued interface. Four healthy subjects participated in the experiment. After two sessions controlling a simulated robot in a virtual environment (which allowed the user to become familiar with the interface), three subjects successfully moved the robot in a real environment. The results show that the proposed interface enables control over the robot, even for subjects with low BCI performance.
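One plausible way to map a two-class, audio-cued BCI onto four navigation commands is a scanning scheme: the commands are cued in a cycle, imagery selects the cued command, and the relaxed state lets the cue advance. This is a hypothetical reconstruction; the paper's actual interface logic may differ.

```python
class ScanningInterface:
    """Hypothetical cued scanning interface: four navigation commands
    are offered in a cycle; the binary classifier output ('imagery'
    vs 'relaxed') either selects the currently cued command or lets
    the cue advance to the next one."""
    COMMANDS = ["turn_right", "turn_left", "move_forward", "move_back"]

    def __init__(self):
        self.cursor = 0          # index of the currently cued command

    def step(self, imagery_detected):
        if imagery_detected:
            return self.COMMANDS[self.cursor]    # select cued command
        self.cursor = (self.cursor + 1) % len(self.COMMANDS)
        return None                              # advance cue, no command
```

The cost of using only two mental tasks is latency (on average the user waits through half the cycle), which is the usual trade against misclassification risk.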
Evaluation of a completely robotized neurosurgical operating microscope.
Kantelhardt, Sven R; Finke, Markus; Schweikard, Achim; Giese, Alf
2013-01-01
Operating microscopes are essential for most neurosurgical procedures. Modern robot-assisted controls offer new possibilities, combining the advantages of conventional and automated systems. We evaluated the prototype of a completely robotized operating microscope with an integrated optical coherence tomography module. A standard operating microscope was fitted with motors and control instruments, with the manual control mode and balance preserved. In the robot mode, the microscope was steered by a remote control that could be fixed to a surgical instrument. External encoders and accelerometers tracked microscope movements. The microscope was additionally fitted with an optical coherence tomography scanning module. The robotized microscope was tested on model systems. It could be freely positioned without forcing the surgeon to take his or her hands from the instruments or avert the eyes from the oculars. Positioning error was about 1 mm, and vibration faded within 1 second. Tracking of microscope movements, combined with an autofocus function, allowed determination of the focus position within three-dimensional space. This constituted a second navigation loop, independent of conventional infrared-reflector-based techniques. In the robot mode, automated optical coherence tomography scanning of large surface areas was feasible. The prototype of a robotized, optical coherence tomography-integrated operating microscope combines the advantages of a conventional manually controlled operating microscope with a remote-controlled positioning aid and a self-navigating microscope system that performs automated positioning tasks such as surface scans. This demonstrates that, in the future, operating microscopes may be used to acquire intraoperative spatial data, volume changes, and structural data of brain or brain tumor tissue.
Integration of advanced teleoperation technologies for control of space robots
NASA Technical Reports Server (NTRS)
Stagnaro, Michael J.
1993-01-01
Teleoperated robots require one or more humans to control actuators, mechanisms, and other robot equipment given feedback from onboard sensors. To accomplish this task, the humans require some form of control station. Desirable features of such a control station include operation by a single human, comfort, and natural human interfaces (visual, audio, motion, tactile, etc.). These interfaces should work to maximize performance of the human/robot system by streamlining the link between the human brain and the robot equipment. This paper describes development of a control station testbed with the characteristics described above. Initially, this testbed will be used to control two teleoperated robots. Features of the robots include anthropomorphic mechanisms, slaving to the testbed, and delivery of sensory feedback to the testbed. The testbed will make use of technologies such as helmet-mounted displays, voice recognition, and exoskeleton masters. It will allow for integration and testing of emerging telepresence technologies, along with techniques for coping with control-link time delays. Systems developed from this testbed could be applied to ground control of space-based robots. During man-tended operations, the Space Station Freedom may benefit from ground control of IVA or EVA robots for science or maintenance tasks. Planetary exploration may also find advanced teleoperation systems to be very useful.
Insect-Inspired Optical-Flow Navigation Sensors
NASA Technical Reports Server (NTRS)
Thakoor, Sarita; Morookian, John M.; Chahl, Javan; Soccol, Dean; Hines, Butler; Zornetzer, Steven
2005-01-01
Integrated circuits that exploit optical flow to sense motions of computer mice on or near surfaces ("optical mouse chips") are used as navigation sensors in a class of small flying robots now undergoing development for potential use in such applications as exploration, search, and surveillance. The basic principles of these robots were described briefly in "Insect-Inspired Flight Control for Small Flying Robots" (NPO-30545), NASA Tech Briefs, Vol. 29, No. 1 (January 2005), page 61. To recapitulate from the cited prior article: the concept of optical flow can be defined, loosely, as the use of texture in images as a source of motion cues. The flight-control and navigation systems of these robots are inspired largely by the designs and functions of the vision systems and brains of insects, which have been demonstrated to utilize optical flow (as detected by their eyes and brains) resulting from their own motions in the environment. Optical flow has been shown to be very effective as a means of avoiding obstacles and controlling speeds and altitudes in robotic navigation. Prior systems used in experiments on navigating by means of optical flow have involved the use of panoramic optics, high-resolution image sensors, and programmable image-data-processing computers.
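The core insect-style optical-flow control rules described above can be sketched in a few lines. These are illustrative approximations assuming pure translation over flat terrain; the function names and gains are arbitrary choices, not the flight system's actual controller.

```python
def translational_flow(speed, height):
    """Ground-plane optical flow (rad/s) seen by a downward-looking
    camera moving at `speed` over terrain at distance `height`."""
    return speed / height

def altitude_command(measured_flow, target_flow, gain=0.5):
    """Insect-style rule: too much flow means the ground is too close,
    so climb; too little flow means descend. Positive output = climb."""
    return gain * (measured_flow - target_flow)

def steering_command(flow_left, flow_right, gain=1.0):
    """Balance the flow seen on both sides: turn away from the side
    with larger flow (the nearer obstacle). Positive output = turn
    toward the right side."""
    return gain * (flow_left - flow_right)
```

Because flow confounds speed and distance (v/h), holding flow constant yields the observed insect behavior of descending as forward speed drops, which is also what makes the rule cheap enough for an optical-mouse-class sensor.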
Motor prediction in Brain-Computer Interfaces for controlling mobile robots.
Geng, Tao; Gan, John Q
2008-01-01
An EEG-based Brain-Computer Interface (BCI) can be regarded as a new channel for motor control, one that does not involve muscles. Normal neuromuscular motor control has two fundamental components: (1) controlling the body, and (2) predicting the consequences of the control command, which is called motor prediction. In this study, after training with a specially designed BCI paradigm based on motor imagery, two subjects learnt to predict the time course of certain features of their EEG signals. It is shown that, with this newly obtained motor-prediction skill, subjects can use motor imagery of the feet to directly control a mobile robot to avoid obstacles and reach a small target in a time-critical scenario.
A soft body as a reservoir: case studies in a dynamic model of octopus-inspired soft robotic arm
Nakajima, Kohei; Hauser, Helmut; Kang, Rongjie; Guglielmino, Emanuele; Caldwell, Darwin G.; Pfeifer, Rolf
2013-01-01
The behaviors of animals or embodied agents are characterized by the dynamic coupling between the brain, the body, and the environment. This implies that control, which is conventionally thought to be handled by the brain or a controller, can partially be outsourced to the physical body and its interaction with the environment. This idea has been demonstrated in a number of recently constructed robots, in particular from the field of “soft robotics”. Soft robots are made of soft materials that introduce high dimensionality, non-linearity, and elasticity, which often makes the robots difficult to control. Biological systems such as the octopus master their complex bodies in highly sophisticated manners by capitalizing on their body dynamics. We demonstrate that the structure of the octopus arm can be exploited not only for generating behavior but also, in a sense, as a computational resource. Using a soft robotic arm inspired by the octopus, we show in a number of experiments how control is partially incorporated into the physical arm's dynamics and how the arm's dynamics can be exploited to approximate non-linear dynamical systems and embed non-linear limit cycles. Future application scenarios, as well as the implications of the results for octopus biology, are also discussed. PMID:23847526
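The "body as computational resource" idea maps directly onto reservoir computing: a fixed random dynamical system (here a small echo state network standing in for the soft arm) is driven by the input, and only a linear readout is trained. A minimal sketch under that analogy, not the authors' model; reservoir size, weights, and the target task are illustrative.

```python
import math, random

random.seed(0)
N = 50                                   # reservoir (body) dimensionality
W_in = [random.uniform(-0.5, 0.5) for _ in range(N)]
W = [[random.uniform(-0.1, 0.1) for _ in range(N)] for _ in range(N)]

def run_reservoir(inputs):
    """Drive the fixed random dynamical system (the 'body') with the
    input stream and record its states; nothing inside it is trained."""
    x = [0.0] * N
    states = []
    for u in inputs:
        x = [math.tanh(W_in[i] * u + sum(W[i][j] * x[j] for j in range(N)))
             for i in range(N)]
        states.append(x + [1.0])         # append a bias term for the readout
    return states

# Target: a nonlinear function with memory, y(t) = u(t-1) * u(t-2), which
# a memoryless linear map of the raw input could not compute.
T = 300
inputs = [random.uniform(-1.0, 1.0) for _ in range(T)]
target = [0.0, 0.0] + [inputs[t - 1] * inputs[t - 2] for t in range(2, T)]

states = run_reservoir(inputs)
w_out = [0.0] * (N + 1)                  # the only trained parameters
for _ in range(30):                      # LMS training of the linear readout
    for s, y in zip(states[20:], target[20:]):   # skip initial transient
        err = y - sum(wi * si for wi, si in zip(w_out, s))
        w_out = [wi + 0.02 * err * si for wi, si in zip(w_out, s)]

def readout(s):
    return sum(wi * si for wi, si in zip(w_out, s))

mse = sum((readout(s) - y) ** 2
          for s, y in zip(states[20:], target[20:])) / (T - 20)
baseline = sum(y ** 2 for y in target[20:]) / (T - 20)  # predict-zero error
```

The trained readout should beat the predict-zero baseline: the reservoir's nonlinear, fading memory supplies the temporal mixing, exactly the role the paper attributes to the soft arm's dynamics.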
Robotics, motor learning, and neurologic recovery.
Reinkensmeyer, David J; Emken, Jeremy L; Cramer, Steven C
2004-01-01
Robotic devices are helping shed light on human motor control in health and injury. By using robots to apply novel force fields to the arm, investigators are gaining insight into how the nervous system models its external dynamic environment. The nervous system builds internal models gradually by experience and uses them in combination with impedance and feedback control strategies. Internal models are robust to environmental and neural noise, generalized across space, implemented in multiple brain regions, and developed in childhood. Robots are also being used to assist in repetitive movement practice following neurologic injury, providing insight into movement recovery. Robots can haptically assess sensorimotor performance, administer training, quantify amount of training, and improve motor recovery. In addition to providing insight into motor control, robotic paradigms may eventually enhance motor learning and rehabilitation beyond the levels possible with conventional training techniques.
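The internal-model building described above is often summarized as trial-by-trial, error-driven adaptation: each reach in the robot's force field produces an error, and the feedforward estimate of the field is nudged by a fraction of it. A toy sketch (illustrative; the learning rate, field gain, and trial count are arbitrary):

```python
def adapt_internal_model(field_gain, trials=40, lr=0.3):
    """Toy model of force-field adaptation: on each reaching trial the
    robot applies a lateral force (here a constant `field_gain` per
    trial, standing in for a viscous curl field at a stereotyped
    speed); the nervous system updates its feedforward estimate of
    that force in proportion to the lateral error it caused."""
    estimate = 0.0
    errors = []
    for _ in range(trials):
        error = field_gain - estimate   # uncompensated force => deviation
        errors.append(abs(error))
        estimate += lr * error          # error-driven internal-model update
    return errors

errors = adapt_internal_model(field_gain=10.0)
```

The geometric decay of the error curve is the signature adaptation profile such robot experiments measure, and removing the field after learning would produce a mirror-image after-effect from the now-miscalibrated estimate.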
Costa, Álvaro; Hortal, Enrique; Iáñez, Eduardo; Azorín, José M.
2014-01-01
Non-invasive Brain-Machine Interfaces (BMIs) are increasingly being used to design systems that help people with motor disabilities. Spontaneous BMIs translate the user's brain signals into commands to control devices. In these systems, by and large, only two different mental tasks can be detected with sufficient accuracy. Moreover, a long training time is required, and the system needs to be adjusted for each session. This paper presents a supplementary system that employs BMI sensors, allowing two systems (the BMI system and the supplementary system) to share the same data-acquisition device. The supplementary system is designed to control a robotic arm in two dimensions using electromyographic (EMG) signals extracted from the electroencephalographic (EEG) recordings. These signals are voluntarily produced by users clenching their jaws. EEG signals (with EMG contributions) were recorded and analyzed to obtain the electrodes and the range of frequencies that provide the best classification results for five different clenching tasks. A training stage, based on two-dimensional control of a cursor, was designed and used by the volunteers to become accustomed to this control. Afterwards, the control was extrapolated to a robotic arm in a two-dimensional workspace. Although the training performed by the volunteers requires 70 minutes, the final results suggest that in a shorter period of time (45 min), users should be able to control the robotic arm in two dimensions with their jaws. The designed system is compared with a similar two-dimensional system based on spontaneous BMIs, and our system shows faster and more accurate performance. This is due to the nature of the control signals: brain potentials are much more difficult to control than the electromyographic signals produced by jaw clenches. Additionally, the presented system shows an improvement in results compared with an electrooculographic system in a similar environment. PMID:25390372
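The jaw-clench detection rests on the fact that EMG adds broadband high-frequency power that ongoing EEG rhythms lack. A band-power sketch with synthetic signals (illustrative only; the band limits and the 60 Hz stand-in for EMG are assumptions, not the paper's parameters):

```python
import math

def band_power(samples, fs, f_lo, f_hi):
    """Average spectral power in [f_lo, f_hi] Hz via a naive DFT over
    the bins falling inside the band."""
    n = len(samples)
    total, bins = 0.0, 0
    for k in range(n // 2):
        f = k * fs / n
        if f_lo <= f <= f_hi:
            re = sum(x * math.cos(2 * math.pi * k * t / n)
                     for t, x in enumerate(samples))
            im = sum(x * math.sin(2 * math.pi * k * t / n)
                     for t, x in enumerate(samples))
            total += (re * re + im * im) / n
            bins += 1
    return total / max(bins, 1)

# Synthetic 1 s epochs: rest is dominated by a 10 Hz alpha rhythm; a jaw
# clench adds high-frequency (here 60 Hz) EMG-like content.
fs = 200
rest = [math.sin(2 * math.pi * 10 * t / fs) for t in range(fs)]
clench = [math.sin(2 * math.pi * 60 * t / fs) for t in range(fs)]
```

Thresholding the 40-80 Hz band power of each epoch then separates clench from rest; selecting the best electrodes and frequency range per user, as the paper describes, is the tuning of exactly this feature.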
Bergamasco, Massimo; Frisoli, Antonio; Fontana, Marco; Loconsole, Claudio; Leonardis, Daniele; Troncossi, Marco; Foumashi, Mohammad Mozaffari; Parenti-Castelli, Vincenzo
2011-01-01
This paper presents the preliminary results of the project BRAVO (Brain computer interfaces for Robotic enhanced Action in Visuo-motOr tasks). The objective of this project is to define a new approach to the development of assistive and rehabilitative robots that enable motor-impaired users to perform complex visuomotor tasks requiring a sequence of reaches, grasps, and manipulations of objects. BRAVO aims at developing new robotic interfaces and HW/SW architectures for rehabilitation and recovery/restoration of motor function in patients with upper limb sensorimotor impairment, through extensive rehabilitation therapy and active assistance in the execution of Activities of Daily Living. The final system developed within this project will include a robotic arm exoskeleton and a hand orthosis, integrated together to provide force assistance. The main novelty that BRAVO introduces is the control of the robotic assistive device through active prediction of intention/action. The system will integrate information about the movement carried out by the user with a prediction of the performed action, based on an interpretation of the user's current gaze (measured through eye-tracking), brain activation (measured through BCI), and force sensor measurements.
Artificial Intelligence/Robotics Applications to Navy Aircraft Maintenance.
1984-06-01
Fragmentary abstract excerpts: robots can operate automatic machinery such as presses, molding machines, and numerically-controlled machine tools, just as people do. The report covers robotics technologies and relevant AI technologies (expert systems, automatic planning, natural language, machine vision). Artificial intelligence is concerned with the functions of the brain, whereas robotics is concerned with building machines that imitate human behavior.
Robotic assessment of sensorimotor deficits after traumatic brain injury.
Debert, Chantel T; Herter, Troy M; Scott, Stephen H; Dukelow, Sean
2012-06-01
Robotic technology is commonly used to quantify aspects of typical sensorimotor function. We evaluated the feasibility of using robotic technology to assess visuomotor and position sense impairments following traumatic brain injury (TBI). We present results of robotic sensorimotor function testing in 12 subjects with TBI, who had a range of initial severities (9 severe, 2 moderate, 1 mild), and contrast these results with those of clinical tests. We also compared these with robotic test outcomes in persons without disability. For each subject with TBI, a review of the initial injury and neuroradiologic findings was conducted. Following this, each subject completed a number of standardized clinical measures (Fugl-Meyer Assessment, Purdue Peg Board, Montreal Cognitive Assessment, Rancho Los Amigos Scale), followed by two robotic tasks. A visually guided reaching task was performed to assess visuomotor control of the upper limb. An arm position-matching task was used to assess position sense. Robotic task performance in the subjects with TBI was compared with findings in a cohort of 170 persons without disabilities. Subjects with TBI demonstrated a broad range of sensory and motor deficits on robotic testing. Notably, several subjects with TBI displayed significant deficits in one or both of the robotic tasks, despite normal scores on traditional clinical motor and cognitive assessment measures. The findings demonstrate the potential of robotic assessments for identifying deficits in visuomotor control and position sense following TBI. Improved identification of neurologic impairments following TBI may ultimately enhance rehabilitation.
From embodied mind to embodied robotics: humanities and system theoretical aspects.
Mainzer, Klaus
2009-01-01
After an introduction (1) the article analyzes the evolution of the embodied mind (2), the innovation of embodied robotics (3), and finally discusses conclusions of embodied robotics for human responsibility (4). Considering the evolution of the embodied mind (2), we start with an introduction of complex systems and nonlinear dynamics (2.1), apply this approach to neural self-organization (2.2), distinguish degrees of complexity of the brain (2.3), explain the emergence of cognitive states by complex systems dynamics (2.4), and discuss criteria for modeling the brain as a complex nonlinear system (2.5). The innovation of embodied robotics (3) is a challenge of future technology. We start with the distinction of symbolic and embodied AI (3.1) and explain embodied robots as dynamical systems (3.2). Self-organization needs self-control of technical systems (3.3). Cellular neural networks (CNN) are an example of self-organizing technical systems offering new avenues for neurobionics (3.4). In general, technical neural networks support different kinds of learning robots (3.5). Finally, embodied robotics aims at the development of cognitive and conscious robots (3.6).
A hybrid BCI for enhanced control of a telepresence robot.
Carlson, Tom; Tonin, Luca; Perdikis, Serafeim; Leeb, Robert; del R Millán, José
2013-01-01
Motor-disabled end users have successfully driven a telepresence robot in a complex environment using a Brain-Computer Interface (BCI). However, to facilitate the interaction aspect that underpins the notion of telepresence, users must be able to voluntarily and reliably stop the robot at any moment, not just drive from point to point. In this work, we propose to exploit the user's residual muscular activity to provide a fast and reliable control channel, which can start/stop the telepresence robot at any moment. Our preliminary results show that not only does this hybrid approach increase the accuracy, but it also helps to reduce the workload and was the preferred control paradigm of all the participants.
Zeng, Hong; Wang, Yanxin; Wu, Changcheng; Song, Aiguo; Liu, Jia; Ji, Peng; Xu, Baoguo; Zhu, Lifeng; Li, Huijun; Wen, Pengcheng
2017-01-01
Brain-machine interfaces (BMIs) can be used to control a robotic arm to assist paralyzed people in performing activities of daily living. However, controlling the process of grasping and lifting objects with a robotic arm is still a complex task for BMI users. It is hard to achieve high efficiency and accuracy even after extensive training. One important reason is the lack of sufficient feedback information for the user to perform closed-loop control. In this study, we proposed a method of augmented reality (AR) guiding assistance to provide enhanced visual feedback to the user for closed-loop control with a hybrid Gaze-BMI, which combines an electroencephalography (EEG)-based BMI and eye tracking for intuitive and effective control of the robotic arm. Experiments involving object manipulation tasks while avoiding an obstacle in the workspace were designed to evaluate the performance of our method for controlling the robotic arm. The experimental results obtained from eight subjects verified the advantages of the proposed closed-loop system (with AR feedback) over the open-loop system (with visual inspection only). The number of trigger commands used for controlling the robotic arm to grasp and lift the objects was reduced significantly with AR feedback, and the height gaps of the gripper in the lifting process decreased by more than 50% compared to trials with normal visual inspection only. The results reveal that the hybrid Gaze-BMI user can benefit from the information provided by the AR interface, improving efficiency and reducing cognitive load during the grasping and lifting processes. PMID:29163123
Picelli, Alessandro; Chemello, Elena; Castellazzi, Paola; Filippetti, Mirko; Brugnera, Annalisa; Gandolfi, Marialuisa; Waldner, Andreas; Saltuari, Leopold; Smania, Nicola
2018-01-01
Preliminary evidence showed additional effects of anodal transcranial direct current stimulation over the damaged cerebral hemisphere combined with cathodal transcutaneous spinal direct current stimulation during robot-assisted gait training in chronic stroke patients. This is consistent with the neural organization of locomotion, which involves cortical and spinal control. The cerebellum is crucial for locomotor control, in particular for avoidance of obstacles and adaptation to novel conditions during walking. Despite its key role in gait control, to date the effects of transcranial direct current stimulation of the cerebellum have not been investigated in stroke patients treated with robot-assisted gait training. The aim was to evaluate the effects of cerebellar transcranial direct current stimulation combined with transcutaneous spinal direct current stimulation on robot-assisted gait training in patients with chronic stroke. After balanced randomization, 20 chronic stroke patients received ten 20-minute robot-assisted gait training sessions (five days a week, for two consecutive weeks) combined with central nervous system stimulation. Group 1 underwent on-line cathodal transcranial direct current stimulation over the contralesional cerebellar hemisphere + cathodal transcutaneous spinal direct current stimulation. Group 2 received on-line anodal transcranial direct current stimulation over the damaged cerebral hemisphere + cathodal transcutaneous spinal direct current stimulation. The primary outcome was the 6-minute walk test performed before, after, and at follow-up at 2 and 4 weeks post-treatment. The significant differences in the 6-minute walk test noted between groups at the first post-treatment evaluation (P = 0.041) were not maintained at either the 2-week (P = 0.650) or the 4-week (P = 0.545) follow-up evaluations.
Our preliminary findings support the hypothesis that cathodal transcranial direct current stimulation over the contralesional cerebellar hemisphere in combination with cathodal transcutaneous spinal direct current stimulation might be useful to boost the effects of robot-assisted gait training in chronic brain stroke patients with walking impairment.
Artificial consciousness, artificial emotions, and autonomous robots.
Cardon, Alain
2006-12-01
Nowadays, for robots, the notion of behavior is reduced to a simple factual concept at the level of movements. On the other hand, consciousness is a deeply cultural concept, founding the main property human beings ascribe to themselves. We propose to develop a computable transposition of consciousness concepts into artificial brains, able to express emotions and facts of consciousness. The production of such artificial brains allows intentional and truly adaptive behavior in autonomous robots. The system managing the robot's behavior is made of two parts: the first computes and generates, in a constructivist manner, a representation of the robot moving in its environment, using symbols and concepts. The second realizes the representation of the first using morphologies, in a dynamic geometrical way. The robot's body is seen, for itself, as the morphologic apprehension of its material substrate. The model relies strictly on the notion of massive multi-agent organizations with morphologic control.
Seung, Sungmin; Choi, Hongseok; Jang, Jongseong; Kim, Young Soo; Park, Jong-Oh; Park, Sukho; Ko, Seong Young
2017-01-01
This article presents haptic-guided teleoperation for a tumor removal surgical robotic system, the so-called SIROMAN system. The system was developed in our previous work to make it possible to access tumor tissue, even tissue seated deep inside the brain, and to remove it with full maneuverability. For a safe and accurate operation that removes only tumor tissue completely while minimizing damage to normal tissue, virtual wall-based haptic guidance together with medical image-guided control is proposed and developed. The virtual wall is extracted from preoperative medical images, and the robot is controlled to restrict its motion within the virtual wall using haptic feedback. Coordinate transformation between sub-systems, a collision detection algorithm, and haptic-guided teleoperation using a virtual wall are described in the context of SIROMAN. A series of experiments using a simplified virtual wall was performed to evaluate the performance of virtual wall-based haptic-guided teleoperation. With haptic guidance, the accuracy of the robotic manipulator's trajectory is improved by 57% compared to teleoperation without it. Tissue removal performance is also improved by 21% (p < 0.05). The experiments show that virtual wall-based haptic guidance provides safer and more accurate tissue removal for single-port brain surgery.
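The virtual-wall constraint described above can be illustrated with a one-dimensional spring model: no force while the tool stays inside the permitted region, and a restoring force proportional to penetration depth once the boundary is crossed. The stiffness and geometry below are illustrative values, not SIROMAN's actual parameters:

```python
def virtual_wall_force(pos, wall, k=200.0):
    """1-D virtual wall: zero force while the tool stays inside the
    allowed region (pos <= wall); a spring-like repulsive force,
    proportional to penetration depth, once the wall is crossed.
    The stiffness k (N/m) is an illustrative assumption."""
    penetration = pos - wall
    return -k * penetration if penetration > 0 else 0.0

print(virtual_wall_force(0.90, wall=1.0))  # inside the wall: 0.0
print(virtual_wall_force(1.25, wall=1.0))  # 0.25 penetration -> -50.0
```

In a real haptic loop this force would be rendered to the master device at a high, fixed rate while the slave robot's commanded motion is clamped to the wall surface.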
Parisi, Domenico
2010-01-01
Trying to understand human language by constructing robots that have language necessarily implies an embodied view of language, where the meaning of linguistic expressions is derived from the physical interactions of the organism with the environment. The paper describes a neural model of language according to which the robot's behaviour is controlled by a neural network composed of two sub-networks, one dedicated to the non-linguistic interactions of the robot with the environment and the other one to processing linguistic input and producing linguistic output. We present the results of a number of simulations using the model and we suggest how the model can be used to account for various language-related phenomena such as disambiguation, the metaphorical use of words, the pervasive idiomaticity of multi-word expressions, and mental life as talking to oneself. The model implies a view of the meaning of words and multi-word expressions as a temporal process that takes place in the entire brain and has no clearly defined boundaries. The model can also be extended to emotional words if we assume that an embodied view of language includes not only the interactions of the robot's brain with the external environment but also the interactions of the brain with what is inside the body.
Lyons, Kenneth R; Joshi, Sanjay S
2013-06-01
Here we demonstrate the use of a new single-signal surface electromyography (sEMG) brain-computer interface (BCI) to control a mobile robot in a remote location. Previous work on this BCI has shown that users are able to perform cursor-to-target tasks in two-dimensional space using only a single sEMG signal by continuously modulating the signal power in two frequency bands. Using the cursor-to-target paradigm, targets are shown on the screen of a tablet computer so that the user can select them, commanding the robot to move in different directions for a fixed distance/angle. A Wi-Fi-enabled camera transmits video from the robot's perspective, giving the user feedback about robot motion. Current results show a case study with a C3-C4 spinal cord injury (SCI) subject using a single auricularis posterior muscle site to navigate a simple obstacle course. Performance metrics for operation of the BCI as well as completion of the telerobotic command task are developed. It is anticipated that this noninvasive and mobile system will open communication opportunities for the severely paralyzed, possibly using only a single sensor.
Visual and somatic sensory feedback of brain activity for intuitive surgical robot manipulation.
Miura, Satoshi; Matsumoto, Yuya; Kobayashi, Yo; Kawamura, Kazuya; Nakashima, Yasutaka; Fujie, Masakatsu G
2015-01-01
This paper presents a method to evaluate the hand-eye coordination of a master-slave surgical robot by measuring activation of the intraparietal sulcus in the user's brain during control of a virtual manipulator. The objective is to examine changes in intraparietal sulcus activity when the user's visual or somatic feedback is passed through or intercepted. The hypothesis is that the intraparietal sulcus activates significantly when both visual and somatic feedback are present, but deactivates when either is intercepted. The brain activity of three subjects was measured by functional near-infrared spectroscopic-topography brain imaging while they used a hand controller to move a virtual arm in a surgical simulator. The experiment was performed several times under three conditions: (i) the user controlled the virtual arm naturally, with both visual and somatic feedback; (ii) the user moved with closed eyes, with only somatic feedback; (iii) the user only gazed at the screen, with only visual feedback. Brain activity was significantly higher when controlling the virtual arm naturally (p<0.05) than when moving with closed eyes or only gazing, in all participants. In conclusion, the brain activates according to the agreement of visual and somatic sensory feedback.
Estévez, Natalia; Yu, Ningbo; Brügger, Mike; Villiger, Michael; Hepp-Reymond, Marie-Claude; Riener, Robert; Kollias, Spyros
2014-11-01
In neurorehabilitation, longitudinal assessment of arm movement related brain function in patients with motor disability is challenging due to variability in task performance. MRI-compatible robots monitor and control task performance, yielding more reliable evaluation of brain function over time. The main goals of the present study were first to define the brain network activated while performing active and passive elbow movements with an MRI-compatible arm robot (MaRIA) in healthy subjects, and second to test the reproducibility of this activation over time. For the fMRI analysis two models were compared. In model 1 movement onset and duration were included, whereas in model 2 force and range of motion were added to the analysis. Reliability of brain activation was tested with several statistical approaches applied on individual and group activation maps and on summary statistics. The activated network included mainly the primary motor cortex, primary and secondary somatosensory cortex, superior and inferior parietal cortex, medial and lateral premotor regions, and subcortical structures. Reliability analyses revealed robust activation for active movements with both fMRI models and all the statistical methods used. Imposed passive movements also elicited mainly robust brain activation for individual and group activation maps, and reliability was improved by including additional force and range of motion using model 2. These findings demonstrate that the use of robotic devices, such as MaRIA, can be useful to reliably assess arm movement related brain activation in longitudinal studies and may contribute to studies evaluating therapies and brain plasticity following injury in the nervous system.
McMullen, David P.; Hotson, Guy; Katyal, Kapil D.; Wester, Brock A.; Fifer, Matthew S.; McGee, Timothy G.; Harris, Andrew; Johannes, Matthew S.; Vogelstein, R. Jacob; Ravitz, Alan D.; Anderson, William S.; Thakor, Nitish V.; Crone, Nathan E.
2014-01-01
To increase the ability of brain-machine interfaces (BMIs) to control advanced prostheses such as the modular prosthetic limb (MPL), we are developing a novel system: the Hybrid Augmented Reality Multimodal Operation Neural Integration Environment (HARMONIE). This system utilizes hybrid input, supervisory control, and intelligent robotics to allow users to identify an object (via eye tracking and computer vision) and initiate (via brain-control) a semi-autonomous reach-grasp-and-drop of the object by the MPL. Sequential iterations of HARMONIE were tested in two pilot subjects implanted with electrocorticographic (ECoG) and depth electrodes within motor areas. The subjects performed the complex task in 71.4% (20/28) and 67.7% (21/31) of trials after minimal training. Balanced accuracy for detecting movements was 91.1% and 92.9%, significantly greater than chance accuracies (p < 0.05). After BMI-based initiation, the MPL completed the entire task 100% (one object) and 70% (three objects) of the time. The MPL took approximately 12.2 seconds for task completion after system improvements implemented for the second subject. Our hybrid-BMI design prevented all but one baseline false positive from initiating the system. The novel approach demonstrated in this proof-of-principle study, using hybrid input, supervisory control, and intelligent robotics, addresses limitations of current BMIs. PMID:24760914
Lateral specialization in unilateral spatial neglect: a cognitive robotics model.
Conti, Daniela; Di Nuovo, Santo; Cangelosi, Angelo; Di Nuovo, Alessandro
2016-08-01
In this paper, we present the experimental results of an embodied cognitive robotic approach for modelling the human cognitive deficit known as unilateral spatial neglect (USN). To this end, we introduce an artificial neural network architecture designed and trained to control the spatial attentional focus of the iCub robotic platform. Like the human brain, the architecture is divided into two hemispheres and it incorporates bio-inspired plasticity mechanisms, which allow the development of the phenomenon of the specialization of the right hemisphere for spatial attention. In this study, we validate the model by replicating a previous experiment with human patients affected by the USN and numerical results show that the robot mimics the behaviours previously exhibited by humans. We also simulated recovery after the damage to compare the performance of each of the two hemispheres as additional validation of the model. Finally, we highlight some possible advantages of modelling cognitive dysfunctions of the human brain by means of robotic platforms, which can supplement traditional approaches for studying spatial impairments in humans.
LaFleur, Karl; Cassady, Kaitlin; Doud, Alexander; Shades, Kaleb; Rogin, Eitan; He, Bin
2013-01-01
Objective: At the balanced intersection of human and machine adaptation is found the optimally functioning brain-computer interface (BCI). In this study, we report a novel experiment of BCI control of a robotic quadcopter in three-dimensional physical space using noninvasive scalp EEG in human subjects. We then quantify the performance of this system using metrics suitable for asynchronous BCI. Lastly, we examine the impact that operation of a real-world device has on subjects' control, in comparison with a two-dimensional virtual cursor task. Approach: Five human subjects were trained to modulate their sensorimotor rhythms to control an AR Drone navigating a three-dimensional physical space. Visual feedback was provided via a forward-facing camera on the hull of the drone. Individual subjects were able to accurately acquire up to 90.5% of all valid targets presented while travelling at an average straight-line speed of 0.69 m/s. Significance: Freely exploring and interacting with the world around us is a crucial element of autonomy that is lost in the context of neurodegenerative disease. Brain-computer interfaces are systems that aim to restore or enhance a user's ability to interact with the environment via a computer and through the use of only thought. We demonstrate for the first time the ability to control a flying robot in three-dimensional physical space using noninvasive scalp-recorded EEG in humans. Our work indicates the potential of noninvasive EEG-based BCI systems to accomplish complex control in three-dimensional physical space. The present study may serve as a framework for the investigation of multidimensional noninvasive brain-computer interface control in a physical environment using telepresence robotics. PMID:23735712
Scott, Stephen H; Dukelow, Sean P
2011-01-01
Robotic technologies have profoundly affected the identification of fundamental properties of brain function. This success is attributable to robots being able to control the position of or forces applied to limbs, and their inherent ability to easily, objectively, and reliably quantify sensorimotor behavior. Our general hypothesis is that these same attributes make robotic technologies ideal for clinically assessing sensory, motor, and cognitive impairments in stroke and other neurological disorders. Further, they provide opportunities for novel therapeutic strategies. The present opinionated review describes how robotic technologies combined with virtual/augmented reality systems can support a broad range of behavioral tasks to objectively quantify brain function. This information could potentially be used to provide more accurate diagnostic and prognostic information than is available from current clinical assessment techniques. The review also highlights the potential benefits of robots to provide upper-limb therapy. Although the capital cost of these technologies is substantial, it pales in comparison with the potential cost reductions to the overall healthcare system that improved assessment and therapeutic interventions offer.
Causal network in a deafferented non-human primate brain.
Balasubramanian, Karthikeyan; Takahashi, Kazutaka; Hatsopoulos, Nicholas G
2015-01-01
De-afferented/efferented neural ensembles can undergo causal changes when interfaced to neuroprosthetic devices. These changes occur via recruitment or isolation of neurons, alterations in functional connectivity within the ensemble, and/or changes in the role of neurons (i.e., excitatory/inhibitory). In this work, the emergence of a causal network and changes in its dynamics are demonstrated for a deafferented brain region exposed to BMI (brain-machine interface) learning. The BMI controlled a robot for reach-and-grasp behavior; the motor cortical regions used for the BMI were deafferented due to chronic amputation, and ensembles of neurons were decoded for velocity control of the multi-DOF robot. A generalized linear model-based Granger causality (GLM-GC) technique was used to estimate the ensemble connectivity. Model selection was based on the AIC (Akaike Information Criterion).
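The AIC-based Granger-causality comparison described above can be sketched in simplified form. The sketch below swaps the paper's GLM framework for an ordinary Gaussian autoregressive model, but keeps the same logic: x Granger-causes y if adding lagged x lowers the AIC of the model for y. All data here are simulated, not from the study:

```python
import math
import random

def ols_rss(X, y):
    """Least-squares fit via normal equations (tiny designs only);
    returns the residual sum of squares."""
    k = len(X[0])
    xtx = [[sum(r[i] * r[j] for r in X) for j in range(k)] for i in range(k)]
    xty = [sum(r[i] * t for r, t in zip(X, y)) for i in range(k)]
    for i in range(k):                      # forward elimination
        for j in range(i + 1, k):
            f = xtx[j][i] / xtx[i][i]
            for c in range(k):
                xtx[j][c] -= f * xtx[i][c]
            xty[j] -= f * xty[i]
    beta = [0.0] * k
    for i in range(k - 1, -1, -1):          # back substitution
        s = sum(xtx[i][j] * beta[j] for j in range(i + 1, k))
        beta[i] = (xty[i] - s) / xtx[i][i]
    return sum((t - sum(b * v for b, v in zip(beta, r))) ** 2
               for r, t in zip(X, y))

def aic(rss, n, k):
    """Gaussian AIC up to an additive constant shared by both models."""
    return n * math.log(rss / n) + 2 * k

def granger_causes(x, y):
    """Single-lag test: does x[t-1] improve prediction of y[t]
    beyond y[t-1] alone, judged by AIC?"""
    n = len(y) - 1
    target = y[1:]
    restricted = [[1.0, y[t]] for t in range(n)]
    full = [[1.0, y[t], x[t]] for t in range(n)]
    return aic(ols_rss(full, target), n, 3) < aic(ols_rss(restricted, target), n, 2)

random.seed(0)
x = [random.gauss(0, 1) for _ in range(500)]
y = [0.0]
for t in range(499):                        # y is driven by lagged x
    y.append(0.5 * y[-1] + 0.8 * x[t] + 0.1 * random.gauss(0, 1))
print(granger_causes(x, y))  # True: lagged x clearly helps predict y
```

The GLM-GC method in the abstract applies the same restricted-versus-full comparison, but with point-process (spike train) likelihoods in place of Gaussian residuals.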
Vocal emotion of humanoid robots: a study from brain mechanism.
Wang, Youhui; Hu, Xiaohua; Dai, Weihui; Zhou, Jie; Kuo, Taitzong
2014-01-01
Driven by rapid ongoing advances in humanoid robots, increasing attention has shifted to the emotional intelligence of AI robots, to facilitate communication between machines and human beings, especially vocal emotion in the interactive systems of future humanoid robots. This paper explores the brain mechanism of vocal emotion by reviewing previous research and develops an fMRI experiment to observe the brain response, in order to analyze human vocal emotion. The findings provide a new approach to designing and evaluating the vocal emotion of humanoid robots based on the brain mechanisms of human beings. PMID:24587712
Illusory movement perception improves motor control for prosthetic hands
Marasco, Paul D.; Hebert, Jacqueline S.; Sensinger, Jon W.; Shell, Courtney E.; Schofield, Jonathon S.; Thumser, Zachary C.; Nataraj, Raviraj; Beckler, Dylan T.; Dawson, Michael R.; Blustein, Dan H.; Gill, Satinder; Mensh, Brett D.; Granja-Vazquez, Rafael; Newcomb, Madeline D.; Carey, Jason P.; Orzell, Beth M.
2018-01-01
To effortlessly complete an intentional movement, the brain needs feedback from the body regarding the movement’s progress. This largely non-conscious kinesthetic sense helps the brain to learn relationships between motor commands and outcomes to correct movement errors. Prosthetic systems for restoring function have predominantly focused on controlling motorized joint movement. Without the kinesthetic sense, however, these devices do not become intuitively controllable. Here we report a method for endowing human amputees with a kinesthetic perception of dexterous robotic hands. Vibrating the muscles used for prosthetic control via a neural-machine interface produced the illusory perception of complex grip movements. Within minutes, three amputees integrated this kinesthetic feedback and improved movement control. Combining intent, kinesthesia, and vision instilled participants with a sense of agency over the robotic movements. This feedback approach for closed-loop control opens a pathway to seamless integration of minds and machines. PMID:29540617
Gong, Yuanzheng; Hu, Danying; Hannaford, Blake; Seibel, Eric J.
2014-01-01
Brain tumor margin removal is challenging because diseased tissue is often visually indistinguishable from healthy tissue. Leaving residual tumor leads to decreased survival, and removing normal tissue causes life-long neurological deficits. Thus, a surgical robotics system with a high degree of dexterity, accurate navigation, and highly precise resection is an ideal candidate for image-guided removal of fluorescently labeled brain tumor cells. For imaging, we developed a scanning fiber endoscope (SFE) that acquires concurrent reflectance and fluorescence wide-field images at high resolution. This miniature flexible endoscope was affixed to the arm of a RAVEN II surgical robot, providing programmable motion with feedback control using stereo-pair surveillance cameras. To verify the accuracy of the three-dimensional (3-D) reconstructed surgical field, a multimodal physical-sized model of debulked brain tumor was used to obtain the 3-D locations of residual tumor for robotic path planning to remove fluorescent cells. Such reconstruction is repeated intraoperatively during margin clean-up, so algorithm efficiency and accuracy are important to robotically assisted surgery. Experimental results indicate that the time for creating this 3-D surface can be reduced to one-third by using known trajectories of a robot arm, and the error of the reconstructed phantom is within 0.67 mm on average compared to the model design. PMID:26158071
Lohkamp, Laura-Nanna; Vajkoczy, Peter; Budach, Volker; Kufeld, Markus
2018-05-01
Estimating efficacy, safety and outcome of frameless image-guided robotic radiosurgery for the treatment of recurrent brain metastases after whole brain radiotherapy (WBRT). We performed a retrospective single-center analysis including patients with recurrent brain metastases after WBRT who had been treated with single-session radiosurgery using the CyberKnife® Radiosurgery System (CKRS) (Accuray Inc., CA) between 2011 and 2016. The primary end point was local tumor control, whereas secondary end points were distant tumor control, treatment-related toxicity and overall survival. 36 patients with 140 recurrent brain metastases underwent 46 single-session CKRS treatments. Twenty-one patients had multiple brain metastases (58%). The mean interval between WBRT and CKRS was 2 years (range 0.2-7 years). The median number of treated metastases per treatment session was five (range 1-12), with a mean tumor volume of 1.26 cm³ and a median tumor dose of 18 Gy prescribed to the 70% isodose line. Two patients experienced local tumor recurrence within the first year after treatment and 13 patients (36%) developed novel brain metastases. Nine of these patients underwent an additional one to three CKRS treatments. Eight patients (22.2%) showed treatment-related radiation reactions on MRI, three with clinical symptoms. Median overall survival was 19 months after CKRS. The actuarial 1-year local control rate was 94.2%. CKRS has proven locally effective and safe, with high local tumor control rates and low toxicity. Thus CKRS offers a reliable salvage treatment option for recurrent brain metastases after WBRT.
Design and Implementation of a Brain Computer Interface System for Controlling a Robotic Claw
NASA Astrophysics Data System (ADS)
Angelakis, D.; Zoumis, S.; Asvestas, P.
2017-11-01
The aim of this paper is to present the design and implementation of a brain-computer interface (BCI) system that can control a robotic claw. The system is based on the Emotiv Epoc headset, which provides simultaneous recording of 14 EEG channels as well as wireless connectivity by means of the Bluetooth protocol. The system is initially trained to decode what the user thinks into properly formatted data. The headset communicates with a personal computer, which runs a dedicated software application implemented under the Processing integrated development environment. The application acquires the data from the headset and sends suitable commands to an Arduino Uno board. The board decodes the received commands and produces corresponding signals to a servo motor that controls the position of the robotic claw. The system was tested successfully on a healthy male subject, aged 28 years. The results are promising, taking into account that no specialized hardware was used. However, tests on a larger number of users are necessary in order to draw solid conclusions regarding the performance of the proposed system.
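The pipeline in this abstract (headset classifier → command byte over serial → servo angle on the Arduino) can be sketched as a pair of pure functions. The command names, byte values, and angles below are hypothetical, since the paper does not publish its protocol:

```python
# Hypothetical two-command protocol; the paper does not specify its
# actual command set, byte encoding, or servo limits.
COMMAND_BYTE = {"open": 0, "close": 1}
SERVO_ANGLE = {"open": 10, "close": 170}  # degrees, illustrative limits

def encode_command(thought):
    """Map a decoded mental command to the single byte that would be
    written to the Arduino Uno over the serial link."""
    if thought not in COMMAND_BYTE:
        raise ValueError(f"unknown command: {thought}")
    return bytes([COMMAND_BYTE[thought]])

def servo_angle(thought):
    """Angle the board-side firmware would drive the claw servo to."""
    return SERVO_ANGLE[thought]
```

In a real deployment the byte from `encode_command` would be written with a serial library on the PC side and read in the Arduino firmware's main loop; keeping the mapping in one table makes the two ends easy to keep in sync.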
Wilson, James C; Kesler, Mitch; Pelegrin, Sara-Lynn E; Kalvi, LeAnna; Gruber, Aaron; Steenland, Hendrik W
2015-09-30
The physical distance between predator and prey is a primary determinant of behavior, yet few paradigms exist to study this reliably in rodents. The utility of a robotically controlled laser for use in a predator-prey-like (PPL) paradigm was explored for use in rats. This involved the construction of a robotic two-dimensional gimbal to dynamically position a laser beam in a behavioral test chamber. Custom software was used to control the trajectory and final laser position in response to user input on a console. The software also detected the location of the laser beam and the rodent continuously so that the dynamics of the distance between them could be analyzed. When the animal or laser beam came within a fixed distance the animal would either be rewarded with electrical brain stimulation or shocked subcutaneously. Animals that received rewarding electrical brain stimulation could learn to chase the laser beam, while animals that received aversive subcutaneous shock learned to actively avoid the laser beam in the PPL paradigm. Mathematical computations are presented which describe the dynamic interaction of the laser and rodent. The robotic laser offers a neutral stimulus to train rodents in an open field and is the first device to be versatile enough to assess distance between predator and prey in real time. With ongoing behavioral testing this tool will permit the neurobiological investigation of predator/prey-like relationships in rodents, and may have future implications for prosthetic limb development through brain-machine interfaces. Copyright © 2015 Elsevier B.V. All rights reserved.
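The core control rule of the PPL paradigm, continuously tracking the rat-to-laser distance and delivering reward or shock when the gap closes below a threshold, can be sketched as follows. The threshold value and stimulus names are illustrative, not taken from the paper:

```python
import math

TRIGGER_RADIUS = 5.0  # cm; illustrative threshold, not from the paper

def distance(rat, laser):
    """Euclidean distance between the tracked rat and laser-spot
    positions, each an (x, y) pair in arena coordinates."""
    return math.hypot(rat[0] - laser[0], rat[1] - laser[1])

def classify_event(rat, laser, paradigm):
    """Return the stimulus delivered when the gap closes below the
    trigger radius. paradigm is 'chase' (rewarding electrical brain
    stimulation) or 'avoid' (aversive subcutaneous shock)."""
    if distance(rat, laser) > TRIGGER_RADIUS:
        return None
    return "brain_stimulation" if paradigm == "chase" else "shock"
```

Running this check on every tracked frame yields the time series of distances whose dynamics the paper analyzes mathematically.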
Farjadian, Amir B; Nabian, Mohsen; Hartman, Amber; Corsino, Johnathan; Mavroidis, Constantinos; Holden, Maureen K
2014-01-01
An estimated 2,000,000 acute ankle sprains occur annually in the United States. Furthermore, ankle disabilities are caused by neurological impairments such as traumatic brain injury, cerebral palsy and stroke. The virtually interfaced robotic ankle and balance trainer (vi-RABT) was introduced as a cost-effective platform-based rehabilitation robot to improve overall ankle/balance strength, mobility and control. The system is equipped with 2-degree-of-freedom (2-DOF) controlled actuation along with complete angle and torque measurement mechanisms. Vi-RABT was used to assess ankle strength, flexibility and motor control in healthy human subjects while they played interactive virtual reality games on the screen. The results suggest that in the 2-DOF task, subjects have better control over the ankle's position than over its force.
Bridging the gap between motor imagery and motor execution with a brain-robot interface.
Bauer, Robert; Fels, Meike; Vukelić, Mathias; Ziemann, Ulf; Gharabaghi, Alireza
2015-03-01
According to electrophysiological studies, motor imagery and motor execution are associated with perturbations of brain oscillations over spatially similar cortical areas. By contrast, neuroimaging and lesion studies suggest that at least partially distinct cortical networks are involved in motor imagery and execution. We sought to further disentangle this relationship by studying the role of brain-robot interfaces in the context of motor imagery and motor execution networks. Twenty right-handed subjects performed several behavioral tasks as indicators for imagery and execution of movements of the left hand, i.e. kinesthetic imagery, visual imagery, visuomotor integration and tonic contraction. In addition, subjects performed motor imagery supported by haptic/proprioceptive feedback from a brain-robot interface. Principal component analysis was applied to assess the relationship of these indicators. The respective cortical resting state networks in the α-range were investigated by electroencephalography using the phase slope index. We detected two distinct abilities and cortical networks underlying motor control: a motor imagery network connecting the left parietal and motor areas with the right prefrontal cortex, and a motor execution network characterized by transmission from the left to right motor areas. We found that a brain-robot interface might offer a way to bridge the gap between these networks, thereby opening a backdoor to the motor execution system. This knowledge might promote patient screening and may lead to novel treatment strategies, e.g. for the rehabilitation of hemiparesis after stroke. Copyright © 2014 Elsevier Inc. All rights reserved.
A biologically inspired meta-control navigation system for the Psikharpax rat robot.
Caluwaerts, K; Staffa, M; N'Guyen, S; Grand, C; Dollé, L; Favre-Félix, A; Girard, B; Khamassi, M
2012-06-01
A biologically inspired navigation system for the mobile rat-like robot named Psikharpax is presented, allowing for self-localization and autonomous navigation in an initially unknown environment. The ability of parts of the model (e.g. the strategy selection mechanism) to reproduce rat behavioral data in various maze tasks had previously been validated in simulation, but the capacity of the model to work on a real robot platform had not been tested. This paper presents our work on the implementation on the Psikharpax robot of two independent navigation strategies (a place-based planning strategy and a cue-guided taxon strategy) and a strategy selection meta-controller. We show how our robot can memorize which strategy was optimal in each situation by means of a reinforcement learning algorithm. Moreover, a context detector enables the controller to quickly adapt to changes in the environment, recognized as new contexts, and to restore previously acquired strategy preferences when a previously experienced context is recognized. This produces adaptivity closer to rat behavioral performance and constitutes a computational proposition of the role of the rat prefrontal cortex in strategy shifting. Moreover, such a brain-inspired meta-controller may provide an advancement for learning architectures in robotics.
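A meta-controller that learns per-context strategy values and restores them when a known context reappears can be sketched with a simple reinforcement-learning update. The class, parameter names, and values are illustrative; the paper's actual learning rule and context detector are more elaborate:

```python
import random

class StrategyMetaController:
    """Sketch of a context-aware strategy selector: one value estimate
    per (context, strategy) pair, updated by a running average and
    chosen epsilon-greedily. Illustrative only, not the paper's model."""

    def __init__(self, strategies, alpha=0.1, epsilon=0.1):
        self.strategies = strategies
        self.alpha = alpha      # learning rate
        self.epsilon = epsilon  # exploration probability
        self.q = {}             # (context, strategy) -> estimated value

    def select(self, context):
        """Pick a strategy for the current context."""
        if random.random() < self.epsilon:
            return random.choice(self.strategies)
        return max(self.strategies,
                   key=lambda s: self.q.get((context, s), 0.0))

    def update(self, context, strategy, reward):
        """Move the (context, strategy) value toward the observed reward."""
        key = (context, strategy)
        old = self.q.get(key, 0.0)
        self.q[key] = old + self.alpha * (reward - old)
```

Because values are keyed by context, preferences learned in one environment are untouched while another is explored, which is the mechanism that lets previously acquired preferences be restored on re-entry.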
Driving a Semiautonomous Mobile Robotic Car Controlled by an SSVEP-Based BCI.
Stawicki, Piotr; Gembler, Felix; Volosyak, Ivan
2016-01-01
Brain-computer interfaces represent a range of acknowledged technologies that translate brain activity into computer commands. The aim of our research is to develop and evaluate a BCI control application for certain assistive technologies that can be used for remote telepresence or remote driving. The communication channel to the target device is based on the steady-state visual evoked potentials. In order to test the control application, a mobile robotic car (MRC) was introduced and a four-class BCI graphical user interface (with live video feedback and stimulation boxes on the same screen) for piloting the MRC was designed. For the purpose of evaluating a potential real-life scenario for such assistive technology, we present a study where 61 subjects steered the MRC through a predetermined route. All 61 subjects were able to control the MRC and finish the experiment (mean time 207.08 s, SD 50.25) with a mean (SD) accuracy and ITR of 93.03% (5.73) and 14.07 bits/min (4.44), respectively. The results show that our proposed SSVEP-based BCI control application is suitable for mobile robots with a shared-control approach. We also did not observe any negative influence of the simultaneous live video feedback and SSVEP stimulation on the performance of the BCI system.
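The accuracy and ITR figures reported above are related by the standard Wolpaw information-transfer-rate formula, which can be sketched directly (assuming, as is conventional, equiprobable classes and uniformly distributed errors):

```python
import math

def itr_bits_per_selection(n_classes, accuracy):
    """Wolpaw ITR per selection, in bits:
    B = log2(N) + P*log2(P) + (1-P)*log2((1-P)/(N-1))."""
    p, n = accuracy, n_classes
    if p >= 1.0:
        return math.log2(n)
    if p <= 1.0 / n:
        return 0.0  # at or below chance, no information transferred
    return (math.log2(n) + p * math.log2(p)
            + (1 - p) * math.log2((1 - p) / (n - 1)))

def itr_bits_per_minute(n_classes, accuracy, selections_per_minute):
    """Scale the per-selection rate by the selection pace."""
    return itr_bits_per_selection(n_classes, accuracy) * selections_per_minute
```

For a four-class interface at the study's mean accuracy of about 93%, each selection carries roughly 1.5 bits; the reported bits/min then depends on how many selections per minute the interface sustains.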
Minati, Ludovico; Nigri, Anna; Rosazza, Cristina; Bruzzone, Maria Grazia
2012-06-01
Previous studies have demonstrated the possibility of using functional MRI to control a robot arm through a brain-machine interface by directly coupling haemodynamic activity in the sensory-motor cortex to the position of two axes. Here, we extend this work by implementing interaction at a more abstract level, whereby imagined actions deliver structured commands to a robot arm guided by a machine vision system. Rather than extracting signals from a small number of pre-selected regions, the proposed system adaptively determines at the individual level how to map representative brain areas to the input nodes of a classifier network. In this initial study, a median action recognition accuracy of 90% was attained on five volunteers performing a game consisting of collecting randomly positioned coloured pawns and placing them into cups. The "pawn" and "cup" instructions were imparted through four mental imagery tasks, linked to robot arm actions by a state machine. With the current implementation in the MATLAB language, the median action recognition time was 24.3 s and the robot execution time was 17.7 s. We demonstrate the notion of combining haemodynamic brain-machine interfacing with computer vision to implement interaction at the level of high-level commands rather than individual movements, which may find application in future fMRI approaches relevant to brain-lesioned patients, and provide source code supporting further work on larger command sets and real-time processing. Copyright © 2012 IPEM. Published by Elsevier Ltd. All rights reserved.
Ron-Angevin, Ricardo; Velasco-Álvarez, Francisco; Fernández-Rodríguez, Álvaro; Díaz-Estrella, Antonio; Blanca-Mena, María José; Vizcaíno-Martín, Francisco Javier
2017-05-30
Certain diseases affect brain areas that control the movements of the patients' body, thereby limiting their autonomy and communication capacity. Research in the field of Brain-Computer Interfaces aims to provide patients with an alternative communication channel not based on muscular activity, but on the processing of brain signals. Through these systems, subjects can control external devices such as spellers to communicate, robotic prostheses to restore limb movements, or domotic systems. The present work focuses on the non-muscular control of a robotic wheelchair. A proposal to control a wheelchair through a Brain-Computer Interface based on the discrimination of only two mental tasks is presented in this study. The wheelchair displacement is performed with discrete movements. The control signals used are sensorimotor rhythms modulated through a right-hand motor imagery task or a mental idle state. The peculiarity of the control system is that it is based on a serial auditory interface that provides the user with four navigation commands. The use of two mental tasks to select commands may facilitate control and reduce error rates compared to other endogenous control systems for wheelchairs. Seventeen subjects initially participated in the study; nine of them completed the three sessions of the proposed protocol. After the first calibration session, seven subjects were discarded due to low control of their electroencephalographic signals; nine out of ten subjects controlled a virtual wheelchair during the second session; these same nine subjects achieved a mean accuracy level above 0.83 in the real wheelchair control session. The results suggest that more extensive training with the proposed control system can make it an effective and safe option that will allow the displacement of a wheelchair in a controlled environment for potential users suffering from some types of motor neuron diseases.
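The serial auditory interface described above, where a binary classifier (motor imagery vs. idle) selects among four sequentially announced commands, can be sketched as a small state machine. The command names and scan order are hypothetical, not taken from the paper:

```python
# Hypothetical command set and scan order; the paper does not list them.
COMMANDS = ["forward", "right", "backward", "left"]

class SerialAuditoryMenu:
    """Sketch of a serial (one-at-a-time) auditory command menu driven
    by a two-class BCI: 'imagery' selects the currently announced
    command, 'idle' lets the menu advance to the next one."""

    def __init__(self, commands=COMMANDS):
        self.commands = commands
        self.index = 0  # which command is currently being announced

    def announced(self):
        return self.commands[self.index]

    def step(self, mental_state):
        """Process one classifier decision; return a chosen command
        or None if the menu merely advanced."""
        if mental_state == "imagery":
            chosen = self.commands[self.index]
            self.index = 0  # restart the scan after each selection
            return chosen
        self.index = (self.index + 1) % len(self.commands)
        return None
```

Reducing the user's task to a single binary decision per step is what allows four commands to be reached with only two mental tasks, at the cost of a longer selection time for commands late in the scan.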
Human brain spots emotion in non-humanoid robots
Foucher, Aurélie; Jouvent, Roland; Nadel, Jacqueline
2011-01-01
The computation by which our brain elaborates fast responses to emotional expressions is currently an active field of brain studies. Previous studies have focused on stimuli taken from everyday life. Here, we investigated event-related potentials in response to happy vs neutral stimuli of human and non-humanoid robots. At the behavioural level, emotion shortened reaction times similarly for robotic and human stimuli. Early P1 wave was enhanced in response to happy compared to neutral expressions for robotic as well as for human stimuli, suggesting that emotion from robots is encoded as early as human emotion expression. Congruent with their lower faceness properties compared to human stimuli, robots elicited a later and lower N170 component than human stimuli. These findings challenge the claim that robots need to present an anthropomorphic aspect to interact with humans. Taken together, such results suggest that the early brain processing of emotional expressions is not bounded to human-like arrangements embodying emotion. PMID:20194513
Brain-machine interfaces for controlling lower-limb powered robotic systems.
He, Yongtian; Eguren, David; Azorín, José M; Grossman, Robert G; Luu, Trieu Phat; Contreras-Vidal, Jose L
2018-04-01
Lower-limb powered robotic systems such as exoskeletons and orthoses have emerged as novel robotic interventions to assist or rehabilitate people with walking disabilities. These devices are generally controlled by certain physical maneuvers, for example pressing buttons or shifting body weight. Although effective, these control schemes are not what humans naturally use. The usability and clinical relevance of these robotic systems could be further enhanced by brain-machine interfaces (BMIs). A number of preliminary studies have been published on this topic, but a systematic understanding of the experimental design, tasks, and performance of BMI-exoskeleton systems for restoration of gait is lacking. To address this gap, we applied standard systematic review methodology for a literature search in the PubMed and EMBASE databases and identified 11 studies involving BMI-robotics systems. The devices, user population, input and output of the BMIs and robot systems respectively, neural features, decoders, denoising techniques, and system performance were reviewed and compared. Results showed that BMIs classifying walk versus stand tasks are the most common. The results also indicate that electroencephalography (EEG) is the only recording method used in humans. Performance was not clearly presented in most of the studies. Several challenges were summarized, including EEG denoising, safety, responsiveness and others. We conclude that lower-body powered exoskeletons with automated gait intention detection based on BMIs open new possibilities in the assistance and rehabilitation fields, although the current performance, clinical benefits and several key challenges indicate that additional research and development is required to deploy these systems in the clinic and at home. Moreover, rigorous EEG denoising techniques, suitable performance metrics, consistent trial reporting, and more clinical trials are needed to advance the field.
Illusory movement perception improves motor control for prosthetic hands.
Marasco, Paul D; Hebert, Jacqueline S; Sensinger, Jon W; Shell, Courtney E; Schofield, Jonathon S; Thumser, Zachary C; Nataraj, Raviraj; Beckler, Dylan T; Dawson, Michael R; Blustein, Dan H; Gill, Satinder; Mensh, Brett D; Granja-Vazquez, Rafael; Newcomb, Madeline D; Carey, Jason P; Orzell, Beth M
2018-03-14
To effortlessly complete an intentional movement, the brain needs feedback from the body regarding the movement's progress. This largely nonconscious kinesthetic sense helps the brain to learn relationships between motor commands and outcomes to correct movement errors. Prosthetic systems for restoring function have predominantly focused on controlling motorized joint movement. Without the kinesthetic sense, however, these devices do not become intuitively controllable. We report a method for endowing human amputees with a kinesthetic perception of dexterous robotic hands. Vibrating the muscles used for prosthetic control via a neural-machine interface produced the illusory perception of complex grip movements. Within minutes, three amputees integrated this kinesthetic feedback and improved movement control. Combining intent, kinesthesia, and vision instilled participants with a sense of agency over the robotic movements. This feedback approach for closed-loop control opens a pathway to seamless integration of minds and machines. Copyright © 2018 The Authors, some rights reserved; exclusive licensee American Association for the Advancement of Science. No claim to original U.S. Government Works.
Tessadori, Jacopo; Bisio, Marta; Martinoia, Sergio; Chiappalone, Michela
2012-01-01
Behaviors, from simple to most complex, require a two-way interaction with the environment and the contribution of different brain areas depending on the orchestrated activation of neuronal assemblies. In this work we present a new hybrid neuro-robotic architecture based on a neural controller bi-directionally connected to a virtual robot implementing a Braitenberg vehicle aimed at avoiding obstacles. The robot is characterized by proximity sensors and wheels, allowing it to navigate into a circular arena with obstacles of different sizes. As neural controller, we used hippocampal cultures dissociated from embryonic rats and kept alive over Micro Electrode Arrays (MEAs) for 3–8 weeks. The developed software architecture guarantees a bi-directional exchange of information between the natural and the artificial part by means of simple linear coding/decoding schemes. We used two different kinds of experimental preparation: “random” and “modular” populations. In the second case, the confinement was assured by a polydimethylsiloxane (PDMS) mask placed over the surface of the MEA device, thus defining two populations interconnected via specific microchannels. The main results of our study are: (i) neuronal cultures can be successfully interfaced to an artificial agent; (ii) modular networks show a different dynamics with respect to random culture, both in terms of spontaneous and evoked electrophysiological patterns; (iii) the robot performs better if a reinforcement learning paradigm (i.e., a tetanic stimulation delivered to the network following each collision) is activated, regardless of the modularity of the culture; (iv) the robot controlled by the modular network further enhances its capabilities in avoiding obstacles during the short-term plasticity trial. The developed paradigm offers a new framework for studying, in simplified model systems, neuro-artificial bi-directional interfaces for the development of new strategies for brain-machine interaction. 
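The "simple linear coding/decoding schemes" mentioned above can be illustrated concretely: proximity is encoded linearly as a stimulation frequency delivered to the culture, and the recorded firing rate of each sub-population is decoded linearly into a wheel speed. The gains, rates, and the turn-away wiring below are illustrative assumptions, not the paper's calibrated values:

```python
def encode_sensor(proximity, max_rate_hz=50.0):
    """Linear coding: nearer obstacle (proximity in [0, 1]) maps to a
    higher stimulation frequency delivered to the neuronal culture."""
    return max_rate_hz * proximity

def decode_wheels(rate_left_hz, rate_right_hz, base=1.0, gain=0.02):
    """Linear decoding with avoider-style wiring: an obstacle on the
    left raises left-population firing, which speeds the left wheel
    and steers the vehicle to the right, away from the obstacle."""
    return (base + gain * rate_left_hz,
            base + gain * rate_right_hz)
```

In the closed loop, each decoded wheel-speed pair moves the virtual robot, its new sensor readings are re-encoded as stimulation, and (in the reinforcement condition) a collision additionally triggers the tetanic stimulus.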
PMID:23248586
Brain activation in parietal area during manipulation with a surgical robot simulator.
Miura, Satoshi; Kobayashi, Yo; Kawamura, Kazuya; Nakashima, Yasutaka; Fujie, Masakatsu G
2015-06-01
We present an evaluation method to quantify the embodiment caused by the physical difference between master-slave surgical robots, by measuring the activation of the intraparietal sulcus in the user's brain during surgical robot manipulation. We examine how embodiment changes with the optical axis-to-target view angle in the surgical simulator, which alters the manipulator's appearance in the monitor in terms of hand-eye coordination. The objective is to explore how brain activation changes with the optical axis-to-target view angle. In the experiments, we used a functional near-infrared spectroscopic topography (f-NIRS) brain imaging device to measure the brain activity of seven subjects while they moved the hand controller to insert a curved needle into a target using the manipulator in a surgical simulator. The experiment was carried out several times with a variety of optical axis-to-target view angles. Some participants showed a significant peak (P = 0.037, F = 2.841) when the optical axis-to-target view angle was 75°. The positional relationship between the manipulators and endoscope at 75° would be the closest to the human physical relationship between the hands and eyes.
NASA Astrophysics Data System (ADS)
LaFleur, Karl; Cassady, Kaitlin; Doud, Alexander; Shades, Kaleb; Rogin, Eitan; He, Bin
2013-08-01
Objective. At the balanced intersection of human and machine adaptation is found the optimally functioning brain-computer interface (BCI). In this study, we report a novel experiment of BCI control of a robotic quadcopter in three-dimensional (3D) physical space using noninvasive scalp electroencephalogram (EEG) in human subjects. We then quantify the performance of this system using metrics suitable for asynchronous BCI. Lastly, we examine the impact that the operation of a real-world device has on subjects' control in comparison to a 2D virtual cursor task. Approach. Five human subjects were trained to modulate their sensorimotor rhythms to control an AR Drone navigating a 3D physical space. Visual feedback was provided via a forward-facing camera on the hull of the drone. Main results. Individual subjects were able to accurately acquire up to 90.5% of all valid targets presented while travelling at an average straight-line speed of 0.69 m/s. Significance. Freely exploring and interacting with the world around us is a crucial element of autonomy that is lost in the context of neurodegenerative disease. Brain-computer interfaces are systems that aim to restore or enhance a user's ability to interact with the environment via a computer and through the use of only thought. We demonstrate for the first time the ability to control a flying robot in 3D physical space using noninvasive scalp-recorded EEG in humans. Our work indicates the potential of noninvasive EEG-based BCI systems to accomplish complex control in 3D physical space. The present study may serve as a framework for the investigation of multidimensional noninvasive BCI control in a physical environment using telepresence robotics.
Cortical Spiking Network Interfaced with Virtual Musculoskeletal Arm and Robotic Arm.
Dura-Bernal, Salvador; Zhou, Xianlian; Neymotin, Samuel A; Przekwas, Andrzej; Francis, Joseph T; Lytton, William W
2015-01-01
Embedding computational models in the physical world is a critical step towards constraining their behavior and building practical applications. Here we aim to drive a realistic musculoskeletal arm model using a biomimetic cortical spiking model, and make a robot arm reproduce the same trajectories in real time. Our cortical model consisted of a 3-layered cortex, composed of several hundred spiking model-neurons, which display physiologically realistic dynamics. We interconnected the cortical model to a two-joint musculoskeletal model of a human arm, with realistic anatomical and biomechanical properties. The virtual arm received muscle excitations from the neuronal model, and fed back proprioceptive information, forming a closed-loop system. The cortical model was trained using spike timing-dependent reinforcement learning to drive the virtual arm in a 2D reaching task. Limb position was used to simultaneously control a robot arm using an improved network interface. Virtual arm muscle activations responded to motoneuron firing rates, with virtual arm muscle lengths encoded via population coding in the proprioceptive population. After training, the virtual arm performed reaching movements which were smoother and more realistic than those obtained using a simplistic arm model. This system provided access to both spiking network properties and to arm biophysical properties, including muscle forces. The use of a musculoskeletal virtual arm and the improved control system allowed the robot arm to perform movements which were smoother than those reported in our previous paper using a simplistic arm. This work provides a novel approach consisting of bidirectionally connecting a cortical model to a realistic virtual arm, and using the system output to drive a robotic arm in real time.
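Population coding of muscle length, as used for the proprioceptive feedback above, can be sketched by the standard population-vector readout: each unit has a preferred length, and the decoded length is the firing-rate-weighted mean of those preferences. The unit count and preferred values below are illustrative, not the model's:

```python
def population_decode(rates, preferred_lengths):
    """Population-vector style decode: estimate muscle length as the
    firing-rate-weighted mean of each unit's preferred length."""
    total = sum(rates)
    if total == 0:
        raise ValueError("no activity to decode")
    return sum(r * p for r, p in zip(rates, preferred_lengths)) / total
```

The encoding direction is the mirror image: a given muscle length drives each proprioceptive unit in proportion to how close the length is to that unit's preferred value, so the decode above recovers it.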
Our techniques are applicable to the future development of brain neuroprosthetic control systems, and may enable enhanced brain-machine interfaces with the possibility for finer control of limb prosthetics.
Puppets, robots, critics, and actors within a taxonomy of attention for developmental disorders
DENNIS, MAUREEN; SINOPOLI, KATIA J.; FLETCHER, JACK M.; SCHACHAR, RUSSELL
2008-01-01
This review proposes a new taxonomy of automatic and controlled attention. The taxonomy distinguishes among the role of the attendee (puppet and robot, critic and actor), the attention process (stimulus orienting vs. response control), and the attention operation (activation vs. inhibition vs. adjustment), and identifies cognitive phenotypes by which attention is overtly expressed. We apply the taxonomy to four childhood attention disorders: attention deficit hyperactivity disorder, spina bifida meningomyelocele, traumatic brain injury, and acute lymphoblastic leukemia. Variations in attention are related to specific brain regions that support normal attention processes when intact, and produce disordered attention when impaired. The taxonomy explains group differences in behavioral inattention, hyperactivity, and impulsiveness, as well as medication response. We also discuss issues relevant to theories of the cognitive and neural architecture of attention: functional dissociations within and between automatic and controlled attention; the relative importance of type of brain damage and developmental timing to attention profile; cognitive-energetic models of attention and white matter damage; temporal processing deficits, attention deficits and cerebellar damage; and the issue of cognitive phenotypes as candidate endophenotypes. PMID:18764966
Xu, Zhiming; So, Rosa Q; Toe, Kyaw Kyar; Ang, Kai Keng; Guan, Cuntai
2014-01-01
This paper presents an asynchronous intracortical brain-computer interface (BCI) which allows the subject to continuously drive a mobile robot. This system has great implications for enabling disabled patients to move around. By carefully designing a multiclass support vector machine (SVM), the subject's self-paced instantaneous movement intents are continuously decoded to control the mobile robot. In particular, we studied the stability of the neural representation of the movement directions. Experimental results on a nonhuman primate showed that the overt movement directions were stably represented in the ensemble of recorded units, and our SVM classifier could successfully decode such movements continuously along the desired movement path. However, the neural representation of the stop state for self-paced control was not stable and could drift.
Sloman, Aaron
2013-06-01
The approach Clark labels "action-oriented predictive processing" treats all cognition as part of a system of on-line control. This ignores other important aspects of animal, human, and robot intelligence. He contrasts it with an alleged "mainstream" approach that also ignores the depth and variety of AI/Robotic research. I don't think the theory presented is worth taking seriously as a complete model, even if there is much that it explains.
Jiang, Jun; Zhou, Zongtan; Yin, Erwei; Yu, Yang; Liu, Yadong; Hu, Dewen
2015-11-01
Motor imagery (MI)-based brain-computer interfaces (BCIs) allow disabled individuals to control external devices voluntarily, helping to restore lost motor functions. However, the number of control commands available in MI-based BCIs remains limited, restricting the usability of BCI systems in control applications involving multiple degrees of freedom (DOF), such as control of a robot arm. To address this problem, we developed a novel Morse code-inspired method for MI-based BCI design to increase the number of output commands. Using this method, brain activities are modulated by sequences of MI (sMI) tasks, which are constructed by alternately imagining movements of the left or right hand or no motion. The codes of the sMI tasks were detected from EEG signals and mapped to specific commands. According to permutation theory, an sMI task of length N allows 2 × (2^N − 1) possible commands with the left and right MI tasks under self-paced conditions. To verify its feasibility, the new method was used to construct a six-class BCI system to control the arm of a humanoid robot. Four subjects participated in our experiment and the averaged accuracy of the six-class sMI tasks was 89.4%. The Cohen's kappa coefficient and the throughput of our BCI paradigm are 0.88 ± 0.060 and 23.5 bits per minute (bpm), respectively. Furthermore, all of the subjects could operate an actual three-joint robot arm to grasp an object in around 49.1 s using our approach. These promising results suggest that the Morse code-inspired method could be used in the design of BCIs for multi-DOF control. Copyright © 2015 Elsevier Ltd. All rights reserved.
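The 2 × (2^N − 1) command count above follows from enumerating all left/right imagery sequences of length 1 to N, each terminated by a no-motion pause. A minimal sketch (the sequence-to-command naming here is illustrative, not the paper's actual mapping):

```python
from itertools import product

def smi_commands(max_len):
    """Enumerate all left/right motor-imagery sequences up to max_len.

    Each 'L'/'R' sequence, terminated by a no-motion pause, maps to
    one distinct command, giving sum_{k=1..N} 2^k = 2*(2^N - 1).
    """
    commands = []
    for length in range(1, max_len + 1):
        for seq in product("LR", repeat=length):
            commands.append("".join(seq))
    return commands

# For N = 2 this yields 2 * (2**2 - 1) = 6 commands, matching the
# six-class BCI described in the abstract.
cmds = smi_commands(2)
print(len(cmds), cmds)
```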
Replicating Human Hand Synergies Onto Robotic Hands: A Review on Software and Hardware Strategies.
Salvietti, Gionata
2018-01-01
This review reports the principal solutions proposed in the literature to reduce the complexity of the control and design of robotic hands, taking inspiration from the organization of the human brain. Several studies in neuroscience concerning the sensorimotor organization of the human hand have shown that, despite the complexity of the hand, a few parameters can describe most of the variance in the patterns of configurations and movements. In other words, humans exploit a reduced set of parameters, known in the literature as synergies, to control their hands. In robotics, this dimensionality reduction can be achieved by coupling some of the degrees of freedom (DoFs) of the robotic hand, which reduces the number of required inputs. Such coupling can be obtained at the software level, exploiting mapping algorithms to reproduce human hand organization, and at the hardware level, through either rigid or compliant physical couplings between the joints of the robotic hand. This paper reviews the main solutions proposed for both approaches.
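The synergy idea described in this review — driving many joints from a few inputs — amounts to a linear map from synergy activations to joint angles. A toy sketch (the synergy matrix below is made up for illustration, not taken from the literature):

```python
import numpy as np

def synergy_to_joints(e, q_rest, S):
    """Map a low-dimensional synergy activation vector e to joint angles.

    q = q_rest + S @ e, where the columns of S are postural synergies
    (e.g., principal components of recorded human grasp postures).
    """
    return q_rest + S @ e

# Toy example: a 6-DoF hand driven by only 2 synergy inputs.
q_rest = np.zeros(6)
S = np.array([[1.0,  0.2],
              [0.9,  0.1],
              [0.8, -0.3],
              [0.7,  0.4],
              [0.6, -0.2],
              [0.5,  0.3]])
q = synergy_to_joints(np.array([0.5, -0.1]), q_rest, S)
print(q.shape)  # 6 joint angles produced from 2 inputs
```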
Nocchi, Federico; Gazzellini, Simone; Grisolia, Carmela; Petrarca, Maurizio; Cannatà, Vittorio; Cappa, Paolo; D'Alessio, Tommaso; Castelli, Enrico
2012-07-24
The potential of robot-mediated therapy and virtual reality in neurorehabilitation is becoming of increasing importance. However, there is limited information, using neuroimaging, on the neural networks involved in training with these technologies. This study was intended to detect the brain network involved in the visual processing of movement during robotic training. The main aim was to investigate the existence of a common cerebral network able to assimilate biological (human upper limb) and non-biological (abstract object) movements, hence testing the suitability of the visual non-biological feedback provided by the InMotion2 Robot. A visual functional Magnetic Resonance Imaging (fMRI) task was administered to 22 healthy subjects. The task required observation and retrieval of motor gestures and of the visual feedback used in robotic training. Functional activations of both biological and non-biological movements were examined to identify areas activated in both conditions, along with differential activity in upper limb vs. abstract object trials. Control of response was also tested by administering trials with congruent and incongruent reaching movements. The observation of upper limb and abstract object movements elicited similar patterns of activations according to a caudo-rostral pathway for the visual processing of movements (including specific areas of the occipital, temporal, parietal, and frontal lobes). Similarly, overlapping activations were found for the subsequent retrieval of the observed movement. Furthermore, activations of frontal cortical areas were associated with congruent trials more than with the incongruent ones. This study identified the neural pathway associated with visual processing of movement stimuli used in upper limb robot-mediated training and investigated the brain's ability to assimilate abstract object movements with human motor gestures. 
In both conditions, activations were elicited in cerebral areas involved in visual perception, sensory integration, recognition of movement, re-mapping onto the somatosensory and motor cortex, storage in memory, and response control. Results from the congruent vs. incongruent trials revealed greater activity for the former condition than the latter in a network including the cingulate cortex and the right inferior and middle frontal gyri, which are involved in the go-signal and in decision control. Results on healthy subjects suggest that the abstract visual feedback provided during motor training is appropriate. The task helps highlight the potential of fMRI in improving the understanding of visual motor processes and may also be useful in detecting brain reorganisation during training.
Tsui, Chun Sing Louis; Gan, John Q; Roberts, Stephen J
2009-03-01
Due to the non-stationarity of EEG signals, online training and adaptation are essential to EEG-based brain-computer interface (BCI) systems. Self-paced BCIs offer more natural human-machine interaction than synchronous BCIs, but it is a great challenge to train and adapt a self-paced BCI online because the user's control intention and timing are usually unknown. This paper proposes a novel motor imagery based self-paced BCI paradigm for controlling a simulated robot in a specifically designed environment that provides the user's control intention and timing during online experiments, so that online training and adaptation of the motor imagery based self-paced BCI can be effectively investigated. We demonstrate the usefulness of the proposed paradigm with an extended Kalman filter based method to adapt the BCI classifier parameters, with experimental results of online self-paced BCI training with four subjects.
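The Kalman-filter adaptation idea can be illustrated with a plain linear Kalman filter tracking drifting classifier weights (a toy sketch only; the paper's actual state and observation models, being EKF-based, differ):

```python
import numpy as np

def kf_update(w, P, x, y, q=1e-5, r=0.01):
    """One Kalman-filter step treating classifier weights w as a slowly
    drifting state observed through the label y ~ w.x + noise.
    A plain linear KF standing in for the paper's EKF-based adaptation.
    """
    P = P + q * np.eye(len(w))      # random-walk state prediction
    S = x @ P @ x + r               # innovation variance
    K = P @ x / S                   # Kalman gain
    w = w + K * (y - w @ x)         # correct weights toward the label
    P = P - np.outer(K, x @ P)      # covariance update
    return w, P

rng = np.random.default_rng(0)
w_true = np.array([1.0, -1.0])      # "true" decision boundary
w, P = np.zeros(2), np.eye(2)
for _ in range(500):
    x = rng.standard_normal(2)      # feature vector for one trial
    y = w_true @ x + 0.1 * rng.standard_normal()
    w, P = kf_update(w, P, x, y)
print(np.round(w, 2))               # close to w_true after adaptation
```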
NASA Astrophysics Data System (ADS)
Maksimenko, Vladimir; Runnova, Anastasia; Pchelintseva, Svetlana; Efremova, Tatiana; Zhuravlev, Maksim; Pisarchik, Alexander
2018-04-01
We have considered the time-frequency and spatio-temporal structure of electrical brain activity associated with real and imaginary movements, based on multichannel EEG recordings. We have found that, along with the well-known effects of event-related desynchronization (ERD) in the α/μ-rhythms and the β-rhythm, these types of activity are accompanied by either event-related synchronization (ERS, for real movement) or ERD (for imaginary movement) in the low-frequency δ-band, located mostly in the frontal lobe. This may be caused by the associated processes of decision making, which take place when the subject decides whether to perform the movement or to imagine it. These features were found in untrained subjects, which in turn makes it possible to use our results in the development of brain-computer interfaces for controlling an anthropomorphic robotic arm.
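The ERD/ERS effects described above are conventionally quantified as the percent change in band power relative to a baseline interval; a minimal FFT-based sketch (not the authors' analysis pipeline):

```python
import numpy as np

def erd_percent(baseline, event, fs, band):
    """Event-related (de)synchronization as percent band-power change.

    ERD/ERS% = 100 * (P_event - P_baseline) / P_baseline in the given
    frequency band; negative = desynchronization (ERD), positive =
    synchronization (ERS).
    """
    def band_power(x):
        freqs = np.fft.rfftfreq(len(x), 1.0 / fs)
        psd = np.abs(np.fft.rfft(x)) ** 2
        mask = (freqs >= band[0]) & (freqs <= band[1])
        return psd[mask].sum()
    p0, p1 = band_power(baseline), band_power(event)
    return 100.0 * (p1 - p0) / p0

fs = 250.0
t = np.arange(0, 2.0, 1.0 / fs)
baseline = np.sin(2 * np.pi * 10 * t)        # strong 10 Hz mu rhythm at rest
event = 0.5 * np.sin(2 * np.pi * 10 * t)     # attenuated during movement
res = erd_percent(baseline, event, fs, (8, 13))
print(res)  # negative value -> ERD
```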
A Discussion of Possibility of Reinforcement Learning Using Event-Related Potential in BCI
NASA Astrophysics Data System (ADS)
Yamagishi, Yuya; Tsubone, Tadashi; Wada, Yasuhiro
Recently, brain-computer interfaces (BCIs), which provide a direct pathway between the human brain and an external device such as a computer or robot, have received a lot of attention. Since a BCI can control machines such as robots using brain activity alone, without voluntary muscle movement, it may become a useful communication tool for handicapped persons, for instance, amyotrophic lateral sclerosis patients. However, in order to realize a BCI system which can perform precise tasks in various environments, it is necessary to design control rules that adapt to dynamic environments. Reinforcement learning is one approach to designing such control rules. If reinforcement learning can be driven by brain activity, it could lead to BCIs with general versatility. In this research, we used the P300 event-related potential as an alternative reward signal for reinforcement learning. We discriminated between success and failure trials from single-trial P300 EEG using a proposed discrimination algorithm based on a support vector machine. The possibility of reinforcement learning was examined from the viewpoint of the number of correctly discriminated trials. It was shown that learning was possible for most subjects.
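The proposed reward channel — classifying single-trial P300 responses into success/failure — can be sketched with synthetic features and a simple linear discriminant standing in for the paper's support vector machine (all data below are synthetic, not real EEG):

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy single-trial features: success trials carry a larger P300
# deflection than failure trials (synthetic numbers for illustration).
n = 100
success = rng.normal(5.0, 1.0, size=(n, 4))
failure = rng.normal(0.0, 1.0, size=(n, 4))

# Nearest-class-mean linear discriminant: a lightweight stand-in for
# the paper's SVM-based discrimination algorithm.
mu_s, mu_f = success.mean(axis=0), failure.mean(axis=0)
w = mu_s - mu_f
b = -0.5 * (mu_s + mu_f) @ w

def reward_signal(trial):
    """Return 1 (reward) if the trial is classified as success."""
    return int(trial @ w + b > 0)

hits = sum(reward_signal(x) for x in success)
rejects = n - sum(reward_signal(x) for x in failure)
acc = (hits + rejects) / (2 * n)
print(acc)  # training accuracy on the synthetic trials
```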
Context-Based Filtering for Assisted Brain-Actuated Wheelchair Driving
Vanacker, Gerolf; Millán, José del R.; Lew, Eileen; Ferrez, Pierre W.; Moles, Ferran Galán; Philips, Johan; Van Brussel, Hendrik; Nuttin, Marnix
2007-01-01
Controlling a robotic device by using human brain signals is an interesting and challenging task. The device may be complicated to control and the nonstationary nature of the brain signals provides for a rather unstable input. With the use of intelligent processing algorithms adapted to the task at hand, however, the performance can be increased. This paper introduces a shared control system that helps the subject in driving an intelligent wheelchair with a noninvasive brain interface. The subject's steering intentions are estimated from electroencephalogram (EEG) signals and passed through to the shared control system before being sent to the wheelchair motors. Experimental results show a possibility for significant improvement in the overall driving performance when using the shared control system compared to driving without it. These results have been obtained with 2 healthy subjects during their first day of training with the brain-actuated wheelchair. PMID:18354739
2016-11-14
...necessary capability to build a high-density communication highway between 86 billion brain neurons and intelligent vehicles or robots. ... The final outcome of the INI using the TDT system will be beneficial to wounded warriors suffering from loss of limb function, so that, using sophisticated bidirectional robotic limbs, these...
Tonet, Oliver; Marinelli, Martina; Citi, Luca; Rossini, Paolo Maria; Rossini, Luca; Megali, Giuseppe; Dario, Paolo
2008-01-15
Interaction with machines is mediated by human-machine interfaces (HMIs). Brain-machine interfaces (BMIs) are a particular class of HMIs and have so far been studied as a communication means for people who have little or no voluntary control of muscle activity. In this context, low-performing interfaces can be considered as prosthetic applications. On the other hand, for able-bodied users, a BMI would only be practical if conceived as an augmenting interface. In this paper, a method is introduced for pointing out effective combinations of interfaces and devices for creating real-world applications. First, devices for domotics, rehabilitation and assistive robotics, and their requirements, in terms of throughput and latency, are described. Second, HMIs are classified and their performance described, still in terms of throughput and latency. Then device requirements are matched with performance of available interfaces. Simple rehabilitation and domotics devices can be easily controlled by means of BMI technology. Prosthetic hands and wheelchairs are suitable applications but do not attain optimal interactivity. Regarding humanoid robotics, the head and the trunk can be controlled by means of BMIs, while other parts require too much throughput. Robotic arms, which have been controlled by means of cortical invasive interfaces in animal studies, could be the next frontier for non-invasive BMIs. Combining smart controllers with BMIs could improve interactivity and boost BMI applications.
Di Lazzaro, Vincenzo; Capone, Fioravante; Di Pino, Giovanni; Pellegrino, Giovanni; Florio, Lucia; Zollo, Loredana; Simonetti, Davide; Ranieri, Federico; Brunelli, Nicoletta; Corbetto, Marzia; Miccinilli, Sandra; Bravi, Marco; Milighetti, Stefano; Guglielmelli, Eugenio; Sterzi, Silvia
2016-01-01
Previous studies suggested that both robot-assisted rehabilitation and non-invasive brain stimulation can produce a slight improvement in severe chronic stroke patients. It is still unknown whether their combination can produce synergistic and more consistent improvements. Safety and efficacy of this combination has been assessed within a proof-of-principle, double-blinded, semi-randomized, sham-controlled trial. Inhibitory continuous Theta Burst Stimulation (cTBS) was delivered on the affected hemisphere, in order to improve the response to the following robot-assisted therapy via a homeostatic increase of learning capacity. Twenty severe upper limb-impaired chronic stroke patients were randomized to robot-assisted therapy associated with real or sham cTBS, delivered for 10 working days. Eight real and nine sham patients completed the study. Change in Fugl-Meyer was chosen as primary outcome, while changes in several quantitative indicators of motor performance extracted by the robot as secondary outcomes. The treatment was well-tolerated by the patients and there were no adverse events. All patients achieved a small, but significant, Fugl-Meyer improvement (about 5%). The difference between the real and the sham cTBS groups was not significant. Among several secondary end points, only the Success Rate (percentage of targets reached by the patient) improved more in the real than in the sham cTBS group. This study shows that a short intensive robot-assisted rehabilitation produces a slight improvement in severely upper-limb-impaired patients, even years after the stroke. The association with homeostatic metaplasticity-promoting non-invasive brain stimulation does not augment the clinical gain in patients with severe stroke. PMID:27013950
How does a surgeon's brain buzz? An EEG coherence study on the interaction between humans and robot.
Bocci, Tommaso; Moretto, Carlo; Tognazzi, Silvia; Briscese, Lucia; Naraci, Megi; Leocani, Letizia; Mosca, Franco; Ferrari, Mauro; Sartucci, Ferdinando
2013-04-22
In humans, both primary and non-primary motor areas are involved in the control of voluntary movements. However, the dynamics of functional coupling among different motor areas have not yet been fully clarified. To date there is no research examining the functional dynamics in the brains of surgeons working in laparoscopy compared with those trained and working in robotic surgery. We enrolled 16 right-handed trained surgeons and assessed changes in intra- and inter-hemispheric EEG coherence with a 32-channel device during the same motor task performed with either a robotic or a laparoscopic approach. Estimates of auto- and coherence spectra were calculated by a fast Fourier transform algorithm implemented in Matlab 5.3. We found an increase of coherence in surgeons performing laparoscopy, especially in theta and lower alpha activity, in all experimental conditions (M1 vs. SMA, S1 vs. SMA, S1 vs. pre-SMA and M1 vs. S1; p < 0.001). Conversely, an increase in inter-hemispheric coherence in the upper alpha and beta bands was found in surgeons using the robotic procedure (right vs. left M1, right vs. left S1, right pre-SMA vs. left M1, left pre-SMA vs. right M1; p < 0.001). Our data provide a semi-quantitative evaluation of the dynamics of functional coupling among different cortical areas in skilled surgeons performing laparoscopic or robotic surgery. These results suggest that motor and non-motor areas are activated and coordinated differently in surgeons performing the same task with different approaches. To the best of our knowledge, this is the first study to assess semi-quantitative differences in the interaction between the normal human brain and robotic devices. PMID:23607324
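The coherence measure this study computed in MATLAB — magnitude-squared coherence between channel pairs — can be sketched in NumPy as segment-averaged cross-spectra (synthetic two-channel data standing in for the EEG):

```python
import numpy as np

def msc(x, y, nperseg=256):
    """Magnitude-squared coherence averaged over non-overlapping
    segments: |<X Y*>|^2 / (<|X|^2> <|Y|^2>). A minimal FFT-based
    sketch of the estimate, without windowing or overlap.
    """
    nseg = len(x) // nperseg
    Sxx = Syy = Sxy = 0
    for k in range(nseg):
        X = np.fft.rfft(x[k * nperseg:(k + 1) * nperseg])
        Y = np.fft.rfft(y[k * nperseg:(k + 1) * nperseg])
        Sxx = Sxx + np.abs(X) ** 2
        Syy = Syy + np.abs(Y) ** 2
        Sxy = Sxy + X * np.conj(Y)
    return np.abs(Sxy) ** 2 / (Sxx * Syy)

rng = np.random.default_rng(1)
fs = 256
t = np.arange(0, 8, 1 / fs)
common = np.sin(2 * np.pi * 10 * t)              # shared 10 Hz "alpha" drive
x = common + 0.1 * rng.standard_normal(len(t))   # e.g. "left M1" channel
y = common + 0.1 * rng.standard_normal(len(t))   # e.g. "right M1" channel
coh = msc(x, y, nperseg=256)
freqs = np.fft.rfftfreq(256, 1 / fs)
peak = coh[np.argmin(np.abs(freqs - 10))]
print(round(float(peak), 3))  # near 1 at the shared frequency
```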
NASA Astrophysics Data System (ADS)
Cao, Enguo; Inoue, Yoshio; Liu, Tao; Shibata, Kyoko
In many countries experiencing population aging, motor function recovery activities have aroused much interest. In this paper, a sit-to-stand rehabilitation robot utilizing a double-rope system was developed, and the performance of the robot was evaluated by analyzing the dynamic parameters of human lower limbs. For the robot control program, an impedance control method with a training game was developed to increase the effectiveness and frequency of rehabilitation activities, and a calculation method was developed for evaluating the joint moments of the hip, knee, and ankle. Test experiments were designed, and four subjects were asked to stand up from a chair with assistance from the rehabilitation robot. In the experiments, body segment rotational angles, trunk movement trajectories, rope tensile forces, ground reaction forces (GRF) and centers of pressure (COP) were measured by sensors, and the moments of the ankle, knee and hip joints were calculated in real time from the sensor data. The experimental results showed that the sit-to-stand rehabilitation robot with the impedance control method could maintain comfortable training postures for users, decrease the moments of the limb joints, and enhance training effectiveness. Furthermore, the game control method could encourage collaboration between the brain and limbs, and allow an increase in the frequency and intensity of rehabilitation activities.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Price, Karl D., E-mail: karl.price@sickkids.ca
Purpose: Current treatment of intraventricular hemorrhage (IVH) involves cerebral shunt placement or an invasive brain surgery. Magnetic resonance-guided focused ultrasound (MRgFUS) applied to the brains of pediatric patients presents an opportunity to treat IVH in a noninvasive manner, termed “incision-less surgery.” Current clinical and research focused ultrasound systems lack the capability to perform neonatal transcranial surgeries due to either range of motion or dexterity requirements. A novel robotic system is proposed to position a focused ultrasound transducer accurately above the head of a neonatal patient inside an MRI machine to deliver the therapy. Methods: A clinical Philips Sonalleve MRgFUS system was expanded to perform transcranial treatment. A five degree-of-freedom MR-conditional robot was designed and manufactured using MR compatible materials. The robot electronics and control were integrated into existing Philips electronics and software interfaces. The user commands the position of the robot with a graphical user interface, and is presented with real-time MR imaging of the patient throughout the surgery. The robot is validated through a series of experiments that characterize accuracy, signal-to-noise ratio degeneration of an MR image as a result of the robot, MR imaging artifacts generated by the robot, and the robot’s ability to operate in a representative surgical environment inside an MR machine. Results: Experimental results show the robot responds reliably within an MR environment, has achieved 0.59 ± 0.25 mm accuracy, does not produce severe MR-imaging artifacts, has a workspace providing sufficient coverage of a neonatal brain, and can manipulate a 5 kg payload. A full system demonstration shows these characteristics apply in an application environment. Conclusions: This paper presents a comprehensive look at the process of designing and validating a new robot from concept to implementation for use in an MR environment.
An MR conditional robot has been designed and manufactured to design specifications. The system has demonstrated its feasibility as a platform for MRgFUS interventions for neonatal patients. The success of the system in experimental trials suggests that it is ready to be used for validation of the transcranial intervention in animal studies.
Liu, Ju-Chi; Chou, Hung-Chyun; Chen, Chien-Hsiu; Lin, Yi-Tseng; Kuo, Chung-Hsien
2016-01-01
A highly efficient time-shift correlation algorithm was proposed to deal with the peak-time uncertainty of the P300 evoked potential for a P300-based brain-computer interface (BCI). The time-shift correlation series data were collected as the input nodes of an artificial neural network (ANN), and the classification of four LED visual stimuli was selected as the output node. Two operating modes, a fast-recognition mode (FM) and an accuracy-recognition mode (AM), were realized. The proposed BCI system was implemented on an embedded system for commanding an adult-size humanoid robot, and its performance was evaluated by investigating the ground-truth trajectories of the robot. When the humanoid robot walked in a spacious area, the FM was used to control the robot with a higher information transfer rate (ITR). When the robot walked in a crowded area, the AM was used for high recognition accuracy to reduce the risk of collision. The experimental results showed that, in 100 trials, the accuracy rate of the FM was 87.8% and the average ITR was 52.73 bits/min. In addition, the accuracy rate improved to 92% for the AM, while the average ITR decreased to 31.27 bits/min owing to the stricter recognition constraints. PMID:27579033
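The core idea of time-shift correlation — scanning shifted copies of a trial against a P300 template to absorb peak-time jitter — can be sketched as follows (the actual system feeds the whole correlation series into an ANN; this toy version just takes the best-matching shift):

```python
import numpy as np

def best_shift(template, trial, max_shift):
    """Slide the trial against the template and return the shift with
    maximal correlation, absorbing P300 peak-time jitter.
    """
    shifts = list(range(-max_shift, max_shift + 1))
    corrs = [np.dot(template, np.roll(trial, -s)) for s in shifts]
    return shifts[int(np.argmax(corrs))], corrs

fs = 250
t = np.arange(0, 0.8, 1 / fs)
# Canonical P300-like bump peaking at 300 ms (synthetic waveform).
template = np.exp(-0.5 * ((t - 0.3) / 0.05) ** 2)
trial = np.roll(template, 10)  # peak jittered late by 10 samples (40 ms)
shift, corr_series = best_shift(template, trial, 20)
print(shift)  # recovers the 10-sample jitter
```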
Matching brain-machine interface performance to space applications.
Citi, Luca; Tonet, Oliver; Marinelli, Martina
2009-01-01
A brain-machine interface (BMI) is a particular class of human-machine interface (HMI). BMIs have so far been studied mostly as a communication means for people who have little or no voluntary control of muscle activity. For able-bodied users, such as astronauts, a BMI would only be practical if conceived as an augmenting interface. A method is presented for pointing out effective combinations of HMIs and applications of robotics and automation to space. Latency and throughput are selected as performance measures for a hybrid bionic system (HBS), that is, the combination of a user, a device, and an HMI. We classify and briefly describe HMIs and space applications and then compare the performance of classes of interfaces with the requirements of classes of applications, both in terms of latency and throughput. Regions of overlap correspond to effective combinations. Devices requiring simpler control, such as a rover, a robotic camera, or environmental controls are suitable to be driven by means of BMI technology. Free flyers and other devices with six degrees of freedom can be controlled, but only at low-interactivity levels. More demanding applications require conventional interfaces, although they could be controlled by BMIs once the same levels of performance as currently recorded in animal experiments are attained. Robotic arms and manipulators could be the next frontier for noninvasive BMIs. Integrating smart controllers in HBSs could improve interactivity and boost the use of BMI technology in space applications.
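The matching procedure described above — overlapping interface performance with application requirements in the latency/throughput plane — reduces to a simple feasibility check. All numbers below are illustrative placeholders, not the authors' measured values:

```python
# Hypothetical performance figures (throughput in bits/min, latency in s).
interfaces = {
    "noninvasive BMI": {"throughput": 25,  "latency": 2.0},
    "invasive BMI":    {"throughput": 200, "latency": 0.2},
    "joystick":        {"throughput": 600, "latency": 0.05},
}
# Hypothetical minimum requirements per application class.
applications = {
    "environmental controls": {"throughput": 10,  "latency": 5.0},
    "rover driving":          {"throughput": 20,  "latency": 3.0},
    "robotic arm":            {"throughput": 150, "latency": 0.5},
}

def feasible(iface, app):
    """An interface suits an application if it delivers at least the
    required throughput within the required latency."""
    return (iface["throughput"] >= app["throughput"]
            and iface["latency"] <= app["latency"])

pairs = [(i, a) for i in interfaces for a in applications
         if feasible(interfaces[i], applications[a])]
print(pairs)
```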
An Intention-Driven Semi-autonomous Intelligent Robotic System for Drinking.
Zhang, Zhijun; Huang, Yongqian; Chen, Siyuan; Qu, Jun; Pan, Xin; Yu, Tianyou; Li, Yuanqing
2017-01-01
In this study, an intention-driven semi-autonomous intelligent robotic (ID-SIR) system is designed and developed to assist severely disabled patients in living independently. The system mainly consists of a non-invasive brain-machine interface (BMI) subsystem, a robot manipulator and a visual detection and localization subsystem. Unlike most existing systems, which are remotely controlled by joystick or by head or eye tracking, the proposed ID-SIR system acquires the intention directly from the user's brain. Compared with state-of-the-art systems that work only for a specific object in a fixed place, the designed ID-SIR system can grasp any desired object in a random place chosen by a user and deliver it to his/her mouth automatically. As one of the main advantages of the ID-SIR system, the patient is only required to send one intention command per drinking task, and the autonomous robot finishes the remaining control tasks, which greatly eases the burden on patients. Eight healthy subjects attended our experiment, which contained 10 tasks for each subject. In each task, the proposed ID-SIR system delivered the desired beverage container to the mouth of the subject and then put it back in its original position. The mean accuracy across the eight subjects was 97.5%, which demonstrated the effectiveness of the ID-SIR system.
Brain-machine interfacing control of whole-body humanoid motion
Bouyarmane, Karim; Vaillant, Joris; Sugimoto, Norikazu; Keith, François; Furukawa, Jun-ichiro; Morimoto, Jun
2014-01-01
We propose to tackle in this paper the problem of controlling whole-body humanoid robot behavior through non-invasive brain-machine interfacing (BMI), motivated by the perspective of mapping human motor control strategies to human-like mechanical avatar. Our solution is based on the adequate reduction of the controllable dimensionality of a high-DOF humanoid motion in line with the state-of-the-art possibilities of non-invasive BMI technologies, leaving the complement subspace part of the motion to be planned and executed by an autonomous humanoid whole-body motion planning and control framework. The results are shown in full physics-based simulation of a 36-degree-of-freedom humanoid motion controlled by a user through EEG-extracted brain signals generated with motor imagery task. PMID:25140134
A GPU-accelerated cortical neural network model for visually guided robot navigation.
Beyeler, Michael; Oros, Nicolas; Dutt, Nikil; Krichmar, Jeffrey L
2015-12-01
Humans and other terrestrial animals use vision to traverse novel cluttered environments with apparent ease. On the one hand, although much is known about the behavioral dynamics of steering in humans, it remains unclear how relevant perceptual variables might be represented in the brain. On the other hand, although a wealth of data exists about the neural circuitry concerned with the perception of self-motion variables such as the current direction of travel, little research has been devoted to investigating how this neural circuitry may relate to active steering control. Here we present a cortical neural network model for visually guided navigation that has been embodied on a physical robot exploring a real-world environment. The model includes a rate-based motion energy model for area V1 and a spiking neural network model for cortical area MT. The model generates a cortical representation of optic flow, determines the position of objects based on motion discontinuities, and combines these signals with the representation of a goal location to produce motor commands that successfully steer the robot around obstacles toward the goal. The model produces robot trajectories that closely match human behavioral data. This study demonstrates how neural signals in a model of cortical area MT might provide sufficient motion information to steer a physical robot on human-like paths around obstacles in a real-world environment, and exemplifies the importance of embodiment, as behavior is deeply coupled not only with the underlying model of brain function, but also with the anatomical constraints of the physical body it controls.
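The human behavioral data that such robot trajectories are matched against are often summarized by a goal-attraction/obstacle-repulsion steering law. Below is a minimal sketch in the spirit of Fajen and Warren's behavioral dynamics; the gain values are illustrative assumptions, not the paper's fitted parameters:

```python
import math

def steering_accel(heading, turn_rate, goal_angle, goal_dist, obstacles,
                   b=3.25, k_g=7.5, c1=0.4, k_o=198.0, c3=6.6, c4=0.8):
    """Angular acceleration combining turn-rate damping, attraction to
    the goal direction, and repulsion from obstacles. All gains are
    illustrative. obstacles: list of (angle, distance) pairs in the
    robot's frame of reference."""
    acc = -b * turn_rate                               # damping on turning
    # attraction to the goal, stronger when the goal is close
    acc += -k_g * (heading - goal_angle) * (math.exp(-c1 * goal_dist) + 0.2)
    # each obstacle repels, decaying with angular offset and distance
    for obs_angle, obs_dist in obstacles:
        acc += k_o * (heading - obs_angle) * \
               math.exp(-c3 * abs(heading - obs_angle)) * math.exp(-c4 * obs_dist)
    return acc
```

A goal dead ahead with no obstacles yields zero turning command, while an obstacle offset to one side pushes the heading away from it.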
Comparative Study of SSVEP- and P300-Based Models for the Telepresence Control of Humanoid Robots.
Zhao, Jing; Li, Wei; Li, Mengfan
2015-01-01
In this paper, we evaluate the control performance of SSVEP (steady-state visual evoked potential)- and P300-based models using Cerebot, a mind-controlled humanoid robot platform. Seven subjects with diverse experience participated in experiments concerning the open-loop and closed-loop control of a humanoid robot via brain signals. The visual stimuli of both the SSVEP- and P300-based models were implemented on an LCD computer monitor with a refresh rate of 60 Hz. Considering operation safety, we set a classification accuracy above 90.0% as the most important mandatory criterion for telepresence control of the humanoid robot. The open-loop experiments demonstrated that the SSVEP model with at most four stimulus targets achieved an average accuracy of about 90%, whereas the P300 model with six or more stimulus targets, at five repetitions per trial, achieved accuracy rates over 90.0%. Therefore, four SSVEP stimuli were used to control four types of robot behavior, while six P300 stimuli were chosen to control six types of robot behavior. The 4-class SSVEP and 6-class P300 models achieved average success rates of 90.3% and 91.3%, average response times of 3.65 s and 6.6 s, and average information transfer rates (ITR) of 24.7 bits/min and 18.8 bits/min, respectively. The closed-loop experiments addressed telepresence control of the robot; the objective was to make the robot walk along a white lane marked in an office environment using live video feedback. Comparative studies reveal that the SSVEP model yielded faster responses to the subject's mental activity with less reliance on channel selection, whereas the P300 model was found to be suitable for more classifiable targets and required less training. To conclude, we discuss the existing SSVEP and P300 models for the control of humanoid robots, including the models proposed in this paper.
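Information transfer rates like those quoted above are conventionally computed with the Wolpaw formula, which combines the class count N, the accuracy P, and the time per selection T. A small sketch:

```python
import math

def bits_per_trial(n_classes, accuracy):
    """Wolpaw bits per selection for an N-class interface."""
    if accuracy >= 1.0:
        return math.log2(n_classes)
    if accuracy <= 1.0 / n_classes:
        return 0.0  # no better than chance
    p = accuracy
    return (math.log2(n_classes)
            + p * math.log2(p)
            + (1 - p) * math.log2((1 - p) / (n_classes - 1)))

def itr_bits_per_min(n_classes, accuracy, trial_seconds):
    """Information transfer rate in bits per minute."""
    return bits_per_trial(n_classes, accuracy) * 60.0 / trial_seconds

# the 4-class SSVEP figures from the study: 90.3% accuracy, 3.65 s per selection
itr = itr_bits_per_min(4, 0.903, 3.65)
```

With these figures the formula gives roughly 22.8 bits/min, close to but below the reported 24.7 bits/min; reported ITRs depend on exactly which portions of the trial time are counted, so small discrepancies of this kind are common.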
Cortical Spiking Network Interfaced with Virtual Musculoskeletal Arm and Robotic Arm
Dura-Bernal, Salvador; Zhou, Xianlian; Neymotin, Samuel A.; Przekwas, Andrzej; Francis, Joseph T.; Lytton, William W.
2015-01-01
Embedding computational models in the physical world is a critical step towards constraining their behavior and building practical applications. Here we aim to drive a realistic musculoskeletal arm model using a biomimetic cortical spiking model, and make a robot arm reproduce the same trajectories in real time. Our cortical model consisted of a 3-layered cortex, composed of several hundred spiking model-neurons, which display physiologically realistic dynamics. We interconnected the cortical model to a two-joint musculoskeletal model of a human arm, with realistic anatomical and biomechanical properties. The virtual arm received muscle excitations from the neuronal model, and fed back proprioceptive information, forming a closed-loop system. The cortical model was trained using spike timing-dependent reinforcement learning to drive the virtual arm in a 2D reaching task. Limb position was used to simultaneously control a robot arm using an improved network interface. Virtual arm muscle activations responded to motoneuron firing rates, with virtual arm muscle lengths encoded via population coding in the proprioceptive population. After training, the virtual arm performed reaching movements which were smoother and more realistic than those obtained using a simplistic arm model. This system provided access to both spiking network properties and to arm biophysical properties, including muscle forces. The use of a musculoskeletal virtual arm and the improved control system allowed the robot arm to perform movements which were smoother than those reported in our previous paper using a simplistic arm. This work provides a novel approach consisting of bidirectionally connecting a cortical model to a realistic virtual arm, and using the system output to drive a robotic arm in real time.
Our techniques are applicable to the future development of brain neuroprosthetic control systems, and may enable enhanced brain-machine interfaces with the possibility for finer control of limb prosthetics. PMID:26635598
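Spike-timing-dependent reinforcement learning of the kind used to train such networks is commonly implemented as an eligibility trace that is later converted into a weight change by a global reward signal. A minimal, hypothetical sketch (not the authors' code; time constants and rates are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

n_pre, n_post = 20, 5
w = rng.uniform(0.1, 0.5, (n_pre, n_post))   # synaptic weights
elig = np.zeros_like(w)                      # eligibility traces

TAU_E, LR = 50.0, 0.01                       # trace decay (ms), learning rate

def on_spike_pair(pre, post, dt_ms):
    """Mark a synapse eligible: potentiation-flavored if the presynaptic
    spike preceded the postsynaptic one (dt_ms > 0), depression otherwise."""
    elig[pre, post] += (1.0 if dt_ms > 0 else -0.5) * np.exp(-abs(dt_ms) / 20.0)

def on_reward(reward, dt_ms):
    """A delayed global reward converts the decayed eligibility traces
    into actual weight changes, then clears the traces."""
    global w
    w += LR * reward * elig * np.exp(-dt_ms / TAU_E)
    np.clip(w, 0.0, 1.0, out=w)
    elig[:] = 0.0
```

Pairing a causal pre-before-post spike with a positive reward strengthens that synapse; the same pairing followed by punishment weakens it.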
Roncone, Alessandro; Hoffmann, Matej; Pattacini, Ugo; Fadiga, Luciano; Metta, Giorgio
2016-01-01
This paper investigates a biologically motivated model of peripersonal space through its implementation on a humanoid robot. Guided by the present understanding of the neurophysiology of the fronto-parietal system, we developed a computational model inspired by the receptive fields of polymodal neurons identified, for example, in brain areas F4 and VIP. The experiments on the iCub humanoid robot show that the peripersonal space representation i) can be learned efficiently and in real-time via a simple interaction with the robot, ii) can lead to the generation of behaviors such as avoidance and reaching, and iii) can contribute to understanding the biological principle of motor equivalence. More specifically, with respect to i), the present model contributes a hypothesis about the learning mechanism for peripersonal space. In relation to point ii), we show how a relatively simple controller can exploit the learned receptive fields to generate either avoidance or reaching of an incoming stimulus, and for iii) we show how the robot can select arbitrary body parts as the controlled end-point of an avoidance or reaching movement.
Integrating robotic action with biologic perception: A brain-machine symbiosis theory
NASA Astrophysics Data System (ADS)
Mahmoudi, Babak
In patients with motor disability the natural cyclic flow of information between the brain and external environment is disrupted by their limb impairment. Brain-Machine Interfaces (BMIs) aim to provide new communication channels between the brain and environment by direct translation of the brain's internal states into actions. For enabling the user in a wide range of daily life activities, the challenge is designing neural decoders that autonomously adapt to different tasks, environments, and to changes in the pattern of neural activity. In this dissertation, a novel decoding framework for BMIs is developed in which a computational agent autonomously learns how to translate neural states into action based on maximization of a measure of the shared goal between user and agent. Since the agent and brain share the same goal, a symbiotic relationship between them will evolve; this decoding paradigm is therefore called a Brain-Machine Symbiosis (BMS) framework. A decoding agent was implemented within the BMS framework based on the Actor-Critic method of Reinforcement Learning. The role of the Actor as a neural decoder was to find a mapping between the neural representation of motor states in the primary motor cortex (MI) and robot actions in order to solve reaching tasks. The Actor learned the optimal control policy using an evaluative feedback that was estimated by the Critic directly from the user's neural activity in the Nucleus Accumbens (NAcc). Through a series of computational neuroscience studies in a cohort of rats it was demonstrated that NAcc could provide a useful evaluative feedback by predicting the increase or decrease in the probability of earning reward based on the environmental conditions. Using a closed-loop BMI simulator it was demonstrated that the Actor-Critic decoding architecture was able to adapt to different tasks as well as changes in the pattern of neural activity.
The custom design of a dual micro-wire array enabled simultaneous implantation of MI and NAcc for the development of a full closed-loop system. The Actor-Critic decoding architecture was able to solve the brain-controlled reaching task using a robotic arm by capturing the interdependency between the simultaneous action representation in MI and reward expectation in NAcc.
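The Actor-Critic arrangement described above can be illustrated schematically: the Actor is a policy over robot actions driven by MI firing rates, and a scalar evaluation decoded from NAcc plays the role of the Critic's feedback. This is a hypothetical sketch of the idea, not the dissertation's actual implementation:

```python
import numpy as np

rng = np.random.default_rng(1)

N_NEURONS, N_ACTIONS = 32, 4          # MI units, discrete robot actions
actor_w = np.zeros((N_NEURONS, N_ACTIONS))
LR = 0.1

def decode_action(mi_rates):
    """Actor: softmax policy over robot actions given MI firing rates."""
    logits = mi_rates @ actor_w
    p = np.exp(logits - logits.max())
    p /= p.sum()
    return rng.choice(N_ACTIONS, p=p), p

def update(mi_rates, action, nacc_evaluation):
    """Critic signal decoded from NAcc (here a scalar in [-1, 1]) scales
    a policy-gradient-style update of the Actor's weights."""
    _, p = decode_action(mi_rates)
    grad = -p
    grad[action] += 1.0               # d log pi / d logits for the taken action
    actor_w[:] += LR * nacc_evaluation * np.outer(mi_rates, grad)
```

Repeatedly rewarding one action under a fixed neural state makes the Actor more likely to select that action, which is the adaptation property the closed-loop simulator experiments test.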
Myoelectrically controlled wrist robot for stroke rehabilitation
2013-01-01
Background Robot-assisted rehabilitation is an advanced new technology in stroke rehabilitation to provide intensive training. Post-stroke motor recovery depends on active rehabilitation through voluntary participation of the patient's paretic motor system as early as possible in order to promote reorganization of the brain. However, voluntary residual motor efforts of the affected limb have not been sufficiently involved in most robot-assisted rehabilitation for patients after stroke. The objective of this study is to evaluate the feasibility of robot-assisted rehabilitation using myoelectric control for upper limb motor recovery. Methods In the present study, an exoskeleton-type rehabilitation robotic system was designed to provide voluntarily controlled assistive torque to the affected wrist. Voluntary intention was involved by using the residual surface electromyography (EMG) from the flexor carpi radialis (FCR) and extensor carpi radialis (ECR) on the affected limb to control the mechanical assistance provided by the robotic system during wrist flexion and extension in a 20-session training. The system also applied constant resistive torque to the affected wrist during the training. Sixteen subjects after stroke were recruited to evaluate the tracking performance and therapeutic effects of the myoelectrically controlled robotic system. Results With the myoelectrically controlled assistive torque, stroke survivors could reach a larger range of motion with a significant decrease in the EMG signal from the agonist muscles. The stroke survivors could be trained in the previously unreached range with their voluntary residual EMG on the paretic side. After the 20-session rehabilitation training, there was a non-significant increase in the range of motion and a significant decrease in the root mean square error (RMSE) between the actual wrist angle and target angle. Significant improvements could also be found in muscle strength and clinical scales.
Conclusions These results indicate that robot-aided therapy with voluntary participation of patient’s paretic motor system using myoelectric control might have positive effect on upper limb motor recovery. PMID:23758925
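Proportional myoelectric control of the kind described maps the difference between flexor and extensor activation into an assistive torque. A minimal sketch; the gain and threshold values are illustrative assumptions, not the study's calibration:

```python
def rms(window):
    """Root-mean-square amplitude of a window of raw EMG samples."""
    return (sum(x * x for x in window) / len(window)) ** 0.5

def assist_torque(fcr_rms, ecr_rms, gain=1.0, threshold=0.05):
    """Net wrist assistance from flexor (FCR) minus extensor (ECR)
    activation: positive values assist flexion, negative assist
    extension. A small dead-band threshold rejects baseline noise."""
    drive = fcr_rms - ecr_rms
    if abs(drive) < threshold:
        return 0.0
    return gain * drive
```

Because the torque tracks the patient's own residual EMG, the robot only amplifies voluntary effort rather than replacing it, which is the design rationale the study emphasizes.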
NASA Robotic Neurosurgery Testbed
NASA Technical Reports Server (NTRS)
Mah, Robert
1997-01-01
The detection of tissue interface (e.g., normal tissue, cancer, tumor) has been limited clinically to tactile feedback, temperature monitoring, and the use of a miniature ultrasound probe for tissue differentiation during surgical operations. In neurosurgery, the needle used in the standard stereotactic CT or MRI guided brain biopsy provides no information about the tissue being sampled. The tissue sampled depends entirely upon the accuracy with which the localization provided by the preoperative CT or MRI scan is translated to the intracranial biopsy site. In addition, no information about the tissue being traversed by the needle (e.g., a blood vessel) is provided. Hemorrhage due to the biopsy needle tearing a blood vessel within the brain is the most devastating complication of stereotactic CT/MRI guided brain biopsy. A robotic neurosurgery testbed has been developed at NASA Ames Research Center as a spin-off of technologies from space, aeronautics and medical programs. The invention entitled "Robotic Neurosurgery Leading to Multimodality Devices for Tissue Identification" is nearing a state ready for commercialization. The devices will: 1) improve diagnostic accuracy and precision of general surgery, with near term emphasis on stereotactic brain biopsy, 2) automate tissue identification, with near term emphasis on stereotactic brain biopsy, to permit remote control of the procedure, and 3) reduce morbidity for stereotactic brain biopsy. The commercial impact from this work is the potential development of a whole new generation of smart surgical tools to increase the safety, accuracy and efficiency of surgical procedures. Other potential markets include smart surgical tools for tumor ablation in neurosurgery, general exploratory surgery, prostate cancer surgery, and breast cancer surgery.
Tidoni, Emmanuele; Gergondet, Pierre; Fusco, Gabriele; Kheddar, Abderrahmane; Aglioti, Salvatore M
2017-06-01
The efficient control of our body and successful interaction with the environment are possible through the integration of multisensory information. Brain-computer interfaces (BCI) may allow people with sensorimotor disorders to actively interact with the world. In this study, visual information was paired with auditory feedback to improve BCI control of a humanoid surrogate. Healthy and spinal cord injured (SCI) people were asked to embody a humanoid robot and complete a pick-and-place task by means of a visual evoked potential BCI system. Participants observed the remote environment from the robot's perspective through a head mounted display. Human footstep and computer beep sounds were used as synchronous/asynchronous auditory feedback. Healthy participants achieved better placing accuracy when listening to human footstep sounds relative to a computer-generated sound. SCI people demonstrated more difficulty in steering the robot during asynchronous auditory feedback conditions. Importantly, subjective reports highlighted that the BCI mask overlaying the display did not limit the observation of the scenario or the feeling of being in control of the robot. Overall, the data seem to suggest that sensorimotor-related information may improve the control of external devices. Further studies are required to understand how the contribution of residual sensory channels could improve the reliability of BCI systems.
Neurobionics and the brain-computer interface: current applications and future horizons.
Rosenfeld, Jeffrey V; Wong, Yan Tat
2017-05-01
The brain-computer interface (BCI) is an exciting advance in neuroscience and engineering. In a motor BCI, electrical recordings from the motor cortex of paralysed humans are decoded by a computer and used to drive robotic arms or to restore movement in a paralysed hand by stimulating the muscles in the forearm. Simultaneously integrating a BCI with the sensory cortex will further enhance dexterity and fine control. BCIs are also being developed to: provide ambulation for paraplegic patients through controlling robotic exoskeletons; restore vision in people with acquired blindness; detect and control epileptic seizures; and improve control of movement disorders and memory enhancement. High-fidelity connectivity with small groups of neurons requires microelectrode placement in the cerebral cortex. Electrodes placed on the cortical surface are less invasive but produce inferior fidelity. Scalp surface recording using electroencephalography is much less precise. BCI technology is still in an early phase of development and awaits further technical improvements and larger multicentre clinical trials before wider clinical application and impact on the care of people with disabilities. There are also many ethical challenges to explore as this technology evolves.
Bui, Huu Phuoc; Tomar, Satyendra; Courtecuisse, Hadrien; Audette, Michel; Cotin, Stéphane; Bordas, Stéphane P A
2018-05-01
An error-controlled mesh refinement procedure for needle insertion simulations is presented. As an example, the procedure is applied to simulations of electrode implantation for deep brain stimulation. We take into account the brain shift phenomena occurring when a craniotomy is performed. We observe that the error in the computation of the displacement and stress fields is localised around the needle tip and the needle shaft during needle insertion simulation. By suitably and adaptively refining the mesh in this region, our approach makes it possible to control, and thus reduce, the error whilst maintaining a coarser mesh in other parts of the domain. Through academic and practical examples we demonstrate that our adaptive approach, as compared with a uniform coarse mesh, increases the accuracy of the displacement and stress fields around the needle shaft and, for a given accuracy, saves computational time with respect to a uniform finer mesh. This facilitates real-time simulations. The proposed methodology has direct implications for increasing the accuracy, and controlling the computational expense, of the simulation of percutaneous procedures such as biopsy, brachytherapy, regional anaesthesia, or cryotherapy. Moreover, the proposed approach can be helpful in the development of robotic surgeries because the simulation taking place in the control loop of a robot needs to be accurate, and to occur in real time.
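The error-controlled refinement idea can be illustrated in one dimension: estimate a local error indicator per element, then split only the elements whose indicator exceeds a tolerance. This is a schematic of the estimate-mark-refine cycle under a toy indicator, not the paper's solver:

```python
def refine(nodes, indicator, tol):
    """One estimate-mark-refine pass: split each element whose local
    error indicator exceeds tol by inserting its midpoint."""
    out = [nodes[0]]
    for a, b in zip(nodes, nodes[1:]):
        if indicator(a, b) > tol:
            out.append(0.5 * (a + b))       # refine: add midpoint
        out.append(b)
    return out

def adapt(nodes, indicator, tol, max_passes=10):
    """Repeat refinement passes until every element meets the tolerance."""
    for _ in range(max_passes):
        new = refine(nodes, indicator, tol)
        if len(new) == len(nodes):          # no element was marked: done
            return new
        nodes = new
    return nodes

# toy indicator: error concentrated near a "needle tip" at x = 0.2,
# mimicking the localisation the authors observe around the needle
tip = 0.2
ind = lambda a, b: (b - a) / (1e-3 + abs(0.5 * (a + b) - tip))
mesh = adapt([0.0, 0.25, 0.5, 0.75, 1.0], ind, tol=2.0)
```

The resulting mesh is dense only near the tip, which is the mechanism by which the adaptive approach saves computation relative to uniform refinement.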
Hong Kai Yap; Kamaldin, Nazir; Jeong Hoon Lim; Nasrallah, Fatima A; Goh, James Cho Hong; Chen-Hua Yeow
2017-06-01
In this paper, we present the design, fabrication and evaluation of a soft wearable robotic glove, which can be used with functional magnetic resonance imaging (fMRI) during hand rehabilitation and task-specific training. The soft wearable robotic glove, called MR-Glove, consists of two major components: a) a set of soft pneumatic actuators and b) a glove. The soft pneumatic actuators, which are made of silicone elastomers, generate bending motion and actuate finger joints upon pressurization. The device is MR-compatible as it contains no ferromagnetic materials and operates pneumatically. Our results show that the device did not cause artifacts in fMRI images during hand rehabilitation and task-specific exercises. This study demonstrated the possibility of using fMRI and an MR-compatible soft wearable robotic device to study brain activity and motor performance during hand rehabilitation, and to unravel the functional effects of rehabilitation robotics on brain stimulation.
NASA Astrophysics Data System (ADS)
Schieber, Marc H.
2016-07-01
Control of the human hand has been both difficult to understand scientifically and difficult to emulate technologically. The article by Santello and colleagues in the current issue of Physics of Life Reviews [1] highlights the accelerating pace of interaction between the neuroscience of controlling body movement and the engineering of robotic hands that can be used either autonomously or as part of a motor neuroprosthesis, an artificial body part that moves under control from a human subject's own nervous system. Motor neuroprostheses typically involve a brain-computer interface (BCI) that takes signals from the subject's nervous system or muscles, interprets those signals through a decoding algorithm, and then applies the resulting output to control the artificial device.
Investigation of goal change to optimize upper-extremity motor performance in a robotic environment.
Brewer, Bambi R; Klatzky, Roberta; Markham, Heather; Matsuoka, Yoky
2009-10-01
Robotic devices for therapy have the potential to enable intensive, fully customized home rehabilitation over extended periods for individuals with stroke and traumatic brain injury, thus empowering them to maximize their functional recovery. For robotic rehabilitation to be most effective, systems must have the capacity to assign performance goals to the user and to increment those goals to encourage performance improvement. Otherwise, individuals may plateau at an artificially low level of function. Frequent goal change is needed to motivate improvements in performance by individuals with brain injury; but because of entrenched habits, these individuals may avoid striving for goals that they perceive as becoming ever more difficult. For this reason, implicit, undetectable goal change (distortion) may be more effective than explicit goal change at optimizing the motor performance of some individuals with brain injury. This paper reviews a body of work that provides a basis for incorporating implicit goal change into a robotic rehabilitation paradigm. This work was conducted with individuals without disability to provide foundational knowledge for using goal change in a robotic environment. In addition, we compare motor performance with goal change to performance with no goal or with a static goal for individuals without brain injury. Our results show that goal change can improve motor performance when participants attend to visual feedback. Building on these preliminary results can lead to more effective robotic paradigms for the rehabilitation of individuals with brain injury, including individuals with cerebral palsy.
Leeb, Robert; Perdikis, Serafeim; Tonin, Luca; Biasiucci, Andrea; Tavella, Michele; Creatura, Marco; Molina, Alberto; Al-Khodairy, Abdul; Carlson, Tom; Millán, José D R
2013-10-01
Brain-computer interfaces (BCIs) are no longer only used by healthy participants under controlled conditions in laboratory environments, but also by patients and end-users, controlling applications in their homes or clinics, without BCI experts around. But are the technology and the field mature enough for this? In particular, the successful operation of applications, such as text entry systems or assistive mobility devices like tele-presence robots, requires a good level of BCI control. How much training is needed to achieve such a level? Is it possible to train naïve end-users in 10 days to successfully control such applications? In this work, we report our experiences of training 24 motor-disabled participants at rehabilitation clinics or at the end-users' homes, without BCI experts present. We also share the lessons that we have learned through transferring BCI technologies from the lab to the user's home or clinic. The most important outcome is that 50% of the participants achieved good BCI performance and could successfully control the applications (tele-presence robot and text-entry system). In the case of the tele-presence robot the participants achieved an average performance ratio of 0.87 (max. 0.97), and for the text entry application a mean of 0.93 (max. 1.0). The lessons learned and the gathered user feedback range from pure BCI problems (technical and handling), to common communication issues among the different people involved, and issues encountered while controlling the applications. The points raised in this paper are very widely applicable and we anticipate that they might be faced similarly by other groups moving on to bring BCI technology to the end-user, to home environments and towards application prototype control.
Robotic Stereotaxy in Cranial Neurosurgery: A Qualitative Systematic Review.
Fomenko, Anton; Serletis, Demitre
2017-12-14
Modern-day stereotactic techniques have evolved to tackle the neurosurgical challenge of accurately and reproducibly accessing specific brain targets. Neurosurgical advances have been made in synergy with sophisticated technological developments and engineering innovations such as automated robotic platforms. Robotic systems offer a unique combination of dexterity, durability, indefatigability, and precision. Our objective was to perform a systematic review of robotic integration for cranial stereotactic guidance in neurosurgery. Specifically, we comprehensively analyze the strengths and weaknesses of a spectrum of robotic technologies, past and present, including details pertaining to each system's kinematic specifications and targeting accuracy profiles. Eligible articles on human clinical applications of cranial robotic-guided stereotactic systems between 1985 and 2017 were extracted from several electronic databases, with a focus on stereotactic biopsy procedures, stereoelectroencephalography, and deep brain stimulation electrode insertion. Cranial robotic stereotactic systems feature serial or parallel architectures with 4 to 7 degrees of freedom, and frame-based or frameless registration. Indications for robotic assistance are diversifying, and include stereotactic biopsy, deep brain stimulation and stereoelectroencephalography electrode placement, ventriculostomy, and ablation procedures. Complication rates are low, and mainly consist of hemorrhage. Newer systems benefit from increasing targeting accuracy, intraoperative imaging ability, improved safety profiles, and reduced operating times. We highlight emerging future directions pertaining to the integration of robotic technologies into future neurosurgical procedures. Notably, a trend toward miniaturization, cost-effectiveness, frameless registration, and increasing safety and accuracy characterizes successful stereotactic robotic technologies.
Clinical application of a modular ankle robot for stroke rehabilitation.
Forrester, Larry W; Roy, Anindo; Goodman, Ronald N; Rietschel, Jeremy; Barton, Joseph E; Krebs, Hermano Igo; Macko, Richard F
2013-01-01
Advances in our understanding of neuroplasticity and motor learning post-stroke are now being leveraged with the use of robotics technology to enhance physical rehabilitation strategies. Major advances have been made with upper extremity robotics, which have been tested for efficacy in multi-site trials across the subacute and chronic phases of stroke. In contrast, use of lower extremity robotics to promote locomotor re-learning has been more recent and presents unique challenges by virtue of the complex multi-segmental mechanics of gait. Here we review a programmatic effort to develop and apply the concept of joint-specific modular robotics to the paretic ankle as a means to improve underlying impairments in distal motor control that may have a significant impact on gait biomechanics and balance. An impedance controlled ankle robot module (anklebot) is described as a platform to test the idea that a modular approach can be used to modify training and measure the time profile of treatment response. Pilot studies using seated visuomotor anklebot training with chronic patients are reviewed, along with results from initial efforts to evaluate the anklebot's utility as a clinical tool for assessing intrinsic ankle stiffness. The review includes a brief discussion of future directions for using the seated anklebot training in the earliest phases of sub-acute therapy, and to incorporate neurophysiological measures of cerebro-cortical activity as a means to reveal underlying mechanistic processes of motor learning and brain plasticity associated with robotic training. Finally we conclude with an initial control systems strategy for utilizing the anklebot as a gait training tool that includes integrating an Internal Model-based adaptive controller to both accommodate individual deficit severities and adapt to changes in patient performance.
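An impedance controller like the anklebot's commands torque through a virtual spring-damper coupling the measured ankle state to a reference trajectory. A minimal sketch; the stiffness and damping values are illustrative, not the device's clinical settings:

```python
def impedance_torque(theta, theta_dot, theta_ref, theta_ref_dot,
                     stiffness=5.0, damping=0.5):
    """Torque (N*m) from a virtual spring-damper between the measured
    ankle state (theta, theta_dot) and the reference trajectory.
    Raising `stiffness` gives more robot assistance; lowering it
    leaves more of the effort to the patient."""
    return (stiffness * (theta_ref - theta)
            + damping * (theta_ref_dot - theta_dot))

# ankle lags 0.1 rad behind a target that is moving at 0.2 rad/s
tau = impedance_torque(theta=0.0, theta_dot=0.0,
                       theta_ref=0.1, theta_ref_dot=0.2)
```

Because assistance scales with tracking error rather than prescribing a rigid trajectory, the same control law can accommodate different deficit severities simply by retuning the gains, which is the adaptability the review's control systems strategy builds on.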
Reach and grasp by people with tetraplegia using a neurally controlled robotic arm
Hochberg, Leigh R.; Bacher, Daniel; Jarosiewicz, Beata; Masse, Nicolas Y.; Simeral, John D.; Vogel, Joern; Haddadin, Sami; Liu, Jie; Cash, Sydney S.; van der Smagt, Patrick; Donoghue, John P.
2012-01-01
Paralysis following spinal cord injury (SCI), brainstem stroke, amyotrophic lateral sclerosis (ALS) and other disorders can disconnect the brain from the body, eliminating the ability to carry out volitional movements. A neural interface system (NIS)1–5 could restore mobility and independence for people with paralysis by translating neuronal activity directly into control signals for assistive devices. We have previously shown that people with longstanding tetraplegia can use an NIS to move and click a computer cursor and to control physical devices6–8. Able-bodied monkeys have used an NIS to control a robotic arm9, but it is unknown whether people with profound upper extremity paralysis or limb loss could use cortical neuronal ensemble signals to direct useful arm actions. Here, we demonstrate the ability of two people with long-standing tetraplegia to use NIS-based control of a robotic arm to perform three-dimensional reach and grasp movements. Participants controlled the arm over a broad space without explicit training, using signals decoded from a small, local population of motor cortex (MI) neurons recorded from a 96-channel microelectrode array. One of the study participants, implanted with the sensor five years earlier, also used a robotic arm to drink coffee from a bottle. While robotic reach and grasp actions were not as fast or accurate as those of an able-bodied person, our results demonstrate the feasibility for people with tetraplegia, years after CNS injury, to recreate useful multidimensional control of complex devices directly from a small sample of neural signals. PMID:22596161
Wireless Cortical Brain-Machine Interface for Whole-Body Navigation in Primates
NASA Astrophysics Data System (ADS)
Rajangam, Sankaranarayani; Tseng, Po-He; Yin, Allen; Lehew, Gary; Schwarz, David; Lebedev, Mikhail A.; Nicolelis, Miguel A. L.
2016-03-01
Several groups have developed brain-machine interfaces (BMIs) that allow primates to use cortical activity to control artificial limbs. Yet, it remains unknown whether cortical ensembles could represent the kinematics of whole-body navigation and be used to operate a BMI that moves a wheelchair continuously in space. Here we show that rhesus monkeys can learn to navigate a robotic wheelchair, using their cortical activity as the main control signal. Two monkeys were chronically implanted with multichannel microelectrode arrays that allowed wireless recordings from ensembles of premotor and sensorimotor cortical neurons. Initially, while monkeys remained seated in the robotic wheelchair, passive navigation was employed to train a linear decoder to extract 2D wheelchair kinematics from cortical activity. Next, monkeys employed the wireless BMI to translate their cortical activity into the robotic wheelchair’s translational and rotational velocities. Over time, monkeys improved their ability to navigate the wheelchair toward the location of a grape reward. The navigation was enacted by populations of cortical neurons tuned to whole-body displacement. During practice with the apparatus, we also noticed the presence of a cortical representation of the distance to reward location. These results demonstrate that intracranial BMIs could restore whole-body mobility to severely paralyzed patients in the future.
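The decoder described above, a linear map from cortical ensemble activity to the wheelchair's translational and rotational velocities, can be sketched in a few lines. This is a minimal illustration with simulated firing rates, not the authors' actual decoding pipeline; the ensemble size, bin count, and ridge penalty are assumptions for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical training data: firing rates of 100 neurons over 5000 time bins,
# paired with the wheelchair's translational and rotational velocities
# recorded during passive navigation.
n_bins, n_neurons = 5000, 100
true_W = rng.normal(size=(n_neurons + 1, 2))          # unknown tuning (simulation only)
rates = rng.poisson(lam=5.0, size=(n_bins, n_neurons)).astype(float)
X = np.hstack([rates, np.ones((n_bins, 1))])          # add a bias column
velocities = X @ true_W + rng.normal(scale=0.5, size=(n_bins, 2))

# Fit the linear decoder by regularized least squares (ridge regression).
lam = 1e-2
W = np.linalg.solve(X.T @ X + lam * np.eye(n_neurons + 1), X.T @ velocities)

# Online use: decode [v_translational, v_rotational] from a new bin of rates.
new_rates = rng.poisson(lam=5.0, size=n_neurons).astype(float)
v_trans, v_rot = np.hstack([new_rates, 1.0]) @ W
```

Once fitted, `W` is applied to each incoming bin of rates, so decoding reduces to a single matrix-vector product per control update.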
Roncone, Alessandro; Fadiga, Luciano; Metta, Giorgio
2016-01-01
This paper investigates a biologically motivated model of peripersonal space through its implementation on a humanoid robot. Guided by the present understanding of the neurophysiology of the fronto-parietal system, we developed a computational model inspired by the receptive fields of polymodal neurons identified, for example, in brain areas F4 and VIP. The experiments on the iCub humanoid robot show that the peripersonal space representation i) can be learned efficiently and in real-time via a simple interaction with the robot, ii) can lead to the generation of behaviors like avoidance and reaching, and iii) can contribute to understanding the biological principle of motor equivalence. More specifically, with respect to i) the present model contributes to hypothesizing a learning mechanism for peripersonal space. In relation to point ii) we show how a relatively simple controller can exploit the learned receptive fields to generate either avoidance or reaching of an incoming stimulus and for iii) we show how the robot can select arbitrary body parts as the controlled end-point of an avoidance or reaching movement. PMID:27711136
Frameless robotically targeted stereotactic brain biopsy: feasibility, diagnostic yield, and safety.
Bekelis, Kimon; Radwan, Tarek A; Desai, Atman; Roberts, David W
2012-05-01
Frameless stereotactic brain biopsy has become an established procedure in many neurosurgical centers worldwide. Robotic modifications of image-guided frameless stereotaxy hold promise for making these procedures safer, more effective, and more efficient. The authors hypothesized that robotic brain biopsy is a safe, accurate procedure, with a high diagnostic yield and a safety profile comparable to other stereotactic biopsy methods. This retrospective study included 41 patients undergoing frameless stereotactic brain biopsy of lesions (mean size 2.9 cm) for diagnostic purposes. All patients underwent image-guided, robotic biopsy in which the SurgiScope system was used in conjunction with scalp fiducial markers and a preoperatively selected target and trajectory. Forty-five procedures, with 50 supratentorial targets selected, were performed. The mean operative time was 44.6 minutes for the robotic biopsy procedures. This decreased over the second half of the study by 37%, from 54.7 to 34.5 minutes (p < 0.025). The diagnostic yield was 97.8% per procedure, with a second procedure being diagnostic in the single nondiagnostic case. Complications included one transient worsening of a preexisting deficit (2%) and another deficit that was permanent (2%). There were no infections. Robotic biopsy involving a preselected target and trajectory is safe, accurate, efficient, and comparable to other procedures employing either frame-based stereotaxy or frameless, nonrobotic stereotaxy. It permits biopsy in all patients, including those with small target lesions. Robotic biopsy planning facilitates careful preoperative study and optimization of needle trajectory to avoid sulcal vessels, bridging veins, and ventricular penetration.
Kirchner, Elsa A.; Kim, Su K.; Tabie, Marc; Wöhrle, Hendrik; Maurus, Michael; Kirchner, Frank
2016-01-01
Advanced man-machine interfaces (MMIs) are being developed for teleoperating robots in remote and hard-to-reach places. Such MMIs make use of a virtual environment and can therefore immerse the operator in the robot's environment. In this paper, we present our developed MMI for multi-robot control. Our MMI can adapt to changes in task load and task engagement online. Applying our approach of embedded Brain Reading, we improve user support and the efficiency of interaction. The level of task engagement was inferred from the single-trial detectability of P300-related brain activity that was naturally evoked during interaction. With our approach no secondary task is needed to measure task load. It is based on research results on the single-stimulus paradigm, distribution of brain resources and its effect on the P300 event-related component. It further considers effects of the modulation caused by a delayed reaction time on the P300 component evoked by complex responses to task-relevant messages. We prove our concept using single-trial based machine learning analysis, analysis of averaged event-related potentials and behavioral analysis. As main results we show (1) a significant improvement of runtime needed to perform the interaction tasks compared to a setting in which all subjects could easily perform the tasks. We show that (2) the single-trial detectability of the event-related potential P300 can be used to measure the changes in task load and task engagement during complex interaction while also being sensitive to the level of experience of the operator and (3) can be used to adapt the MMI individually to the different needs of users without increasing total workload. Our online adaptation of the proposed MMI is based on a continuous supervision of the operator's cognitive resources by means of embedded Brain Reading. 
Operators with different qualifications or capabilities receive only as many tasks as they can perform to avoid mental overload as well as mental underload. PMID:27445742
Gentili, Rodolphe J; Oh, Hyuk; Kregling, Alissa V; Reggia, James A
2016-05-19
The human hand's versatility allows for robust and flexible grasping. To obtain such efficiency, many robotic hands include human biomechanical features such as fingers having their last two joints mechanically coupled. Although such coupling enables human-like grasping, controlling the inverse kinematics of such mechanical systems is challenging. Here we propose a cortical model for fine motor control of a humanoid finger, with its last two joints coupled, that learns the inverse kinematics of the effector. This neural model functionally mimics the population vector coding as well as sensorimotor prediction processes of the brain's motor/premotor and parietal regions, respectively. After learning, this neural architecture could both overtly (actual execution) and covertly (mental execution or motor imagery) perform accurate, robust and flexible finger movements while reproducing the main human finger kinematic states. This work contributes to developing neuro-mimetic controllers for dexterous humanoid robotic/prosthetic upper-extremities, and has the potential to promote human-robot interactions.
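The population vector coding that such models mimic can be illustrated with a standard cosine-tuning sketch: each simulated neuron has a preferred direction, and weighting those directions by each neuron's deviation from its baseline rate recovers the intended movement direction. The ensemble size and tuning parameters below are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical ensemble: each neuron has a preferred movement direction in 2D
# and fires with cosine tuning around a baseline rate.
n_neurons = 200
angles = rng.uniform(0, 2 * np.pi, n_neurons)
preferred = np.stack([np.cos(angles), np.sin(angles)], axis=1)

def firing_rates(movement_dir, baseline=10.0, gain=8.0):
    """Cosine-tuned rates for a unit movement-direction vector."""
    return baseline + gain * preferred @ movement_dir

def population_vector(rates, baseline=10.0):
    """Weight each preferred direction by the neuron's deviation from baseline."""
    contributions = (rates - baseline)[:, None] * preferred
    return contributions.sum(axis=0)

target = np.array([np.cos(0.7), np.sin(0.7)])   # intended movement direction
pv = population_vector(firing_rates(target))
decoded = pv / np.linalg.norm(pv)               # points approximately along `target`
```

The larger the ensemble, the closer the normalized population vector aligns with the intended direction, which is why such codes are robust to single-neuron noise.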
Developing Humanoid Robots for Real-World Environments
NASA Technical Reports Server (NTRS)
Stoica, Adrian; Kuhlman, Michael; Assad, Chris; Keymeulen, Didier
2008-01-01
Humanoids are steadily improving in appearance and functionality demonstrated in controlled environments. To address the challenges of operation in the real world, researchers have proposed the use of brain-inspired architectures for robot control, and the use of robot learning techniques that enable the robot to acquire and tune skills and behaviours. In the first part of the paper we introduce new concepts and results in these two areas. First, we present a cerebellum-inspired model that demonstrated efficiency in the sensory-motor control of anthropomorphic arms, and in gait control of dynamic walkers. Then, we present a set of new ideas related to robot learning, emphasizing the importance of developing teaching techniques that support learning. In the second part of the paper we propose the use in robotics of the iterative and incremental development methodologies, in the context of practical task-oriented applications. These methodologies promise to reach system-level integration rapidly, and to identify, early on, the system-level weaknesses to focus on. We apply this methodology in a task targeting the automated assembly of a modular structure using HOAP-2. We confirm that this approach led to rapid development of an end-to-end capability, and offered guidance on which technologies to focus on for gradual improvement of a complete functional system. It is believed that providing Grand Challenge-type milestones in practical task-oriented applications accelerates development. As a meaningful short- to mid-term target we propose the 'IKEA Challenge', aimed at the demonstration of autonomous assembly of various pieces of furniture, from the box, following included written/drawn instructions.
Cognitive patterns: giving autonomy some context
NASA Astrophysics Data System (ADS)
Dumond, Danielle; Stacy, Webb; Geyer, Alexandra; Rousseau, Jeffrey; Therrien, Mike
2013-05-01
Today's robots require a great deal of control and supervision, and are unable to intelligently respond to unanticipated and novel situations. Interactions between an operator and even a single robot take place exclusively at a very low, detailed level, in part because no contextual information about a situation is conveyed or utilized to make the interaction more effective and less time consuming. Moreover, the robot control and sensing systems do not learn from experience and, therefore, do not become better with time or apply previous knowledge to new situations. With multi-robot teams, human operators, in addition to managing the low-level details of navigation and sensor management while operating single robots, are also required to manage inter-robot interactions. To make the most use of robots in combat environments, it will be necessary to have the capability to assign them new missions (including providing them context information), and to have them report information about the environment they encounter as they proceed with their mission. The Cognitive Patterns Knowledge Generation system (CPKG) has the ability to connect to various knowledge-based models, multiple sensors, and to a human operator. The CPKG system comprises three major internal components: Pattern Generation, Perception/Action, and Adaptation, enabling it to create situationally-relevant abstract patterns, match sensory input to a suitable abstract pattern in a multilayered top-down/bottom-up fashion similar to the mechanisms used for visual perception in the brain, and generate new abstract patterns. The CPKG allows the operator to focus on things other than the operation of the robot(s).
Silvoni, Stefano; Cavinato, Marianna; Volpato, Chiara; Cisotto, Giulia; Genna, Clara; Agostini, Michela; Turolla, Andrea; Ramos-Murguialday, Ander; Piccione, Francesco
2013-01-01
In a proof-of-principle prototypical demonstration we describe a new type of brain-machine interface (BMI) paradigm for upper limb motor-training. The proposed technique allows a fast contingent and proportionally modulated stimulation of afferent proprioceptive and motor output neural pathways using operant learning. Continuous and immediate assisted-feedback of force proportional to rolandic rhythm oscillations during actual movements was employed and illustrated with a single case experiment. One hemiplegic patient was trained for 2 weeks coupling somatosensory brain oscillations with force-field control during a robot-mediated center-out motor-task whose execution approaches movements of everyday life. The robot facilitated actual movements adding a modulated force directed to the target, thus providing a non-delayed proprioceptive feedback. Neuro-electric, kinematic, and motor-behavioral measures were recorded in pre- and post-assessments without force assistance. The patient's healthy arm was used as a control, since neither a placebo control nor other control conditions were possible. We observed a generalized and significant kinematic improvement in the affected arm and a spatial accuracy improvement in both arms, together with an increase and focalization of the somatosensory rhythm changes used to provide assisted-force-feedback. The interpretation of the neurophysiological and kinematic evidences reported here is strictly related to the repetition of the motor-task and the presence of the assisted-force-feedback. Results are described as systematic observations only, without firm conclusions about the effectiveness of the methodology. In this prototypical view, the design of appropriate control conditions is discussed. This study presents a novel operant-learning-based BMI-application for motor-training coupling brain oscillations and force feedback during an actual movement.
Integrating sensorimotor systems in a robot model of cricket behavior
NASA Astrophysics Data System (ADS)
Webb, Barbara H.; Harrison, Reid R.
2000-10-01
The mechanisms by which animals manage sensorimotor integration and coordination of different behaviors can be investigated in robot models. In previous work the first author has built a robot that localizes sound based on close modeling of the auditory and neural system in the cricket. It is known that the cricket combines its response to sound with other sensorimotor activities such as an optomotor reflex and reactions to mechanical stimulation of the antennae and cerci. Behavioral evidence suggests some ways these behaviors may be integrated. We have tested the addition of an optomotor response, using an analog VLSI circuit developed by the second author, to the sound localizing behavior and have shown that it can, as in the cricket, improve the directness of the robot's path to sound. In particular it substantially improves behavior when the robot is subject to a motor disturbance. Our aim is to better understand how the insect brain functions in controlling complex combinations of behavior, with the hope that this will also suggest novel mechanisms for sensory integration on robots.
Tsekos, Nikolaos V; Khanicheh, Azadeh; Christoforou, Eftychios; Mavroidis, Constantinos
2007-01-01
The continuous technological progress of magnetic resonance imaging (MRI), as well as its widespread clinical use as a highly sensitive tool in diagnostics and advanced brain research, has brought a high demand for the development of magnetic resonance (MR)-compatible robotic/mechatronic systems. Revolutionary robots guided by real-time three-dimensional (3-D)-MRI allow reliable and precise minimally invasive interventions with relatively short recovery times. Dedicated robotic interfaces used in conjunction with fMRI allow neuroscientists to investigate the brain mechanisms of manipulation and motor learning, as well as to improve rehabilitation therapies. This paper gives an overview of the motivation, advantages, technical challenges, and existing prototypes for MR-compatible robotic/mechatronic devices.
Motor-Skill Learning in an Insect Inspired Neuro-Computational Control System
Arena, Eleonora; Arena, Paolo; Strauss, Roland; Patané, Luca
2017-01-01
In nature, insects show impressive adaptation and learning capabilities. The proposed computational model takes inspiration from specific structures of the insect brain: after proposing key hypotheses on the direct involvement of the mushroom bodies (MBs) and on their neural organization, we developed a new architecture for motor learning to be applied in insect-like walking robots. The proposed model is a nonlinear control system based on spiking neurons. MBs are modeled as a nonlinear recurrent spiking neural network (SNN) with novel characteristics, able to memorize time evolutions of key parameters of the neural motor controller, so that existing motor primitives can be improved. The adopted control scheme enables the structure to efficiently cope with goal-oriented behavioral motor tasks. Here, a six-legged structure, showing a steady-state exponentially stable locomotion pattern, is faced with the need to learn new motor skills: moving through the environment, the structure is able to modulate motor commands and implement an obstacle-climbing procedure. Experimental results on a simulated hexapod robot are reported; they are obtained in a dynamic simulation environment and the robot mimics the structures of Drosophila melanogaster. PMID:28337138
NASA Technical Reports Server (NTRS)
Kadivar, Zahra; Beck, Christopher E.; Rovekamp, Roger N.; O'Malley, Marcia K.; Joyce, Charles A.
2016-01-01
Treatment intensity has a profound effect on motor recovery following neurological injury. The use of robotics has potential to automate these labor-intensive therapy procedures that are typically performed by physical therapists. Further, the use of wearable robotics offers an aspect of portability that may allow for rehabilitation outside the clinic. The authors have developed a soft, portable, lightweight upper extremity wearable robotic device to provide motor rehabilitation of patients with affected upper limbs due to traumatic brain injury (TBI). A key feature of the device demonstrated in this paper is the isolation of shoulder and elbow movements necessary for effective rehabilitation interventions. Herein we present a feasibility study with one subject, demonstrating the device's ability to provide safe, comfortable, and controlled upper extremity movements. Moreover, it is shown that by decoupling shoulder and elbow motions, desired isolated joint actuation can be achieved.
Robots As Intentional Agents: Using Neuroscientific Methods to Make Robots Appear More Social
Wiese, Eva; Metta, Giorgio; Wykowska, Agnieszka
2017-01-01
Robots are increasingly envisaged as our future cohabitants. However, while considerable progress has been made in recent years in terms of their technological realization, the ability of robots to interact with humans in an intuitive and social way is still quite limited. An important challenge for social robotics is to determine how to design robots that can perceive the user’s needs, feelings, and intentions, and adapt to users over a broad range of cognitive abilities. It is conceivable that if robots were able to adequately demonstrate these skills, humans would eventually accept them as social companions. We argue that the best way to achieve this is using a systematic experimental approach based on behavioral and physiological neuroscience methods such as motion/eye-tracking, electroencephalography, or functional near-infrared spectroscopy embedded in interactive human–robot paradigms. This approach requires understanding how humans interact with each other, how they perform tasks together and how they develop feelings of social connection over time, and using these insights to formulate design principles that make social robots attuned to the workings of the human brain. In this review, we put forward the argument that the likelihood of artificial agents being perceived as social companions can be increased by designing them in a way that they are perceived as intentional agents that activate areas in the human brain involved in social-cognitive processing. We first review literature related to social-cognitive processes and mechanisms involved in human–human interactions, and highlight the importance of perceiving others as intentional agents to activate these social brain areas. We then discuss how attribution of intentionality can positively affect human–robot interaction by (a) fostering feelings of social connection, empathy and prosociality, and by (b) enhancing performance on joint human–robot tasks. 
Lastly, we describe circumstances under which attribution of intentionality to robot agents might be disadvantageous, and discuss challenges associated with designing social robots that are inspired by neuroscientific principles. PMID:29046651
Toward more versatile and intuitive cortical brain-machine interfaces.
Andersen, Richard A; Kellis, Spencer; Klaes, Christian; Aflalo, Tyson
2014-09-22
Brain-machine interfaces have great potential for the development of neuroprosthetic applications to assist patients suffering from brain injury or neurodegenerative disease. One type of brain-machine interface is a cortical motor prosthetic, which is used to assist paralyzed subjects. Motor prosthetics to date have typically used the motor cortex as a source of neural signals for controlling external devices. The review will focus on several new topics in the arena of cortical prosthetics. These include using: recordings from cortical areas outside motor cortex; local field potentials as a source of recorded signals; somatosensory feedback for more dexterous control of robotics; and new decoding methods that work in concert to form an ecology of decode algorithms. These new advances promise to greatly accelerate the applicability and ease of operation of motor prosthetics.
Waspe, Adam C; McErlain, David D; Pitelka, Vasek; Holdsworth, David W; Lacefield, James C; Fenster, Aaron
2010-04-01
Preclinical research protocols often require insertion of needles to specific targets within small animal brains. To target biologically relevant locations in rodent brains more effectively, a robotic device has been developed that is capable of positioning a needle along oblique trajectories through a single burr hole in the skull under volumetric microcomputed tomography (micro-CT) guidance. An x-ray compatible stereotactic frame secures the head throughout the procedure using a bite bar, nose clamp, and ear bars. CT-to-robot registration enables structures identified in the image to be mapped to physical coordinates in the brain. Registration is accomplished by injecting a barium sulfate contrast agent as the robot withdraws the needle from predefined points in a phantom. Registration accuracy is affected by the robot-positioning error and is assessed by measuring the surface registration error for the fiducial and target needle tracks (FRE and TRE). This system was demonstrated in situ by injecting 200 µm tungsten beads into rat brains along oblique trajectories through a single burr hole on the top of the skull under micro-CT image guidance. Postintervention micro-CT images of each skull were registered with preintervention high-field magnetic resonance images of the brain to infer the anatomical locations of the beads. Registration using four fiducial needle tracks and one target track produced an FRE and a TRE of 96 and 210 µm, respectively. Evaluation with tissue-mimicking gelatin phantoms showed that locations could be targeted with a mean error of 154 ± 113 µm. The integration of a robotic needle-positioning device with volumetric micro-CT image guidance should increase the accuracy and reduce the invasiveness of stereotactic needle interventions in small animals.
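The FRE and TRE metrics reported above follow standard point-based rigid registration: fit a rotation and translation between image-space and robot-space fiducials by least squares (the Kabsch method), then evaluate the residual at the fiducials (FRE) and at an independent target (TRE). The sketch below is illustrative; the fiducial coordinates, rotation, and noise level are hypothetical, not the study's data.

```python
import numpy as np

def rigid_register(image_pts, robot_pts):
    """Least-squares rigid transform (Kabsch) mapping image to robot coordinates."""
    ci, cr = image_pts.mean(axis=0), robot_pts.mean(axis=0)
    H = (image_pts - ci).T @ (robot_pts - cr)       # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflection
    R = Vt.T @ D @ U.T
    t = cr - R @ ci
    return R, t

# Hypothetical fiducial tracks in image (CT) and robot coordinates, in mm.
rng = np.random.default_rng(2)
image_fids = rng.uniform(-20, 20, size=(4, 3))
th = 0.3
R_true = np.array([[np.cos(th), -np.sin(th), 0.0],
                   [np.sin(th),  np.cos(th), 0.0],
                   [0.0,         0.0,        1.0]])
t_true = np.array([5.0, -3.0, 12.0])
robot_fids = image_fids @ R_true.T + t_true + rng.normal(scale=0.05, size=(4, 3))

R, t = rigid_register(image_fids, robot_fids)

# FRE: RMS residual over the fiducials used to fit the transform.
fre = np.sqrt(np.mean(np.sum((image_fids @ R.T + t - robot_fids) ** 2, axis=1)))
# TRE: error at an independent target point not used in the fit.
target_img = np.array([3.0, 7.0, -4.0])
target_robot = target_img @ R_true.T + t_true
tre = np.linalg.norm(target_img @ R.T + t - target_robot)
```

Because the target track is excluded from the fit, TRE is the more honest predictor of needle-placement accuracy, which is why the study reports both.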
The kinematic architecture of the Active Headframe: A new head support for awake brain surgery.
Malosio, Matteo; Negri, Simone Pio; Pedrocchi, Nicola; Vicentini, Federico; Cardinale, Francesco; Tosatti, Lorenzo Molinari
2012-01-01
This paper presents the novel hybrid kinematic structure of the Active Headframe, a robotic head support to be employed in brain surgery operations for an active and dynamic control of the patient's head position and orientation, particularly addressing awake surgery requirements. The topology has been conceived in order to satisfy all the installation, functional and dynamic requirements. A kinetostatic optimization has been performed to obtain the actual geometric dimensions of the prototype currently being developed.
Effect of Error Augmentation on Brain Activation and Motor Learning of a Complex Locomotor Task
Marchal-Crespo, Laura; Michels, Lars; Jaeger, Lukas; López-Olóriz, Jorge; Riener, Robert
2017-01-01
To date, the functional gains obtained after robot-aided gait rehabilitation training are limited. Error augmenting strategies have a great potential to enhance motor learning of simple motor tasks. However, little is known about the effect of these error modulating strategies on complex tasks, such as relearning to walk after a neurologic accident. Additionally, neuroimaging evaluation of brain regions involved in learning processes could provide valuable information on behavioral outcomes. We investigated the effect of robotic training strategies that augment errors—error amplification and random force disturbance—and training without perturbations on brain activation and motor learning of a complex locomotor task. Thirty-four healthy subjects performed the experiment with a robotic stepper (MARCOS) in a 1.5 T MR scanner. The task consisted of tracking a Lissajous figure presented on a display by coordinating the legs in a gait-like movement pattern. Behavioral results showed that training without perturbations enhanced motor learning in initially less skilled subjects, while error amplification benefited better-skilled subjects. Training with error amplification, however, hampered transfer of learning. Randomly disturbing forces induced learning and promoted transfer in all subjects, probably because the unexpected forces increased subjects' attention. Functional MRI revealed main effects of training strategy and skill level during training. A main effect of training strategy was seen in brain regions typically associated with motor control and learning, such as the basal ganglia, cerebellum, intraparietal sulcus, and angular gyrus. In particular, random disturbance and no perturbation led to stronger brain activation in similar brain regions than error amplification did. Skill-level related effects were observed in the IPS, in parts of the superior parietal lobe (SPL), i.e., precuneus, and temporal cortex.
These neuroimaging findings indicate that gait-like motor learning depends on interplay between subcortical, cerebellar, and fronto-parietal brain regions. An interesting observation was the low activation observed in the brain's reward system after training with error amplification compared to training without perturbations. Our results suggest that to enhance learning of a locomotor task, errors should be augmented based on subjects' skill level. The impacts of these strategies on motor learning, brain activation, and motivation in neurological patients need further investigation. PMID:29021739
Boubchir, Larbi; Touati, Youcef; Daachi, Boubaker; Chérif, Arab Ali
2015-08-01
In thought-based steering of robots, error potentials (ErrP) can appear when the action resulting from the brain-machine interface (BMI) classifier/controller does not correspond to the user's thought. Using the Steady State Visual Evoked Potentials (SSVEP) techniques, ErrP, which appear when a classification error occurs, are not easily recognizable by only examining the temporal or frequency characteristics of EEG signals. A supplementary classification process is therefore needed to identify them in order to stop the course of the action and back up to a recovery state. This paper presents a set of time-frequency (t-f) features for the detection and classification of EEG ErrP in extra-brain activities due to misclassification observed by a user exploiting non-invasive BMI and robot control in the task space. The proposed features are able to characterize and detect ErrP activities in the t-f domain. These features are derived from the information embedded in the t-f representation of EEG signals, and include the Instantaneous Frequency (IF), t-f information complexity, SVD information, energy concentration and sub-bands' energies. The experiment results on real EEG data show that the use of the proposed t-f features for detecting and classifying EEG ErrP achieved an overall classification accuracy up to 97% for 50 EEG segments using 2-class SVM classifier.
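Of the time-frequency features listed above, the sub-band energy family is the simplest to sketch. The snippet below is a deliberately reduced illustration computing relative spectral energy per band from a single FFT; the band edges and function name are hypothetical, and the paper's features are derived from a full time-frequency representation rather than one periodogram.

```python
import numpy as np

def subband_energies(x, fs, bands):
    """Relative spectral energy of signal x within each (lo, hi) Hz band."""
    spec = np.abs(np.fft.rfft(x)) ** 2              # one-sided power spectrum
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    total = spec.sum()
    return [float(spec[(freqs >= lo) & (freqs < hi)].sum() / total) for lo, hi in bands]
```

In a pipeline like the one described, such per-band energies (together with the other t-f features) would be concatenated into the feature vector fed to the 2-class SVM.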
Calibration of the motor-assisted robotic stereotaxy system: MARS.
Heinig, Maximilian; Hofmann, Ulrich G; Schlaefer, Alexander
2012-11-01
The motor-assisted robotic stereotaxy system presents a compact and light-weight robotic system for stereotactic neurosurgery. Our system is designed to position probes in the human brain for various applications, for example, deep brain stimulation. It features five fully automated axes. High positioning accuracy is of utmost importance in robotic neurosurgery. First, the key parameters of the robot's kinematics are determined using an optical tracking system. Next, the positioning errors at the center of the arc--which is equivalent to the target position in stereotactic interventions--are investigated using a set of perpendicular cameras. A modeless robot calibration method is introduced and evaluated. To conclude, the application accuracy of the robot is studied in a phantom trial. We identified the bending of the arc under load as the robot's main error source. A calibration algorithm was implemented to compensate for the deflection of the robot's arc. The mean error after the calibration was 0.26 mm, the 68.27th percentile was 0.32 mm, and the 95.45th was 0.50 mm. The kinematic properties of the robot were measured, and based on the results an appropriate calibration method was derived. With mean errors smaller than currently used mechanical systems, our results show that the robot's accuracy is appropriate for stereotactic interventions.
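A "modeless" calibration of the kind described, where measured deflections are interpolated rather than fit to a parametric model, can be sketched as a lookup-and-interpolate correction, together with the mean/percentile accuracy summary the abstract reports. The angle grid, sag values, and function names below are invented for illustration.

```python
import numpy as np

def compensate(angle_deg, nominal_z, cal_angles, cal_sag):
    """Modeless correction: add the sag interpolated from calibration measurements."""
    return nominal_z + np.interp(angle_deg, cal_angles, cal_sag)

def error_summary(residuals_mm):
    """Post-calibration accuracy figures: mean error and 1-/2-sigma percentiles."""
    r = np.asarray(residuals_mm)
    return float(r.mean()), float(np.percentile(r, 68.27)), float(np.percentile(r, 95.45))
```

Interpolating measured errors avoids committing to a bending model of the arc, at the cost of needing calibration measurements that bracket the clinically used angles.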
Learning in and from brain-based devices.
Edelman, Gerald M
2007-11-16
Biologically based mobile devices have been constructed that differ from robots based on artificial intelligence. These brain-based devices (BBDs) contain simulated brains that autonomously categorize signals from the environment without a priori instruction. Two such BBDs, Darwin VII and Darwin X, are described here. Darwin VII recognizes objects and links categories to behavior through instrumental conditioning. Darwin X puts together the "what," "when," and "where" from cues in the environment into an episodic memory that allows it to find a desired target. Although these BBDs are designed to provide insights into how the brain works, their principles may find uses in building hybrid machines. These machines would combine the learning ability of BBDs with explicitly programmed control systems.
Bayón, C; Lerma, S; Ramírez, O; Serrano, J I; Del Castillo, M D; Raya, R; Belda-Lois, J M; Martínez, I; Rocon, E
2016-11-14
Cerebral Palsy (CP) is a disorder of posture and movement due to a defect in the immature brain. The use of robotic devices as alternative treatment to improve the gait function in patients with CP has increased. Nevertheless, current gait trainers are focused on controlling complete joint trajectories, avoiding postural control and the adaptation of the therapy to a specific patient. This paper presents the applicability of a new robotic platform called CPWalker in children with spastic diplegia. CPWalker consists of a smart walker with body weight and autonomous locomotion support and an exoskeleton for joint motion support. Likewise, CPWalker enables strategies to improve postural control during walking. The integrated robotic platform provides means for testing novel gait rehabilitation therapies in subjects with CP and similar motor disorders. Patient-tailored therapies were programmed in the device for its evaluation in three children with spastic diplegia for 5 weeks. After ten sessions of personalized training with CPWalker, the children improved the mean velocity (51.94 ± 41.97 %), cadence (29.19 ± 33.36 %) and step length (26.49 ± 19.58 %) in each leg. Post-3D gait assessments provided kinematic outcomes closer to normal values than Pre-3D assessments. The results show the potential of the novel robotic platform to serve as a rehabilitation tool. The autonomous locomotion and impedance control enhanced the children's participation during therapies. Moreover, participants' postural control was substantially improved, which indicates the usefulness of the approach based on promoting the patient's trunk control while the locomotion therapy is executed. Although results are promising, further studies with bigger sample size are required.
Resquin, F; Ibañez, J; Gonzalez-Vargas, J; Brunetti, F; Dimbwadyo, I; Alves, S; Carrasco, L; Torres, L; Pons, Jose Luis
2016-08-01
Reaching and grasping are two of the most affected functions after stroke. Hybrid rehabilitation systems combining Functional Electrical Stimulation with Robotic devices have been proposed in the literature to improve rehabilitation outcomes. In this work, we present the combined use of a hybrid robotic system with an EEG-based Brain-Machine Interface to detect the user's movement intentions to trigger the assistance. The platform has been tested in a single session with a stroke patient. The results show how the patient could successfully interact with the BMI and command the assistance of the hybrid system with low latencies. Also, the Feedback Error Learning controller implemented in this system could adjust the required FES intensity to perform the task.
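The Feedback Error Learning (FEL) scheme mentioned for adjusting the FES intensity can be sketched at its core: a fixed feedback controller supplies the error-driven part of the motor command, and that same feedback signal trains a feedforward model that gradually takes over. The class below is a simplified, hypothetical rendering (linear feedforward model, toy plant in the check), not the system's actual controller.

```python
import numpy as np

class FELController:
    """Feedback Error Learning: the feedback command trains a feedforward model."""

    def __init__(self, n_features, lr=0.1, kp=1.0):
        self.w = np.zeros(n_features)   # weights of the linear feedforward model
        self.lr, self.kp = lr, kp

    def command(self, features, error):
        u_fb = self.kp * error                            # conventional feedback term
        u_ff = float(self.w @ features)                   # learned feedforward term
        self.w += self.lr * u_fb * np.asarray(features)   # feedback error drives learning
        return u_ff + u_fb
```

As training progresses the feedback share of the command shrinks, which in the FES setting corresponds to the stimulation intensity being supplied increasingly by the learned model rather than by error correction.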
Brain-robot interface driven plasticity: Distributed modulation of corticospinal excitability.
Kraus, Dominic; Naros, Georgios; Bauer, Robert; Leão, Maria Teresa; Ziemann, Ulf; Gharabaghi, Alireza
2016-01-15
Brain-robot interfaces (BRI) are studied as novel interventions to facilitate functional restoration in patients with severe and persistent motor deficits following stroke. They bridge the impaired connection in the sensorimotor loop by providing brain-state dependent proprioceptive feedback with orthotic devices attached to the hand or arm of the patients. The underlying neurophysiology of this BRI neuromodulation is still largely unknown. We investigated changes of corticospinal excitability with transcranial magnetic stimulation in thirteen right-handed healthy subjects who performed 40 min of kinesthetic motor imagery receiving proprioceptive feedback with a robotic orthosis attached to the left hand contingent to event-related desynchronization of the right sensorimotor cortex in the β-band (16-22 Hz). Neural correlates of this BRI intervention were probed by acquiring the stimulus-response curve (SRC) of both motor evoked potential (MEP) peak-to-peak amplitudes and areas under the curve. In addition, a motor mapping was obtained. The specificity of the effects was studied by comparing two neighboring hand muscles, one BRI-trained and one control muscle. Robust changes of MEP amplitude but not MEP area occurred following the BRI intervention, but only in the BRI-trained muscle. The steep part of the SRC showed an MEP increase, while the plateau of the SRC showed an MEP decrease. MEP mapping revealed a distributed pattern with a decrease of excitability in the hand area of the primary motor cortex, which controlled the BRI, but an increase of excitability in the surrounding somatosensory and premotor cortex. In conclusion, the BRI intervention induced a complex pattern of modulated corticospinal excitability, which may boost subsequent motor learning during physiotherapy.
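The contingency at the heart of this intervention, orthosis feedback gated by β-band event-related desynchronization (ERD), can be sketched as a band-power comparison against a rest baseline. The FFT-based power estimate, the 20% threshold, and the function names are simplifying assumptions for illustration.

```python
import numpy as np

def band_power(x, fs, lo, hi):
    """Total spectral power of x in the [lo, hi] Hz band."""
    spec = np.abs(np.fft.rfft(x)) ** 2
    f = np.fft.rfftfreq(len(x), 1.0 / fs)
    return float(spec[(f >= lo) & (f <= hi)].sum())

def erd_percent(baseline, task, fs, lo=16.0, hi=22.0):
    """Event-related desynchronization: relative beta-power drop vs. baseline."""
    p_ref, p_act = band_power(baseline, fs, lo, hi), band_power(task, fs, lo, hi)
    return 100.0 * (p_ref - p_act) / p_ref

def feedback_on(baseline, task, fs, threshold=20.0):
    """Drive the orthosis only while ERD exceeds a calibration threshold."""
    return erd_percent(baseline, task, fs) >= threshold
```

In the closed loop, `feedback_on` would be evaluated on short sliding windows so that the orthosis moves only while the imagery-related desynchronization is sustained.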
Lazaridou, Asimina; Astrakas, Loukas; Mintzopoulos, Dionyssios; Khanicheh, Azadeh; Singhal, Aneesh B.; Moskowitz, Michael A.; Rosen, Bruce; Tzika, Aria A.
2013-01-01
Stroke is the third leading cause of mortality and a frequent cause of long-term adult impairment. Improved strategies to enhance motor function in individuals with chronic disability from stroke are thus required. Post-stroke therapy may improve rehabilitation and reduce long-term disability; however, objective methods for evaluating the specific impact of rehabilitation are rare. Brain imaging studies on patients with chronic stroke have shown evidence for reorganization of areas showing functional plasticity after a stroke. In this study, we hypothesized that brain mapping using a novel magnetic resonance (MR)-compatible hand device in conjunction with state-of-the-art magnetic resonance imaging (MRI) can serve as a novel biomarker for brain plasticity induced by rehabilitative motor training in patients with chronic stroke. This hypothesis is based on the premises that robotic devices, by stimulating brain plasticity, can assist in restoring movement compromised by stroke-induced pathological changes in the brain and that these changes can then be monitored by advanced MRI. We serially examined 15 healthy controls and 4 patients with chronic stroke. We employed a combination of diffusion tensor imaging (DTI) and volumetric MRI using a 3-tesla (3T) MRI system using a 12-channel Siemens Tim coil and a novel MR-compatible hand-induced robotic device. DTI data revealed that the number of fibers and the average tract length significantly increased after 8 weeks of hand training by 110% and 64%, respectively (p<0.001). New corticospinal tract (CST) fibers projecting progressively closer to the motor cortex appeared during training. Volumetric data analysis showed a statistically significant increase in the cortical thickness of the ventral postcentral gyrus areas of patients after training relative to pre-training cortical thickness (p<0.001). 
We suggest that rehabilitation is possible for a longer period of time after stroke than previously thought, showing that structural plasticity is possible even after 6 months due to retained neuroplasticity. Our study is an example of personalized medicine using advanced neuroimaging methods in conjunction with robotics in the molecular medicine era. PMID:23982596
Boninger, Michael L; Wechsler, Lawrence R; Stein, Joel
2014-11-01
The aim of this study was to describe the current state and latest advances in robotics, stem cells, and brain-computer interfaces in rehabilitation and recovery for stroke. The authors of this summary recently reviewed this work as part of a national presentation. The article represents the information included in each area. Each area has seen great advances and challenges as products move to market and experiments are ongoing. Robotics, stem cells, and brain-computer interfaces all have tremendous potential to reduce disability and lead to better outcomes for patients with stroke. Continued research and investment will be needed as the field moves forward. With this investment, the potential for recovery of function is likely substantial.
Gui, Kai; Liu, Honghai; Zhang, Dingguo
2017-11-01
Robotic exoskeletons for physical rehabilitation have been utilized for retraining patients suffering from paraplegia and enhancing motor recovery in recent years. However, users are not voluntarily involved in most systems. This paper aims to develop a locomotion trainer with multiple gait patterns, which can be controlled by the active motion intention of users. A multimodal human-robot interaction (HRI) system is established to enhance subject's active participation during gait rehabilitation, which includes cognitive HRI (cHRI) and physical HRI (pHRI). The cHRI adopts brain-computer interface based on steady-state visual evoked potential. The pHRI is realized via admittance control based on electromyography. A central pattern generator is utilized to produce rhythmic and continuous lower joint trajectories, and its state variables are regulated by cHRI and pHRI. A custom-made leg exoskeleton prototype with the proposed multimodal HRI is tested on healthy subjects and stroke patients. The results show that voluntary and active participation can be effectively involved to achieve various assistive gait patterns.
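The central pattern generator component can be illustrated with a single Hopf oscillator, whose amplitude (mu) and frequency (omega) are exactly the kind of state variables a cHRI/pHRI layer could regulate. The parameter values and function names below are illustrative only, not the paper's generator.

```python
import numpy as np

def hopf_step(x, y, mu, omega, dt):
    """One Euler step of a Hopf oscillator; its limit cycle has radius sqrt(mu)."""
    r2 = x * x + y * y
    dx = (mu - r2) * x - omega * y
    dy = (mu - r2) * y + omega * x
    return x + dt * dx, y + dt * dy

def joint_trajectory(mu, omega, dt=0.001, steps=5000):
    """Rhythmic signal that could seed a hip or knee reference angle."""
    x, y, out = 0.1, 0.0, []
    for _ in range(steps):
        x, y = hopf_step(x, y, mu, omega, dt)
        out.append(x)
    return np.array(out)
```

Raising mu on a pHRI cue would smoothly enlarge the joint excursion without interrupting the rhythm, which is the property that makes oscillator-based CPGs attractive for assistive gait.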
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Xinmin; Belcher, Andrew H.; Grelewicz, Zachary
Purpose: To develop a control system to correct both translational and rotational head motion deviations in real-time during frameless stereotactic radiosurgery (SRS). Methods: A novel feedback control with a feed-forward algorithm was utilized to correct for the coupling of translation and rotation present in serial kinematic robotic systems. Input parameters for the algorithm include the real-time 6DOF target position, the frame pitch pivot point to target distance constant, and the translational and angular Linac beam off (gating) tolerance constants for patient safety. Testing of the algorithm was done using a 4D (XYZ + pitch) robotic stage, an infrared head position sensing unit and a control computer. The measured head position signal was processed and a resulting command was sent to the interface of a four-axis motor controller, through which four stepper motors were driven to perform motion compensation. Results: The control of the translation of a brain target was decoupled from the control of the rotation. For a phantom study, the corrected position was within a translational displacement of 0.35 mm and a pitch displacement of 0.15° 100% of the time. For a volunteer study, the corrected position was within displacements of 0.4 mm and 0.2° over 98.5% of the time, compared with 10.7% without correction. Conclusions: The authors report a control design approach for both translational and rotational head motion correction. The experiments demonstrated that control performance of the 4D robotic stage meets the submillimeter and subdegree accuracy required by SRS.
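The safety gating described, beam enabled only while the head stays within translational and angular tolerance, is simple to sketch alongside a proportional correction command. The tolerance defaults and gain below are hypothetical placeholders, not the clinical constants, and the feed-forward term of the actual algorithm is omitted.

```python
def beam_enabled(trans_err_mm, pitch_err_deg, trans_tol_mm=0.5, pitch_tol_deg=0.2):
    """Gate the Linac: the beam stays on only while both deviations are in tolerance."""
    return abs(trans_err_mm) <= trans_tol_mm and abs(pitch_err_deg) <= pitch_tol_deg

def correction_command(target, measured, gain=0.5):
    """Proportional feedback toward the planned 6DOF pose (feed-forward omitted)."""
    return [gain * (t - m) for t, m in zip(target, measured)]
```

In a serial kinematic stage the translational command would additionally be adjusted for the motion induced by the pitch correction, which is the coupling the paper's feed-forward term handles.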
Marcus, Hani J; Seneci, Carlo A; Payne, Christopher J; Nandi, Dipankar; Darzi, Ara; Yang, Guang-Zhong
2014-03-01
Over the past decade, advances in image guidance, endoscopy, and tube-shaft instruments have allowed for the further development of keyhole transcranial endoscope-assisted microsurgery, utilizing smaller craniotomies and minimizing exposure and manipulation of unaffected brain tissue. Although such approaches offer the possibility of shorter operating times, reduced morbidity and mortality, and improved long-term outcomes, the technical skills required to perform such surgery are inevitably greater than for traditional open surgical techniques, and they have not been widely adopted by neurosurgeons. Surgical robotics, which has the ability to improve visualization and increase dexterity, therefore has the potential to enhance surgical performance. To evaluate the role of surgical robots in keyhole transcranial endoscope-assisted microsurgery. The technical challenges faced by surgeons utilizing keyhole craniotomies were reviewed, and a thorough appraisal of presently available robotic systems was performed. Surgical robotic systems have the potential to incorporate advances in augmented reality, stereoendoscopy, and jointed-wrist instruments, and therefore to significantly impact the field of keyhole neurosurgery. To date, over 30 robotic systems have been applied to neurosurgical procedures. The vast majority of these robots are best described as supervisory controlled, and are designed for stereotactic or image-guided surgery. Few telesurgical robots are suitable for keyhole neurosurgical approaches, and none are in widespread clinical use in the field. New robotic platforms in minimally invasive neurosurgery must possess clear and unambiguous advantages over conventional approaches if they are to achieve significant clinical penetration.
Toward more versatile and intuitive cortical brain machine interfaces
Andersen, Richard A.; Kellis, Spencer; Klaes, Christian; Aflalo, Tyson
2015-01-01
Brain machine interfaces have great potential in neuroprosthetic applications to assist patients with brain injury and neurodegenerative diseases. One type of BMI is a cortical motor prosthetic which is used to assist paralyzed subjects. Motor prosthetics to date have typically used the motor cortex as a source of neural signals for controlling external devices. The review will focus on several new topics in the arena of cortical prosthetics. These include using 1) recordings from cortical areas outside motor cortex; 2) local field potentials (LFPs) as a source of recorded signals; 3) somatosensory feedback for more dexterous control of robotics; and 4) new decoding methods that work in concert to form an ecology of decode algorithms. These new advances hold promise in greatly accelerating the applicability and ease of operation of motor prosthetics. PMID:25247368
Cyr, André; Boukadoum, Mounir; Thériault, Frédéric
2014-01-01
In this paper, we investigate the operant conditioning (OC) learning process within a bio-inspired paradigm, using artificial spiking neural networks (ASNN) to act as robot brain controllers. In biological agents, OC results in behavioral changes learned from the consequences of previous actions, based on progressive prediction adjustment from rewarding or punishing signals. In a neurorobotics context, virtual and physical autonomous robots may benefit from a similar learning skill when facing unknown and unsupervised environments. In this work, we demonstrate that a simple invariant micro-circuit can sustain OC in multiple learning scenarios. The motivation for this new OC implementation model stems from the relatively complex alternatives that have been described in the computational literature and recent advances in neurobiology. Our elementary kernel includes only a few crucial neurons and synaptic links, and originates from the integration of habituation and spike-timing dependent plasticity as learning rules. Using several tasks of incremental complexity, our results show that a minimal neural component set is sufficient to realize many OC procedures. Hence, with the proposed OC module, designing learning tasks with an ASNN and a bio-inspired robot context leads to simpler neural architectures for achieving complex behaviors. PMID:25120464
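The two learning rules the kernel integrates, habituation and spike-timing-dependent plasticity (STDP), reduce to very small update equations. The exponential pair-based STDP window and decay-style habituation below are textbook forms chosen for illustration; the constants are arbitrary, not taken from the paper.

```python
import math

def stdp_dw(dt_ms, a_plus=0.1, a_minus=0.12, tau_ms=20.0):
    """Pair-based STDP: potentiate when pre fires before post (dt > 0), else depress."""
    if dt_ms > 0:
        return a_plus * math.exp(-dt_ms / tau_ms)
    return -a_minus * math.exp(dt_ms / tau_ms)

def habituate(response, rate=0.2):
    """Habituation: each repeated, unreinforced stimulus scales the response down."""
    return response * (1.0 - rate)
```

In an OC micro-circuit, STDP would adjust the weight from a sensory neuron onto the action pathway while habituation suppresses responses to stimuli that never predict reward.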
Twitching in Sensorimotor Development from Sleeping Rats to Robots
Marques, Hugo Gravato; Iida, Fumiya
2013-01-01
It is still not known how the “rudimentary” movements of fetuses and infants are transformed into the coordinated, flexible, and adaptive movements of adults. In addressing this important issue, we consider a behavior that has been perennially viewed as a functionless by-product of a dreaming brain: the jerky limb movements called myoclonic twitches. Recent work has identified the neural mechanisms that produce twitching as well as those that convey sensory feedback from twitching limbs to the spinal cord and brain. In turn, these mechanistic insights have helped inspire new ideas about the functional roles that twitching might play in the self-organization of spinal and supraspinal sensorimotor circuits. Striking support for these ideas is coming from the field of developmental robotics: When twitches are mimicked in robot models of the musculoskeletal system, basic neural circuitry self-organizes. Mutually inspired biological and synthetic approaches promise not only to produce better robots, but also to solve fundamental problems concerning the developmental origins of sensorimotor maps in the spinal cord and brain. PMID:23787051
Temporal coding of brain patterns for direct limb control in humans.
Müller-Putz, Gernot R; Scherer, Reinhold; Pfurtscheller, Gert; Neuper, Christa
2010-01-01
For individuals with a high spinal cord injury (SCI) not only the lower limbs, but also the upper extremities are paralyzed. A neuroprosthesis can be used to restore the lost hand and arm function in those tetraplegics. The main problem for this group of individuals, however, is the reduced ability to voluntarily operate device controllers. A brain-computer interface provides a non-manual alternative to conventional input devices by translating brain activity patterns into control commands. We show that the temporal coding of an individual mental imagery pattern can be used to control two independent degrees of freedom - grasp and elbow function - of an artificial robotic arm by utilizing a minimum number of EEG scalp electrodes. We describe the procedure from the initial screening to the final application. Of the eight naïve subjects participating in online feedback experiments, four were able to voluntarily control an artificial arm by inducing one motor imagery pattern derived from one EEG derivation only.
Kim, Yoon Jae; Park, Sung Woo; Yeom, Hong Gi; Bang, Moon Suk; Kim, June Sic; Chung, Chun Kee; Kim, Sungwan
2015-08-20
A brain-machine interface (BMI) should be able to help people with disabilities by replacing their lost motor functions. To replace lost functions, robot arms have been developed that are controlled by invasive neural signals. Although invasive neural signals have a high spatial resolution, non-invasive neural signals are valuable because they provide an interface without surgery. Thus, various researchers have developed robot arms driven by non-invasive neural signals. However, robot arm control based on the imagined trajectory of a human hand can be more intuitive for patients. In this study, therefore, an integrated robot arm-gripper system (IRAGS) that is driven by three-dimensional (3D) hand trajectories predicted from non-invasive neural signals was developed and verified. The IRAGS was developed by integrating a six-degree of freedom robot arm and adaptive robot gripper. The system was used to perform reaching and grasping motions for verification. The non-invasive neural signals, magnetoencephalography (MEG) and electroencephalography (EEG), were obtained to control the system. The 3D trajectories were predicted by multiple linear regressions. A target sphere was placed at the terminal point of the real trajectories, and the system was commanded to grasp the target at the terminal point of the predicted trajectories. The average correlation coefficient between the predicted and real trajectories in the MEG case was [Formula: see text] ([Formula: see text]). In the EEG case, it was [Formula: see text] ([Formula: see text]). The success rates in grasping the target plastic sphere were 18.75 and 7.50 % with MEG and EEG, respectively. The success rates of touching the target were 52.50 and 58.75 % respectively. A robot arm driven by 3D trajectories predicted from non-invasive neural signals was implemented, and reaching and grasping motions were performed. 
In most cases, the robot closely approached the target, but the success rate was not very high because the non-invasive neural signal is less accurate. However, the success rate could be sufficiently improved for practical applications by using additional sensors. Robot arm control based on hand trajectories predicted from EEG would allow for portability, and the performance with EEG was comparable to that with MEG.
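The trajectory-prediction step, multiple linear regression from neural features to 3D hand position, can be sketched with an ordinary least-squares fit. Feature construction (band-passed, lagged sensor signals, etc.) is omitted, and all names below are illustrative rather than the study's code.

```python
import numpy as np

def fit_decoder(features, trajectory):
    """Least-squares linear map (with bias) from neural features to 3D positions."""
    X = np.hstack([features, np.ones((len(features), 1))])  # append bias column
    W, *_ = np.linalg.lstsq(X, trajectory, rcond=None)
    return W

def predict(features, W):
    X = np.hstack([features, np.ones((len(features), 1))])
    return X @ W
```

The reported correlation coefficients between predicted and real trajectories would then be computed per coordinate between `predict(...)` output and the measured hand path.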
Knaepen, Kristel; Mierau, Andreas; Swinnen, Eva; Fernandez Tellez, Helio; Michielsen, Marc; Kerckhofs, Eric; Lefeber, Dirk; Meeusen, Romain
2015-01-01
In order to determine optimal training parameters for robot-assisted treadmill walking, it is essential to understand how a robotic device interacts with its wearer, and thus, how parameter settings of the device affect locomotor control. The aim of this study was to assess the effect of different levels of guidance force during robot-assisted treadmill walking on cortical activity. Eighteen healthy subjects walked at 2 km/h on a treadmill with and without assistance of the Lokomat robotic gait orthosis. Event-related spectral perturbations and changes in power spectral density were investigated during unassisted treadmill walking as well as during robot-assisted treadmill walking at 30%, 60% and 100% guidance force (with 0% body weight support). Clustering of independent components revealed three clusters of activity in the sensorimotor cortex during treadmill walking and robot-assisted treadmill walking in healthy subjects. These clusters demonstrated gait-related spectral modulations in the mu, beta and low gamma bands over the sensorimotor cortex related to specific phases of the gait cycle. Moreover, mu and beta rhythms were suppressed in the right primary sensory cortex during treadmill walking compared to robot-assisted treadmill walking with 100% guidance force, indicating significantly larger involvement of the sensorimotor area during treadmill walking compared to robot-assisted treadmill walking. Only marginal differences in the spectral power of the mu, beta and low gamma bands could be identified between robot-assisted treadmill walking with different levels of guidance force. From these results it can be concluded that a high level of guidance force (i.e., 100% guidance force) and thus a less active participation during locomotion should be avoided during robot-assisted treadmill walking. This will optimize the involvement of the sensorimotor cortex which is known to be crucial for motor learning.
Robotic multimodality stereotactic brain tissue identification: work in progress
NASA Technical Reports Server (NTRS)
Andrews, R.; Mah, R.; Galvagni, A.; Guerrero, M.; Papasin, R.; Wallace, M.; Winters, J.
1997-01-01
Real-time identification of tissue would improve procedures such as stereotactic brain biopsy (SBX), functional and implantation neurosurgery, and brain tumor excision. The following have been added to standard SBX equipment: (1) computer-controlled stepper motors to drive the biopsy needle/probe precisely; (2) multiple microprobes to track tissue density, detect blood vessels and changes in blood flow, and distinguish the various tissues being penetrated; (3) neural-net learning programs to allow real-time comparisons of current data with a normative data bank; and (4) three-dimensional graphic displays to follow the probe as it traverses brain tissue. The probe can differentiate substances such as pig brain, differing consistencies of the 'brain-like' foodstuff tofu, and gels made to simulate brain, as well as detect blood vessels embedded in these substances. Multimodality probes should improve the safety, efficacy, and diagnostic accuracy of SBX and other neurosurgical procedures.
ERIC Educational Resources Information Center
Nikelshpur, Dmitry O.
2014-01-01
Similar to mammalian brains, Artificial Neural Networks (ANN) are universal approximators, capable of yielding near-optimal solutions to a wide assortment of problems. ANNs are used in many fields including medicine, internet security, engineering, retail, robotics, warfare, intelligent control, and finance. "ANNs have a tendency to get…
A Robotics-Based Approach to Modeling of Choice Reaching Experiments on Visual Attention
Strauss, Soeren; Heinke, Dietmar
2012-01-01
The paper presents a robotics-based model for choice reaching experiments on visual attention. In these experiments participants were asked to make rapid reach movements toward a target in an odd-color search task, i.e., reaching for a green square among red squares and vice versa (e.g., Song and Nakayama, 2008). Interestingly these studies found that in a high number of trials movements were initially directed toward a distractor and only later were adjusted toward the target. These “curved” trajectories occurred particularly frequently when the target in the directly preceding trial had a different color (priming effect). Our model is embedded in a closed-loop control of a LEGO robot arm aiming to mimic these reach movements. The model is based on our earlier work which suggests that target selection in visual search is implemented through parallel interactions between competitive and cooperative processes in the brain (Heinke and Humphreys, 2003; Heinke and Backhaus, 2011). To link this model with the control of the robot arm we implemented a topological representation of movement parameters following the dynamic field theory (Erlhagen and Schoener, 2002). The robot arm is able to mimic the results of the odd-color search task including the priming effect and also generates human-like trajectories with a bell-shaped velocity profile. Theoretical implications and predictions are discussed in the paper. PMID:22529827
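The dynamic field theory component referenced above (Erlhagen and Schöner, 2002) can be illustrated with a one-dimensional Amari-style neural field, in which local excitation and global inhibition let the stronger of two inputs dominate the movement-parameter representation. This is a minimal sketch with illustrative parameters, not the authors' implementation.

```python
import numpy as np

def simulate_field(input_current, steps=200, dt=0.05, tau=1.0, h=-2.0,
                   w_exc=2.0, w_inh=0.5, sigma=10.0):
    """Evolve a 1-D Amari-style field u(x): du/dt = (-u + h + input + w*f(u))/tau,
    with f a step nonlinearity and w = local excitation - global inhibition."""
    n = len(input_current)
    x = np.arange(n)
    # Gaussian local-excitation kernel minus a constant global inhibition.
    kernel = w_exc * np.exp(-0.5 * ((x - n // 2) / sigma) ** 2) - w_inh
    u = np.full(n, h, dtype=float)
    for _ in range(steps):
        f = (u > 0).astype(float)                  # firing nonlinearity
        interaction = np.convolve(f, kernel, mode='same')
        u += dt / tau * (-u + h + input_current + interaction)
    return u

# Two inputs: a stronger "target" bump (x=70) and a weaker "distractor" (x=30).
x = np.arange(100)
stim = 4.0 * np.exp(-0.5 * ((x - 70) / 5.0) ** 2) \
     + 3.0 * np.exp(-0.5 * ((x - 30) / 5.0) ** 2)
u = simulate_field(stim)
print("selected movement parameter:", int(np.argmax(u)))  # peak near the stronger input
```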
Interfacing insect brain for space applications.
Di Pino, Giovanni; Seidl, Tobias; Benvenuto, Antonella; Sergi, Fabrizio; Campolo, Domenico; Accoto, Dino; Maria Rossini, Paolo; Guglielmelli, Eugenio
2009-01-01
Insects exhibit remarkable navigation capabilities that current control architectures are still far from successfully mimicking and reproducing. In this chapter, we present the results of a study on conceptualizing insect/machine hybrid controllers for improving the autonomy of exploratory vehicles. First, the different principally possible levels of interfacing between insect and machine are examined, followed by a review of current approaches towards hybridity and enabling technologies. Based on the insights of this activity, we propose a double hybrid control architecture which hinges on the concept of an "insect-in-a-cockpit." It integrates both biological/artificial (insect/robot) modules and deliberative/reactive behavior. The basic assumption is that "low-level" tasks are managed by the robot, while the "insect intelligence" is exploited whenever high-level problem solving and decision making is required. Both neural and natural interfacing have been considered to achieve robustness and redundancy of the exchanged information.
Bae, Sung Jin; Jang, Sung Ho; Seo, Jeong Pyo; Chang, Pyung Hun
2017-07-01
The optimal conditions for inducing proper brain activation during the use of rehabilitation robots should be examined to enhance the efficiency of robot rehabilitation based on the concept of brain plasticity. In this study, we investigated differences in cortical activation according to the speed of passive wrist movements performed by a rehabilitation robot in stroke patients. Nine stroke patients with right hemiparesis participated in this study. Passive movements of the affected wrist were performed by the rehabilitation robot at three different speeds: 0.25 Hz (slow), 0.5 Hz (moderate), and 0.75 Hz (fast). We used functional near-infrared spectroscopy to measure brain activity during the passive movements performed by the robot. We measured the group-averaged activation map and the relative changes in oxy-hemoglobin (ΔOxyHb) in two regions of interest, the primary sensorimotor cortex (SM1) and the premotor area (PMA), as well as across all channels. In the group-averaged activation map, the contralateral SM1, PMA and somatosensory association cortex (SAC) showed the greatest significant activation during movements at 0.75 Hz, while no significantly activated area was observed at 0.5 Hz. Regarding ΔOxyHb, no significant difference was observed among the three speeds in any region. In conclusion, the contralateral SM1, PMA and SAC showed the greatest activation at the fast speed (0.75 Hz) rather than the slow (0.25 Hz) or moderate (0.5 Hz) speeds. Our results suggest an optimal speed for execution of wrist rehabilitation robots. Therefore, we believe that our findings might point to several promising applications for future research regarding useful and empirically based robot rehabilitation therapy.
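The ΔOxyHb values reported by fNIRS systems are typically derived from optical-density changes at two wavelengths via the modified Beer-Lambert law. The sketch below solves the resulting 2 × 2 linear system; the extinction coefficients, source-detector distance, and differential pathlength factors are illustrative assumptions, not values from the study.

```python
import numpy as np

def delta_hb(delta_od, ext_coeffs, source_detector_dist, dpf):
    """Solve the modified Beer-Lambert law for concentration changes.

    delta_od   : optical-density changes at two wavelengths
    ext_coeffs : [[eps_HbO(l1), eps_HbR(l1)], [eps_HbO(l2), eps_HbR(l2)]]
    Returns [dHbO, dHbR] in the same concentration units as 1/eps.
    """
    L = source_detector_dist * np.asarray(dpf, dtype=float)  # effective path lengths
    A = np.asarray(ext_coeffs, dtype=float) * L[:, None]     # scale each wavelength row
    return np.linalg.solve(A, np.asarray(delta_od, dtype=float))

# Illustrative numbers only (not from the study):
eps = [[1.5, 3.8],   # 760 nm: HbO, HbR extinction coefficients (1/(mM*cm))
       [2.5, 1.8]]   # 850 nm
d_od = [0.012, 0.021]
d_hbo, d_hbr = delta_hb(d_od, eps, source_detector_dist=3.0, dpf=[6.0, 6.0])
print(f"dHbO={d_hbo:.5f} mM, dHbR={d_hbr:.5f} mM")  # activation: HbO up, HbR down
```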
Surgical bedside master console for neurosurgical robotic system.
Arata, Jumpei; Kenmotsu, Hajime; Takagi, Motoki; Hori, Tatsuya; Miyagi, Takahiro; Fujimoto, Hideo; Kajita, Yasukazu; Hayashi, Yuichiro; Chinzei, Kiyoyuki; Hashizume, Makoto
2013-01-01
We are currently developing a neurosurgical robotic system that facilitates access to residual tumors and improves surgical outcomes of brain tumor removal. The system combines conventional and robotic surgery, allowing for a quick conversion between the procedures. This concept requires a new master console that can be positioned at the surgical bedside and be sterilized. The master console was developed using new technologies, such as a parallel mechanism and pneumatic sensors. The parallel mechanism is a purely passive 5-DOF (degrees of freedom) joystick based on the authors' haptic research. The parallel mechanism enables motion input of conventional brain tumor removal surgery with a compact, intuitive interface that can be used in a conventional surgical environment. In addition, the pneumatic sensors implemented on the mechanism provide an intuitive interface and electrically isolate the tool parts from the mechanism so they can be easily sterilized. The 5-DOF parallel mechanism is compact (17 cm width, 19 cm depth, and 15 cm height), provides a 50 × 50 × 50 mm and 90° workspace, and is highly backdrivable (0.27 N of resistance force, representative of surgical motion). The evaluation tests revealed that the pneumatic sensors can properly measure suction strength, grasping force, and hand contact. In addition, an installability test showed that the master console can be used in a conventional surgical environment. The proposed master console design was shown to be feasible for operative neurosurgery based on comprehensive testing. This master console is currently being tested for master-slave control with a surgical robotic system.
A novel BCI-controlled pneumatic glove system for home-based neurorehabilitation.
Coffey, Aodhán L; Leamy, Darren J; Ward, Tomás E
2014-01-01
Commercially available devices for Brain-Computer Interface (BCI)-controlled robotic stroke rehabilitation are prohibitively expensive for many researchers who are interested in the topic and physicians who would utilize such a device. Additionally, they are cumbersome and require a technician to operate, further limiting the accessibility of such devices for home-based robotic stroke rehabilitation therapy. Presented here is the design, implementation and test of an inexpensive, portable and adaptable BCI-controlled hand therapy device. The system utilizes a soft, flexible, pneumatic glove which can be used to deflect the subject's wrist and fingers. Operation is provided by a custom-designed pneumatic circuit. Air flow is controlled by an embedded system, which receives serial port instructions from a PC running real-time BCI software. System tests demonstrate that glove control can be successfully driven by a real-time BCI. A system such as the one described here may be used to explore closed-loop neurofeedback rehabilitation in stroke relatively inexpensively, and potentially in home environments.
Flight simulation using a Brain-Computer Interface: A pilot, pilot study.
Kryger, Michael; Wester, Brock; Pohlmeyer, Eric A; Rich, Matthew; John, Brendan; Beaty, James; McLoughlin, Michael; Boninger, Michael; Tyler-Kabara, Elizabeth C
2017-01-01
As Brain-Computer Interface (BCI) systems advance for uses such as robotic arm control it is postulated that the control paradigms could apply to other scenarios, such as control of video games, wheelchair movement or even flight. The purpose of this pilot study was to determine whether our BCI system, which involves decoding the signals of two 96-microelectrode arrays implanted into the motor cortex of a subject, could also be used to control an aircraft in a flight simulator environment. The study involved six sessions in which various parameters were modified in order to achieve the best flight control, including plane type, view, control paradigm, gains, and limits. Successful flight was determined qualitatively by evaluating the subject's ability to perform requested maneuvers, maintain flight paths, and avoid control losses such as dives, spins and crashes. By the end of the study, it was found that the subject could successfully control an aircraft. The subject could use both the jet and propeller plane with different views, adopting an intuitive control paradigm. From the subject's perspective, this was one of the most exciting and entertaining experiments she had performed in two years of research. In conclusion, this study provides a proof-of-concept that traditional motor cortex signals combined with a decoding paradigm can be used to control systems besides a robotic arm for which the decoder was developed. Aside from possible functional benefits, it also shows the potential for a new recreational activity for individuals with disabilities who are able to master BCI control. Copyright © 2016 Elsevier Inc. All rights reserved.
Realistic modeling of neurons and networks: towards brain simulation.
D'Angelo, Egidio; Solinas, Sergio; Garrido, Jesus; Casellato, Claudia; Pedrocchi, Alessandra; Mapelli, Jonathan; Gandolfi, Daniela; Prestori, Francesca
2013-01-01
Realistic modeling is a new advanced methodology for investigating brain functions. Realistic modeling is based on a detailed biophysical description of neurons and synapses, which can be integrated into microcircuits. The latter can, in turn, be further integrated to form large-scale brain networks and eventually to reconstruct complex brain systems. Here we provide a review of the realistic simulation strategy and use the cerebellar network as an example. This network has been carefully investigated at molecular and cellular level and has been the object of intense theoretical investigation. The cerebellum is thought to lie at the core of the forward controller operations of the brain and to implement timing and sensory prediction functions. The cerebellum is well described and provides a challenging field in which one of the most advanced realistic microcircuit models has been generated. We illustrate how these models can be elaborated and embedded into robotic control systems to gain insight into how the cellular properties of cerebellar neurons emerge in integrated behaviors. Realistic network modeling opens up new perspectives for the investigation of brain pathologies and for the neurorobotic field.
Adaptation to a cortex controlled robot attached at the pelvis and engaged during locomotion in rats
Song, Weiguo; Giszter, Simon F.
2011-01-01
Brain Machine Interfaces (BMIs) should ideally show robust adaptation of the BMI across different tasks and daily activities. Most BMIs have used over-practiced tasks. Little is known about BMIs in dynamic environments. How are mechanically body-coupled BMIs integrated into ongoing rhythmic dynamics, e.g., in locomotion? To examine this we designed a novel BMI using neural discharge in the hindlimb/trunk motor cortex in rats during locomotion to control a robot attached at the pelvis. We tested neural adaptation when rats experienced (a) control locomotion, (b) ‘simple elastic load’ (a robot load on locomotion without any BMI neural control) and (c) ‘BMI with elastic load’ (in which the robot loaded locomotion and a BMI neural control could counter this load). Rats significantly offset applied loads with the BMI while preserving more normal pelvic height compared to load alone. Adaptation occurred over about 100–200 step cycles in a trial. Firing rates increased in both the loaded conditions compared to baseline. Mean phases of cells’ discharge in the step cycle shifted significantly between BMI and the simple load condition. Over time more BMI cells became positively correlated with the external force and modulated more deeply, and neurons’ network correlations on a 100 ms timescale increased. Loading alone showed none of these effects. The BMI neural changes of rate and force correlations persisted or increased over repeated trials. Our results show that rats have the capacity to use motor adaptation and motor learning to fairly rapidly engage hindlimb/trunk coupled BMIs in their locomotion. PMID:21414932
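A minimal sketch of the kind of linear readout such a BMI might use, mapping binned cortical firing rates to a one-dimensional force command via ridge regression on synthetic data. This is illustrative only, not the authors' decoder.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: 20 motor-cortical units, 500 time bins.
# The "true" force is a noisy linear readout of unit firing rates.
rates = rng.poisson(lam=5.0, size=(500, 20)).astype(float)
true_w = rng.normal(size=20)
force = rates @ true_w + rng.normal(scale=0.5, size=500)

def fit_ridge(X, y, alpha=1.0):
    """Ridge-regression decoder: w = (X^T X + alpha*I)^-1 X^T y."""
    n = X.shape[1]
    return np.linalg.solve(X.T @ X + alpha * np.eye(n), X.T @ y)

w = fit_ridge(rates[:400], force[:400])   # train on the first 400 bins
pred = rates[400:] @ w                    # decode held-out bins
r = np.corrcoef(pred, force[400:])[0, 1]
print(f"held-out correlation: {r:.2f}")
```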
Michmizos, Konstantinos P; Krebs, Hermano Igo
2017-01-01
Robot-aided sensorimotor therapy imposes highly repetitive tasks that can translate into substantial improvement when patients remain cognitively engaged in the clinical procedure, a goal that most children find hard to pursue. Knowing that the child's brain is much more plastic than an adult's, it is reasonable to expect that the clinical gains observed in the adult population during the last two decades would be followed by even greater gains in children. Nonetheless, and despite the multitude of adult studies, in children we are just getting started: there is a scarcity of currently available pediatric robotic rehabilitation devices, and the number of clinical studies that employ them is also very limited. We have recently developed MIT's pedi-Anklebot, an adaptive habilitation robotic device that continuously motivates physically impaired children to do their best by tracking the child's performance and modifying their therapy accordingly. The robot's design is based on a multitude of studies we conducted focusing on ankle sensorimotor control. In this paper, we briefly describe the device and the adaptive environment we built around the impaired children, present the initial clinical results, and discuss how they could steer future trends in pediatric robotic therapy. The results support the potential for future interventions to account for the differences in the sensorimotor control of the targeted limbs and their functional use (rhythmic vs. discrete movements and mechanical impedance training) and to explore how new technological advancements such as augmented reality could employ new knowledge from neuroscience.
Hussain, Irfan; Santarnecchi, Emiliano; Leo, Andrea; Ricciardi, Emiliano; Rossi, Simone; Prattichizzo, Domenico
2017-07-01
Supernumerary robotic limbs are a recently introduced class of wearable robots that, unlike traditional prostheses and exoskeletons, aim at adding extra effectors (i.e., arms, legs, or fingers) to the human user rather than substituting or enhancing the natural ones. However, it is still undefined whether the use of supernumerary robotic limbs could lead to neural modifications in brain dynamics. The illusion of owning a body part has already been demonstrated in many experimental settings, such as those relying on multisensory integration (e.g., the rubber hand illusion), on prostheses, and even on virtual reality. In this paper we present a description of a novel MRI-compatible supernumerary robotic finger together with preliminary observations from two functional magnetic resonance imaging (fMRI) experiments, in which brain activity was measured before and after a period of training with the robotic device and during the use of the MRI-compatible version of the supernumerary robotic finger. Results showed that use of the MRI-compatible robotic finger is safe and does not produce artifacts in MRI images. Moreover, training with the supernumerary robotic finger recruits a network of motor-related cortical regions (i.e., primary and supplementary motor areas), the same motor network engaged by fully physiological voluntary motor gestures.
Progress in Insect-Inspired Optical Navigation Sensors
NASA Technical Reports Server (NTRS)
Thakoor, Sarita; Chahl, Javaan; Zometzer, Steve
2005-01-01
Progress has been made in continuing efforts to develop optical flight-control and navigation sensors for miniature robotic aircraft. The designs of these sensors are inspired by the designs and functions of the vision systems and brains of insects. Two types of sensors of particular interest are polarization compasses and ocellar horizon sensors. The basic principle of polarization compasses was described (but without using the term "polarization compass") in "Insect-Inspired Flight Control for Small Flying Robots" (NPO-30545), NASA Tech Briefs, Vol. 29, No. 1 (January 2005), page 61. To recapitulate: Bees use sky polarization patterns in ultraviolet (UV) light, caused by Rayleigh scattering of sunlight by atmospheric gas molecules, as direction references relative to the apparent position of the Sun. A robotic direction-finding technique based on this concept would be more robust in comparison with a technique based on the direction to the visible Sun because the UV polarization pattern is distributed across the entire sky and, hence, is redundant and can be extrapolated from a small region of clear sky in an elsewhere cloudy sky that hides the Sun.
Observation-based training for neuroprosthetic control of grasping by amputees.
Agashe, Harshavardhan A; Contreras-Vidal, Jose L
2014-01-01
Current brain-machine interfaces (BMIs) allow upper limb amputees to position robotic arms with a high degree of accuracy, but lack the ability to control hand pre-shaping for grasping different objects. We have previously shown that low frequency (0.1-1 Hz) time domain cortical activity recorded at the scalp via electroencephalography (EEG) encodes information about grasp pre-shaping. To transfer this technology to clinical populations such as amputees, the challenge lies in constructing BMI models in the absence of overt training hand movements. Here we show that it is possible to train BMI models using observed grasping movements performed by a robotic hand attached to amputees' residual limb. Three transradial amputees controlled the grasping motion of an attached robotic hand via their EEG, following the action-observation training phase. Over multiple sessions, subjects successfully grasped the presented object (a bottle or a credit card) in 53 ± 16% of trials, demonstrating the validity of the BMI models. Importantly, the validation of the BMI model was through closed-loop performance, which demonstrates generalization of the model to unseen data. These results suggest 'mirror neuron system' properties captured by delta band EEG that allow neural representations for action observation to be used for action control in an EEG-based BMI system.
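The 0.1-1 Hz time-domain EEG features described above can be isolated with a zero-phase band-pass filter. A sketch with SciPy on synthetic data; the sampling rate and signal composition are assumptions for illustration.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

fs = 100.0  # assumed sampling rate (Hz)

def delta_band(eeg, fs, low=0.1, high=1.0, order=2):
    """Zero-phase Butterworth band-pass isolating 0.1-1 Hz activity."""
    sos = butter(order, [low / (fs / 2), high / (fs / 2)],
                 btype='band', output='sos')
    return sosfiltfilt(sos, eeg, axis=-1)

# Synthetic channel: slow 0.5 Hz "movement" component + 10 Hz alpha + noise.
rng = np.random.default_rng(4)
t = np.arange(0, 30, 1 / fs)
slow = np.sin(2 * np.pi * 0.5 * t)
eeg = slow + 0.8 * np.sin(2 * np.pi * 10 * t) + 0.2 * rng.standard_normal(t.size)
filtered = delta_band(eeg, fs)

# The filtered trace should track the slow component far better than raw EEG.
print("corr raw  :", round(float(np.corrcoef(eeg, slow)[0, 1]), 2))
print("corr delta:", round(float(np.corrcoef(filtered, slow)[0, 1]), 2))
```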
Cortical and subcortical mechanisms of brain-machine interfaces.
Marchesotti, Silvia; Martuzzi, Roberto; Schurger, Aaron; Blefari, Maria Laura; Del Millán, José R; Bleuler, Hannes; Blanke, Olaf
2017-06-01
Technical advances in the field of Brain-Machine Interfaces (BMIs) enable users to control a variety of external devices such as robotic arms, wheelchairs, virtual entities and communication systems through the decoding of brain signals in real time. Most BMI systems sample activity from restricted brain regions, typically the motor and premotor cortex, with limited spatial resolution. Despite the growing number of applications, the cortical and subcortical systems involved in BMI control are currently unknown at the whole-brain level. Here, we provide a comprehensive and detailed report of the areas active during on-line BMI control. We recorded functional magnetic resonance imaging (fMRI) data while participants controlled an EEG-based BMI inside the scanner. We identified the regions activated during BMI control and how they overlap with those involved in motor imagery (without any BMI control). In addition, we investigated which regions reflect the subjective sense of controlling a BMI, the sense of agency for BMI-actions. Our data revealed an extended cortical-subcortical network involved in operating a motor-imagery BMI. This includes not only sensorimotor regions but also the posterior parietal cortex, the insula and the lateral occipital cortex. Interestingly, the basal ganglia and the anterior cingulate cortex were involved in the subjective sense of controlling the BMI. These results inform basic neuroscience by showing that the mechanisms of BMI control extend beyond sensorimotor cortices. This knowledge may be useful for the development of BMIs that offer a more natural and embodied feeling of control for the user. Hum Brain Mapp 38:2971-2989, 2017. © 2017 Wiley Periodicals, Inc.
Markert, H; Kaufmann, U; Kara Kayikci, Z; Palm, G
2009-03-01
Language understanding is a long-standing problem in computer science, yet the human brain is capable of processing complex languages with seemingly no difficulty. This paper presents a model for language understanding using biologically plausible neural networks composed of associative memories. The model is able to deal with ambiguities at the single-word and grammatical levels. The language system is embedded into a robot in order to demonstrate the correct semantic understanding of the input sentences by letting the robot perform corresponding actions. For that purpose, a simple neural action planning system has been combined with neural networks for visual object recognition and visual attention control mechanisms.
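Associative memories of the kind referenced above (the Willshaw/Palm type) store binary pattern pairs with a clipped Hebbian rule and retrieve them in a single thresholding step. A toy sketch; the pattern sizes and sparsity are illustrative, not the model's actual dimensions.

```python
import numpy as np

class WillshawMemory:
    """Binary hetero-associative memory with clipped Hebbian learning."""
    def __init__(self, n_in, n_out):
        self.W = np.zeros((n_out, n_in), dtype=np.uint8)

    def store(self, x, y):
        # Clipped Hebbian rule: a synapse is set once any input/output pair co-fires.
        self.W |= np.outer(y, x).astype(np.uint8)

    def recall(self, x):
        # Threshold at the number of active input units (one-step retrieval).
        s = self.W @ x
        return (s >= x.sum()).astype(np.uint8)

def sparse_pattern(n, k, rng):
    p = np.zeros(n, dtype=np.uint8)
    p[rng.choice(n, size=k, replace=False)] = 1
    return p

rng = np.random.default_rng(1)
mem = WillshawMemory(n_in=256, n_out=256)
pairs = [(sparse_pattern(256, 8, rng), sparse_pattern(256, 8, rng))
         for _ in range(20)]
for x, y in pairs:
    mem.store(x, y)

# At this low memory loading, all stored associations are retrieved exactly.
ok = all((mem.recall(x) == y).all() for x, y in pairs)
print("perfect recall:", ok)
```

Sparse patterns keep the weight matrix mostly empty, which is what gives this memory its high capacity before retrieval errors appear.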
Cognitive memory and mapping in a brain-like system for robotic navigation.
Tang, Huajin; Huang, Weiwei; Narayanamoorthy, Aditya; Yan, Rui
2017-03-01
Electrophysiological studies in animals may provide great insight into developing brain-like models of spatial cognition for robots. These studies suggest that the spatial ability of animals requires proper functioning of the hippocampus and the entorhinal cortex (EC). The involvement of the hippocampus in spatial cognition has been extensively studied, both in animal and in theoretical work, such as in the brain-based models by Edelman and colleagues. In this work, we extend these earlier models, with a particular focus on the spatial coding properties of the EC and how it functions as an interface between the hippocampus and the neocortex, as proposed by previous work. By realizing the cognitive memory and mapping functions of the hippocampus and the EC, respectively, we develop a neurobiologically inspired system that enables a mobile robot to perform task-based navigation in a maze environment. Copyright © 2016 Elsevier Ltd. All rights reserved.
Movement Anticipation and EEG: Implications for BCI-Contingent Robot Therapy
Norman, Sumner L.; Dennison, Mark; Wolbrecht, Eric; Cramer, Steven C.; Srinivasan, Ramesh; Reinkensmeyer, David J.
2017-01-01
Brain-computer interfacing is a technology that has the potential to improve patient engagement in robot-assisted rehabilitation therapy. For example, movement intention reduces mu (8-13 Hz) oscillation amplitude over the sensorimotor cortex, a phenomenon referred to as event-related desynchronization (ERD). Initial BCI-enhanced robotic therapy studies have used ERD to trigger robotic assistance for movement, an ERD-contingent assistance paradigm. Here we investigated how ERD changed as a function of audio-visual stimuli, overt movement by the participant, and robotic assistance. Twelve unimpaired subjects played a computer game designed for rehabilitation therapy with their fingers using the FINGER robotic exoskeleton. In the game, the participant and robot matched movement timing to audio-visual stimuli in the form of notes approaching a target on the screen, set to the consistent beat of popular music. The audio-visual stimulation of the game alone did not cause ERD, before or after training. In contrast, overt movement by the subject caused ERD, whether or not the robot assisted the finger movement. Notably, ERD was also present when the subjects remained passive and the robot moved their fingers to play the game. This ERD occurred in anticipation of the passive finger movement, with onset timing similar to that of the overt movement conditions. These results demonstrate that ERD can be contingent on expectation of robotic assistance; that is, the brain generates an anticipatory ERD in expectation of a robot-imposed but predictable movement. This is a caveat that should be considered in designing BCIs for enhancing patient effort in robotically assisted therapy. PMID:26891487
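ERD is conventionally quantified as the percentage change of band power during an event relative to a pre-event baseline, so desynchronization appears as a negative value. A sketch on a synthetic mu rhythm; the sampling rate and signal parameters are assumptions for illustration.

```python
import numpy as np
from scipy.signal import welch

fs = 256  # assumed sampling rate (Hz)

def band_power(x, fs, lo, hi):
    f, pxx = welch(x, fs=fs, nperseg=fs)
    return pxx[(f >= lo) & (f <= hi)].mean()

def erd_percent(baseline, event, fs, band=(8, 13)):
    """ERD% = (P_event - P_baseline) / P_baseline * 100 (negative = desync)."""
    p_base = band_power(baseline, fs, *band)
    p_evt = band_power(event, fs, *band)
    return (p_evt - p_base) / p_base * 100.0

# Synthetic example: a 10 Hz mu rhythm whose amplitude halves during movement.
rng = np.random.default_rng(2)
t = np.arange(0, 4, 1 / fs)
baseline = np.sin(2 * np.pi * 10 * t) + 0.3 * rng.standard_normal(t.size)
event = 0.5 * np.sin(2 * np.pi * 10 * t) + 0.3 * rng.standard_normal(t.size)
print(f"mu-band ERD: {erd_percent(baseline, event, fs):.0f}%")
```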
Grimm, Florian; Naros, Georgios; Gutenberg, Angelika; Keric, Naureen; Giese, Alf; Gharabaghi, Alireza
2015-09-01
Frame-based stereotactic interventions are considered the gold standard for brain biopsies, but they have limitations with regard to flexibility and patient comfort because of the bulky head ring attached to the patient. Frameless image guidance systems that use scalp fiducial markers offer more flexibility and patient comfort but provide less stability and accuracy during drilling and biopsy needle positioning. Head-mounted robot-guided biopsies could provide the advantages of these 2 techniques without the downsides. The goal of this study was to evaluate the feasibility and safety of a robotic guidance device, affixed to the patient's skull through a small mounting platform, for use in brain biopsy procedures. This was a retrospective study of 37 consecutive patients who presented with supratentorial lesions and underwent brain biopsy procedures in which a surgical guidance robot was used to determine clinical outcomes and technical procedural operability. The portable head-mounted device was well tolerated by the patients and enabled stable drilling and needle positioning during surgery. Flexible adjustments of predefined paths and selection of new trajectories were successfully performed intraoperatively without the need for manual settings and fixations. The patients experienced no permanent deficits or infections after surgery. The head-mounted robot-guided approach presented here combines the stability of a bone-mounted set-up with the flexibility and tolerability of frameless systems. By reducing human interference (i.e., manual parameter settings, calibrations, and adjustments), this technology might be particularly useful in neurosurgical interventions that necessitate multiple trajectories.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rohrer, Brandon Robinson; Rothganger, Fredrick H.; Wagner, John S.
The purpose of this LDRD is to develop technology allowing warfighters to provide high-level commands to their unmanned assets, freeing them to command a group of them or commit the bulk of their attention elsewhere. To this end, a brain-emulating cognition and control architecture (BECCA) was developed, incorporating novel and uniquely capable feature creation and reinforcement learning algorithms. BECCA was demonstrated on both a mobile manipulator platform and on a seven-degree-of-freedom serial link robot arm. Existing military ground robots are almost universally teleoperated and occupy the complete attention of an operator. They may remove a soldier from harm's way, but they do not necessarily reduce manpower requirements. Current research efforts to solve the problem of autonomous operation in an unstructured, dynamic environment fall short of the desired performance. In order to increase the effectiveness of unmanned vehicle (UV) operators, we proposed to develop robots that can be 'directed' rather than remote-controlled. They are instructed and trained by human operators, rather than driven. The technical approach is modeled closely on psychological and neuroscientific models of human learning. Two Sandia-developed models are utilized in this effort: the Sandia Cognitive Framework (SCF), a cognitive psychology-based model of human processes, and BECCA, a psychophysical-based model of learning, motor control, and conceptualization. Together, these models span the functional space from perceptuo-motor abilities to high-level motivational and attentional processes.
Minimalistic toy robot to analyze a scenery of speaker-listener condition in autism.
Giannopulu, Irini; Montreynaud, Valérie; Watanabe, Tomio
2016-05-01
Atypical neural architecture impairs communication capabilities and reduces the ability to represent the referential statements of other people in children with autism. During a "speaker-listener" communication scenario, we analyzed verbal and emotional expressions in neurotypical children (n = 20) and in children with autism (n = 20). The speaker was always a child, and the listener was a human or a minimalistic robot that reacts to speech expression by nodding only. Although both groups performed the task, the results suggest that the robot allowed children with autism to encode and conceptualize the exchange within the brain and to externalize it as unconscious emotion (heart rate) and conscious verbal speech (words). Such behavior would indicate that minimalistic artificial environments such as toy robots could serve as a root of neuronal organization and reorganization, with the potential to improve brain activity.
La Vida Robot - High School Engineering Program Combats Engineering Brain Drain
Cameron, Allan; Lajvardi, Fredi
2018-05-04
Carl Hayden High School has built an impressive reputation with its robotics club. At a time when interest in science, math and engineering is declining, the Falcon Robotics club has young people fired up about engineering. Their program in underwater robots (MATE) and FIRST robotics is becoming a national model, not for building robots, but for building engineers. Teachers Fredi Lajvardi and Allan Cameron will present their story (How kids 'from the mean streets of Phoenix took on the best from M.I.T. in the national underwater bot championship' - Wired Magazine, April 2005) and how every student needs the opportunity to 'do real engineering.'
Hyper- and viscoelastic modeling of needle and brain tissue interaction.
Lehocky, Craig A; Yixing Shi; Riviere, Cameron N
2014-01-01
Deep needle insertion into the brain is important for both diagnostic and therapeutic clinical interventions. We have developed an automated system for robotically steering flexible needles within the brain to improve targeting accuracy. In this work, we have developed a finite element needle-tissue interaction model that allows for the investigation of safe parameters for needle steering. The tissue model contains both hyperelastic and viscoelastic properties to simulate the instantaneous and time-dependent responses of brain tissue. Several needle models were developed with varying parameters to study the effects of the parameters on tissue stress, strain and strain rate during needle insertion and rotation. The parameters varied include needle radius, bevel angle, bevel tip fillet radius, insertion speed, and rotation speed. The results will guide the design of safe needle tips and control systems for intracerebral needle steering.
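The abstract's pairing of an instantaneous and a time-dependent tissue response can be illustrated with the simplest such constitutive model, a standard linear solid (Zener model) under step strain. The moduli and time constant below are placeholder values for illustration, not fitted brain-tissue parameters from the paper.

```python
import math

# Standard-linear-solid relaxation: a spring E0 in parallel with a
# Maxwell arm (spring E1 + dashpot with time constant tau). Under a step
# strain the apparent modulus relaxes from E0 + E1 down to E0.
E0, E1, tau = 2000.0, 1500.0, 0.8  # Pa, Pa, s (assumed values)

def relaxation_modulus(t):
    """E(t) for a step strain applied at t = 0."""
    return E0 + E1 * math.exp(-t / tau)

instant = relaxation_modulus(0.0)     # instantaneous (glassy) stiffness
longterm = relaxation_modulus(100.0)  # equilibrium stiffness after relaxation
```

This captures, in one line, why insertion speed matters in such models: faster loading probes the stiffer instantaneous response, slower loading the relaxed one.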
Svoboda, Jan; Lobellová, Veronika; Popelíková, Anna; Ahuja, Nikhil; Kelemen, Eduard; Stuchlík, Aleš
2017-03-01
Although animals often learn and monitor the spatial properties of relevant moving objects such as conspecifics and predators to properly organize their own spatial behavior, the underlying brain substrate has received little attention and hence remains elusive. Because the anterior cingulate cortex (ACC) participates in conflict monitoring and effort-based decision making, and ACC neurons respond to objects in the environment, it may also play a role in the monitoring of moving cues and exerting the appropriate spatial response. We used a robot avoidance task in which a rat had to maintain at least a 25 cm distance from a small programmable robot to avoid a foot shock. In successive sessions, we trained ten Long Evans male rats to avoid a fast-moving robot (4 cm/s), a stationary robot, and a slow-moving robot (1 cm/s). In each condition, the ACC was transiently inactivated by bilateral injections of muscimol in the penultimate session and a control saline injection was given in the last session. Compared to the corresponding saline session, ACC-inactivated rats received more shocks when tested in the fast-moving condition, but not in the stationary or slow robot conditions. Furthermore, ACC-inactivated rats less frequently responded to an approaching robot with appropriate escape responses, although their response to shock stimuli remained preserved. Since we observed no effect on slow or stationary robot avoidance, we conclude that the ACC may support the cognitive effort of monitoring dynamic updates to the position of an object, a role complementary to that of the dorsal hippocampus. Copyright © 2017 Elsevier Inc. All rights reserved.
Pilot clinical trial of a robot-aided neuro-rehabilitation workstation with stroke patients
NASA Astrophysics Data System (ADS)
Krebs, Hermano I.; Hogan, Neville; Aisen, Mindy L.; Volpe, Bruce T.
1996-12-01
This paper summarizes our efforts to apply robotics and automation technology to assist, enhance, quantify, and document neuro-rehabilitation. It reviews a pilot clinical trial involving twenty stroke patients with a prototype robot-aided rehabilitation facility developed at MIT and tested at Burke Rehabilitation Hospital. In particular, we present a few results: (a) on the patient's tolerance of the procedure, (b) whether peripheral manipulation of the impaired limb influences brain recovery, (c) on the development of a robot-aided assessment procedure.
Effect of motor dynamics on nonlinear feedback robot arm control
NASA Technical Reports Server (NTRS)
Tarn, Tzyh-Jong; Li, Zuofeng; Bejczy, Antal K.; Yun, Xiaoping
1991-01-01
A nonlinear feedback robot controller that incorporates the robot manipulator dynamics and the robot joint motor dynamics is proposed. The manipulator dynamics and the motor dynamics are coupled to obtain a third-order dynamic model, and differential geometric control theory is applied to produce a linearized and decoupled robot controller. The derived robot controller operates in the robot task space, thus eliminating the need for decomposition of motion commands into robot joint space commands. Computer simulations were performed to verify the feasibility of the proposed robot controller. The controller was further evaluated experimentally on the PUMA 560 robot arm. The experiments show that the proposed controller produces good trajectory tracking performance and is robust in the presence of model inaccuracies. Compared with a nonlinear feedback robot controller based on the manipulator dynamics only, the proposed robot controller yields conspicuously improved performance.
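The nonlinear-feedback (feedback-linearizing) idea can be sketched for a single rigid link. Note the paper's controller additionally folds in third-order motor dynamics and operates in task space; the second-order plant, gains, and parameters below are simplifying assumptions for illustration only.

```python
import math

# Computed-torque sketch for one link: I*qdd + b*qd + m*g*l*sin(q) = tau.
I, b, m, g, l = 0.5, 0.1, 1.0, 9.81, 0.3  # illustrative plant parameters
Kp, Kd = 100.0, 20.0                      # critically damped outer loop

def computed_torque(q, qd, q_des, qd_des=0.0, qdd_des=0.0):
    # The outer PD loop picks the desired linearized acceleration...
    v = qdd_des + Kd * (qd_des - qd) + Kp * (q_des - q)
    # ...and inverse dynamics cancel gravity and friction, leaving
    # linear, decoupled error dynamics.
    return I * v + b * qd + m * g * l * math.sin(q)

# Explicit-Euler simulation of a step to q_des = 1 rad over 5 s.
dt, q, qd = 1e-3, 0.0, 0.0
for _ in range(5000):
    tau = computed_torque(q, qd, 1.0)
    qdd = (tau - b * qd - m * g * l * math.sin(q)) / I
    qd += qdd * dt
    q += qd * dt
```

With exact cancellation the tracking error obeys e'' + Kd e' + Kp e = 0 and decays to zero; model inaccuracies, which the paper tests on the PUMA 560, show up as imperfect cancellation in the last term.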
EXiO-A Brain-Controlled Lower Limb Exoskeleton for Rhesus Macaques.
Vouga, Tristan; Zhuang, Katie Z; Olivier, Jeremy; Lebedev, Mikhail A; Nicolelis, Miguel A L; Bouri, Mohamed; Bleuler, Hannes
2017-02-01
Recent advances in the field of brain-machine interfaces (BMIs) have demonstrated enormous potential to shape the future of rehabilitation and prosthetic devices. Here, a lower-limb exoskeleton controlled by the intracortical activity of an awake behaving rhesus macaque is presented as a proof-of-concept for a locomotor BMI. A detailed description of the mechanical device, including its innovative features and first experimental results, is provided. During operation, BMI-decoded position and velocity are directly mapped onto the bipedal exoskeleton's motions, which then move the monkey's legs as the monkey remains physically passive. To meet the unique requirements of such an application, the exoskeleton's features include: high output torque with backdrivable actuation, size adjustability, and a safe user-robot interface. In addition, a novel rope transmission is introduced and implemented. To test the performance of the exoskeleton, a mechanical assessment was conducted, which yielded quantifiable results for transparency, efficiency, stiffness, and tracking performance. Usage under both brain control and automated actuation demonstrates the device's capability to fulfill the demanding needs of this application. These results lay the groundwork for further advancement in BMI-controlled devices for primates, including humans.
INL Multi-Robot Control Interface
DOE Office of Scientific and Technical Information (OSTI.GOV)
2005-03-30
The INL Multi-Robot Control Interface controls many robots through a single user interface. The interface includes a robot display window for each robot showing the robot's condition. More than one window can be used depending on the number of robots. The user interface also includes a robot control window configured to receive commands for sending to the respective robot and a multi-robot common window showing information received from each robot.
NASA Astrophysics Data System (ADS)
Fels, Meike; Bauer, Robert; Gharabaghi, Alireza
2015-08-01
Objective. Novel rehabilitation strategies apply robot-assisted exercises and neurofeedback tasks to facilitate intensive motor training. We aimed to disentangle task-specific and subject-related contributions to the perceived workload of these interventions and the related cortical activation patterns. Approach. We assessed the perceived workload with the NASA Task Load Index in twenty-one subjects who were exposed to two different feedback tasks in a cross-over design: (i) brain-robot interface (BRI) with haptic/proprioceptive feedback of sensorimotor oscillations related to motor imagery, and (ii) control of neuromuscular activity with feedback of the electromyography (EMG) of the same hand. We also used electroencephalography to examine the cortical activation patterns beforehand in resting state and during the training session of each task. Main results. The workload profile of BRI feedback differed from EMG feedback and was particularly characterized by the experience of frustration. The frustration level was highly correlated across tasks, suggesting subject-related relevance of this workload component. Those subjects who were specifically challenged by the respective tasks could be detected by an interhemispheric alpha-band network in resting state before the training and by their sensorimotor theta-band activation pattern during the exercise. Significance. Neurophysiological profiles in resting state and during the exercise may provide task-independent workload markers for monitoring and matching participants’ ability and task difficulty of neurofeedback interventions.
Robotic Assisted Microsurgery - RAMS FY'97
NASA Technical Reports Server (NTRS)
1997-01-01
JPL and Microdexterity Systems collaborated to develop new surgical capabilities. They developed a Robot Assisted Microsurgery (RAM) tool for surgeons to use for operating on the eye, ear, brain, and blood vessels with unprecedented dexterity. A surgeon can hold the surgical instrument with motions of 6 degrees of freedom with an accuracy of 25 microns in a 70 cu cm workspace. In 1996, a demonstration was performed to remove a microscopic particle from a simulated eyeball. In 1997, tests were performed at UCLA to compare telerobotic with mechanical operation. In 5 out of 7 tests, the RAM tool performed with a significant improvement in precision over mechanical operation. New design features include: (1) amplified force feedback; (2) simultaneous slave robot instrumentation; (3) index control switch on master handle; and (4) tool control switches. Upgrades include: (1) an increase in computational power; and (2) installation of a hard disk memory storage device for independent operation, including independent operation of the forceps. In 1997, a final demonstration used two telerobots simultaneously in a microsurgical suture procedure to close a slit in a thin sheet of latex rubber, extending the capabilities of microsurgery procedures. After completing trials and demonstrations for the FDA, the potential benefits for thousands of operations can be demonstrated.
Tandem robot control system and method for controlling mobile robots in tandem
Hayward, David R.; Buttz, James H.; Shirey, David L.
2002-01-01
A control system for controlling mobile robots provides a way to control mobile robots, connected in tandem with coupling devices, to navigate across difficult terrain or in closed spaces. The mobile robots can be controlled cooperatively as a coupled system in linked mode or controlled individually as separate robots.
Elnady, Ahmed Mohamed; Zhang, Xin; Xiao, Zhen Gang; Yong, Xinyi; Randhawa, Bubblepreet Kaur; Boyd, Lara; Menon, Carlo
2015-01-01
Traditional, hospital-based stroke rehabilitation can be labor-intensive and expensive. Furthermore, outcomes from rehabilitation are inconsistent across individuals and recovery is hard to predict. Given these uncertainties, numerous technological approaches have been tested in an effort to improve rehabilitation outcomes and reduce the cost of stroke rehabilitation. These techniques include brain-computer interface (BCI), robotic exoskeletons, functional electrical stimulation (FES), and proprioceptive feedback. However, to the best of our knowledge, no studies have combined all these approaches into a rehabilitation platform that facilitates goal-directed motor movements. Therefore, in this paper, we combined all these technologies to test the feasibility of using a BCI-driven exoskeleton with FES (robotic training device) to facilitate motor task completion among individuals with stroke. The robotic training device operated to assist a pre-defined goal-directed motor task. Because it is hard to predict who can utilize this type of technology, we considered whether the ability to adapt skilled movements with proprioceptive feedback would predict who could learn to control a BCI-driven robotic device. To accomplish this aim, we developed a motor task that requires proprioception for completion to assess motor-proprioception ability. Next, we tested the feasibility of the robotic training system in individuals with chronic stroke (n = 9) and found that the training device was well tolerated by all the participants. Ability on the motor-proprioception task did not predict the time to completion of the BCI-driven task. Both participants who could accurately target (n = 6) and those who could not (n = 3) were able to learn to control the BCI device, with each BCI trial lasting on average 2.47 min. Our results showed that the participants' ability to use proprioception to control motor output did not affect their ability to use the BCI-driven exoskeleton with FES.
Based on our preliminary results, we show that our robotic training device has potential for use as therapy for a broad range of individuals with stroke.
Bakkum, Douglas J.; Gamblen, Philip M.; Ben-Ary, Guy; Chao, Zenas C.; Potter, Steve M.
2007-01-01
Here, we and others describe an unusual neurorobotic project, a merging of art and science called MEART, the semi-living artist. We built a pneumatically actuated robotic arm to create drawings, as controlled by a living network of neurons from rat cortex grown on a multi-electrode array (MEA). Such embodied cultured networks formed a real-time closed-loop system which could now behave and receive electrical stimulation as feedback on its behavior. We used MEART and simulated embodiments, or animats, to study the network mechanisms that produce adaptive, goal-directed behavior. This approach to neural interfacing will help instruct the design of other hybrid neural-robotic systems we call hybrots. The interfacing technologies and algorithms developed have potential applications in responsive deep brain stimulation systems and for motor prosthetics using sensory components. In a broader context, MEART educates the public about neuroscience, neural interfaces, and robotics. It has paved the way for critical discussions on the future of bio-art and of biotechnology. PMID:18958276
Fast Dynamical Coupling Enhances Frequency Adaptation of Oscillators for Robotic Locomotion Control
Nachstedt, Timo; Tetzlaff, Christian; Manoonpong, Poramate
2017-01-01
Rhythmic neural signals serve as basis of many brain processes, in particular of locomotion control and generation of rhythmic movements. It has been found that specific neural circuits, named central pattern generators (CPGs), are able to autonomously produce such rhythmic activities. In order to tune, shape and coordinate the produced rhythmic activity, CPGs require sensory feedback, i.e., external signals. Nonlinear oscillators are a standard model of CPGs and are used in various robotic applications. A special class of nonlinear oscillators are adaptive frequency oscillators (AFOs). AFOs are able to adapt their frequency toward the frequency of an external periodic signal and to keep this learned frequency once the external signal vanishes. AFOs have been successfully used, for instance, for resonant tuning of robotic locomotion control. However, the choice of parameters for a standard AFO is characterized by a trade-off between the speed of the adaptation and its precision and, additionally, is strongly dependent on the range of frequencies the AFO is confronted with. As a result, AFOs are typically tuned such that they require a comparably long time for their adaptation. To overcome the problem, here, we improve the standard AFO by introducing a novel adaptation mechanism based on dynamical coupling strengths. The dynamical adaptation mechanism enhances both the speed and precision of the frequency adaptation. In contrast to standard AFOs, in this system, the interplay of dynamics on short and long time scales enables fast as well as precise adaptation of the oscillator for a wide range of frequencies. Amongst others, a very natural implementation of this mechanism is in terms of neural networks. The proposed system enables robotic applications which require fast retuning of locomotion control in order to react to environmental changes or conditions. PMID:28377710
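A minimal adaptive-frequency (Hopf) oscillator of the kind this abstract builds on can be written in a few lines. This is the standard fixed-coupling AFO, not the paper's dynamically coupled variant, and the gains, time step, and teaching signal are illustrative choices.

```python
import math

def simulate_afo(f_teach=1.5, eps=2.0, gamma=8.0, mu=1.0, dt=1e-3, T=200.0):
    """Hopf oscillator with frequency adaptation driven by a periodic
    teaching signal F(t); omega drifts toward the signal's angular frequency
    and is retained once F vanishes."""
    x, y, omega = 1.0, 0.0, 2 * math.pi * 0.5  # start far from the target
    for step in range(int(T / dt)):
        t = step * dt
        F = math.sin(2 * math.pi * f_teach * t)   # external periodic input
        r2 = x * x + y * y
        dx = gamma * (mu - r2) * x - omega * y + eps * F
        dy = gamma * (mu - r2) * y + omega * x
        domega = -eps * F * y / math.sqrt(r2)     # frequency adaptation rule
        x += dx * dt
        y += dy * dt
        omega += domega * dt
    return omega

omega_final = simulate_afo()  # should approach 2*pi*1.5 rad/s
```

The trade-off the paper targets is visible here: a larger eps adapts faster but leaves more ripple on omega, which is what the proposed dynamical coupling strengths are meant to resolve.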
Performance evaluation of haptic hand-controllers in a robot-assisted surgical system.
Zareinia, Kourosh; Maddahi, Yaser; Ng, Canaan; Sepehri, Nariman; Sutherland, Garnette R
2015-12-01
This paper presents the experimental evaluation of three commercially available haptic hand-controllers to determine which was most suitable to the participants. Two surgeons and seven engineers performed two peg-in-hole tasks with different levels of difficulty. Each operator guided the end-effector of a Kuka manipulator that held surgical forceps and was equipped with a surgical microscope. Sigma 7, HD(2) and PHANToM Premium 3.0 hand-controllers were compared. Ten measures were adopted to evaluate operators' performances with respect to effort, speed and accuracy in completing a task, operator improvement during the tests, and the force applied by each haptic device. The best performance was observed with the Premium 3.0; its hand-piece could be held in a manner similar to the way surgeons hold conventional tools. Hand-controllers with a linkage structure similar to the human upper extremity take advantage of the inherent human brain connectome, resulting in improved surgeon performance during robot-assisted surgery. Copyright © 2015 John Wiley & Sons, Ltd.
Cognitive and sociocultural aspects of robotized technology: innovative processes of adaptation
NASA Astrophysics Data System (ADS)
Kvesko, S. B.; Kvesko, B. B.; Kornienko, M. A.; Nikitina, Y. A.; Pankova, N. M.
2018-05-01
The paper dwells upon the interaction between socio-cultural phenomena and the cognitive characteristics of robotized technology. An interdisciplinary approach was employed in order to cast light on the manifold and multilevel identity of scientific advances in robotized technology within the mental realm. Analyzing robotized technology from the viewpoint of its significance for modern society is one of the emerging trends in the contemporary scientific realm. The robots now in production are capable of interacting with people; this results in a growing need for studies on the social status of robotized technological items. The socio-cultural aspect of cognitive robotized technology is reflected in the fact that nature becomes 'aware' of itself via the human brain, and human beings tend to strive for perfection in their intellectual and moral dimensions.
Coordinated Control Of Mobile Robotic Manipulators
NASA Technical Reports Server (NTRS)
Seraji, Homayoun
1995-01-01
Computationally efficient scheme developed for on-line coordinated control of both manipulation and mobility of robots that include manipulator arms mounted on mobile bases. Applicable to variety of mobile robotic manipulators, including robots that move along tracks (typically, painting and welding robots), robots mounted on gantries and capable of moving in all three dimensions, wheeled robots, and compound robots (consisting of robots mounted on other robots). Theoretical basis discussed in several prior articles in NASA Tech Briefs, including "Increasing the Dexterity of Redundant Robots" (NPO-17801), "Redundant Robot Can Avoid Obstacles" (NPO-17852), "Configuration-Control Scheme Copes With Singularities" (NPO-18556), "More Uses for Configuration Control of Robots" (NPO-18607/NPO-18608).
Tidoni, Emmanuele; Abu-Alqumsan, Mohammad; Leonardis, Daniele; Kapeller, Christoph; Fusco, Gabriele; Guger, Cristoph; Hintermuller, Cristoph; Peer, Angelika; Frisoli, Antonio; Tecchia, Franco; Bergamasco, Massimo; Aglioti, Salvatore Maria
2017-09-01
The development of technological applications that allow people to control and embody external devices within social interaction settings represents a major goal for current and future brain-computer interface (BCI) systems. Prior research has suggested that embodied systems may improve BCI end-users' experience and accuracy in controlling external devices. Along these lines, we developed an immersive P300-based BCI application with a head-mounted display for virtual-local and robotic-remote social interactions and explored, in a group of healthy participants, the role of proprioceptive feedback in the control of a virtual surrogate (Study 1). Moreover, we compared the performance of a small group of people with spinal cord injury (SCI) to a control group of healthy subjects during virtual and robotic social interactions (Study 2), where both groups received proprioceptive stimulation. Our attempt to combine immersive environments, BCI technologies and the neuroscience of body ownership suggests that providing realistic multisensory feedback still represents a challenge. Results have shown that healthy participants and people living with SCI used the BCI within the immersive scenarios with good levels of performance (as indexed by task accuracy, optimization calls and Information Transfer Rate) and perceived control of the surrogates. Proprioceptive feedback did not alter performance measures or body ownership sensations. Further studies are necessary to test whether sensorimotor experience represents an opportunity to improve the use of future embodied BCI applications.
NASA Astrophysics Data System (ADS)
Zheng, Taixiong
2005-12-01
A neuro-fuzzy network-based approach for robot motion in an unknown environment is proposed. In order to control robot motion in an unknown environment, the behavior of the robot was classified into moving to the goal and avoiding obstacles. Then, according to the dynamics of the robot and its behavioral characteristics in an unknown environment, fuzzy control rules were introduced to control the robot's motion. Finally, a six-layer neuro-fuzzy network was designed to map what the robot sensed to motion control commands. After being trained, the network may be used for robot motion control. Simulation results show that the proposed approach is effective for robot motion control in unknown environments.
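The two fuzzy behaviors this abstract names (goal seeking and obstacle avoidance) can be sketched with two rules and weighted-average defuzzification. The membership function, sign conventions, and blending are our own illustrative choices, not the paper's trained six-layer network.

```python
import math

def membership_near(d, d_safe=0.5, d_far=1.5):
    """Degree to which an obstacle at distance d counts as 'near'
    (1 inside d_safe, 0 beyond d_far, linear ramp in between)."""
    return min(1.0, max(0.0, (d_far - d) / (d_far - d_safe)))

def steer(heading_to_goal, heading_to_obstacle, obstacle_dist):
    """Blend two fuzzy rules into one steering command in [-1, 1]."""
    w_avoid = membership_near(obstacle_dist)
    w_goal = 1.0 - w_avoid
    # Rule 1: IF obstacle is near THEN turn away from it.
    turn_avoid = -math.copysign(1.0, heading_to_obstacle)
    # Rule 2: IF obstacle is far THEN turn toward the goal.
    turn_goal = max(-1.0, min(1.0, heading_to_goal))
    # Weighted-average defuzzification of the two rule outputs.
    return w_goal * turn_goal + w_avoid * turn_avoid
```

In the paper this blending is not hand-coded but learned by the network's membership and rule layers; the structure of the computation is the same.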
Simonetti, Davide; Zollo, Loredana; Milighetti, Stefano; Miccinilli, Sandra; Bravi, Marco; Ranieri, Federico; Magrone, Giovanni; Guglielmelli, Eugenio; Di Lazzaro, Vincenzo; Sterzi, Silvia
2017-01-01
Today, neurological diseases such as stroke represent one of the leading causes of long-term disability. Many research efforts have been focused on designing new and effective rehabilitation strategies. In particular, robotic treatment for upper limb stroke rehabilitation has received significant attention due to its ability to provide high-intensity and repetitive movement therapy with less effort than traditional methods. In addition, the development of non-invasive brain stimulation techniques such as transcranial Direct Current Stimulation (tDCS) has demonstrated the capability of modulating brain excitability, thus increasing motor performance. The combination of these two methods is expected to enhance functional and motor recovery after stroke; to this purpose, the current trends in this research field are presented and discussed through an in-depth analysis of the state of the art. The heterogeneity and the restricted number of collected studies make it difficult to perform a systematic review. However, analysis of the published data seems to indicate that the association of tDCS with robotic training yields the same clinical gain as robotic therapy alone. Future studies should investigate combined approaches tailored to the individual patient's characteristics, critically evaluating the brain areas to be targeted and the induced functional changes. PMID:28588467
Subbian, Vignesh; Meunier, Jason M; Korfhagen, Joseph J; Ratcliff, Jonathan J; Shaw, George J; Beyette, Fred R
2014-01-01
Post-Concussion Syndrome (PCS) is a common sequela of mild Traumatic Brain Injury (mTBI). Currently, there is no reliable test to determine which patients will develop PCS following an mTBI. As a result, clinicians are challenged to identify patients at high risk for subsequent PCS. Hence, there is a need to develop an objective test that can guide clinical risk stratification and predict the likelihood of PCS at the initial point of care in an Emergency Department (ED). This paper presents the results of robot-assisted neurologic testing completed on mTBI patients in the ED and its ability to predict PCS at 3 weeks post-injury. Preliminary results show that abnormal proprioception, as measured using robotic testing, is associated with higher risk of developing PCS following mTBI. In this pilot study, proprioceptive measures obtained through robotic testing had a 77% specificity (95% CI: 46%-94%) and a 64% sensitivity (95% CI: 41%-82%).
A Semisupervised Support Vector Machines Algorithm for BCI Systems
Qin, Jianzhao; Li, Yuanqing; Sun, Wei
2007-01-01
As an emerging technology, brain-computer interfaces (BCIs) bring us new communication interfaces which translate brain activities into control signals for devices like computers, robots, and so forth. In this study, we propose a semisupervised support vector machine (SVM) algorithm for brain-computer interface (BCI) systems, aiming at reducing the time-consuming training process. In this algorithm, we apply a semisupervised SVM for translating the features extracted from the electrical recordings of brain into control signals. This SVM classifier is built from a small labeled data set and a large unlabeled data set. Meanwhile, to reduce the time for training semisupervised SVM, we propose a batch-mode incremental learning method, which can also be easily applied to the online BCI systems. Additionally, it is suggested in many studies that common spatial pattern (CSP) is very effective in discriminating two different brain states. However, CSP needs a sufficient labeled data set. In order to overcome the drawback of CSP, we suggest a two-stage feature extraction method for the semisupervised learning algorithm. We apply our algorithm to two BCI experimental data sets. The offline data analysis results demonstrate the effectiveness of our algorithm. PMID:18368141
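To make the self-training idea behind this abstract concrete without the paper's SVM and CSP machinery, here is a dependency-free sketch in which a nearest-centroid classifier, fit on a small labeled set, iteratively labels its most confident unlabeled points in batches. The stand-in classifier, confidence measure, batch schedule, and synthetic data are all our assumptions.

```python
import math
import random

def centroid(points):
    n = len(points)
    return [sum(p[i] for p in points) / n for i in range(len(points[0]))]

def dist(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def self_train(labeled, unlabeled, rounds=5, batch=10):
    """Semisupervised self-training: repeatedly label the most confident
    unlabeled points (largest margin between the two class centroids)."""
    labeled = {c: list(pts) for c, pts in labeled.items()}
    pool = list(unlabeled)
    for _ in range(rounds):
        cents = {c: centroid(pts) for c, pts in labeled.items()}
        scored = sorted(pool, key=lambda p: -abs(dist(p, cents[0]) - dist(p, cents[1])))
        for p in scored[:batch]:  # take the most confident points first
            c = min(cents, key=lambda k: dist(p, cents[k]))
            labeled[c].append(p)
            pool.remove(p)
        if not pool:
            break
    return {c: centroid(pts) for c, pts in labeled.items()}

# Two synthetic classes: only 2 labeled points each, 100 unlabeled points.
rng = random.Random(0)
make = lambda cx, cy, n: [(cx + rng.gauss(0, 0.5), cy + rng.gauss(0, 0.5)) for _ in range(n)]
labeled = {0: make(-2, 0, 2), 1: make(2, 0, 2)}
unlabeled = make(-2, 0, 50) + make(2, 0, 50)
cents = self_train(labeled, unlabeled)
```

The paper's batch-mode incremental SVM follows the same pattern, but replaces the centroid classifier with a margin-based SVM and feeds it CSP features extracted in two stages.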
Contreras-Vidal, Jose L.; Grossman, Robert G.
2013-01-01
In this communication, a translational clinical brain-machine interface (BMI) roadmap for an EEG-based BMI to a robotic exoskeleton (NeuroRex) is presented. This multi-faceted project addresses important engineering and clinical challenges: it concerns the validation of an intelligent, self-balancing, robotic lower-body and trunk exoskeleton (Rex) augmented with EEG-based BMI capabilities to interpret user intent in order to assist a mobility-impaired person to walk independently. The goal is to improve the quality of life and health status of wheelchair-bound persons by enabling standing and sitting, walking and backing, turning, ascending and descending stairs/curbs, and navigating sloping surfaces in a variety of conditions without the need for additional support or crutches. PMID:24110003
Advances in neuroprosthetic learning and control.
Carmena, Jose M
2013-01-01
Significant progress has occurred in the field of brain-machine interfaces (BMI) since the first demonstrations with rodents, monkeys, and humans controlling different prosthetic devices directly with neural activity. This technology holds great potential to aid large numbers of people with neurological disorders. However, despite this initial enthusiasm and the plethora of available robotic technologies, existing neural interfaces cannot as yet master the control of prosthetic, paralyzed, or otherwise disabled limbs. Here I briefly discuss recent advances from our laboratory into the neural basis of BMIs that should lead to better prosthetic control and clinically viable solutions, as well as new insights into the neurobiology of action.
Hand-in-hand advances in biomedical engineering and sensorimotor restoration.
Pisotta, Iolanda; Perruchoud, David; Ionta, Silvio
2015-05-15
Living in a multisensory world entails the continuous sensory processing of environmental information in order to enact appropriate motor routines. The interaction between our body and our brain is the crucial factor for achieving such sensorimotor integration. Several clinical conditions dramatically affect this constant body-brain exchange, but the latest developments in biomedical engineering provide promising solutions for overcoming this communication breakdown. Recent technological developments have succeeded in transforming neuronal electrical activity into computational input for robotic devices, giving birth to the era of so-called brain-machine interfaces. Combining rehabilitation robotics and experimental neuroscience, the introduction of brain-machine interfaces into clinical protocols has provided a technological solution for bypassing the neural disconnection and restoring sensorimotor function. Based on these advances, the recovery of sensorimotor functionality is progressively becoming a concrete reality. However, despite the success of several recent techniques, some open issues still need to be addressed. Typical interventions for sensorimotor deficits include pharmaceutical treatments and manual/robotic assistance in passive movements. These procedures achieve symptom relief, but their applicability to more severe disconnection pathologies is limited (e.g. spinal cord injury or amputation). Here we review how state-of-the-art solutions in biomedical engineering are continuously raising expectations in sensorimotor rehabilitation, as well as the current challenges, especially with regard to translating the signals from brain-machine interfaces into sensory feedback and incorporating brain-machine interfaces into daily activities. Copyright © 2015 Elsevier B.V. All rights reserved.
Robot-Aided Neurorehabilitation
Krebs, Hermano Igo; Hogan, Neville; Aisen, Mindy L.; Volpe, Bruce T.
2009-01-01
Our goal is to apply robotics and automation technology to assist, enhance, quantify, and document neurorehabilitation. This paper reviews a clinical trial involving 20 stroke patients with a prototype robot-aided rehabilitation facility developed at the Massachusetts Institute of Technology (MIT), Cambridge, MA, and tested at Burke Rehabilitation Hospital, White Plains, NY. It also presents our approach to analyzing kinematic data collected in the robot-aided assessment procedure. In particular, we present evidence 1) that robot-aided therapy does not have adverse effects, 2) that patients tolerate the procedure, and 3) that peripheral manipulation of the impaired limb may influence brain recovery. These results are based on standard clinical assessment procedures. We also present one approach to using kinematic data in a robot-aided assessment procedure. PMID:9535526
Usability test of KNRC self-feeding robot.
Song, Won-Kyung; Song, Won-Jin; Kim, Yale; Kim, Jongbae
2013-06-01
Various assistive robots for supporting the activities of daily living have been developed. However, not many of these have been introduced into the market because they were found to be impractical in actual scenarios. In this paper, we report the usability test results of an assistive robot designed for self-feeding by people with disabilities, including those with spinal cord injury, cerebral palsy, and traumatic brain injury. First, we present three versions of a novel self-feeding robot (the KNRC self-feeding robot), which is suitable for Korean food, including sticky rice. These robots were improved through participatory action design over a period of three years. Next, we discuss the usability tests of the KNRC self-feeding robots. People with disabilities participated in comparative tests between the KNRC self-feeding robot and the commercialized product My Spoon. The KNRC self-feeding robot showed positive results in satisfaction and performance compared to the commercialized robot when users ate Korean food, including sticky rice.
A finger exoskeleton for rehabilitation and brain image study.
Tang, Zhenjin; Sugano, Shigeki; Iwata, Hiroyasu
2013-06-01
This paper introduces the design, fabrication, and evaluation of the second-generation prototype of a magnetic resonance-compatible finger rehabilitation robot. It can be used not only as a finger rehabilitation training tool after a stroke, but also to study the brain's recovery process during rehabilitation therapy (ReT). The mechanical design of the current generation overcomes a disadvantage of the previous version [13], which could not provide precise finger trajectories during flexion and extension motion as the torques of different finger joints varied. In addition, in order to study brain activation under different training strategies, three control modes have been developed, compared to only one control mode in the last prototype. The current prototype, like the last version, uses an ultrasonic motor as its actuator to enable the patient to perform extension and flexion rehabilitation exercises in two degrees of freedom (DOF) for each finger. Finally, experiments were carried out to evaluate the performance of this device.
Zhao, Ming; Rattanatamrong, Prapaporn; DiGiovanna, Jack; Mahmoudi, Babak; Figueiredo, Renato J; Sanchez, Justin C; Príncipe, José C; Fortes, José A B
2008-01-01
Dynamic data-driven brain-machine interfaces (DDDBMI) have great potential to advance the understanding of neural systems and improve the design of brain-inspired rehabilitative systems. This paper presents a novel cyberinfrastructure that couples in vivo neurophysiology experimentation with massive computational resources to provide seamless and efficient support of DDDBMI research. Closed-loop experiments can be conducted with in vivo data acquisition, reliable network transfer, parallel model computation, and real-time robot control. Behavioral experiments with live animals are supported with real-time guarantees. Offline studies can be performed with various configurations for extensive analysis and training. A Web-based portal is also provided to allow users to conveniently interact with the cyberinfrastructure, conducting both experimentation and analysis. New motor control models are developed based on this approach, including recursive least squares-based (RLS) and reinforcement learning-based (RLBMI) algorithms. The results from an online RLBMI experiment show that the cyberinfrastructure can successfully support DDDBMI experiments and meet the desired real-time requirements.
Turner, Duncan L.; Ramos-Murguialday, Ander; Birbaumer, Niels; Hoffmann, Ulrich; Luft, Andreas
2013-01-01
The recovery of functional movements following injury to the central nervous system (CNS) is multifaceted and is accompanied by processes occurring in the injured and non-injured hemispheres of the brain or above/below a spinal cord lesion. The changes in the CNS are the consequence of functional and structural processes collectively termed neuroplasticity and these may occur spontaneously and/or be induced by movement practice. The neurophysiological mechanisms underlying such brain plasticity may take different forms in different types of injury, for example stroke vs. spinal cord injury (SCI). Recovery of movement can be enhanced by intensive, repetitive, variable, and rewarding motor practice. To this end, robots that enable or facilitate repetitive movements have been developed to assist recovery and rehabilitation. Here, we suggest that some elements of robot-mediated training such as assistance and perturbation may have the potential to enhance neuroplasticity. Together the elemental components for developing integrated robot-mediated training protocols may form part of a neurorehabilitation framework alongside those methods already employed by therapists. Robots could thus open up a wider choice of options for delivering movement rehabilitation grounded on the principles underpinning neuroplasticity in the human CNS. PMID:24312073
The cortical activation pattern by a rehabilitation robotic hand: a functional NIRS study
Chang, Pyung-Hun; Lee, Seung-Hee; Gu, Gwang Min; Lee, Seung-Hyun; Jin, Sang-Hyun; Yeo, Sang Seok; Seo, Jeong Pyo; Jang, Sung Ho
2014-01-01
Introduction: Clarification of the relationship between external stimuli and brain response has been an important topic in neuroscience and brain rehabilitation. In the current study, using functional near infrared spectroscopy (fNIRS), we attempted to investigate cortical activation patterns generated during execution of a rehabilitation robotic hand. Methods: Ten normal subjects were recruited for this study. Passive movements of the right fingers were performed using a rehabilitation robotic hand at a frequency of 0.5 Hz. We measured values of oxy-hemoglobin (HbO), deoxy-hemoglobin (HbR) and total-hemoglobin (HbT) in five regions of interest: the primary sensory-motor cortex (SM1), hand somatotopy of the contralateral SM1, supplementary motor area (SMA), premotor cortex (PMC), and prefrontal cortex (PFC). Results: HbO and HbT values indicated significant activation in the left SM1, left SMA, left PMC, and left PFC during execution of the rehabilitation robotic hand (uncorrected, p < 0.01). By contrast, HbR value indicated significant activation only in the hand somatotopic area of the left SM1 (uncorrected, p < 0.01). Conclusions: Our results appear to indicate that execution of the rehabilitation robotic hand could induce cortical activation. PMID:24570660
Brain Response to a Humanoid Robot in Areas Implicated in the Perception of Human Emotional Gestures
Chaminade, Thierry; Zecca, Massimiliano; Blakemore, Sarah-Jayne; Takanishi, Atsuo; Frith, Chris D.; Micera, Silvestro; Dario, Paolo; Rizzolatti, Giacomo; Gallese, Vittorio; Umiltà, Maria Alessandra
2010-01-01
Background: The humanoid robot WE4-RII was designed to express human emotions in order to improve human-robot interaction. We can read the emotions depicted in its gestures, yet we might utilize different neural processes than those used for reading the emotions of human agents. Methodology: Here, fMRI was used to assess how brain areas activated by the perception of human basic emotions (facial expression of Anger, Joy, Disgust) and silent speech respond to a humanoid robot impersonating the same emotions, while participants were instructed to attend either to the emotion or to the motion depicted. Principal Findings: Increased responses to robot compared to human stimuli in the occipital and posterior temporal cortices suggest additional visual processing when perceiving a mechanical anthropomorphic agent. In contrast, activity in cortical areas endowed with mirror properties, like left Broca's area for the perception of speech, and in areas involved in the processing of emotions, like the left anterior insula for the perception of disgust and the orbitofrontal cortex for the perception of anger, is reduced for robot stimuli, suggesting lesser resonance with the mechanical agent. Finally, instructions to explicitly attend to the emotion significantly increased response to robot, but not human, facial expressions in the anterior part of the left inferior frontal gyrus, a neural marker of motor resonance. Conclusions: Motor resonance towards a humanoid, but not a human, display of facial emotion is increased when attention is directed towards judging emotions. Significance: Artificial agents can be used to assess how factors like anthropomorphism affect neural response to the perception of human actions. PMID:20657777
Robot-assisted procedures in pediatric neurosurgery.
De Benedictis, Alessandro; Trezza, Andrea; Carai, Andrea; Genovese, Elisabetta; Procaccini, Emidio; Messina, Raffaella; Randi, Franco; Cossu, Silvia; Esposito, Giacomo; Palma, Paolo; Amante, Paolina; Rizzi, Michele; Marras, Carlo Efisio
2017-05-01
OBJECTIVE During the last 3 decades, robotic technology has rapidly spread across several surgical fields due to the continuous evolution of its versatility, stability, dexterity, and haptic properties. Neurosurgery pioneered the development of robotics, with the aim of improving the quality of several procedures requiring a high degree of accuracy and safety. Moreover, robot-guided approaches are of special interest in pediatric patients, who often have altered anatomy and challenging relationships between the diseased and eloquent structures. Nevertheless, the use of robots has been rarely reported in children. In this work, the authors describe their experience using the ROSA device (Robotized Stereotactic Assistant) in the neurosurgical management of a pediatric population. METHODS Between 2011 and 2016, 116 children underwent ROSA-assisted procedures for a variety of diseases (epilepsy, brain tumors, intra- or extraventricular and tumor cysts, obstructive hydrocephalus, and movement and behavioral disorders). Each patient received accurate preoperative planning of optimal trajectories, intraoperative frameless registration, surgical treatment using specific instruments held by the robotic arm, and postoperative CT or MR imaging. RESULTS The authors performed 128 consecutive surgeries, including implantation of 386 electrodes for stereo-electroencephalography (36 procedures), neuroendoscopy (42 procedures), stereotactic biopsy (26 procedures), pallidotomy (12 procedures), shunt placement (6 procedures), deep brain stimulation procedures (3 procedures), and stereotactic cyst aspiration (3 procedures). For each procedure, the authors analyzed and discussed accuracy, timing, and complications. CONCLUSIONS To the best of their knowledge, the authors present the largest reported series of pediatric neurosurgical cases assisted by robotic support.
The ROSA system provided improved safety and feasibility of minimally invasive approaches, thus optimizing the surgical result, while minimizing postoperative morbidity.
[History of robotics: from Archytas of Tarentum until the Da Vinci robot (Part II)].
Sánchez-Martín, F M; Jiménez Schlegl, P; Millán Rodríguez, F; Salvador-Bayarri, J; Monllau Font, V; Palou Redorta, J; Villavicencio Mavrich, H
2007-03-01
Robotic surgery is a reality. In order to understand how new robots work, it is interesting to know the history of ancient (see Part I) and modern robotics. The desire to design automatic machines imitating humans has persisted for more than 4000 years. Archytas of Tarentum (ca. 400 BC), Heron of Alexandria, Hsieh-Fec, Al-Jazari, Bacon, Turriano, Leonardo da Vinci, Vaucanson, and von Kempelen were robot inventors. In 1942, Asimov published the three laws of robotics. Advances in mechanics, electronics, and informatics during the 20th century produced robots able to perform very complex, self-governing work. In 1985, the robot PUMA 560 was employed to introduce a needle into the brain. Later, surgical robots such as World First, Robodoc, Gaspar, Acrobot, Zeus, AESOP, Probot, and PAKI-RCP were designed. In 2000, the FDA approved the da Vinci Surgical System (Intuitive Surgical Inc., Sunnyvale, CA, USA), a very sophisticated robot to assist surgeons. Currently, urological procedures such as prostatectomy, cystectomy, and nephrectomy are performed with the da Vinci, so urology has become a very suitable specialty for robotic surgery.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bruemmer, David J; Walton, Miles C
Methods and systems for controlling a plurality of robots through a single user interface include at least one robot display window for each of the plurality of robots, with the at least one robot display window illustrating one or more conditions of a respective one of the plurality of robots. The user interface further includes at least one robot control window for each of the plurality of robots, with the at least one robot control window configured to receive one or more commands for sending to the respective one of the plurality of robots. The user interface further includes a multi-robot common window comprising information received from each of the plurality of robots.
Automatic control system generation for robot design validation
NASA Technical Reports Server (NTRS)
Bacon, James A. (Inventor); English, James D. (Inventor)
2012-01-01
The specification and drawings present a new method, system, software product, and apparatus for generating a robotic validation system for a robot design. The robotic validation system for the robot design of a robotic system is automatically generated by converting the robot design into a generic robotic description using a predetermined format, then generating a control system from the generic robotic description, and finally updating the robot design parameters of the robotic system with an analysis tool using both the generic robot description and the control system.
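The three-stage pipeline in the abstract (design → generic description → generated controller → updated design parameters) can be sketched as follows. This is a hypothetical illustration, not the patented implementation: all type names, the gain rule, and the update rule are invented stand-ins.

```python
from dataclasses import dataclass

# Hypothetical sketch of the validation pipeline: robot design is converted
# into a generic description in a fixed format, a control system is generated
# from it, and an "analysis tool" feeds results back into design parameters.

@dataclass
class RobotDesign:
    joints: list[str]
    link_lengths: dict[str, float]   # metres, illustrative only

@dataclass
class GenericDescription:
    """Robot design converted into a predetermined, tool-neutral format."""
    dof: int
    parameters: dict[str, float]

def to_generic(design: RobotDesign) -> GenericDescription:
    return GenericDescription(dof=len(design.joints),
                              parameters=dict(design.link_lengths))

def generate_controller(desc: GenericDescription) -> dict[str, float]:
    # Stand-in "control system": one proportional gain per joint, scaled by
    # link length as a placeholder analysis rule.
    return {name: 10.0 * length for name, length in desc.parameters.items()}

def update_design(design: RobotDesign, gains: dict[str, float]) -> RobotDesign:
    # Placeholder feedback rule: shorten links whose gains came out too high.
    updated = {k: v * (1.0 if gains[k] < 15.0 else 0.9)
               for k, v in design.link_lengths.items()}
    return RobotDesign(design.joints, updated)

design = RobotDesign(joints=["shoulder", "elbow"],
                     link_lengths={"shoulder": 2.0, "elbow": 1.0})
validated = update_design(design, generate_controller(to_generic(design)))
print(validated.link_lengths)
```

The key design point mirrored here is the intermediate `GenericDescription`: because both the controller generator and the analysis step consume only the fixed-format description, neither needs to know about any particular robot design representation.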
Emergent coordination underlying learning to reach to grasp with a brain-machine interface.
Vaidya, Mukta; Balasubramanian, Karthikeyan; Southerland, Joshua; Badreldin, Islam; Eleryan, Ahmed; Shattuck, Kelsey; Gururangan, Suchin; Slutzky, Marc; Osborne, Leslie; Fagg, Andrew; Oweiss, Karim; Hatsopoulos, Nicholas G
2018-04-01
The development of coordinated reach-to-grasp movement has been well studied in infants and children. However, the role of motor cortex during this development is unclear because it is difficult to study in humans. We took the approach of using a brain-machine interface (BMI) paradigm in rhesus macaques with prior therapeutic amputations to examine the emergence of novel, coordinated reach to grasp. Previous research has shown that after amputation, the cortical area previously involved in the control of the lost limb undergoes reorganization, but prior BMI work has largely relied on finding neurons that already encode specific movement-related information. In this study, we taught macaques to cortically control a robotic arm and hand through operant conditioning, using neurons that were not explicitly reach or grasp related. Over the course of training, stereotypical patterns emerged and stabilized in the cross-covariance between the reaching and grasping velocity profiles, between pairs of neurons involved in controlling reach and grasp, and to a comparable, but lesser, extent between other stable neurons in the network. In fact, we found evidence of this structured coordination between pairs composed of all combinations of neurons decoding reach or grasp and other stable neurons in the network. The degree of and participation in coordination was highly correlated across all pair types. Our approach provides a unique model for studying the development of novel, coordinated reach-to-grasp movement at the behavioral and cortical levels. NEW & NOTEWORTHY Given that motor cortex undergoes reorganization after amputation, our work focuses on training nonhuman primates with chronic amputations to use neurons that are not reach or grasp related to control a robotic arm to reach to grasp through the use of operant conditioning, mimicking early development. 
We studied the development of a novel, coordinated behavior at the behavioral and cortical level, and the neural plasticity in M1 associated with learning to use a brain-machine interface.
Comparison of three different techniques for camera and motion control of a teleoperated robot.
Doisy, Guillaume; Ronen, Adi; Edan, Yael
2017-01-01
This research aims to evaluate new methods for robot motion control and camera orientation control through the operator's head orientation in robot teleoperation tasks. Specifically, non-invasive head-tracking, without immersive virtual reality devices, was combined and compared with classical control modes for robot movements and camera control. Three control conditions were tested: 1) a condition with classical joystick control of both the movements of the robot and the robot camera, 2) a condition where the robot movements were controlled by a joystick and the robot camera was controlled by the user's head orientation, and 3) a condition where the movements of the robot were controlled by hand gestures and the robot camera was controlled by the user's head orientation. Performance and workload metrics, and their evolution as the participants gained experience with the system, were evaluated in a series of experiments: for each participant, the metrics were recorded during four successive similar trials. Results show that the concept of robot camera control by user head orientation has the potential to improve the intuitiveness of robot teleoperation interfaces, specifically for novice users. However, more development is needed to reach a margin of progression comparable to a classical joystick interface. Copyright © 2016 Elsevier Ltd. All rights reserved.
Mazzone, P; Arena, P; Cantelli, L; Spampinato, G; Sposato, S; Cozzolino, S; Demarinis, P; Muscato, G
2016-07-01
The use of robotics in neurosurgery and, particularly, in stereotactic neurosurgery is becoming more and more widely adopted because of the great advantages it offers. Robotic manipulators make it easy to achieve great precision, reliability, and rapidity in positioning surgical instruments or devices in the brain. The aim of this work was to experimentally verify a fully automatic, "no hands" surgical procedure. The integration of neuroimaging data for surgical planning, followed by the application of new, specifically designed surgical tools, permitted the realization of a fully automated robotic implantation of leads in brain targets. An anthropomorphic commercial manipulator was utilized. In a preliminary phase, software to plan the surgery was developed, and the surgical tools were tested first in a simulation and then on a skull mock-up. In this way, several tools were developed and tested, laying the basis for an innovative surgical procedure. The final experimentation was carried out on anesthetized "large white" pigs. The determination of stereotactic parameters for correct planning to reach the intended target was performed with the same technique currently employed in human stereotactic neurosurgery, and the robotic system proved to be reliable and precise in reaching the target. The results of this work strengthen the possibility that a neurosurgeon may be substituted by a machine, and may represent the beginning of a new approach in current clinical practice. Moreover, this possibility may have a great impact not only on stereotactic functional procedures but also on the entire domain of neurosurgery.
Seepanomwan, Kristsana; Caligiore, Daniele; Cangelosi, Angelo; Baldassarre, Gianluca
2015-12-01
Mental rotation, a classic experimental paradigm of cognitive psychology, tests the capacity of humans to mentally rotate a seen object to decide if it matches a target object. In recent years, mental rotation has been investigated with brain imaging techniques to identify the brain areas involved. Mental rotation has also been investigated through the development of neural-network models, used to identify the specific mechanisms that underlie its process, and with neurorobotics models to investigate its embodied nature. Current models, however, have limited capacities to relate to neuroscientific evidence, to generalise mental rotation to new objects, to suitably represent decision-making mechanisms, and to allow the study of the effects of overt gestures on mental rotation. The work presented in this study overcomes these limitations by proposing a novel neurorobotic model that has a macro-architecture constrained by knowledge of the brain, encompasses a rather general mental rotation mechanism, and incorporates a biologically plausible decision-making mechanism. The model was tested using the humanoid robot iCub in tasks requiring the robot to mentally rotate 2D geometrical images appearing on a computer screen. The results show that the robot gained an enhanced capacity to generalise mental rotation to new objects and to express the possible effects of overt movements of the wrist on mental rotation. The model also represents a further step in the identification of the embodied neural mechanisms that may underlie mental rotation in humans and might also give hints to enhance robots' planning capabilities. Copyright © 2015 The Authors. Published by Elsevier Ltd. All rights reserved.
Zhang, Chen; Sun, Chao; Gao, Liqiang; Zheng, Nenggan; Chen, Weidong; Zheng, Xiaoxiang
2013-01-01
Bio-robots based on brain-computer interfaces (BCI) suffer from a failure to consider the characteristics of the animal during navigation. This paper proposes a new method for automatic bio-robot navigation that combines a reward-generating algorithm based on Reinforcement Learning (RL) with the learning intelligence of the animal. Given a graded electrical reward, the animal (e.g., a rat) learns to seek the maximum reward while exploring an unknown environment. Since the rat has excellent spatial recognition, the rat-robot and the RL algorithm can converge on an optimal route through co-learning. This work offers significant inspiration for the practical development of bio-robot navigation with hybrid intelligence.
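The core idea of steering an agent toward an optimal route with a graded reward can be illustrated with a toy tabular Q-learning example. This is not the authors' algorithm: the 1-D corridor environment, the progress-based reward shaping, and all parameters are hypothetical stand-ins for the graded electrical reward delivered to the rat-robot.

```python
import numpy as np

# Toy analogue of the co-learning idea: a graded reward, here shaped by
# progress toward the goal, steers a Q-learning agent to an optimal route,
# much as the paper's RL module issues graded electrical rewards to the rat.

N_STATES = 8          # 1-D corridor; the rightmost cell is the goal
ACTIONS = [-1, +1]    # move left / move right
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1

rng = np.random.default_rng(0)
Q = np.zeros((N_STATES, len(ACTIONS)))

def graded_reward(s, s_next):
    """Reward graded by progress toward the goal (potential-based shaping)."""
    return (s_next - s) / (N_STATES - 1)

for episode in range(500):
    s = 0
    while s != N_STATES - 1:
        # Epsilon-greedy action selection.
        a = rng.integers(len(ACTIONS)) if rng.random() < EPS else int(np.argmax(Q[s]))
        s_next = int(np.clip(s + ACTIONS[a], 0, N_STATES - 1))
        r = graded_reward(s, s_next)
        Q[s, a] += ALPHA * (r + GAMMA * Q[s_next].max() - Q[s, a])
        s = s_next

# The learned greedy policy should head right (action index 1) in every cell.
policy = np.argmax(Q, axis=1)
print(policy[:-1])
```

Note the design choice of a potential-based (progress) reward rather than a reward proportional to absolute position: the latter would make oscillating near the goal more valuable than reaching it, while shaping by progress preserves "reach the goal" as the optimal policy.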
P300 Chinese input system based on Bayesian LDA.
Jin, Jing; Allison, Brendan Z; Brunner, Clemens; Wang, Bei; Wang, Xingyu; Zhang, Jianhua; Neuper, Christa; Pfurtscheller, Gert
2010-02-01
A brain-computer interface (BCI) is a new communication channel between humans and computers that translates brain activity into recognizable command and control signals. Attended events can evoke P300 potentials in the electroencephalogram. Hence, the P300 has been used in BCI systems to spell, to control cursors or robotic devices, and to perform other tasks. This paper introduces a novel P300 BCI to communicate Chinese characters. To improve classification accuracy, an optimization algorithm (particle swarm optimization, PSO) is used for channel selection (i.e., identifying the best electrode configuration). The effects of different electrode configurations on classification accuracy were tested by Bayesian linear discriminant analysis offline. The offline results from 11 subjects show that this new P300 BCI can effectively communicate Chinese characters and that the features extracted from the electrodes obtained by PSO yield good performance.
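The channel-selection idea can be sketched with a binary particle swarm searching over electrode subsets. This is an assumed illustration, not the paper's code: the data are synthetic, and the fitness here is a simple Fisher discriminant ratio rather than the Bayesian LDA classifier the study used.

```python
import numpy as np

# Toy binary PSO for electrode-subset selection. Only channels 0-2 of the
# synthetic two-class "P300" features carry signal; a good search should
# select them. All parameters are illustrative assumptions.

rng = np.random.default_rng(1)
N_CH, N_TRIALS = 8, 200

y = rng.integers(0, 2, N_TRIALS)
X = rng.normal(size=(N_TRIALS, N_CH))
X[:, :3] += 1.5 * y[:, None]          # discriminative channels 0, 1, 2

def fitness(mask):
    """Fisher ratio summed over selected channels, with a size penalty."""
    if mask.sum() == 0:
        return -np.inf
    Xs = X[:, mask.astype(bool)]
    m0, m1 = Xs[y == 0].mean(0), Xs[y == 1].mean(0)
    s = Xs.std(0) + 1e-9
    return np.sum((m0 - m1) ** 2 / s ** 2) - 0.05 * mask.sum()

N_PART, N_ITER = 20, 60
pos = rng.integers(0, 2, (N_PART, N_CH)).astype(float)
vel = rng.normal(scale=0.1, size=(N_PART, N_CH))
pbest, pbest_f = pos.copy(), np.array([fitness(p) for p in pos])
gbest = pbest[np.argmax(pbest_f)].copy()

for _ in range(N_ITER):
    r1, r2 = rng.random((2, N_PART, N_CH))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    prob = 1.0 / (1.0 + np.exp(-vel))      # sigmoid -> probability bit is 1
    pos = (rng.random((N_PART, N_CH)) < prob).astype(float)
    f = np.array([fitness(p) for p in pos])
    better = f > pbest_f
    pbest[better], pbest_f[better] = pos[better], f[better]
    gbest = pbest[np.argmax(pbest_f)].copy()

print(np.flatnonzero(gbest))  # should include the signal channels 0, 1, 2
```

In the real system the fitness of each candidate electrode configuration would be the offline accuracy of the Bayesian LDA classifier on that subset; the Fisher ratio above is just a cheap, self-contained proxy.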
Robot-assisted motor activation monitored by time-domain optical brain imaging
NASA Astrophysics Data System (ADS)
Steinkellner, O.; Wabnitz, H.; Schmid, S.; Steingräber, R.; Schmidt, H.; Krüger, J.; Macdonald, R.
2011-07-01
Robot-assisted motor rehabilitation proved to be an effective supplement to conventional hand-to-hand therapy in stroke patients. In order to analyze and understand motor learning and performance during rehabilitation, it is desirable to develop a monitor that provides objective measures of the corresponding brain activity as rehabilitation progresses. We used a portable time-domain near-infrared reflectometer to monitor the hemodynamic brain response to distal upper extremity activities. Four healthy volunteers performed two different robot-assisted wrist/forearm movements, flexion-extension and pronation-supination, in comparison with an unassisted squeeze-ball exercise. A special headgear with four optical measurement positions covering parts of the pre- and postcentral gyrus provided a good overlap with the expected activation areas. Data analysis based on the variance of time-of-flight distributions of photons through tissue was chosen to provide a suitable representation of intracerebral signals. In all subjects, several of the four detection channels showed a response. In some cases, there were indications of differences in the localization of the activated areas across the various tasks.
Method and System for Controlling a Dexterous Robot Execution Sequence Using State Classification
NASA Technical Reports Server (NTRS)
Sanders, Adam M. (Inventor); Quillin, Nathaniel (Inventor); Platt, Robert J., Jr. (Inventor); Pfeiffer, Joseph (Inventor); Permenter, Frank Noble (Inventor)
2014-01-01
A robotic system includes a dexterous robot and a controller. The robot includes a plurality of robotic joints, actuators for moving the joints, and sensors for measuring a characteristic of the joints, and for transmitting the characteristics as sensor signals. The controller receives the sensor signals, and is configured for executing instructions from memory, classifying the sensor signals into distinct classes via the state classification module, monitoring a system state of the robot using the classes, and controlling the robot in the execution of alternative work tasks based on the system state. A method for controlling the robot in the above system includes receiving the signals via the controller, classifying the signals using the state classification module, monitoring the present system state of the robot using the classes, and controlling the robot in the execution of alternative work tasks based on the present system state.
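The classify-then-dispatch loop described in the abstract can be sketched as follows. This is a minimal, hypothetical illustration of the pattern, not the patented system: the sample fields, state names, thresholds, and task table are all invented.

```python
from dataclasses import dataclass
from typing import Callable

# Minimal sketch of state-classification-based control: raw sensor samples
# are classified into discrete system states, and the controller dispatches
# an alternative work task based on the current state.

@dataclass
class JointSample:
    position: float   # rad (illustrative)
    torque: float     # N*m (illustrative)

def classify(sample: JointSample) -> str:
    """Map a raw sensor sample to a discrete state class."""
    if abs(sample.torque) > 5.0:
        return "CONTACT"          # end effector is pressing on an object
    if abs(sample.position) < 0.01:
        return "HOME"
    return "FREE_MOTION"

# State -> task dispatch table, standing in for "alternative work tasks".
TASKS: dict[str, Callable[[], str]] = {
    "CONTACT": lambda: "grasp-and-hold",
    "HOME": lambda: "await-command",
    "FREE_MOTION": lambda: "continue-trajectory",
}

def control_step(sample: JointSample) -> str:
    """One controller iteration: classify the signal, run the matching task."""
    return TASKS[classify(sample)]()

print(control_step(JointSample(position=0.8, torque=7.2)))  # → grasp-and-hold
```

The benefit of the indirection through a discrete state class is that new work tasks can be added to the dispatch table without touching the signal-classification logic, which matches the abstract's separation between the state classification module and task execution.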
Closing the sensorimotor loop: haptic feedback facilitates decoding of motor imagery
NASA Astrophysics Data System (ADS)
Gomez-Rodriguez, M.; Peters, J.; Hill, J.; Schölkopf, B.; Gharabaghi, A.; Grosse-Wentrup, M.
2011-06-01
The combination of brain-computer interfaces (BCIs) with robot-assisted physical therapy constitutes a promising approach to neurorehabilitation of patients with severe hemiparetic syndromes caused by cerebrovascular brain damage (e.g. stroke) and other neurological conditions. In such a scenario, a key aspect is how to re-establish the disrupted sensorimotor feedback loop. However, to date it is an open question how artificially closing the sensorimotor feedback loop influences the decoding performance of a BCI. In this paper, we address this question by studying six healthy subjects and two stroke patients. We present empirical evidence that haptic feedback, provided by a seven-degrees-of-freedom robotic arm, facilitates online decoding of arm movement intention. The results support the feasibility of future rehabilitative treatments based on the combination of robot-assisted physical therapy with BCIs.
Horki, Petar; Neuper, Christa; Pfurtscheller, Gert; Müller-Putz, Gernot
2010-12-01
A brain-computer interface (BCI) provides a direct connection between the human brain and a computer. One type of BCI can be realized using steady-state visual evoked potentials (SSVEPs) resulting from repetitive stimulation. The aim of this study was the realization of an asynchronous SSVEP-BCI, based on canonical correlation analysis, suitable for the control of a 2-degree-of-freedom (DoF) hand and elbow neuroprosthesis. To determine whether this BCI is suitable for the control of 2-DoF neuroprosthetic devices, online experiments with virtual and robotic limb feedback were conducted with eight healthy subjects and one tetraplegic patient. All participants were able to control the artificial limbs with the BCI. In the online experiments, the positive predictive value (PPV) varied between 69% and 83%, and the false negative rate (FNR) varied between 1% and 17%. The spinal cord injured patient achieved PPV and FNR values within one standard deviation of the mean for all healthy subjects.
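The canonical-correlation-analysis step at the heart of such SSVEP detectors can be sketched on synthetic data: the stimulation frequency whose sinusoidal reference set has the largest canonical correlation with the multichannel EEG is taken as the attended target. The sampling rate, window length, candidate frequencies, and synthetic signal below are assumptions, not the study's settings.

```python
import numpy as np

# Illustrative CCA-based SSVEP frequency detection on synthetic 4-channel EEG.

FS = 250.0                 # sampling rate (Hz), assumed
T = 2.0                    # analysis window (s), assumed
FREQS = [8.0, 13.0]        # candidate stimulation frequencies, assumed
t = np.arange(0, T, 1 / FS)

def cca_max_corr(X, Y):
    """Largest canonical correlation between the column spaces of X and Y."""
    Qx, _ = np.linalg.qr(X - X.mean(0))
    Qy, _ = np.linalg.qr(Y - Y.mean(0))
    return np.linalg.svd(Qx.T @ Qy, compute_uv=False)[0]

def reference(f, harmonics=2):
    """Sine/cosine reference set at f and its harmonics."""
    cols = []
    for h in range(1, harmonics + 1):
        cols += [np.sin(2 * np.pi * h * f * t), np.cos(2 * np.pi * h * f * t)]
    return np.column_stack(cols)

# Synthetic EEG: a weak 13 Hz response common to all channels, buried in noise.
rng = np.random.default_rng(0)
eeg = rng.normal(size=(t.size, 4))
eeg += 0.5 * np.sin(2 * np.pi * 13.0 * t + 0.7)[:, None]

scores = [cca_max_corr(eeg, reference(f)) for f in FREQS]
detected = FREQS[int(np.argmax(scores))]
print(detected)  # → 13.0
```

The QR-then-SVD formulation is a standard numerically stable way to compute canonical correlations; CCA's advantage for SSVEP is that it finds the spatial filter over channels that best matches the reference sinusoids, so no per-channel electrode choice is needed at this stage.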
Modelling brain emergent behaviours through coevolution of neural agents.
Maniadakis, Michail; Trahanias, Panos
2006-06-01
Recently, many research efforts have focused on modelling partial brain areas, with the long-term goal of supporting the cognitive abilities of artificial organisms. Existing models usually suffer from heterogeneity, which makes their integration very difficult. The present work introduces a computational framework to address brain modelling tasks, emphasizing the integrative performance of substructures. Moreover, the implemented models are embedded in a robotic platform to support its behavioural capabilities. We follow an agent-based approach in the design of substructures to support the autonomy of partial brain structures. Agents are formulated to allow a desired behaviour to emerge after a certain amount of interaction with the environment. An appropriate collaborative coevolutionary algorithm, able to emphasize both the speciality of brain areas and their cooperative performance, is employed to support the design specification of agent structures. The effectiveness of the proposed approach is illustrated through the implementation of computational models of the motor cortex and hippocampus, which are successfully tested on a simulated mobile robot.
Intelligent robot control using an adaptive critic with a task control center and dynamic database
NASA Astrophysics Data System (ADS)
Hall, E. L.; Ghaffari, M.; Liao, X.; Alhaj Ali, S. M.
2006-10-01
The purpose of this paper is to describe the design, development, and simulation of a real-time controller for an intelligent, vision-guided robot. The use of a creative controller that can select its own tasks is demonstrated. This creative controller uses a task control center and dynamic database. The dynamic database stores both global environmental information and local information, including the kinematic and dynamic models of the intelligent robot. The kinematic model is very useful for position control and simulations. However, models of the dynamics of the manipulators are needed for tracking control of the robot's motions. Such models are also necessary for sizing the actuators, tuning the controller, and achieving superior performance. Simulations of various control designs are shown. Also, much of the model has been used for the actual prototype Bearcat Cub mobile robot. This vision-guided robot was designed for the Intelligent Ground Vehicle Contest. A novel feature of the proposed approach is that the method is applicable to both robot arm manipulators and robot bases such as wheeled mobile robots. This generality should encourage the development of more mobile robots with manipulator capability, since both models can easily be stored in the dynamic database. The multi-task controller also permits wide application. Manipulators and mobile bases with high-level control are potentially useful for space exploration, certain rescue robots, defense robots, and medical robotic aids.
Mergeable nervous systems for robots.
Mathews, Nithin; Christensen, Anders Lyhne; O'Grady, Rehan; Mondada, Francesco; Dorigo, Marco
2017-09-12
Robots have the potential to display a higher degree of lifetime morphological adaptation than natural organisms. By adopting a modular approach, robots with different capabilities, shapes, and sizes could, in theory, construct and reconfigure themselves as required. However, current modular robots have only been able to display a limited range of hardwired behaviors because they rely solely on distributed control. Here, we present robots whose bodies and control systems can merge to form entirely new robots that retain full sensorimotor control. Our control paradigm enables robots to exhibit properties that go beyond those of any existing machine or of any biological organism: the robots we present can merge to form larger bodies with a single centralized controller, split into separate bodies with independent controllers, and self-heal by removing or replacing malfunctioning body parts. This work takes us closer to robots that can autonomously change their size, form and function.Robots that can self-assemble into different morphologies are desired to perform tasks that require different physical capabilities. Mathews et al. design robots whose bodies and control systems can merge and split to form new robots that retain full sensorimotor control and act as a single entity.
Daud Albasini, Omar A.; Oboe, Roberto; Tonin, Paolo; Paolucci, Stefano; Sandrini, Giorgio; Piron, Lamberto
2013-01-01
Background. Haptic robots allow the exploitation of known motor learning mechanisms, representing a valuable option for motor treatment after stroke. The aim of this feasibility multicentre study was to test the clinical efficacy of a haptic prototype, for the recovery of hand function after stroke. Methods. A prospective pilot clinical trial was planned on 15 consecutive patients enrolled in 3 rehabilitation centre in Italy. All the framework features of the haptic robot (e.g., control loop, external communication, and graphic rendering for virtual reality) were implemented into a real-time MATLAB/Simulink environment, controlling a five-bar linkage able to provide forces up to 20 [N] at the end effector, used for finger and hand rehabilitation therapies. Clinical (i.e., Fugl-Meyer upper extremity scale; nine hold pegboard test) and kinematics (i.e., time; velocity; jerk metric; normalized jerk of standard movements) outcomes were assessed before and after treatment to detect changes in patients' motor performance. Reorganization of cortical activation was detected in one patient by fMRI. Results and Conclusions. All patients showed significant improvements in both clinical and kinematic outcomes. Additionally, fMRI results suggest that the proposed approach may promote a better cortical activation in the brain. PMID:24319496
SU-F-P-31: Dosimetric Effects of Roll and Pitch Corrections Using Robotic Table
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mamalui, M; Su, Z; Flampouri, S
Purpose: To quantify the dosimetric effect of roll and pitch corrections performed by two types of robotic tables available at our institution: the BrainLab™ 5DOF robotic table installed at VERO (BrainLab & MHI), a dedicated SBRT linear accelerator, and the IBA Proton Therapy 6DOF robotic couch with QFix™ couch top. Methods: The planning study used a thorax phantom (CIRS™) scanned with a 4DCT protocol; targets (IGTV, PTV) were determined according to the institutional lung site-specific standards. 12 CT sets were generated with pitch and roll angles ranging from −4 to +4 degrees each. The two table tops were placed onto the scans according to the modality-specific patient treatment workflows. The pitched/rolled CT sets were fused to the original CT scan and verification treatment plans were generated (12 photon SBRT plans and 12 conventionally fractionated proton lung plans). The CT sets were then fused again to simulate the effect of patient roll/pitch corrections by the robotic table. DVH sets were evaluated for all cases. Results: Not correcting the phantom position for roll/pitch in the photon SBRT cases reduced target coverage by at most 2%, while correcting the positional errors with the robotic table kept target coverage variation within 0.7%. In the proton cases, not correcting the phantom position led to coverage loss of up to 4%; applying the corrections with the robotic table reduced the coverage variation to less than 2% for the PTV and within 1% for the IGTV. Conclusion: Correcting patient position with robotic tables is highly preferable, despite the small dosimetric changes introduced by the devices.
Neurological and robot-controlled induction of an apparition.
Blanke, Olaf; Pozeg, Polona; Hara, Masayuki; Heydrich, Lukas; Serino, Andrea; Yamamoto, Akio; Higuchi, Toshiro; Salomon, Roy; Seeck, Margitta; Landis, Theodor; Arzy, Shahar; Herbelin, Bruno; Bleuler, Hannes; Rognini, Giulio
2014-11-17
Tales of ghosts, wraiths, and other apparitions have been reported in virtually all cultures. The strange sensation that somebody is nearby when no one is actually present and cannot be seen (feeling of a presence, FoP) is a fascinating feat of the human mind, and this apparition is often covered in the literature of divinity, occultism, and fiction. Although it is described by neurological and psychiatric patients and healthy individuals in different situations, it is not yet understood how the phenomenon is triggered by the brain. Here, we performed lesion analysis in neurological FoP patients, supported by an analysis of associated neurological deficits. Our data show that the FoP is an illusory own-body perception with well-defined characteristics that is associated with sensorimotor loss and caused by lesions in three distinct brain regions: temporoparietal, insular, and especially frontoparietal cortex. Based on these data and recent experimental advances of multisensory own-body illusions, we designed a master-slave robotic system that generated specific sensorimotor conflicts and enabled us to induce the FoP and related illusory own-body perceptions experimentally in normal participants. These data show that the illusion of feeling another person nearby is caused by misperceiving the source and identity of sensorimotor (tactile, proprioceptive, and motor) signals of one's own body. Our findings reveal the neural mechanisms of the FoP, highlight the subtle balance of brain mechanisms that generate the experience of "self" and "other," and advance the understanding of the brain mechanisms responsible for hallucinations in schizophrenia. Copyright © 2014 Elsevier Ltd. All rights reserved.
Electroencephalographic identifiers of motor adaptation learning
NASA Astrophysics Data System (ADS)
Özdenizci, Ozan; Yalçın, Mustafa; Erdoğan, Ahmetcan; Patoğlu, Volkan; Grosse-Wentrup, Moritz; Çetin, Müjdat
2017-08-01
Objective. Recent brain-computer interface (BCI) assisted stroke rehabilitation protocols tend to focus on sensorimotor activity of the brain. Relying on evidence that a variety of brain rhythms beyond sensorimotor areas are related to the extent of motor deficits, we propose to identify neural correlates of motor learning beyond sensorimotor areas, spatially and spectrally, for further use in novel BCI-assisted neurorehabilitation settings. Approach. Electroencephalographic (EEG) data were recorded from healthy subjects participating in a physical force-field adaptation task involving reaching movements through a robotic handle. EEG activity recorded during rest prior to the experiment and during pre-trial movement preparation was used as features to predict motor adaptation learning performance across subjects. Main results. Subjects learned to perform straight movements under the force-field at different adaptation rates. Both resting-state and pre-trial EEG features were predictive of individual adaptation rates, with a broad network of beta activity proving relevant. Beyond sensorimotor regions, a parieto-occipital cortical component observed across subjects was strongly involved in the predictions, and a fronto-parietal cortical component showed a significant decrease in pre-trial beta power for users with higher adaptation rates and an increase in pre-trial beta power for users with lower adaptation rates. Significance. A large-scale network of beta activity, including but not limited to sensorimotor areas, is presented as predictive of motor learning. The strength of resting-state parieto-occipital beta activity or pre-trial fronto-parietal beta activity can be considered in BCI-assisted stroke rehabilitation protocols with neurofeedback training, or in volitional control of neural activity for brain-robot interfaces, to induce plasticity.
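The beta-band power feature at the heart of this abstract can be sketched in a few lines. The following is an illustrative computation, not the paper's pipeline: the sampling rate, band edges, and synthetic signal are assumptions for demonstration.

```python
import numpy as np

def band_power(x, fs, f_lo, f_hi):
    """Mean spectral power of x within [f_lo, f_hi] Hz (FFT periodogram)."""
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(x)) ** 2 / len(x)
    band = (freqs >= f_lo) & (freqs <= f_hi)
    return psd[band].mean()

fs = 250.0                      # assumed EEG sampling rate, Hz
t = np.arange(0, 2.0, 1.0 / fs)
rng = np.random.default_rng(0)
# Synthetic "EEG": a strong 20 Hz beta-band oscillation plus noise.
eeg = np.sin(2 * np.pi * 20.0 * t) + 0.1 * rng.standard_normal(len(t))
beta_power = band_power(eeg, fs, 13.0, 30.0)   # beta band feature
alpha_power = band_power(eeg, fs, 8.0, 12.0)   # alpha band, for contrast
```

Features of this kind, computed per channel and per trial, are what a cross-subject predictor of adaptation rate would consume.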
Optimal Control Method of Robot End Position and Orientation Based on Dynamic Tracking Measurement
NASA Astrophysics Data System (ADS)
Liu, Dalong; Xu, Lijuan
2018-01-01
In order to improve the accuracy of robot pose positioning and control, this paper proposes an optimal pose control method based on dynamic tracking measurement. The actually measured D-H parameters of the robot are fed back to the robot with compensation. Based on the geometric parameters obtained by pose tracking measurement, an improved multi-sensor information fusion extended Kalman filter method with continuous self-optimizing regression is applied; the geometric relationships between the joint axes provide the kinematic parameters of the model, and the resulting link model parameters are fed back to the robot in a timely manner to implement parameter correction and compensation. Finally, the optimal attitude angle is obtained and optimal pose control of the robot is realized, and experiments were performed. An independently developed 6R joint robot with dynamic tracking control was taken as the experimental subject. The simulation results show that the control method improves robot positioning accuracy and has the advantages of versatility, simplicity, and ease of operation.
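The core idea in this abstract, repeatedly fusing noisy tracking measurements of a kinematic parameter and feeding the estimate back as a correction, reduces in the scalar case to a basic Kalman update. The sketch below is a minimal illustration under assumed values (one D-H link length, made-up noise levels), not the paper's full multi-sensor EKF.

```python
import numpy as np

# Scalar Kalman-filter sketch: estimate one kinematic parameter
# (e.g. a D-H link length) from repeated tracking measurements and
# use the estimate as a model correction. All numbers are assumptions.
true_len = 0.400        # actual link length (m), unknown to the filter
nominal = 0.395         # nominal model value to be corrected
x, P = nominal, 1e-4    # state estimate and its variance
R = 1e-6                # measurement noise variance

rng = np.random.default_rng(0)
for _ in range(50):
    z = true_len + rng.normal(0.0, R ** 0.5)   # noisy tracking measurement
    K = P / (P + R)                            # Kalman gain
    x = x + K * (z - x)                        # corrected estimate
    P = (1.0 - K) * P                          # updated variance
```

After a few dozen updates the estimate converges on the measured value, which is the "timely feedback ... parameter correction and compensation" loop the abstract describes, in one dimension.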
Brain-Computer Interfaces in Medicine
Shih, Jerry J.; Krusienski, Dean J.; Wolpaw, Jonathan R.
2012-01-01
Brain-computer interfaces (BCIs) acquire brain signals, analyze them, and translate them into commands that are relayed to output devices that carry out desired actions. BCIs do not use normal neuromuscular output pathways. The main goal of BCI is to replace or restore useful function to people disabled by neuromuscular disorders such as amyotrophic lateral sclerosis, cerebral palsy, stroke, or spinal cord injury. From initial demonstrations of electroencephalography-based spelling and single-neuron-based device control, researchers have gone on to use electroencephalographic, intracortical, electrocorticographic, and other brain signals for increasingly complex control of cursors, robotic arms, prostheses, wheelchairs, and other devices. Brain-computer interfaces may also prove useful for rehabilitation after stroke and for other disorders. In the future, they might augment the performance of surgeons or other medical professionals. Brain-computer interface technology is the focus of a rapidly growing research and development enterprise that is greatly exciting scientists, engineers, clinicians, and the public in general. Its future achievements will depend on advances in 3 crucial areas. Brain-computer interfaces need signal-acquisition hardware that is convenient, portable, safe, and able to function in all environments. Brain-computer interface systems need to be validated in long-term studies of real-world use by people with severe disabilities, and effective and viable models for their widespread dissemination must be implemented. Finally, the day-to-day and moment-to-moment reliability of BCI performance must be improved so that it approaches the reliability of natural muscle-based function. PMID:22325364
Kim, Yeoun Jae; Seo, Jong Hyun; Kim, Hong Rae; Kim, Kwang Gi
2017-06-01
Clinicians who frequently perform ultrasound scanning procedures often suffer from musculoskeletal disorders, arthritis, and myalgias. To minimize their occurrence and to assist clinicians, ultrasound scanning robots have been developed worldwide. Although, to date, there is still no commercially available ultrasound scanning robot, many control methods have been suggested and researched. These control algorithms are either image based or force based. If the ultrasound scanning robot control algorithm were a combination of the two, it could benefit from the advantages of each. However, there are no existing control methods for ultrasound scanning robots that combine force control and image analysis. Therefore, in this work, a control algorithm is developed for an ultrasound scanning robot using force feedback and ultrasound image analysis. A manipulator-type ultrasound scanning robot named 'NCCUSR' is developed and a control algorithm for this robot is suggested and verified. First, conventional hybrid position-force control is implemented for the robot, and the hybrid position-force control algorithm is then combined with ultrasound image analysis to fully control the robot. The control method is verified using a thyroid phantom. It was found that the proposed algorithm can be applied to control the ultrasound scanning robot, and experimental outcomes suggest that the images acquired using the proposed control method can yield a rating score equivalent to that of images acquired directly by the clinicians. The proposed control method can be applied to control the ultrasound scanning robot. However, more work must be completed to verify the proposed control method in order for it to become clinically feasible. Copyright © 2016 John Wiley & Sons, Ltd.
On the Development of a Teaching Sumo Robot
NASA Astrophysics Data System (ADS)
quan, Miao Zhi; Ke, Ma; Xin, Wei Jing
In recent years, with the progress of robot technology and the growth of robot science activities, robotics has developed rapidly. The system uses Atmel's ATmega128 single-chip microcontroller as its core controller. It is designed with infrared reflective sensors to detect the ring boundary and to search for the opponent; the controller receives the infrared sensor data and controls the motor states accordingly, so that the robot achieves automatic control. The sumo robot is built around a minimal single-chip microcomputer system. The teaching purpose is to promote students' interest in robot sumo and to encourage more students to participate in robot research activities.
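The sense-decide-act loop of such a sumo robot (boundary sensors, opponent detection, motor commands) can be sketched as a small decision function. Sensor names and command strings below are illustrative assumptions, not the actual firmware's interface.

```python
# Hedged sketch of a sumo robot's control policy: two downward-facing
# IR sensors detect the white ring boundary, an opponent-facing IR
# receiver reports whether the opponent is ahead, and the controller
# picks one of a few motor commands. Names and commands are assumed.

def decide(edge_left: bool, edge_right: bool, opponent_seen: bool) -> str:
    """Map boundary / opponent readings to a motor command."""
    if edge_left and edge_right:
        return "reverse"          # boundary dead ahead: back off
    if edge_left:
        return "turn_right"       # boundary on the left: veer away
    if edge_right:
        return "turn_left"        # boundary on the right: veer away
    if opponent_seen:
        return "charge"           # push toward the detected opponent
    return "search"               # spin slowly until something is sensed
```

On a microcontroller such as the ATmega128, the same logic would run inside the main polling loop, with the return value mapped to motor driver pins.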
Robotic tilt table reduces the occurrence of orthostatic hypotension over time in vegetative states.
Taveggia, Giovanni; Ragusa, Ivana; Trani, Vincenzo; Cuva, Daniele; Angeretti, Cristina; Fontanella, Marco; Panciani, Pier Paolo; Borboni, Alberto
2015-06-01
The aim of this study is to evaluate the effects of verticalization with or without combined movement of the lower limbs in patients in a vegetative state or a minimally conscious state. In particular, we aimed to study whether there was better tolerance to verticalization in the group with combined movement. This was a randomized trial conducted in a neurorehabilitation hospital. Twelve patients in a vegetative state or a minimally conscious state 3-18 months after acute acquired brain injuries were included. Patients were randomized into treatment groups A and B. Study group A underwent verticalization with a tilt table at 65° and movement of the lower limbs with a robotic system for 30 min three times a week for 24 sessions. Control group B underwent the same rehabilitation treatment with the robotic verticalization system but an inactive lower-limb movement system. Systolic and diastolic blood pressure and heart rate were determined. Robotic movement of the lower limbs can reduce the occurrence of orthostatic hypotension in hemodynamically unstable patients. Despite the small number of patients involved (only eight patients completed the trial), our results indicate that blood pressure and heart rate can be better stabilized by treatment with passive leg movements in hemodynamically unstable patients.
System and method for seamless task-directed autonomy for robots
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nielsen, Curtis; Bruemmer, David; Few, Douglas
Systems, methods, and user interfaces are used for controlling a robot. An environment map and a robot designator are presented to a user. The user may place, move, and modify task designators on the environment map. The task designators indicate a position in the environment map and indicate a task for the robot to achieve. A control intermediary links task designators with robot instructions issued to the robot. The control intermediary analyzes a relative position between the task designators and the robot. The control intermediary uses the analysis to determine a task-oriented autonomy level for the robot and communicates target achievement information to the robot. The target achievement information may include instructions for directly guiding the robot if the task-oriented autonomy level indicates low robot initiative and may include instructions for directing the robot to determine a robot plan for achieving the task if the task-oriented autonomy level indicates high robot initiative.
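The control-intermediary logic described here, choosing an autonomy level from the robot/designator geometry and emitting either direct guidance or a plan-it-yourself directive, can be sketched as follows. The distance threshold, message shapes, and function names are illustrative assumptions, not the patented system's actual interface.

```python
import math

# Hypothetical sketch of a control intermediary: pick a task-oriented
# autonomy level from the robot-to-designator distance, then emit the
# corresponding target achievement information. Values are assumed.

def autonomy_level(robot_xy, task_xy, threshold=2.0):
    """High initiative when the task designator is far from the robot."""
    return "high" if math.dist(robot_xy, task_xy) > threshold else "low"

def target_info(robot_xy, task_xy):
    """Target achievement information sent to the robot."""
    if autonomy_level(robot_xy, task_xy) == "high":
        return {"mode": "plan", "goal": task_xy}      # robot plans its own path
    return {"mode": "guide", "waypoint": task_xy}     # direct step-by-step guidance
```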
Review on design and control aspects of ankle rehabilitation robots.
Jamwal, Prashant K; Hussain, Shahid; Xie, Sheng Q
2015-03-01
Ankle rehabilitation robots can play an important role in improving outcomes of rehabilitation treatment by assisting therapists and patients in a number of ways. Consequently, several robot designs have been proposed by researchers, which fall under one of two categories, namely, wearable robots or platform-based robots. This paper presents a review of both kinds of ankle robots along with a brief analysis of their design, actuation and control approaches. While reviewing these designs it was observed that most of them are undesirably inspired by industrial robot designs. Taking note of the design concerns of current ankle robots, a few improvements to the ankle robot designs have also been suggested. The conventional position control or force control approaches used in the existing ankle robots have been reviewed. Apparently, opportunities for improvement also exist in the actuation as well as the control of ankle robots. Subsequently, a discussion of the most recent research in the development of novel actuators and advanced controllers based on appropriate physical and cognitive human-robot interaction has also been included in this review. Implications for Rehabilitation: Ankle joint functions are restricted/impaired as a consequence of stroke or injury during sports or otherwise. Robots can help reinstate functions faster and can also work as tools for recording rehabilitation data useful for further analysis. The evolution of ankle robots with respect to their design and control aspects has been discussed in the present paper, and a novel design with a futuristic control approach has been proposed.
NASA Astrophysics Data System (ADS)
2010-07-01
WE RECOMMEND: Good Practice in Science Teaching: What Research Has to Say (book explores and summarizes the research); Steady State Bottle Kit (another gem from SEP); Sciencescope Datalogging Balance (balance suits everyday use); Sciencescope Spectrophotometer (device displays a clear spectrum). WORTH A LOOK: The Babylonian Theorem (text explains ancient Egyptian mathematics); BrainBox360 (Physics Edition) (video game tests your knowledge); Teaching and Learning Science: Towards a Personalized Approach (book reveals how useful physics teachers really are); PAPERSHOW (gadget kit is useful but has limitations); Robotic Arm Kit with USB PC Interface (robot arm teaches programming). WEB WATCH: Simple applets teach complex topics.
Human-Inspired Eigenmovement Concept Provides Coupling-Free Sensorimotor Control in Humanoid Robot.
Alexandrov, Alexei V; Lippi, Vittorio; Mergner, Thomas; Frolov, Alexander A; Hettich, Georg; Husek, Dusan
2017-01-01
Control of a multi-body system in both robots and humans may face the problem of destabilizing dynamic coupling effects arising between linked body segments. The state of the art solutions in robotics are full state feedback controllers. For human hip-ankle coordination, a more parsimonious and theoretically stable alternative to the robotics solution has been suggested in terms of the Eigenmovement (EM) control. Eigenmovements are kinematic synergies designed to describe the multi DoF system, and its control, with a set of independent, and hence coupling-free, scalar equations. This paper investigates whether the EM alternative shows "real-world robustness" against noisy and inaccurate sensors, mechanical non-linearities such as dead zones, and human-like feedback time delays when controlling hip-ankle movements of a balancing humanoid robot. The EM concept and the EM controller are introduced, the robot's dynamics are identified using a biomechanical approach, and robot tests are performed in a human posture control laboratory. The tests show that the EM controller provides stable control of the robot with proactive ("voluntary") movements and reactive balancing of stance during support surface tilts and translations. Although a preliminary robot-human comparison reveals similarities and differences, we conclude (i) the Eigenmovement concept is a valid candidate when different concepts of human sensorimotor control are considered, and (ii) that human-inspired robot experiments may help to decide in future the choice among the candidates and to improve the design of humanoid robots and robotic rehabilitation devices.
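The mathematical core of the Eigenmovement idea, turning a coupled multi-DoF system into independent scalar equations, can be illustrated on a linearized two-segment model. The matrices below are made-up illustrative values, not the robot's identified dynamics.

```python
import numpy as np

# Illustrative sketch: a linearized two-segment (hip-ankle) model
# M q'' = -K q with inertial and stiffness coupling becomes a set of
# independent scalar equations in modal ("eigenmovement") coordinates.
M = np.array([[2.0, 0.5],    # inertia matrix with off-diagonal coupling
              [0.5, 1.0]])
K = np.array([[30.0, 5.0],   # stiffness matrix, also coupled
              [5.0, 20.0]])

# Eigenvectors of M^-1 K define the coupling-free movement directions.
evals, V = np.linalg.eig(np.linalg.solve(M, K))

# In modal coordinates q = V u, the system matrix becomes diagonal,
# so each mode obeys its own scalar equation u_i'' = -evals[i] * u_i.
A_modal = np.linalg.solve(V, np.linalg.solve(M, K) @ V)
```

Each modal coordinate can then be closed with an independent scalar feedback law, which is the parsimony the EM controller trades against full state feedback.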
Concurrent Path Planning with One or More Humanoid Robots
NASA Technical Reports Server (NTRS)
Reiland, Matthew J. (Inventor); Sanders, Adam M. (Inventor)
2014-01-01
A robotic system includes a controller and one or more robots each having a plurality of robotic joints. Each of the robotic joints is independently controllable to thereby execute a cooperative work task having at least one task execution fork, leading to multiple independent subtasks. The controller coordinates motion of the robot(s) during execution of the cooperative work task. The controller groups the robotic joints into task-specific robotic subsystems, and synchronizes motion of different subsystems during execution of the various subtasks of the cooperative work task. A method for executing the cooperative work task using the robotic system includes automatically grouping the robotic joints into task-specific subsystems, and assigning subtasks of the cooperative work task to the subsystems upon reaching a task execution fork. The method further includes coordinating execution of the subtasks after reaching the task execution fork.
NASA Astrophysics Data System (ADS)
Popov, E. P.; Iurevich, E. I.
The history and the current status of robotics are reviewed, as are the design, operation, and principal applications of industrial robots. Attention is given to programmable robots, robots with adaptive control and elements of artificial intelligence, and remotely controlled robots. The applications of robots discussed include mechanical engineering, cargo handling during transportation and storage, mining, and metallurgy. The future prospects of robotics are briefly outlined.
Brain-machine interfaces in neurorehabilitation of stroke.
Soekadar, Surjo R; Birbaumer, Niels; Slutzky, Marc W; Cohen, Leonardo G
2015-11-01
Stroke is among the leading causes of long-term disabilities leaving an increasing number of people with cognitive, affective and motor impairments depending on assistance in their daily life. While function after stroke can significantly improve in the first weeks and months, further recovery is often slow or non-existent in the more severe cases encompassing 30-50% of all stroke victims. The neurobiological mechanisms underlying recovery in those patients are incompletely understood. However, recent studies demonstrated the brain's remarkable capacity for functional and structural plasticity and recovery even in severe chronic stroke. As all established rehabilitation strategies require some remaining motor function, there is currently no standardized and accepted treatment for patients with complete chronic muscle paralysis. The development of brain-machine interfaces (BMIs) that translate brain activity into control signals of computers or external devices provides two new strategies to overcome stroke-related motor paralysis. First, BMIs can establish continuous high-dimensional brain-control of robotic devices or functional electric stimulation (FES) to assist in daily life activities (assistive BMI). Second, BMIs could facilitate neuroplasticity, thus enhancing motor learning and motor recovery (rehabilitative BMI). Advances in sensor technology, development of non-invasive and implantable wireless BMI-systems and their combination with brain stimulation, along with evidence for BMI systems' clinical efficacy suggest that BMI-related strategies will play an increasing role in neurorehabilitation of stroke. Copyright © 2014. Published by Elsevier Inc.
NASA Technical Reports Server (NTRS)
Hollars, M. G.; Cannon, R. H., Jr.; Alexander, H. L.; Morse, D. F.
1987-01-01
The Stanford University Aerospace Robotics Laboratory is actively developing and experimentally testing advanced robot control strategies for space robotic applications. Early experiments focused on control of very lightweight one-link manipulators and other flexible structures. The results are being extended to position and force control of mini-manipulators attached to flexible manipulators and multilink manipulators with flexible drive trains. Experimental results show that end-point sensing and careful dynamic modeling or adaptive control are key to the success of these control strategies. Free-flying space robot simulators that operate on an air cushion table have been built to test control strategies in which the dynamics of the base of the robot and the payload are important.
Dynamic analysis of space robot remote control system
NASA Astrophysics Data System (ADS)
Kulakov, Felix; Alferov, Gennady; Sokolov, Boris; Gorovenko, Polina; Sharlay, Artem
2018-05-01
The article presents an analysis of the construction of two-stage remote control for space robots. This control ensures the efficiency of the robot control system under large delays in the transmission of control signals from the ground control center to the local control system of the space robot. The conditions for control stability and high transparency are found.
Research on Robot Pose Control Technology Based on Kinematics Analysis Model
NASA Astrophysics Data System (ADS)
Liu, Dalong; Xu, Lijuan
2018-01-01
In order to improve the attitude stability of the robot, this paper proposes an attitude control method based on a kinematics analysis model, addressing the motion planning problems of walking posture transformation, grasping, and control in robot kinematics. In the Cartesian-space analytical model, a three-axis accelerometer, a magnetometer, and a three-axis gyroscope are combined for attitude measurement; the gyroscope data are processed with a Kalman filter, and the quaternion method is used to obtain the robot's attitude angles. Stable inertia parameters are obtained from the centroids of the robot's moving parts, and a random-sampling RRT motion planning method enables accurate position control of the space robot, ensuring that the end effector follows a prescribed trajectory under attitude control. Accurate positioning experiments were carried out with the MT-R robot as the test subject. The simulation results show that the proposed method has better robustness and higher positioning accuracy, improving the reliability and safety of robot operation.
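The sensor-fusion step this abstract describes (blending integrated gyroscope rates with the tilt implied by the accelerometer's gravity reading) can be illustrated for a single attitude angle. A first-order complementary filter stands in here for the Kalman/quaternion machinery; the gain, rates, and readings are assumptions for demonstration.

```python
import math

# Hedged sketch of accelerometer/gyroscope fusion for one attitude
# angle (pitch). alpha trusts the gyro short-term and lets the
# accelerometer's gravity vector correct drift long-term.

def fuse(pitch, gyro_rate, ax, az, dt, alpha=0.98):
    """Blend integrated gyro rate with the accelerometer tilt estimate."""
    accel_pitch = math.atan2(ax, az)   # tilt implied by the gravity vector
    return alpha * (pitch + gyro_rate * dt) + (1 - alpha) * accel_pitch

dt, pitch = 0.01, 0.0
for _ in range(1000):
    # Stationary robot: zero gyro rate, gravity shows a constant 0.1 rad tilt.
    pitch = fuse(pitch, gyro_rate=0.0,
                 ax=math.sin(0.1), az=math.cos(0.1), dt=dt)
```

The estimate converges to the true 0.1 rad tilt; a full quaternion Kalman filter generalizes the same idea to all three attitude angles at once.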
Experiments in Nonlinear Adaptive Control of Multi-Manipulator, Free-Flying Space Robots
NASA Technical Reports Server (NTRS)
Chen, Vincent Wei-Kang
1992-01-01
Sophisticated robots can greatly enhance the role of humans in space by relieving astronauts of low-level, tedious assembly and maintenance chores and allowing them to concentrate on higher-level tasks. Robots and astronauts can work together efficiently, as a team; but the robot must be capable of accomplishing complex operations and yet be easy to use. Multiple cooperating manipulators are essential to dexterity and can greatly broaden the types of activities the robot can achieve; adding adaptive control can greatly ease robot usage by allowing the robot to change its own controller actions, without human intervention, in response to changes in its environment. Previous work in the Aerospace Robotics Laboratory (ARL) has shown the usefulness of a space robot with cooperating manipulators. The research presented in this dissertation extends that work by adding adaptive control. To help achieve this high level of robot sophistication, this research made several advances to the field of nonlinear adaptive control of robotic systems. A nonlinear adaptive control algorithm developed originally for control of robots, but requiring joint positions as inputs, was extended here to handle the much more general case of manipulator endpoint-position commands. A new system modelling technique, called system concatenation, was developed to simplify the generation of a system model for complicated systems, such as a free-flying multiple-manipulator robot system. Finally, the task-space concept was introduced wherein the operator's inputs specify only the robot's task. The robot's subsequent autonomous performance of each task still involves, of course, endpoint positions and joint configurations as subsets. The combination of these developments resulted in a new adaptive control framework that is capable of continuously providing full adaptation capability to the complex space-robot system in all modes of operation.
The new adaptive control algorithm easily handles free-flying systems with multiple, interacting manipulators, and extends naturally to even larger systems. The new adaptive controller was experimentally demonstrated on an ideal testbed in the ARL: a first-ever experimental model of a multi-manipulator, free-flying space robot that is capable of capturing and manipulating free-floating objects without requiring human assistance. A graphical user interface enhanced the robot's usability: it enabled an operator situated at a remote location to issue high-level task-description commands to the robot, and to monitor robot activities as it then carried out each assignment autonomously.
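The dissertation's endpoint-level adaptive controller is not reproduced in the abstract, but the flavor of such parameter-adaptive schemes can be illustrated with a minimal sketch: a Slotine-Li style adaptive law for a one-degree-of-freedom point mass of unknown mass tracking a sinusoidal endpoint trajectory. All gains, the plant model, and the function name are illustrative assumptions, not the work's actual algorithm.

```python
import numpy as np

def track_sine(m_true=2.0, m_hat0=0.5, lam=5.0, K=10.0, gamma=2.0,
               dt=1e-3, T=10.0):
    """Slotine-Li style adaptive tracking for a 1-DOF point mass of
    unknown mass m_true following x_d(t) = sin(t). The parameter
    estimate m_hat adapts online from the composite error s."""
    x, v, m_hat = 0.0, 0.0, m_hat0
    errs = []
    for i in range(int(T / dt)):
        t = i * dt
        xd, vd, ad = np.sin(t), np.cos(t), -np.sin(t)
        e, edot = x - xd, v - vd
        s = edot + lam * e              # composite tracking error
        a_r = ad - lam * edot           # reference acceleration
        u = m_hat * a_r - K * s         # certainty-equivalence control law
        m_hat += -gamma * a_r * s * dt  # gradient parameter adaptation
        v += (u / m_true) * dt          # true (unknown-to-controller) plant
        x += v * dt
        errs.append(abs(e))
    return errs, m_hat
```

Despite starting with a mass estimate off by a factor of four, the adaptation law drives the tracking error toward zero without ever measuring the true mass directly.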
O'Malley, Marcia K; Ro, Tony; Levin, Harvey S
2006-12-01
To describe two new ways of assessing and inducing neuroplasticity in the human brain, transcranial magnetic stimulation (TMS) and robotics, and to investigate and promote the recovery of motor function after brain damage. We identified recent articles and books directly bearing on TMS and robotics. Articles using these tools for purposes other than rehabilitation were excluded. From these studies, we emphasize the methodologic and technical details of these tools as applicable for assessing and inducing plasticity. Because both tools have only recently been used for rehabilitation, the majority of the articles selected for this review have been published only within the last 10 years. We used the PubMed and Compendex databases to find relevant peer-reviewed studies for this review. The studies were required to be relevant to rehabilitation and to use TMS or robotics methodologies. Guidelines were applied via independent extraction by multiple observers. Despite the limited amount of research using these procedures for assessing and inducing neuroplasticity, there is growing evidence that both TMS and robotics can be effective, inexpensive, and convenient tools for assessing neuroplasticity and promoting rehabilitation. Although TMS has primarily been used as an assessment tool for motor function, an increasing number of studies are using TMS as a tool to directly induce plasticity and improve motor function. Similarly, robotic devices have been used for rehabilitation because of their suitability for delivery of highly repeatable training. New directions in robotics-assisted rehabilitation are taking advantage of novel measurements that can be acquired via the devices, enabling unique methods of assessment of motor recovery. As refinements in technology and advances in our knowledge continue, TMS and robotics should play an increasing role in assessing and promoting the recovery of function. 
Ongoing and future studies combining TMS and robotics within the same populations may prove fruitful for a more detailed and comprehensive assessment of the central and peripheral changes in the nervous system during precisely induced recovery.
Computer hardware and software for robotic control
NASA Technical Reports Server (NTRS)
Davis, Virgil Leon
1987-01-01
The KSC has implemented an integrated system that coordinates state-of-the-art robotic subsystems. It is a sensor based real-time robotic control system performing operations beyond the capability of an off-the-shelf robot. The integrated system provides real-time closed loop adaptive path control of position and orientation of all six axes of a large robot; enables the implementation of a highly configurable, expandable testbed for sensor system development; and makes several smart distributed control subsystems (robot arm controller, process controller, graphics display, and vision tracking) appear as intelligent peripherals to a supervisory computer coordinating the overall systems.
Balasubramanian, Karthikeyan; Southerland, Joshua; Vaidya, Mukta; Qian, Kai; Eleryan, Ahmed; Fagg, Andrew H; Sluzky, Marc; Oweiss, Karim; Hatsopoulos, Nicholas
2013-01-01
Operant conditioning with biofeedback has been shown to be an effective method to modify neural activity to generate goal-directed actions in a brain-machine interface. It is particularly useful when neural activity cannot be mathematically mapped to motor actions of the actual body such as in the case of amputation. Here, we implement an operant conditioning approach with visual feedback in which an amputated monkey is trained to control a multiple degree-of-freedom robot to perform a reach-to-grasp behavior. A key innovation is that each controlled dimension represents a behaviorally relevant synergy among a set of joint degrees-of-freedom. We present a number of behavioral metrics by which to assess improvements in BMI control with exposure to the system. The use of non-human primates with chronic amputation is arguably the most clinically-relevant model of human amputation that could have direct implications for developing a neural prosthesis to treat humans with missing upper limbs.
Unified Approach To Control Of Motions Of Mobile Robots
NASA Technical Reports Server (NTRS)
Seraji, Homayoun
1995-01-01
Improved computationally efficient scheme developed for on-line coordinated control of both manipulation and mobility of robots that include manipulator arms mounted on mobile bases. Present scheme similar to one described in "Coordinated Control of Mobile Robotic Manipulators" (NPO-19109). Both schemes based on configuration-control formalism. Present one incorporates explicit distinction between holonomic and nonholonomic constraints. Several other prior articles in NASA Tech Briefs discussed aspects of configuration-control formalism. These include "Increasing the Dexterity of Redundant Robots" (NPO-17801), "Redundant Robot Can Avoid Obstacles" (NPO-17852), "Configuration-Control Scheme Copes with Singularities" (NPO-18556), "More Uses for Configuration Control of Robots" (NPO-18607/NPO-18608).
Design and implementation of self-balancing coaxial two wheel robot based on HSIC
NASA Astrophysics Data System (ADS)
Hu, Tianlian; Zhang, Hua; Dai, Xin; Xia, Xianfeng; Liu, Ran; Qiu, Bo
2007-12-01
This thesis studies the position and orientation control of a self-balancing coaxial two-wheel robot based on human simulated intelligent control (HSIC) theory. Adopting the Lagrange equation, the dynamic model of the self-balancing coaxial two-wheel robot is built, and the Sensory-motor Intelligent Schemas (SMIS) of the HSIC controller for the robot are designed by analyzing its movement and simulating a human controller. During the robot's motion, by perceiving the position and orientation of the robot and using a multi-mode control strategy based on characteristic identification, the HSIC controller enables the robot to maintain its posture. A simulation platform was established using Matlab/Simulink, and a motion controller was designed and realized on the RT-Linux real-time operating system, employing a high-speed ARM9 processor (S3C2440) as the kernel of the motion controller. The effectiveness of the new design is verified by experiment.
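The core HSIC idea, switching control modes according to an identified characteristic of the error state, can be sketched in a toy form. This is only a loose illustration under assumed gains and a unit-mass double-integrator plant, not the SMIS design from the thesis: far from the target a bang-bang mode applies full effort; near the target a smooth PD mode holds the posture.

```python
import numpy as np

def multi_mode_control(e, edot, e_big=1.0, u_max=3.0, kp=8.0, kd=4.0):
    """Multi-mode control in the spirit of HSIC: a characteristic of the
    error state selects the mode. Far from the target, a bang-bang mode
    applies full effort (velocity ignored here for simplicity); near the
    target, a smooth PD mode holds the posture."""
    if abs(e) > e_big:
        return -np.sign(e) * u_max
    return -kp * e - kd * edot

def run_to_rest(x0=2.0, dt=1e-3, T=8.0):
    x, v = x0, 0.0
    for _ in range(int(T / dt)):
        u = multi_mode_control(x, v)
        v += u * dt              # unit-mass double-integrator plant
        x += v * dt
    return x, v
```

The mode switch trades the speed of a saturated correction for the smooth settling of PD regulation, which is the essential multi-mode trade-off the characteristic-identification strategy exploits.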
Toward real-time tumor margin identification in image-guided robotic brain tumor resection
NASA Astrophysics Data System (ADS)
Hu, Danying; Jiang, Yang; Belykh, Evgenii; Gong, Yuanzheng; Preul, Mark C.; Hannaford, Blake; Seibel, Eric J.
2017-03-01
For patients with malignant brain tumors (glioblastomas), a safe maximal resection of tumor is critical for an increased survival rate. However, complete resection of the cancer is hard to achieve due to the invasive nature of these tumors, where the tumor margin blurs from frank tumor into more normal-appearing brain tissue that single cells or clusters of malignant cells may nevertheless have invaded. Recent developments in fluorescence imaging techniques have shown great potential for improved surgical outcomes by providing surgeons with intraoperative contrast-enhanced visualization of the tumor in neurosurgery. Current near-infrared (NIR) fluorophores, such as indocyanine green (ICG), cyanine5.5 (Cy5.5), and 5-aminolevulinic acid (5-ALA)-induced protoporphyrin IX (PpIX), are showing clinical potential for targeting and guiding resections of such tumors. Real-time tumor margin identification in NIR imaging could be helpful to both surgeons and patients by reducing the operation time and space required by other imaging modalities such as intraoperative MRI, and has the potential to integrate with robotically assisted surgery. In this paper, a segmentation method based on the Chan-Vese model was developed for identifying the tumor boundaries in an ex-vivo mouse brain from relatively noisy fluorescence images acquired by a multimodal scanning fiber endoscope (mmSFE). Tumor contours were obtained iteratively by minimizing an energy function formed by a level set function and the segmentation model. Quantitative segmentation metrics based on tumor-to-background (T/B) ratio were evaluated. Results demonstrated the feasibility of detecting brain tumor margins in quasi-real time; the method has the potential to yield improved-precision brain tumor resection techniques or even robotic interventions in the future.
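The Chan-Vese model segments an image by finding two regions whose mean intensities best explain the pixels. A stripped-down sketch (the curvature/length regularization term of the full level-set formulation is omitted, so this is an assumption-laden simplification, not the paper's method) alternates between updating the region means and reassigning pixels:

```python
import numpy as np

def chan_vese_two_phase(img, n_iter=50):
    """Minimal two-phase Chan-Vese-style segmentation (curvature/length
    term omitted): alternately update the inside/outside means c1, c2
    and reassign each pixel to the mean with the smaller squared
    residual, i.e. compare (I - c1)^2 against (I - c2)^2 pointwise."""
    mask = img > img.mean()          # initial sign of the level set
    for _ in range(n_iter):
        c1 = img[mask].mean() if mask.any() else 0.0
        c2 = img[~mask].mean() if (~mask).any() else 0.0
        new_mask = (img - c1) ** 2 < (img - c2) ** 2
        if np.array_equal(new_mask, mask):
            break                    # converged
        mask = new_mask
    return mask
```

On a synthetic noisy image with a bright "tumor" square on a darker background, the iteration recovers the bright region; the full model adds a length penalty that smooths the contour against noise.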
In vivo robotics: the automation of neuroscience and other intact-system biological fields
Kodandaramaiah, Suhasa B.; Boyden, Edward S.; Forest, Craig R.
2013-01-01
Robotic and automation technologies have played a huge role in in vitro biological science, having proved critical for scientific endeavors such as genome sequencing and high-throughput screening. Robotic and automation strategies are beginning to play a greater role in in vivo and in situ sciences, especially when it comes to the difficult in vivo experiments required for understanding the neural mechanisms of behavior and disease. In this perspective, we discuss the prospects for robotics and automation to impact neuroscientific and intact-system biology fields. We discuss how robotic innovations might be created to open up new frontiers in basic and applied neuroscience, and present a concrete example with our recent automation of in vivo whole cell patch clamp electrophysiology of neurons in the living mouse brain. PMID:23841584
Social Robotics in Therapy of Apraxia of Speech
Alonso-Martín, Fernando
2018-01-01
Apraxia of speech is a motor speech disorder in which messages from the brain to the mouth are disrupted, resulting in an inability to move the lips or tongue to the right place to pronounce sounds correctly. Current therapies for this condition involve a therapist who conducts the exercises in one-on-one sessions. Our aim is to work in the line of robotic therapies in which a robot is able to perform a therapy session partially or fully autonomously, endowing a social robot with the ability to assist therapists in apraxia-of-speech rehabilitation exercises. Therefore, we integrate computer vision and machine learning techniques to detect the mouth pose of the user and, on top of that, our social robot performs the different steps of the therapy autonomously using multimodal interaction. PMID:29713440
Wireless intraoral tongue control of an assistive robotic arm for individuals with tetraplegia.
Andreasen Struijk, Lotte N S; Egsgaard, Line Lindhardt; Lontis, Romulus; Gaihede, Michael; Bentsen, Bo
2017-11-06
For an individual with tetraplegia, assistive robotic arms provide a potentially invaluable opportunity for rehabilitation. However, there is a lack of available control methods to allow these individuals to fully control the assistive arms. Here we show that it is possible for an individual with tetraplegia to use the tongue to fully control all 14 movements of an assistive robotic arm in three-dimensional space using a wireless intraoral control system, thus allowing for numerous activities of daily living. We developed a tongue-based robotic control method incorporating a multi-sensor inductive tongue interface. One able-bodied individual and one individual with tetraplegia performed a proof-of-concept study by controlling the robot with their tongue using direct actuator control and endpoint control, respectively. After 30 min of training, the able-bodied participant tongue-controlled the assistive robot to pick up a roll of tape in 80% of the attempts. Further, the individual with tetraplegia succeeded in fully tongue-controlling the assistive robot to reach for and touch a roll of tape in 100% of the attempts and to pick up the roll in 50% of the attempts. Furthermore, she controlled the robot to grasp a bottle of water and pour its contents into a cup; her first functional action in 19 years. To our knowledge, this is the first time that an individual with tetraplegia has been able to fully control an assistive robotic arm using a wireless intraoral tongue interface. The tongue interface used to control the robot is currently available for control of computers and of powered wheelchairs, and the robot employed in this study is also commercially available. Therefore, the presented results may translate into available solutions within a reasonable time.
Wang, Ying; Lin, Xudong; Chen, Xi; Chen, Xian; Xu, Zhen; Zhang, Wenchong; Liao, Qinghai; Duan, Xin; Wang, Xin; Liu, Ming; Wang, Feng; He, Jufang; Shi, Peng
2017-10-01
Many nanomaterials can be used as sensors or transducers in biomedical research and they form the essential components of transformative novel biotechnologies. In this study, we present an all-optical method for tetherless remote control of neural activity using fully implantable micro-devices based on upconversion technology. Upconversion nanoparticles (UCNPs) were used as transducers to convert near-infrared (NIR) energy to visible light in order to stimulate neurons expressing different opsin proteins. In our setup, UCNPs were packaged in a glass micro-optrode to form an implantable device with superb long-term biocompatibility. We showed that remotely applied NIR illumination is able to reliably trigger spiking activity in rat brains. In combination with a robotic laser projection system, the upconversion-based tetherless neural stimulation technique was implemented to modulate brain activity in various regions, including the striatum, ventral tegmental area, and visual cortex. Using this system, we were able to achieve behavioral conditioning in freely moving animals. Notably, our microscale device was at least one order of magnitude smaller in size (∼100 μm in diameter) and two orders of magnitude lighter in weight (less than 1 mg) than existing wireless optogenetic devices based on light-emitting diodes. This feature allows simultaneous implantation of multiple UCNP-optrodes to achieve modulation of brain function to control complex animal behavior. We believe that this technology not only represents a novel practical application of upconversion nanomaterials, but also opens up new possibilities for remote control of neural activity in the brains of behaving animals. Copyright © 2017 Elsevier Ltd. All rights reserved.
Method and apparatus for automatic control of a humanoid robot
NASA Technical Reports Server (NTRS)
Abdallah, Muhammad E (Inventor); Platt, Robert (Inventor); Wampler, II, Charles W. (Inventor); Sanders, Adam M (Inventor); Reiland, Matthew J (Inventor)
2013-01-01
A robotic system includes a humanoid robot having a plurality of joints adapted for force control with respect to an object acted upon by the robot, a graphical user interface (GUI) for receiving an input signal from a user, and a controller. The GUI provides the user with intuitive programming access to the controller. The controller controls the joints using an impedance-based control framework, which provides object level, end-effector level, and/or joint space-level control of the robot in response to the input signal. A method for controlling the robotic system includes receiving the input signal via the GUI, e.g., a desired force, and then processing the input signal using a host machine to control the joints via an impedance-based control framework. The framework provides object level, end-effector level, and/or joint space-level control of the robot, and allows for functional-based GUI to simplify implementation of a myriad of operating modes.
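The impedance-based framework described above renders the robot compliant: rather than commanding positions directly, the controller commands forces so the end effector behaves like a virtual spring-damper attached to the desired pose. A minimal sketch (illustrative gains and a unit-mass operational-space model, not the patented controller) is:

```python
import numpy as np

def impedance_force(x, xdot, x_des, K, D):
    """Object-level impedance law: the end effector is made to behave
    like a spring-damper attached to the desired pose. On a real arm
    this force would be mapped to joint torques via tau = J^T f."""
    return K @ (x_des - x) - D @ xdot

def settle(x_des, dt=1e-3, T=5.0):
    K = np.diag([50.0, 50.0])   # desired stiffness
    D = np.diag([14.0, 14.0])   # desired damping (near critical for unit mass)
    x, xdot = np.zeros(2), np.zeros(2)
    for _ in range(int(T / dt)):
        f = impedance_force(x, xdot, x_des, K, D)
        xdot += f * dt           # unit-mass operational-space dynamics
        x += xdot * dt
    return x
```

Because K and D parameterize the apparent stiffness and damping, the same framework can be tuned per task, soft for contact, stiff for free-space positioning, which is what makes impedance control attractive for object-level force interaction.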
Three degree-of-freedom force feedback control for robotic mating of umbilical lines
NASA Technical Reports Server (NTRS)
Fullmer, R. Rees
1988-01-01
The use of robotic manipulators for the mating and demating of umbilical fuel lines to the Space Shuttle Vehicle prior to launch is investigated. Force feedback control is necessary to minimize the contact forces which develop during mating. The objective is to develop and demonstrate a working robotic force control system. Initial experimental force control tests with an ASEA IRB-90 industrial robot using the system's Adaptive Control capabilities indicated that control stability would be a primary problem. An investigation of the ASEA system showed a 0.280-second software delay between force input commands and the output of command voltages to the servo system. This computational delay was identified as the primary cause of the instability. Tests on a second path into the ASEA's control computer, using the MicroVax II supervisory computer, showed that the time delay would be comparable, offering no stability improvement. An alternative approach was therefore developed in which the digital control system of the robot was disconnected and an analog electronic force controller was used to control the robot's servo system directly. This method allowed the robot to use force feedback control while in rigid contact with a moving three-degree-of-freedom target, and tests indicated adequate force feedback control even under worst-case conditions. A strategy for combining this analog force controller with the digitally controlled vision system was also developed: the system switches between the digital controller when using vision control and the analog controller when using force control, depending on whether or not the mating plates are in contact.
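The destabilizing effect of such a computational delay can be illustrated with a minimal simulation (illustrative gain and environment stiffness, not the ASEA system's actual parameters): an integral force loop against a stiff surface that converges without delay becomes unstable once a 0.28 s measurement delay is inserted, because the loop keeps pushing on stale force readings.

```python
import numpy as np

def force_loop(delay_s, gain=1.0, k_env=10.0, f_des=5.0, dt=0.004, T=6.0):
    """Integral force control against a stiff surface: the commanded
    position is driven by the force error, but the force measurement
    reaches the controller only after delay_s seconds."""
    n = int(T / dt)
    d = int(round(delay_s / dt))          # delay in control steps
    x = np.zeros(n + 1)                   # commanded/actual position
    forces = np.zeros(n + 1)              # contact force k_env * x
    for i in range(n):
        f_meas = forces[i - d] if i >= d else 0.0
        x[i + 1] = x[i] + gain * (f_des - f_meas) * dt
        forces[i + 1] = k_env * x[i + 1]
    return forces
```

With these numbers the loop-gain-times-delay product exceeds the classical pi/2 stability bound for a delayed integrator, so the delayed loop oscillates with growing amplitude, the same qualitative failure that motivated the analog bypass.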
Evolution of Collective Behaviors for a Real Swarm of Aquatic Surface Robots.
Duarte, Miguel; Costa, Vasco; Gomes, Jorge; Rodrigues, Tiago; Silva, Fernando; Oliveira, Sancho Moura; Christensen, Anders Lyhne
2016-01-01
Swarm robotics is a promising approach for the coordination of large numbers of robots. While previous studies have shown that evolutionary robotics techniques can be applied to obtain robust and efficient self-organized behaviors for robot swarms, most studies have been conducted in simulation, and the few that have been conducted on real robots have been confined to laboratory environments. In this paper, we demonstrate for the first time a swarm robotics system with evolved control successfully operating in a real and uncontrolled environment. We evolve neural network-based controllers in simulation for canonical swarm robotics tasks, namely homing, dispersion, clustering, and monitoring. We then assess the performance of the controllers on a real swarm of up to ten aquatic surface robots. Our results show that the evolved controllers transfer successfully to real robots and achieve a performance similar to the performance obtained in simulation. We validate that the evolved controllers display key properties of swarm intelligence-based control, namely scalability, flexibility, and robustness on the real swarm. We conclude with a proof-of-concept experiment in which the swarm performs a complete environmental monitoring task by combining multiple evolved controllers.
Controlling the autonomy of a reconnaissance robot
NASA Astrophysics Data System (ADS)
Dalgalarrondo, Andre; Dufourd, Delphine; Filliat, David
2004-09-01
In this paper, we present our research on the control of a mobile robot for indoor reconnaissance missions. Based on previous work concerning our robot control architecture HARPIC, we have developed a man-machine interface and software components that allow a human operator to control a robot at different levels of autonomy. This work aims at studying how a robot could be helpful in indoor reconnaissance and surveillance missions in hostile environments. In such missions, since a soldier faces many threats and must protect himself while looking around and holding his weapon, he cannot devote his attention to the teleoperation of the robot. Moreover, robots are not yet able to conduct complex missions in a fully autonomous mode. Thus, in a pragmatic way, we have built software that allows dynamic swapping between control modes (manual, safeguarded, and behavior-based) while automatically performing map building and localization of the robot. It also includes surveillance functions such as movement detection and is designed for multirobot extensions. We first describe the design of our agent-based robot control architecture and discuss the various ways to control and interact with a robot. The main modules and functionalities implementing those ideas in our architecture are detailed. More precisely, we show how we combine manual control, obstacle avoidance, wall and corridor following, waypoint navigation, and planned travel. Some experiments on a Pioneer robot equipped with various sensors are presented. Finally, we suggest some promising directions for the development of robots and user interfaces for hostile environments and discuss our planned future improvements.
Control strategies for robots in contact
NASA Astrophysics Data System (ADS)
Park, Jaeheung
In the field of robotics, there is a growing need to provide robots with the ability to interact with complex and unstructured environments. Operations in such environments pose significant challenges in terms of sensing, planning, and control. In particular, it is critical to design control algorithms that account for the dynamics of the robot and environment at multiple contacts. The work in this thesis focuses on the development of a control framework that addresses these issues. The approaches are based on the operational space control framework and estimation methods. By accounting for the dynamics of the robot and environment, modular and systematic methods are developed for robots interacting with the environment at multiple locations. The proposed force control approach demonstrates high performance in the presence of uncertainties. Building on this basic capability, new control algorithms have been developed for haptic teleoperation, multi-contact interaction with the environment, and whole-body motion of non-fixed-base robots. These control strategies have been validated through simulations and experiments on physical robots. The results demonstrate the effectiveness of the new control structure and its robustness to uncertainties. The contact control strategies presented in this thesis are expected to contribute to advanced controller design for humanoid and other complex robots interacting with their environments.
Control of wheeled mobile robot in restricted environment
NASA Astrophysics Data System (ADS)
Ali, Mohammed A. H.; En, Chang Yong
2018-03-01
This paper presents a simulation and practical control system for a wheeled mobile robot in a restricted environment. A wheeled mobile robot with 3 wheels is fabricated and controlled by proportional-derivative active force control (PD-AFC) to move in a pre-planned restricted environment while maintaining the tracking errors at zero level. A control system with two loops, an outer loop using a PD controller and an inner loop using Active Force Control, is designed to control the wheeled mobile robot. A fuzzy logic controller is implemented in the Active Force Control loop to estimate the inertia matrix that is then used to calculate the actual torque applied to the wheeled mobile robot. The mobile robot is tested on two different trajectories, namely a circular and a straight path, and the actual and desired paths are compared.
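The two-loop PD-AFC structure can be sketched for a single wheel axis. This is an illustrative toy under assumed gains, a fixed (rather than fuzzy-estimated) inertia estimate, and a made-up sinusoidal disturbance, not the paper's implementation: the inner AFC loop estimates the disturbance torque from the measured acceleration and cancels it, while the outer PD loop shapes the motion.

```python
import numpy as np

def track_step(use_afc, I_true=1.5, I_hat=1.2, kp=25.0, kd=10.0,
               dt=1e-3, T=10.0):
    """Outer PD loop plus inner Active Force Control loop for one wheel
    axis: the AFC estimates the disturbance torque from the measured
    acceleration, d_hat = I_hat*a_meas - u_prev, and cancels it."""
    x = v = a_meas = u_prev = 0.0
    errs = []
    for i in range(int(T / dt)):
        t = i * dt
        e, edot = 1.0 - x, -v               # step reference x_d = 1
        a_des = kp * e + kd * edot          # outer PD loop
        d_hat = (I_hat * a_meas - u_prev) if use_afc else 0.0
        u = I_hat * a_des - d_hat           # inner AFC compensation
        d = 2.0 * np.sin(3.0 * t)           # unknown disturbance torque
        a = (u + d) / I_true                # true plant dynamics
        v += a * dt
        x += v * dt
        a_meas, u_prev = a, u
        errs.append(abs(e))
    return errs
```

With the AFC loop enabled, the oscillatory disturbance is cancelled almost completely even though the inertia estimate is deliberately wrong; the plain PD loop is left with a persistent oscillatory tracking error.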
Control of autonomous robot using neural networks
NASA Astrophysics Data System (ADS)
Barton, Adam; Volna, Eva
2017-07-01
The aim of this article is to design a method of controlling an autonomous robot using artificial neural networks. The introductory part describes control issues from the perspective of autonomous robot navigation and surveys current mobile robots controlled by neural networks. The core of the article is the design of the controlling neural network, and the generation and filtration of the training set using ART1 (Adaptive Resonance Theory). The outcome of the practical part is an assembled Lego Mindstorms EV3 robot solving the problem of avoiding obstacles in space. To verify the models of autonomous robot behavior, a set of experiments was created, as well as evaluation criteria. The speed of each motor was adjusted by the controlling neural network with respect to the situation in which the robot found itself.
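ART1-style filtering of a training set, as used in the article, clusters binary sensor patterns so redundant examples collapse into shared categories. A minimal fast-learning sketch (simplified choice function, no complement coding; the vigilance value and patterns below are illustrative assumptions):

```python
import numpy as np

def art1_cluster(patterns, vigilance=0.7):
    """Minimal ART1-style fast-learning clustering of binary vectors:
    each pattern is assigned to the first prototype that passes the
    vigilance (resonance) test, and that prototype is updated by
    logical AND; if none resonates, a new category is created."""
    prototypes, labels = [], []
    for p in patterns:
        p = np.asarray(p, dtype=bool)
        assigned = None
        # try categories in order of choice value |p AND w| / |w|
        order = sorted(range(len(prototypes)),
                       key=lambda j: -(p & prototypes[j]).sum()
                                      / (0.5 + prototypes[j].sum()))
        for j in order:
            match = (p & prototypes[j]).sum() / max(p.sum(), 1)
            if match >= vigilance:                  # vigilance test
                prototypes[j] = p & prototypes[j]   # fast learning
                assigned = j
                break
        if assigned is None:                        # no resonance
            prototypes.append(p)
            assigned = len(prototypes) - 1
        labels.append(assigned)
    return labels, prototypes
```

Similar patterns fall into one category while a dissimilar pattern opens a new one, which is exactly the property that lets ART1 prune near-duplicate sensor situations from a training set before the controlling network is trained.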
Wang, Yin
2015-01-01
Notwithstanding the significant role that human–robot interactions (HRI) will play in the near future, limited research has explored the neural correlates of feeling eerie in response to social robots. To address this empirical lacuna, the current investigation examined brain activity using functional magnetic resonance imaging while a group of participants (n = 26) viewed a series of human–human interactions (HHI) and HRI. Although brain sites constituting the mentalizing network were found to respond to both types of interactions, systematic neural variation across sites signaled diverging social-cognitive strategies during HHI and HRI processing. Specifically, HHI elicited increased activity in the left temporal–parietal junction indicative of situation-specific mental state attributions, whereas HRI recruited the precuneus and the ventromedial prefrontal cortex (VMPFC) suggestive of script-based social reasoning. Activity in the VMPFC also tracked feelings of eeriness towards HRI in a parametric manner, revealing a potential neural correlate for a phenomenon known as the uncanny valley. By demonstrating how understanding social interactions depends on the kind of agents involved, this study highlights pivotal sub-routes of impression formation and identifies prominent challenges in the use of humanoid robots. PMID:25911418
Rehabilitation robotics: pilot trial of a spatial extension for MIT-Manus
Krebs, Hermano I; Ferraro, Mark; Buerger, Stephen P; Newbery, Miranda J; Makiyama, Antonio; Sandmann, Michael; Lynch, Daniel; Volpe, Bruce T; Hogan, Neville
2004-01-01
Background Previous results with the planar robot MIT-MANUS demonstrated positive benefits in trials with over 250 stroke patients. Consistent with motor learning, the positive effects did not generalize to other muscle groups or limb segments. Therefore we are designing a new class of robots to exercise other muscle groups or limb segments. This paper presents basic engineering aspects of a novel robotic module that extends our approach to anti-gravity movements out of the horizontal plane, and a pilot study with 10 outpatients. Patients were trained during the initial six weeks with the planar module (i.e., performance-based training limited to horizontal movements with gravity compensation). This training was followed by six weeks of robotic therapy that focused on performing vertical arm movements against gravity. The 12-week protocol included three one-hour robot therapy sessions per week (36 robot treatment sessions in total). Results The pilot study demonstrated that the protocol was safe and well tolerated, with no patient presenting any adverse effect. Consistent with our past experience with persons with chronic stroke, there was a statistically significant reduction in tone measurement from admission to discharge of performance-based planar robot therapy, and we did not observe increases in muscle tone or spasticity during the anti-gravity training protocol. Pilot results also showed a reduction in shoulder-elbow impairment following planar horizontal training, and suggested an additional reduction in shoulder-elbow impairment following the anti-gravity training. Conclusion Our clinical experiments have focused on a fundamental question: whether task-specific robotic training influences brain recovery. To date, several studies demonstrate that in mature and damaged nervous systems, nurture indeed has an effect on nature. The improved recovery is most pronounced in the trained limb segments. 
We have now embarked on experiments that test whether we can continue to influence recovery, long after the acute insult, with a novel class of spatial robotic devices. These pilot results support the pursuit of further clinical trials to test efficacy and to determine optimal therapy following brain injury. PMID:15679916
Mylonas, N; Damianou, C
2014-03-01
A prototype magnetic resonance imaging (MRI)-compatible positioning device that navigates a high intensity focused ultrasound (HIFU) transducer is presented. The positioning device has three user-controlled degrees of freedom that allow access to brain targets using a lateral coupling approach. The positioning device can be used for the treatment of brain cancer (thermal mode ultrasound) or ischemic stroke (mechanical mode ultrasound). The positioning device incorporates only MRI-compatible materials such as piezoelectric motors, ABS plastic, brass screws, and a brass rack and pinion. The robot has the ability to move the transducer accurately, thus creating overlapping lesions in rabbit brain in vivo. The registration and repeatability of the system were evaluated using tissues in vitro and a gel phantom, and were also tested in vivo in the brain of a rabbit. A simple, cost-effective, portable positioning device has been developed which can be used in virtually any clinical MRI scanner, since it can be placed on the table of the scanner. This system can be used in the future to treat patients with brain cancer and ischemic stroke. Copyright © 2013 John Wiley & Sons, Ltd.
How do we think machines think? An fMRI study of alleged competition with an artificial intelligence
Chaminade, Thierry; Rosset, Delphine; Da Fonseca, David; Nazarian, Bruno; Lutcher, Ewald; Cheng, Gordon; Deruelle, Christine
2012-01-01
Mentalizing is defined as the inference of the mental states of fellow humans, and is a particularly important skill for social interactions. Here we assessed whether activity in brain areas involved in mentalizing is specific to the processing of mental states or can be generalized to the inference of non-mental states, by comparing brain responses during interaction with an intentional and an artificial agent. Participants were scanned using fMRI during interactive rock-paper-scissors games while believing their opponent was a fellow human (Intentional agent, Int), a humanoid robot endowed with an artificial intelligence (Artificial agent, Art), or a computer playing randomly (Random agent, Rnd). Participants' subjective reports indicated that they adopted different stances toward the three agents. The contrast of brain activity during interaction with the artificial and the random agents did not yield any cluster at the threshold used, suggesting the absence of a reproducible stance when interacting with an artificial intelligence. We probed responses to the artificial agent in regions of interest corresponding to clusters found in the contrast between the intentional and the random agents. In the precuneus, involved in working memory, the posterior intraparietal sulcus, involved in the control of attention, and the dorsolateral prefrontal cortex, involved in executive functions, brain activity for Art was larger than for Rnd but lower than for Int, supporting the intrinsically engaging nature of social interactions. A similar pattern in the left premotor cortex and anterior intraparietal sulcus, involved in motor resonance, suggested that participants simulated human actions, and to a lesser extent humanoid robot actions, when playing the game. Finally, mentalizing regions, the medial prefrontal cortex and right temporoparietal junction, responded to the human only, supporting the specificity of mentalizing areas for interactions with intentional agents. PMID:22586381
Framework and Method for Controlling a Robotic System Using a Distributed Computer Network
NASA Technical Reports Server (NTRS)
Sanders, Adam M. (Inventor); Strawser, Philip A. (Inventor); Barajas, Leandro G. (Inventor); Permenter, Frank Noble (Inventor)
2015-01-01
A robotic system for performing an autonomous task includes a humanoid robot having a plurality of compliant robotic joints, actuators, and other integrated system devices that are controllable in response to control data from various control points, and having sensors for measuring feedback data at the control points. The system includes a multi-level distributed control framework (DCF) for controlling the integrated system components over multiple high-speed communication networks. The DCF has a plurality of first controllers each embedded in a respective one of the integrated system components, e.g., the robotic joints, a second controller coordinating the components via the first controllers, and a third controller for transmitting a signal commanding performance of the autonomous task to the second controller. The DCF virtually centralizes all of the control data and the feedback data in a single location to facilitate control of the robot across the multiple communication networks.
Controlling robots in the home: Factors that affect the performance of novice robot operators.
McGinn, Conor; Sena, Aran; Kelly, Kevin
2017-11-01
For robots to successfully integrate into everyday life, it is important that they can be effectively controlled by laypeople. However, the task of manually controlling mobile robots can be challenging due to demanding cognitive and sensorimotor requirements. This research explores the effect that the built environment has on the manual control of domestic service robots. In this study, a virtual reality simulation of a domestic robot control scenario was developed. The performance of fifty novice users was evaluated, and their subjective experiences recorded through questionnaires. Through quantitative and qualitative analysis, it was found that untrained operators frequently perform poorly at navigation-based robot control tasks. The study found that passing through doorways accounted for the largest number of collisions, and was consistently identified as a very difficult operation to perform. These findings suggest that homes and other human-orientated settings present significant challenges to robot control. Copyright © 2017 Elsevier Ltd. All rights reserved.
Kinematics Control and Analysis of Industrial Robot
NASA Astrophysics Data System (ADS)
Zhu, Tongbo; Cai, Fan; Li, Yongmei; Liu, Wei
2018-03-01
The current state of robot development, basic principles, and control systems are briefly introduced. The research focuses on robot kinematics and motion control. A structural analysis of a planar articulated (SCARA) robot is presented: a coordinate system is established to obtain the position and orientation matrix of the end effector, a method of robot kinematic analysis based on homogeneous transformations is proposed, and the kinematic solution of the robot is obtained. The industrial robot's kinematics equation and the formula for forward kinematics are established by example. Finally, the kinematic analysis of this robot is verified through examples. The work provides a basis for structural design and motion control, and actively promotes the motion control of industrial robots.
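The homogeneous-transformation method described in the abstract can be sketched for a SCARA arm as follows; the link lengths and joint values are illustrative, not taken from the paper.

```python
import numpy as np

def rot_z(theta):
    """Homogeneous transform: rotation about z by theta (radians)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0, 0],
                     [s,  c, 0, 0],
                     [0,  0, 1, 0],
                     [0,  0, 0, 1]])

def trans(x, y, z):
    """Homogeneous transform: pure translation."""
    T = np.eye(4)
    T[:3, 3] = [x, y, z]
    return T

def scara_fk(theta1, theta2, d3, theta4, l1=0.4, l2=0.3):
    """Forward kinematics of a SCARA arm: two revolute joints in the
    horizontal plane, a prismatic joint along z, and a wrist rotation.
    Link lengths l1, l2 are illustrative values (metres)."""
    T = (rot_z(theta1) @ trans(l1, 0, 0) @
         rot_z(theta2) @ trans(l2, 0, 0) @
         trans(0, 0, -d3) @ rot_z(theta4))
    return T  # 4x4 position-and-orientation matrix of the end effector

T = scara_fk(np.pi / 2, 0.0, 0.1, 0.0)
print(np.round(T[:3, 3], 3))  # end-effector position
```

Chaining the per-joint transforms in this way yields exactly the position and orientation matrix of the end effector that the abstract refers to.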
Determining of a robot workspace using the integration of a CAD system with a virtual control system
NASA Astrophysics Data System (ADS)
Herbuś, K.; Ociepka, P.
2016-08-01
The paper presents a method for determining the workspace of an industrial robot using an approach that integrates a 3D model of the robot with a virtual control system. The robot model, together with its work environment and prepared for motion simulation, was created in the "Motion Simulation" module of the Siemens PLM NX software. In this model, components of the "link" type were created to map the geometrical form of particular elements of the robot, and components of the "joint" type to map how the "link" components cooperate. The paper proposes a solution in which the control process of the virtual robot resembles that of a real robot operated with a manual control panel (teach pendant). For this purpose, the control application "JOINT" was created, which manipulates the virtual robot in accordance with its internal control system. A set of procedures stored in an .xlsx file integrates the 3D robot model, working in the CAD/CAE-class system, with the elaborated control application.
Brain-computer interfaces in medicine.
Shih, Jerry J; Krusienski, Dean J; Wolpaw, Jonathan R
2012-03-01
Brain-computer interfaces (BCIs) acquire brain signals, analyze them, and translate them into commands that are relayed to output devices that carry out desired actions. BCIs do not use normal neuromuscular output pathways. The main goal of BCI is to replace or restore useful function to people disabled by neuromuscular disorders such as amyotrophic lateral sclerosis, cerebral palsy, stroke, or spinal cord injury. From initial demonstrations of electroencephalography-based spelling and single-neuron-based device control, researchers have gone on to use electroencephalographic, intracortical, electrocorticographic, and other brain signals for increasingly complex control of cursors, robotic arms, prostheses, wheelchairs, and other devices. Brain-computer interfaces may also prove useful for rehabilitation after stroke and for other disorders. In the future, they might augment the performance of surgeons or other medical professionals. Brain-computer interface technology is the focus of a rapidly growing research and development enterprise that is greatly exciting scientists, engineers, clinicians, and the public in general. Its future achievements will depend on advances in 3 crucial areas. Brain-computer interfaces need signal-acquisition hardware that is convenient, portable, safe, and able to function in all environments. Brain-computer interface systems need to be validated in long-term studies of real-world use by people with severe disabilities, and effective and viable models for their widespread dissemination must be implemented. Finally, the day-to-day and moment-to-moment reliability of BCI performance must be improved so that it approaches the reliability of natural muscle-based function. Copyright © 2012 Mayo Foundation for Medical Education and Research. Published by Elsevier Inc. All rights reserved.
Robotic Head and Neck Surgery: History, Technical Evolution and the Future.
Garas, George; Arora, Asit
2018-06-20
The first application of robotic technology in surgery was described in 1985 when a robot was used to define the trajectory for a stereotactic brain biopsy. Following its successful application in a variety of surgical operations, the da Vinci® robot, the most widely used surgical robot at present, made its clinical debut in otorhinolaryngology and head and neck surgery in 2005 when the first transoral robotic surgery (TORS) resections of base of tongue neoplasms were reported. Subsequently, the indications for TORS rapidly expanded, and they now include tumours of the oropharynx, hypopharynx, parapharyngeal space, and supraglottic larynx, as well as obstructive sleep apnoea (OSA). The da Vinci® robot has also been successfully used for scarless-in-the-neck thyroidectomy and parathyroidectomy. At present, the main barrier to the wider uptake of robotic surgery is the prohibitive cost of the da Vinci® robotic system. Several novel, flexible surgical robots are currently being developed that are likely to not only enhance patient safety and expand current indications but also drive down costs, thus making this innovation more widely available. Future directions relate to overlay technology through augmented reality/AR that allows real-time image-guidance, miniaturisation (nanorobots), and the development of autonomous robots. © 2018 S. Karger AG, Basel.
Bearing-based localization for leader-follower formation control
Han, Qing; Ren, Shan; Lang, Hao; Zhang, Changliang
2017-01-01
The observability of the leader robot system and leader-follower formation control are studied. First, nonlinear observability is analyzed for the case in which the leader robot observes landmarks. Second, the system is shown to be completely observable when the leader robot observes two different landmarks. When the leader robot system is observable, multiple robots can rapidly form and maintain a formation based only on the bearing information that the follower robots observe from the leader robot. Finally, simulations confirm the effectiveness of the proposed formation control. PMID:28426706
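A minimal sketch of what bearing-only following can look like for a unicycle follower; the speed, gain, and time step are assumed values, not the paper's control law.

```python
import math

def follower_step(state, leader_pos, desired_bearing, v=0.5, k=2.0, dt=0.05):
    """One control step for a unicycle follower that measures only the
    bearing to the leader (no range). Speed v, gain k, and time step dt
    are illustrative values."""
    x, y, th = state
    lx, ly = leader_pos
    err = math.atan2(ly - y, lx - x) - th - desired_bearing
    err = math.atan2(math.sin(err), math.cos(err))  # wrap to [-pi, pi]
    omega = k * err                                  # steer to hold the bearing
    return (x + v * math.cos(th) * dt,
            y + v * math.sin(th) * dt,
            th + omega * dt)

# Keep the leader dead ahead (desired body-frame bearing of zero).
state = (0.0, 0.0, 0.0)
for _ in range(200):
    state = follower_step(state, leader_pos=(5.0, 5.0), desired_bearing=0.0)
```

Holding a nonzero desired bearing instead of zero lets the follower keep a fixed angular offset relative to the leader, which is the geometric basis of the formation.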
Interactive robot control system and method of use
NASA Technical Reports Server (NTRS)
Abdallah, Muhammad E. (Inventor); Sanders, Adam M. (Inventor); Platt, Robert (Inventor); Reiland, Matthew J. (Inventor); Linn, Douglas Martin (Inventor)
2012-01-01
A robotic system includes a robot having joints, actuators, and sensors, and a distributed controller. The controller includes a command-level controller, embedded joint-level controllers each controlling a respective joint, and a joint coordination-level controller coordinating the motion of the joints. A central data library (CDL) centralizes all control and feedback data, and a user interface displays the status of each joint, actuator, and sensor using the CDL. A parameterized action sequence has a hierarchy of linked events and allows the control data to be modified in real time. A method of controlling the robot includes transmitting control data through the various levels of the controller, routing all control and feedback data to the CDL, and displaying the status and operation of the robot using the CDL. Parameterized action sequences are generated for execution by the robot, and a hierarchy of linked events is created within each sequence.
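As a rough illustration of the central-data-library idea, a single store that every controller level publishes to and the user interface reads from might look like this; the class name and data paths are hypothetical, not taken from the patent.

```python
import time
from dataclasses import dataclass, field

@dataclass
class CDL:
    """Minimal sketch of a central data library: all control and
    feedback data are routed through one timestamped store, and a UI
    can query joint/actuator/sensor status from the same place."""
    entries: dict = field(default_factory=dict)

    def publish(self, path, value):
        # every controller level writes through the same interface
        self.entries[path] = (value, time.time())

    def status(self, prefix):
        # a UI reads back everything under a given subsystem path
        return {k: v for k, (v, _) in self.entries.items()
                if k.startswith(prefix)}

cdl = CDL()
cdl.publish("joints/elbow/command", 0.5)    # command-level write
cdl.publish("joints/elbow/feedback", 0.48)  # embedded joint-level write
print(cdl.status("joints/elbow"))
```

The point of the sketch is the single routing surface: no controller talks to the UI directly, so status display and real-time modification both go through one place.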
Digital redesign of the control system for the Robotics Research Corporation model K-1607 robot
NASA Technical Reports Server (NTRS)
Carroll, Robert L.
1989-01-01
The analog control system for positioning each link of the Robotics Research Corporation Model K-1607 robot manipulator was redesigned for computer control. To accomplish the redesign, a linearized model of the dynamic behavior of the robot was developed. The parameters of the model were determined by examining input-output data collected in closed-loop operation of the analog control system. The robot manipulator possesses seven degrees of freedom in its motion. The analog control system installed by the manufacturer of the robot attempts to control the positioning of each link without feedback from the other links. Constraints on the design of a digital control system include the following: the robot cannot be disassembled for measurement of parameters; the digital control system should avoid filtering operations, because of limited computer capability; and criteria for judging control-system performance are lacking. The resulting design employs sampled-data position and velocity feedback. The design criteria permit the control system gain margin and phase margin, measured at the same frequencies, to equal those provided by the analog control system.
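Sampled-data position and velocity feedback of this kind can be sketched on a toy link model; the unit-inertia plant, gains, and sample period below are illustrative stand-ins, not the identified K-1607 model.

```python
def simulate_link(kp=40.0, kv=12.0, dt=0.01, steps=600, target=1.0):
    """Sampled-data position + velocity feedback on a single link,
    modeled here as a unit-inertia double integrator. The control is
    computed at each sample and held over the sample period.
    Gains kp, kv and sample period dt are illustrative."""
    pos, vel = 0.0, 0.0
    for _ in range(steps):
        u = kp * (target - pos) - kv * vel  # position + velocity feedback
        vel += u * dt                        # Euler step of the link dynamics
        pos += vel * dt
    return pos

print(simulate_link())  # settles near the 1.0 rad target
```

With these gains the equivalent continuous loop is well damped, so the sampled-data version converges smoothly; margins for a real redesign would of course be checked against the identified plant, as the abstract describes.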
Adaptation of a haptic robot in a 3T fMRI.
Snider, Joseph; Plank, Markus; May, Larry; Liu, Thomas T; Poizner, Howard
2011-10-04
Functional magnetic resonance imaging (fMRI) provides excellent functional brain imaging via the BOLD signal, with advantages including non-ionizing radiation, millimeter spatial accuracy of anatomical and functional data, and nearly real-time analyses. Haptic robots provide precise measurement and control of the position and force of a cursor in a reasonably confined space. Here we combine these two technologies to allow precision experiments involving motor control with haptic/tactile environment interaction, such as reaching or grasping. The basic idea is to attach an 8-foot end effector, supported in the center, to the robot, allowing the subject to use the robot while shielding it and keeping it out of the most extreme part of the magnetic field of the fMRI machine (Figure 1). The Phantom Premium 3.0, 6DoF, high-force robot (SensAble Technologies, Inc.) is an excellent choice for providing force feedback in virtual reality experiments, but it is inherently non-MR-safe, introduces significant noise to the sensitive fMRI equipment, and its electric motors may be affected by the fMRI's strongly varying magnetic field. We have constructed a table and shielding system that allows the robot to be safely introduced into the fMRI environment and limits both the degradation of the fMRI signal by the electrically noisy motors and the degradation of the electric motor performance by the strongly varying magnetic field of the fMRI. With the shield, the signal to noise ratio (SNR: mean signal/noise standard deviation) of the fMRI goes from a baseline of ~380 to ~330, versus ~250 without the shielding. The remaining noise appears to be uncorrelated and does not add artifacts to the fMRI of a test sphere (Figure 2). The long, stiff handle allows placement of the robot out of range of the most strongly varying parts of the magnetic field, so there is no significant effect of the fMRI on the robot. The effect of the handle on the robot's kinematics is minimal, since it is lightweight (~2.6 lbs) but extremely stiff 3/4" graphite and well balanced on the 3DoF joint in the middle. The end result is an fMRI-compatible haptic system with about 1 cubic foot of working space, and, when combined with virtual reality, it allows a new set of experiments to be performed in the fMRI environment, including naturalistic reaching, passive displacement of the limb and haptic perception, adaptation learning in varying force fields, and texture identification.
NASA Astrophysics Data System (ADS)
Schwartz, Andrew B.
2016-07-01
The target paper by Santello et al. [1] uses the observation that hand shape during grasping can be described by a small set of basic postures, or "synergies," to describe the possible neural basis of motor control during this complex behavior. In the literature, the term "synergy" has been used with a number of different meanings and is still loosely defined, making it difficult to derive concrete analogs of corresponding neural structure. Here, I will define "synergy" broadly, as a set of parameters bound together by a pattern of correlation. With this definition, it can be argued that behavioral synergies are just one facet of the correlational structuring used by the brain to generate behavior. As pointed out in the target article, the structure found in synergies is driven by the physical constraints of our bodies and our surroundings, combined with the behavioral control imparted by our nervous system. This control itself is based on correlational structure, which is likely to be a fundamental property of brain function.
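The "parameters bound together by a pattern of correlation" definition can be illustrated with a small PCA sketch on synthetic postures built from two known synergies; all numbers here are synthetic, chosen only so the underlying structure is known.

```python
import numpy as np

# Synthetic stand-in for recorded hand postures: 200 grasps x 15 joint
# angles, generated from 2 underlying "synergies" plus small noise.
rng = np.random.default_rng(0)
synergies = rng.normal(size=(2, 15))          # the two basic postures
weights = rng.normal(size=(200, 2))           # per-grasp synergy weights
postures = weights @ synergies + 0.05 * rng.normal(size=(200, 15))

# PCA: the correlational structure of the postures reveals the synergies.
centered = postures - postures.mean(axis=0)
_, s, _ = np.linalg.svd(centered, full_matrices=False)
explained = s**2 / np.sum(s**2)
print(explained[:3])  # the first two components carry nearly all the variance
```

This mirrors the empirical finding the target paper starts from: a small number of components (here two, by construction) account for most of the posture variability.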
Open Issues in Evolutionary Robotics.
Silva, Fernando; Duarte, Miguel; Correia, Luís; Oliveira, Sancho Moura; Christensen, Anders Lyhne
2016-01-01
One of the long-term goals in evolutionary robotics is to be able to automatically synthesize controllers for real autonomous robots based only on a task specification. While a number of studies have shown the applicability of evolutionary robotics techniques for the synthesis of behavioral control, researchers have consistently been faced with a number of issues preventing the widespread adoption of evolutionary robotics for engineering purposes. In this article, we review and discuss the open issues in evolutionary robotics. First, we analyze the benefits and challenges of simulation-based evolution and subsequent deployment of controllers versus evolution on real robotic hardware. Second, we discuss specific evolutionary computation issues that have plagued evolutionary robotics: (1) the bootstrap problem, (2) deception, and (3) the role of genomic encoding and genotype-phenotype mapping in the evolution of controllers for complex tasks. Finally, we address the absence of standard research practices in the field. We also discuss promising avenues of research. Our underlying motivation is the reduction of the current gap between evolutionary robotics and mainstream robotics, and the establishment of evolutionary robotics as a canonical approach for the engineering of autonomous robots.
SU-G-JeP3-08: Robotic System for Ultrasound Tracking in Radiation Therapy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kuhlemann, I (Graduate School for Computing in Medicine and Life Sciences, University of Luebeck); Jauer, P
Purpose: For safe and accurate real-time tracking of tumors for IGRT using 4D ultrasound, it is necessary to make use of novel, high-end, force-sensitive lightweight robots designed for human-machine interaction. Such a robot will be integrated into an existing robotized ultrasound system for non-invasive 4D live tracking, using a newly developed real-time control and communication framework. Methods: The new KUKA LBR iiwa robot is used for robotized ultrasound real-time tumor tracking. Besides more precise probe contact-pressure detection, this robot provides an additional seventh link, enhancing the dexterity of the kinematics and the mounted transducer. Several integrated, certified safety features create a safe environment for patients during treatment. However, to remotely control the robot for the ultrasound application, a real-time control and communication framework had to be developed. Based on a client/server concept, client-side control commands are received and processed by a central server unit and are implemented by a client module running directly on the robot's controller. Several special functionalities for robotized ultrasound applications are integrated, and the robot can now be used for real-time control of the image quality by adjusting the transducer position and contact pressure. The framework was evaluated for overall real-time capability in the communication and processing of three different standard commands. Results: Due to inherent, certified safety modules, the new robot ensures a safe environment for patients during tumor tracking. Furthermore, the developed framework shows overall real-time capability, with a maximum average latency of 3.6 ms (minimum 2.5 ms; 5000 trials). Conclusion: The novel KUKA LBR iiwa robot will advance the current robotized ultrasound tracking system with important features. With the developed framework, it is now possible to remotely control this robot and use it for robotized ultrasound tracking applications, including image quality control and target tracking.
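The client/server latency evaluation described here can be mimicked in miniature with an in-process server loop; the command name and the queue-based transport are stand-ins for the real robot-controller link, and the measured values are for illustration only.

```python
import time
import queue
import threading
import statistics

def server(cmd_q, reply_q):
    """Central server unit: receives client commands, 'executes' them,
    and acknowledges (a stand-in for the robot-side client module)."""
    while True:
        cmd = cmd_q.get()
        if cmd is None:       # shutdown sentinel
            break
        reply_q.put(("ack", cmd))

# Measure round-trip latency for a batch of commands, mirroring the
# 5000-trial evaluation style described in the abstract.
cmd_q, reply_q = queue.Queue(), queue.Queue()
threading.Thread(target=server, args=(cmd_q, reply_q), daemon=True).start()

latencies = []
for i in range(5000):
    t0 = time.perf_counter()
    cmd_q.put(("set_probe_pose", i))   # hypothetical command name
    reply_q.get()
    latencies.append((time.perf_counter() - t0) * 1000.0)  # ms
cmd_q.put(None)

print(f"mean latency: {statistics.mean(latencies):.3f} ms")
```

A real deployment would replace the queues with the network transport between the control PC and the robot controller; the measurement loop itself carries over unchanged.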
Drive Control System for Pipeline Crawl Robot Based on CAN Bus
NASA Astrophysics Data System (ADS)
Chen, H. J.; Gao, B. T.; Zhang, X. H.; Deng, Z. Q.
2006-10-01
The drive control system plays an important role in a pipeline robot. In order to inspect flaws and corrosion in seabed crude-oil pipelines, an original mobile pipeline robot was developed, consisting of a crawler drive unit, a power and monitoring unit, a central control unit, and an ultrasonic inspection device. A CAN bus connects these function units and provides a reliable information channel. Considering the limited space, a compact hardware system was designed around an ARM processor with two CAN controllers. With a CAN protocol tailored to the crawl robot, an intelligent drive control system was developed. The implementation of the crawl robot demonstrates that the presented drive control scheme can meet the motion-control requirements of the underwater pipeline crawl robot.
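A drive-command message on such a bus might be packed as follows; this frame layout is hypothetical, since the paper does not publish its tailored CAN protocol.

```python
import struct

def pack_drive_cmd(unit_id, left_mmps, right_mmps):
    """Pack a hypothetical drive command: an 11-bit standard CAN ID
    identifies the function unit, and the payload packs left/right
    crawler speeds as little-endian int16 values in mm/s."""
    payload = struct.pack("<hh", left_mmps, right_mmps)  # 4 of 8 data bytes
    return unit_id & 0x7FF, payload

def unpack_drive_cmd(payload):
    """Inverse of pack_drive_cmd for the receiving drive unit."""
    return struct.unpack("<hh", payload)

# command the crawler: left track forward, right track reverse (turn in place)
can_id, data = pack_drive_cmd(0x120, 250, -250)
assert unpack_drive_cmd(data) == (250, -250)
```

Keeping the payload within the classic 8-byte CAN data field is what makes a compact protocol like this practical on a resource-limited ARM node.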
Coordination of multiple robot arms
NASA Technical Reports Server (NTRS)
Barker, L. K.; Soloway, D.
1987-01-01
Kinematic resolved-rate control for one robot arm is extended to the coordinated control of multiple robot arms in the movement of an object. The structure supports the general movement of one axis system (moving reference frame) with respect to another axis system (control reference frame) by one or more robot arms. The grippers of the robot arms do not have to be parallel or at any predetermined positions on the object. For multiarm control, the operator chooses the same moving and control reference frames for each of the robot arms. Consequently, each arm then moves as though it were carrying out the commanded motions by itself.
Research on robot mobile obstacle avoidance control based on visual information
NASA Astrophysics Data System (ADS)
Jin, Jiang
2018-03-01
Detecting obstacles and controlling robots to avoid them has been a key topic in robot control research. In this paper, a scheme for visual information acquisition is proposed: by interpreting the visual information, it is transformed into an information source for path processing. While following the established route, the algorithm adjusts the trajectory in real time when obstacles are encountered, achieving intelligent control of the mobile robot. Simulation results show that, through the fusion of visual sensing information, obstacle information is fully obtained while the real-time performance and accuracy of the robot's motion control are guaranteed.
Feasibility of Synergy-Based Exoskeleton Robot Control in Hemiplegia.
Hassan, Modar; Kadone, Hideki; Ueno, Tomoyuki; Hada, Yasushi; Sankai, Yoshiyuki; Suzuki, Kenji
2018-06-01
Here, we present a study of exoskeleton robot control based on inter-limb locomotor synergies, using a control method developed to target hemiparesis. The robot control uses inter-limb locomotor synergies and kinesiological information from the non-paretic leg and a walking-aid cane to generate motion patterns for the assisted leg. The developed synergy-based system was tested against an autonomous robot control system in five patients with hemiparesis and varying locomotor abilities. Three of the participants were able to walk using the robot. Results from these participants showed an improved spatial symmetry ratio and more consistent step length with the synergy-based method compared with the autonomous method, while the increase in the range of motion of the assisted joints was larger with the autonomous system. The kinematic synergy distribution of the participants walking without the robot suggests a relationship between each participant's synergy distribution and his or her ability to control the robot: participants with two independent synergies accounting for approximately 80% of the data variability were able to walk with the robot. This observation was not consistently apparent with conventional clinical measures such as the Brunnstrom stages. This paper contributes to the field of robot-assisted locomotion therapy by introducing the concept of inter-limb synergies, demonstrating performance differences between synergy-based and autonomous robot control, and investigating the range of disability in which the system is usable.
Juang, Chia-Feng; Lai, Min-Ge; Zeng, Wan-Ting
2015-09-01
This paper presents a method that allows two wheeled mobile robots to navigate unknown environments while cooperatively carrying an object. In the navigation method, a leader robot and a follower robot cooperatively perform either obstacle boundary following (OBF) or target seeking (TS) to reach a destination. The two robots are controlled by fuzzy controllers (FCs) whose rules are learned through an adaptive fusion of continuous ant colony optimization and particle swarm optimization (AF-CACPSO), which avoids the time-consuming task of manually designing the controllers. The AF-CACPSO-based evolutionary fuzzy control approach is first applied to the control of a single robot performing OBF. The learning approach is then applied to achieve cooperative OBF with two robots, where an auxiliary FC designed with the AF-CACPSO is used to control the follower robot. For cooperative TS, a rule for coordination of the two robots is developed. To navigate cooperatively, a cooperative behavior supervisor is introduced to select between cooperative OBF and cooperative TS. The performance of the AF-CACPSO is verified through comparisons with various population-based optimization algorithms on the OBF learning problem. Simulations and experiments verify the effectiveness of the approach for cooperative navigation of two robots.
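A toy fuzzy controller for obstacle-boundary following shows the ingredients involved (membership functions, rules, defuzzification); the memberships and output levels below are illustrative, not the learned AF-CACPSO rule base.

```python
def fuzzy_obf_turn(d, d_ref=0.5):
    """Toy fuzzy controller for obstacle-boundary following: keep the
    obstacle at distance d_ref (metres, illustrative) by turning.
    Returns a normalized turn command in [-1, 1]."""
    # membership degrees for three fuzzy sets over the wall distance
    near = max(0.0, min(1.0, (d_ref - d) / d_ref))  # too close
    far  = max(0.0, min(1.0, (d - d_ref) / d_ref))  # too far
    ok   = max(0.0, 1.0 - near - far)               # about right
    # rule consequents: turn away (+1), go straight (0), turn toward (-1),
    # combined by weighted-average defuzzification
    return (near * 1.0 + ok * 0.0 + far * -1.0) / (near + ok + far)
```

In the paper, rule bases of roughly this shape (but with richer inputs and tuned memberships) are what the AF-CACPSO search optimizes instead of hand-designing them.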
Research on the inspection robot for cable tunnel
NASA Astrophysics Data System (ADS)
Xin, Shihao
2017-03-01
Robot by mechanical obstacle, double end communication, remote control and monitoring software components. The mechanical obstacle part mainly uses the tracked mobile robot mechanism, in order to facilitate the design and installation of the robot, the other auxiliary swing arm; double side communication part used a combination of communication wire communication with wireless communication, great improve the communication range of the robot. When the robot is controlled by far detection range, using wired communication control, on the other hand, using wireless communication; remote control part mainly completes the inspection robot walking, navigation, positioning and identification of cloud platform control. In order to improve the reliability of its operation, the preliminary selection of IPC as the control core the movable body selection program hierarchical structure as a design basis; monitoring software part is the core part of the robot, which has a definite diagnosis Can be instead of manual simple fault judgment, instead the robot as a remote actuators, staff as long as the remote control can be, do not have to body at the scene. Four parts are independent of each other but are related to each other, the realization of the structure of independence and coherence, easy maintenance and coordination work. Robot with real-time positioning function and remote control function, greatly improves the IT operation. Robot remote monitor, to avoid the direct contact with the staff and line, thereby reducing the accident casualties, for the safety of the inspection work has far-reaching significance.
In vivo robotics: the automation of neuroscience and other intact-system biological fields.
Kodandaramaiah, Suhasa B; Boyden, Edward S; Forest, Craig R
2013-12-01
Robotic and automation technologies have played a huge role in in vitro biological science, having proved critical for scientific endeavors such as genome sequencing and high-throughput screening. Robotic and automation strategies are beginning to play a greater role in in vivo and in situ sciences, especially when it comes to the difficult in vivo experiments required for understanding the neural mechanisms of behavior and disease. In this perspective, we discuss the prospects for robotics and automation to influence neuroscientific and intact-system biology fields. We discuss how robotic innovations might be created to open up new frontiers in basic and applied neuroscience and present a concrete example with our recent automation of in vivo whole-cell patch clamp electrophysiology of neurons in the living mouse brain. © 2013 New York Academy of Sciences.
An Integrated Framework for Human-Robot Collaborative Manipulation.
Sheng, Weihua; Thobbi, Anand; Gu, Ye
2015-10-01
This paper presents an integrated learning framework that enables humanoid robots to perform human-robot collaborative manipulation tasks. Specifically, a table-lifting task performed jointly by a human and a humanoid robot is chosen for validation purpose. The proposed framework is split into two phases: 1) phase I-learning to grasp the table and 2) phase II-learning to perform the manipulation task. An imitation learning approach is proposed for phase I. In phase II, the behavior of the robot is controlled by a combination of two types of controllers: 1) reactive and 2) proactive. The reactive controller lets the robot take a reactive control action to make the table horizontal. The proactive controller lets the robot take proactive actions based on human motion prediction. A measure of confidence of the prediction is also generated by the motion predictor. This confidence measure determines the leader/follower behavior of the robot. Hence, the robot can autonomously switch between the behaviors during the task. Finally, the performance of the human-robot team carrying out the collaborative manipulation task is experimentally evaluated on a platform consisting of a Nao humanoid robot and a Vicon motion capture system. Results show that the proposed framework can enable the robot to carry out the collaborative manipulation task successfully.
Neural architectures for robot intelligence.
Ritter, H; Steil, J J; Nölker, C; Röthling, F; McGuire, P
2003-01-01
We argue that direct experimental approaches to elucidate the architecture of higher brains may benefit from insights gained from exploring the possibilities and limits of artificial control architectures for robot systems. We present some of our recent work that has been motivated by that view and that is centered around the study of various aspects of hand actions since these are intimately linked with many higher cognitive abilities. As examples, we report on the development of a modular system for the recognition of continuous hand postures based on neural nets, the use of vision and tactile sensing for guiding prehensile movements of a multifingered hand, and the recognition and use of hand gestures for robot teaching. Regarding the issue of learning, we propose to view real-world learning from the perspective of data-mining and to focus more strongly on the imitation of observed actions instead of purely reinforcement-based exploration. As a concrete example of such an effort we report on the status of an ongoing project in our laboratory in which a robot equipped with an attention system with a neurally inspired architecture is taught actions by using hand gestures in conjunction with speech commands. We point out some of the lessons learnt from this system, and discuss how systems of this kind can contribute to the study of issues at the junction between natural and artificial cognitive systems.
Robot vibration control using inertial damping forces
NASA Technical Reports Server (NTRS)
Lee, Soo Han; Book, Wayne J.
1991-01-01
This paper concerns the suppression of the vibration of a large flexible robot by inertial forces of a small robot which is located at the tip of the large robot. A controller for generating damping forces to a large robot is designed based on the two time scale model. The controller does not need to calculate the quasi-steady variables and is efficient in computation. Simulation results show the effectiveness of the inertial forces and the controller designed.
Robot vibration control using inertial damping forces
NASA Technical Reports Server (NTRS)
Lee, Soo Han; Book, Wayne J.
1989-01-01
The suppression is examined of the vibration of a large flexible robot by inertial forces of a small robot which is located at the tip of the large robot. A controller for generating damping forces to a large robot is designed based on the two time scale model. The controller does not need to calculate the quasi-steady state variables and is efficient in computation. Simulation results show the effectiveness of the inertial forces and the controller designed.
Detecting the Intention to Move Upper Limbs from Electroencephalographic Brain Signals.
Gudiño-Mendoza, Berenice; Sanchez-Ante, Gildardo; Antelis, Javier M
2016-01-01
Early decoding of motor states directly from the brain activity is essential to develop brain-machine interfaces (BMI) for natural motor control of neuroprosthetic devices. Hence, this study aimed to investigate the detection of movement information before the actual movement occurs. This information piece could be useful to provide early control signals to drive BMI-based rehabilitation and motor assisted devices, thus providing a natural and active rehabilitation therapy. In this work, electroencephalographic (EEG) brain signals from six healthy right-handed participants were recorded during self-initiated reaching movements of the upper limbs. The analysis of these EEG traces showed that significant event-related desynchronization is present before and during the execution of the movements, predominantly in the motor-related α and β frequency bands and in electrodes placed above the motor cortex. This oscillatory brain activity was used to continuously detect the intention to move the limbs, that is, to identify the motor phase prior to the actual execution of the reaching movement. The results showed, first, significant classification between relax and movement intention and, second, significant detection of movement intention prior to the onset of the executed movement. On the basis of these results, detection of movement intention could be used in BMI settings to reduce the gap between mental motor processes and the actual movement performed by an assisted or rehabilitation robotic device.
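The event-related desynchronization (ERD) feature underlying this detection can be illustrated with a minimal sketch: mean power of a band-filtered EEG window is compared against a resting baseline, and a sufficiently large relative drop flags movement intention. The fixed threshold is an assumption for illustration; the study trained classifiers rather than thresholding:

```python
def relative_band_power(window, baseline):
    """ERD index: relative change of mean power in a band-filtered EEG
    window with respect to a resting baseline (negative = desynchronization)."""
    p = sum(s * s for s in window) / len(window)
    b = sum(s * s for s in baseline) / len(baseline)
    return (p - b) / b

def detect_intention(window, baseline, threshold=-0.3):
    """Flag movement intention when band power drops below the baseline
    by more than the (illustrative) threshold."""
    return relative_band_power(window, baseline) < threshold
```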
NASA Astrophysics Data System (ADS)
Haq, R.; Prayitno, H.; Dzulkiflih; Sucahyo, I.; Rahmawati, E.
2018-03-01
In this article, the development of a low-cost mobile robot based on a PID controller and odometry for education is presented. The PID controller and odometry are applied to control the mobile robot's position. Two-dimensional position vectors in a Cartesian coordinate system are supplied to the robot controller as the initial and final positions. The mobile robot is based on a differential drive and magnetic rotary encoder sensors, which measure the robot's position from the number of wheel rotations. The odometry method uses data from the actuator movements to predict the change of position over time. The mobile robot is tested to reach the final position with three different heading angles (30°, 45°, and 60°) by applying various values of the KP, KI, and KD constants.
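The encoder-based odometry update and the PID loop described above can be sketched as follows. The wheel radius, axle track, and encoder resolution are placeholder values, not the robot's actual parameters:

```python
import math

# Hypothetical robot parameters (placeholders, not from the article).
WHEEL_RADIUS = 0.03   # m
AXLE_TRACK = 0.15     # m, distance between the two drive wheels
TICKS_PER_REV = 360   # magnetic rotary encoder counts per wheel revolution

def odometry_step(pose, left_ticks, right_ticks):
    """Update (x, y, theta) from incremental encoder counts of a
    differential-drive robot, using the midpoint-heading approximation."""
    x, y, theta = pose
    dl = 2 * math.pi * WHEEL_RADIUS * left_ticks / TICKS_PER_REV
    dr = 2 * math.pi * WHEEL_RADIUS * right_ticks / TICKS_PER_REV
    d = (dl + dr) / 2.0
    dtheta = (dr - dl) / AXLE_TRACK
    return (x + d * math.cos(theta + dtheta / 2),
            y + d * math.sin(theta + dtheta / 2),
            theta + dtheta)

class PID:
    """Textbook PID controller driving the position error to zero."""
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = None

    def update(self, error, dt):
        self.integral += error * dt
        deriv = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * deriv
```

One full wheel revolution on both sides moves the estimate straight ahead by one wheel circumference with no heading change.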
Brain-controlled muscle stimulation for the restoration of motor function
Ethier, Christian; Miller, Lee E
2014-01-01
Loss of the ability to move, as a consequence of spinal cord injury or neuromuscular disorder, has devastating consequences for the paralyzed individual, and great economic consequences for society. Functional Electrical Stimulation (FES) offers one means to restore some mobility to these individuals, improving not only their autonomy, but potentially their general health and well-being as well. FES uses electrical stimulation to cause the paralyzed muscles to contract. Existing clinical systems require the stimulation to be preprogrammed, with the patient typically using residual voluntary movement of another body part to trigger and control the patterned stimulation. The rapid development of neural interfacing in the past decade offers the promise of dramatically improved control for these patients, potentially allowing continuous control of FES through signals recorded from motor cortex, as the patient attempts to control the paralyzed body part. While application of these ‘Brain Machine Interfaces’ (BMIs) has undergone dramatic development for control of computer cursors and even robotic limbs, their use as an interface for FES has been much more limited. In this review, we consider both FES and BMI technologies and discuss the prospect for combining the two to provide important new options for paralyzed individuals. PMID:25447224
State-of-the-art robotic devices for ankle rehabilitation: Mechanism and control review.
Hussain, Shahid; Jamwal, Prashant K; Ghayesh, Mergen H
2017-12-01
There is an increasing research interest in exploring use of robotic devices for the physical therapy of patients suffering from stroke and spinal cord injuries. Rehabilitation of patients suffering from ankle joint dysfunctions such as drop foot is vital and therefore has called for the development of newer robotic devices. Several robotic orthoses and parallel ankle robots have been developed during the last two decades to augment the conventional ankle physical therapy of patients. A comprehensive review of these robotic ankle rehabilitation devices is presented in this article. Recent developments in the mechanism design, actuation and control are discussed. The study encompasses robotic devices for treadmill and over-ground training as well as platform-based parallel ankle robots. Control strategies for these robotic devices are deliberated in detail with an emphasis on the assist-as-needed training strategies. Experimental evaluations of the mechanism designs and various control strategies of these robotic ankle rehabilitation devices are also presented.
Soft Robotics: New Perspectives for Robot Bodyware and Control
Laschi, Cecilia; Cianchetti, Matteo
2014-01-01
The remarkable advances of robotics in the last 50 years, which represent an incredible wealth of knowledge, are based on the fundamental assumption that robots are chains of rigid links. The use of soft materials in robotics, driven not only by new scientific paradigms (biomimetics, morphological computation, and others), but also by many applications (biomedical, service, rescue robots, and many more), is going to overcome these basic assumptions and makes the well-known theories and techniques poorly applicable, opening new perspectives for robot design and control. The current examples of soft robots represent a variety of solutions for actuation and control. Though these are only first steps, they have the potential for a radical technological change. Soft robotics is not just a new direction of technological development, but a novel approach to robotics, unhinging its fundamentals, with the potential to produce a new generation of robots, in the support of humans in our natural environments. PMID:25022259
Optimized Assistive Human-Robot Interaction Using Reinforcement Learning.
Modares, Hamidreza; Ranatunga, Isura; Lewis, Frank L; Popa, Dan O
2016-03-01
An intelligent human-robot interaction (HRI) system with adjustable robot behavior is presented. The proposed HRI system assists the human operator to perform a given task with minimum workload demands and optimizes the overall human-robot system performance. Motivated by human factor studies, the presented control structure consists of two control loops. First, a robot-specific neuro-adaptive controller is designed in the inner loop to make the unknown nonlinear robot behave like a prescribed robot impedance model as perceived by a human operator. In contrast to existing neural network and adaptive impedance-based control methods, no information of the task performance or the prescribed robot impedance model parameters is required in the inner loop. Then, a task-specific outer-loop controller is designed to find the optimal parameters of the prescribed robot impedance model to adjust the robot's dynamics to the operator skills and minimize the tracking error. The outer loop includes the human operator, the robot, and the task performance details. The problem of finding the optimal parameters of the prescribed robot impedance model is transformed into a linear quadratic regulator (LQR) problem which minimizes the human effort and optimizes the closed-loop behavior of the HRI system for a given task. To obviate the requirement of the knowledge of the human model, integral reinforcement learning is used to solve the given LQR problem. Simulation results on an x - y table and a robot arm, and experimental implementation results on a PR2 robot confirm the suitability of the proposed method.
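For reference, the outer-loop problem above is a standard discrete-time LQR. The sketch below iterates the Riccati equation for a scalar system with known dynamics; note that the paper's contribution is solving this problem model-free via integral reinforcement learning, which this model-based sketch does not attempt:

```python
def dlqr_scalar(a, b, q, r, iters=500):
    """Iterate the discrete-time Riccati equation for x' = a*x + b*u with
    stage cost q*x^2 + r*u^2; returns the optimal gain k for u = -k*x."""
    p = q
    for _ in range(iters):
        k = (b * p * a) / (r + b * p * b)   # gain from current value estimate
        p = q + a * p * (a - b * k)         # Riccati recursion
    return (b * p * a) / (r + b * p * b)
```

For a = b = q = r = 1 the fixed point is p = (1 + sqrt(5))/2, giving k = p/(1 + p) ≈ 0.618 and a stable closed loop a - b*k ≈ 0.382.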
RoMPS concept review automatic control of space robot, volume 2
NASA Technical Reports Server (NTRS)
Dobbs, M. E.
1991-01-01
Topics related to robot operated materials processing in space (RoMPS) are presented in view graph form and include: (1) system concept; (2) Hitchhiker Interface Requirements; (3) robot axis control concepts; (4) Autonomous Experiment Management System; (5) Zymate Robot Controller; (6) Southwest SC-4 Computer; (7) oven control housekeeping data; and (8) power distribution.
Portable control device for networked mobile robots
Feddema, John T.; Byrne, Raymond H.; Bryan, Jon R.; Harrington, John J.; Gladwell, T. Scott
2002-01-01
A handheld control device provides a way for controlling one or multiple mobile robotic vehicles by incorporating a handheld computer with a radio board. The device and software use a personal data organizer as the handheld computer with an additional microprocessor and communication device on a radio board for use in controlling one robot or multiple networked robots.
Research on wheelchair robot control system based on EOG
NASA Astrophysics Data System (ADS)
Xu, Wang; Chen, Naijian; Han, Xiangdong; Sun, Jianbo
2018-04-01
The paper describes an intelligent wheelchair control system based on EOG, which can help disabled people improve their living ability. The system acquires the EOG signal from the user, detects the number of blinks and the direction of gaze, and then sends commands to the wheelchair robot via RS-232. The system combines EOG signal processing with human-computer interaction technology, enabling the user to control the wheelchair robot through conscious eye movements.
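Blink counting from an EOG trace can be sketched as threshold-crossing detection with a refractory window, so one blink is not counted twice. The threshold and refractory length are illustrative assumptions; the paper's actual signal processing is not specified in the abstract:

```python
def count_blinks(eog, threshold, refractory=10):
    """Count upward threshold crossings in a sampled EOG trace, ignoring
    crossings that fall within a refractory window after a detected blink."""
    blinks = 0
    last = -refractory
    for i in range(1, len(eog)):
        if eog[i - 1] < threshold <= eog[i] and i - last >= refractory:
            blinks += 1
            last = i
    return blinks
```

The blink count (e.g. one vs. two blinks) could then be mapped to discrete wheelchair commands sent over the serial link.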
ASTRAKAS, LOUKAS G.; NAQVI, SYED HASSAN ABBAS; KATEB, BABAK; TZIKA, A. ARIA
2012-01-01
The number of individuals suffering from stroke is increasing daily, and its consequences are a major contributor to invalidity in today’s society. Stroke rehabilitation is relatively new, having been hampered from the longstanding view that lost functions were not recoverable. Nowadays, robotic devices, which aid by stimulating brain plasticity, can assist in restoring movement compromised by stroke-induced pathological changes in the brain which can be monitored by MRI. Multiparametric magnetic resonance imaging (MRI) of stroke patients participating in a training program with a novel Magnetic Resonance Compatible Hand-Induced Robotic Device (MR_CHIROD) could yield a promising biomarker that, ultimately, will enhance our ability to advance hand motor recovery following chronic stroke. Using state-of-the art MRI in conjunction with MR_CHIROD-assisted therapy can provide novel biomarkers for stroke patient rehabilitation extracted by a meta-analysis of data. Successful completion of such studies may provide a ground breaking method for the future evaluation of stroke rehabilitation therapies. Their results will attest to the effectiveness of using MR-compatible hand devices with MRI to provide accurate monitoring during rehabilitative therapy. Furthermore, such results may identify biomarkers of brain plasticity that can be monitored during stroke patient rehabilitation. The potential benefit for chronic stroke patients is that rehabilitation may become possible for a longer period of time after stroke than previously thought, unveiling motor skill improvements possible even after six months due to retained brain plasticity. PMID:22426741
Exploring TeleRobotics: A Radio-Controlled Robot
ERIC Educational Resources Information Center
Deal, Walter F., III; Hsiung, Steve C.
2007-01-01
Robotics is a rich and exciting multidisciplinary area to study and learn about electronics and control technology. The interest in robotic devices and systems provides the technology teacher with an excellent opportunity to make many concrete connections between electronics, control technology, and computers and science, engineering, and…
An EMG Interface for the Control of Motion and Compliance of a Supernumerary Robotic Finger
Hussain, Irfan; Spagnoletti, Giovanni; Salvietti, Gionata; Prattichizzo, Domenico
2016-01-01
In this paper, we propose a novel electromyographic (EMG) control interface to control the motion and joint compliance of a supernumerary robotic finger. Supernumerary robotic fingers are a recently introduced class of wearable robotics that provides users with additional robotic limbs in order to compensate for or augment the abilities of the natural limbs without substituting them. Since supernumerary robotic fingers are supposed to closely interact and act in synergy with the human limbs, the control principles of the extra finger should mimic human behavior, including the ability to regulate compliance. It is therefore important to propose a control interface, and to choose actuators and sensing capabilities for the robotic extra finger, that are compatible with stiffness regulation control techniques. We propose an EMG interface and a control approach to regulate the compliance of the device through servo actuators. In particular, we use a commercial EMG armband for gesture recognition, associated with the motion control of the robotic device, and a single-channel surface EMG electrode interface to regulate the compliance of the robotic device. We also present an updated version of the robotic extra finger in which the adduction/abduction motion is realized through a ball bearing and spur gear mechanism. We have validated the proposed interface with two sets of experiments related to compensation and augmentation. In the first set of experiments, different bimanual tasks were performed with the help of the robotic device while simulating a paretic hand, since this novel wearable system can be used to compensate for the missing grasping abilities of chronic stroke patients. In the second set, the robotic extra finger is used to enlarge the workspace and manipulation capability of healthy hands. In both sets, the same EMG control interface was used.
The obtained results demonstrate that the proposed control interface is intuitive and can successfully be used, not only to control the motion of a supernumerary robotic finger, but also to regulate its compliance. The proposed approach can also be exploited for the control of different wearable devices that have to actively cooperate with the human limbs. PMID:27891088
Neuroprosthetic Decoder Training as Imitation Learning.
Merel, Josh; Carlson, David; Paninski, Liam; Cunningham, John P
2016-05-01
Neuroprosthetic brain-computer interfaces function via an algorithm which decodes neural activity of the user into movements of an end effector, such as a cursor or robotic arm. In practice, the decoder is often learned by updating its parameters while the user performs a task. When the user's intention is not directly observable, recent methods have demonstrated value in training the decoder against a surrogate for the user's intended movement. Here we show that training a decoder in this way is a novel variant of an imitation learning problem, where an oracle or expert is employed for supervised training in lieu of direct observations, which are not available. Specifically, we describe how a generic imitation learning meta-algorithm, dataset aggregation (DAgger), can be adapted to train a generic brain-computer interface. By deriving existing learning algorithms for brain-computer interfaces in this framework, we provide a novel analysis of regret (an important metric of learning efficacy) for brain-computer interfaces. This analysis allows us to characterize the space of algorithmic variants and bounds on their regret rates. Existing approaches for decoder learning have been performed in the cursor control setting, but the available design principles for these decoders are such that it has been impossible to scale them to naturalistic settings. Leveraging our findings, we then offer an algorithm that combines imitation learning with optimal control, which should allow for training of arbitrary effectors for which optimal control can generate goal-oriented control. We demonstrate this novel and general BCI algorithm with simulated neuroprosthetic control of a 26 degree-of-freedom model of an arm, a sophisticated and realistic end effector.
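The DAgger meta-algorithm adapted in the paper can be sketched as follows: roll out the current decoder, relabel the visited states with the oracle (the surrogate for the user's intended movement), aggregate the data, and retrain. The `train`, `expert_action`, and `rollout` callables are abstract placeholders for the decoder trainer, intent surrogate, and BCI session:

```python
def dagger(train, expert_action, rollout, rounds=5):
    """Dataset aggregation (DAgger) sketch for decoder training.

    train:         dataset -> policy (retrains the decoder)
    expert_action: state -> action the oracle/surrogate says was intended
    rollout:       policy -> list of states visited under that policy
    """
    dataset = []
    policy = train(dataset)                 # initial decoder (no data yet)
    for _ in range(rounds):
        states = rollout(policy)            # run the current decoder
        dataset += [(s, expert_action(s)) for s in states]  # oracle relabels
        policy = train(dataset)             # retrain on the aggregate
    return policy
```

Because training states are collected under the learner's own policy but labeled by the oracle, the aggregate dataset covers the state distribution the deployed decoder will actually encounter.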
Active tactile exploration using a brain-machine-brain interface.
O'Doherty, Joseph E; Lebedev, Mikhail A; Ifft, Peter J; Zhuang, Katie Z; Shokur, Solaiman; Bleuler, Hannes; Nicolelis, Miguel A L
2011-10-05
Brain-machine interfaces use neuronal activity recorded from the brain to establish direct communication with external actuators, such as prosthetic arms. It is hoped that brain-machine interfaces can be used to restore the normal sensorimotor functions of the limbs, but so far they have lacked tactile sensation. Here we report the operation of a brain-machine-brain interface (BMBI) that both controls the exploratory reaching movements of an actuator and allows signalling of artificial tactile feedback through intracortical microstimulation (ICMS) of the primary somatosensory cortex. Monkeys performed an active exploration task in which an actuator (a computer cursor or a virtual-reality arm) was moved using a BMBI that derived motor commands from neuronal ensemble activity recorded in the primary motor cortex. ICMS feedback occurred whenever the actuator touched virtual objects. Temporal patterns of ICMS encoded the artificial tactile properties of each object. Neuronal recordings and ICMS epochs were temporally multiplexed to avoid interference. Two monkeys operated this BMBI to search for and distinguish one of three visually identical objects, using the virtual-reality arm to identify the unique artificial texture associated with each. These results suggest that clinical motor neuroprostheses might benefit from the addition of ICMS feedback to generate artificial somatic perceptions associated with mechanical, robotic or even virtual prostheses.
Google glass-based remote control of a mobile robot
NASA Astrophysics Data System (ADS)
Yu, Song; Wen, Xi; Li, Wei; Chen, Genshe
2016-05-01
In this paper, we present an approach to the remote control of a mobile robot via Google Glass, a multi-functional, compact wearable device. It provides a new human-machine interface (HMI) for controlling a robot without the need for a regular computer monitor, because the Google Glass micro projector can display live video of the robot's environment. To do so, we first develop a protocol to establish a Wi-Fi connection between Google Glass and the robot, and then implement five types of robot behaviors: Moving Forward, Turning Left, Turning Right, Taking Pause, and Moving Backward, which are controlled by sliding and clicking the touchpad located on the right side of the temple. To demonstrate the effectiveness of the proposed Google Glass-based remote control system, we navigate a virtual Surveyor robot through a maze. Experimental results demonstrate that the proposed control system achieves the desired performance.
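The touchpad-gesture-to-behavior mapping over the Wi-Fi link can be sketched as below. The gesture names, command strings, and wire format are assumptions for illustration, since the abstract does not specify the protocol:

```python
import socket

# Hypothetical mapping from touchpad gestures to the five behaviors.
GESTURES = {
    "TAP": "PAUSE",
    "SWIPE_FORWARD": "FORWARD",
    "SWIPE_BACKWARD": "BACKWARD",
    "SWIPE_LEFT": "TURN_LEFT",
    "SWIPE_RIGHT": "TURN_RIGHT",
}

def gesture_to_command(gesture):
    """Translate a touchpad gesture into a robot command string;
    unrecognized gestures default to pausing the robot."""
    return GESTURES.get(gesture, "PAUSE")

def send_command(host, port, gesture):
    """Send one newline-terminated command to the robot over the Wi-Fi link."""
    with socket.create_connection((host, port), timeout=2.0) as conn:
        conn.sendall((gesture_to_command(gesture) + "\n").encode("ascii"))
```

Defaulting unknown gestures to `PAUSE` is a fail-safe choice: an ambiguous input stops the robot rather than moving it.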
D2 Delta Robot Structural Design and Kinematics Analysis
NASA Astrophysics Data System (ADS)
Yang, Xudong; wang, Song; Dong, Yu; Yang, Hai
2017-12-01
In this paper, a new type of Delta robot with only two degrees of freedom is proposed on the basis of the multi-degree-of-freedom Delta robot. To meet our application requirements, we carried out the structural design and analysis of the robot, using SolidWorks modeling combined with 3D printing technology to determine the final robot structure. To achieve precise control of the robot, a kinematics analysis was carried out. The SimMechanics toolbox of MATLAB is used to establish the mechanism model, and the kinematic mathematical model is used to simulate the robot's motion control in the MATLAB environment. Finally, according to the designed mechanism, the workspace of the robot is drawn by the graphical method, which lays the foundation for the subsequent motion control of the robot.
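The graphical workspace construction mentioned above can be illustrated by sampling forward kinematics over a joint-angle grid and collecting the reachable end points. The two-link serial chain and link lengths used here are a simplified stand-in for illustration, not the paper's parallel Delta mechanism:

```python
import math

L1, L2 = 0.2, 0.3  # hypothetical link lengths (m), not the paper's dimensions

def forward(theta1, theta2):
    """End point of a planar two-link chain (stand-in kinematic model)."""
    x = L1 * math.cos(theta1) + L2 * math.cos(theta1 + theta2)
    y = L1 * math.sin(theta1) + L2 * math.sin(theta1 + theta2)
    return x, y

def workspace(n=50):
    """Sample the joint ranges on an n-by-n grid and collect reachable
    points, mirroring the graphical workspace construction."""
    pts = []
    for i in range(n):
        for j in range(n):
            t1 = -math.pi / 2 + math.pi * i / (n - 1)
            t2 = math.pi * j / (n - 1)
            pts.append(forward(t1, t2))
    return pts
```

Plotting the sampled points (e.g. as a scatter plot) reproduces the workspace boundary; every point lies within the full-extension radius L1 + L2.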
Control of free-flying space robot manipulator systems
NASA Technical Reports Server (NTRS)
Cannon, Robert H., Jr.
1990-01-01
New control techniques for self-contained, autonomous free-flying space robots were developed and tested experimentally. Free-flying robots are envisioned as a key element of any successful long-term presence in space. These robots must be capable of performing the assembly, maintenance, inspection, and repair tasks that currently require human extravehicular activity (EVA). A set of research projects were developed and carried out using lab models of satellite robots and a flexible manipulator. The second-generation space robot models use air cushion vehicle (ACV) technology to simulate in 2-D the drag-free, zero-g conditions of space. The current work is divided into five major projects: Global Navigation and Control of a Free Floating Robot, Cooperative Manipulation from a Free Flying Robot, Multiple Robot Cooperation, Thrusterless Robotic Locomotion, and Dynamic Payload Manipulation. These projects are examined in detail.
Robots, systems, and methods for hazard evaluation and visualization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nielsen, Curtis W.; Bruemmer, David J.; Walton, Miles C.
A robot includes a hazard sensor, a locomotor, and a system controller. The robot senses a hazard intensity at a location of the robot, moves to a new location in response to the hazard intensity, and autonomously repeats the sensing and moving to determine multiple hazard levels at multiple locations. The robot may also include a communicator to communicate the multiple hazard levels to a remote controller. The remote controller includes a communicator for sending user commands to the robot and receiving the hazard levels from the robot. A graphical user interface displays an environment map of the environment proximate the robot and a scale for indicating a hazard intensity. A hazard indicator corresponds to a robot position in the environment map and graphically indicates the hazard intensity at the robot position relative to the scale.
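The autonomous sense-and-move loop described above can be sketched as follows. The movement policy and the dictionary map representation are illustrative assumptions; the patent text does not prescribe either:

```python
def survey_hazard(sense, choose_next, start, steps=10):
    """Autonomously repeat sense-and-move: record the hazard intensity at
    each visited location and return the resulting hazard map, which could
    then be communicated to the remote controller for display."""
    hazard_map = {}
    location = start
    for _ in range(steps):
        level = sense(location)
        hazard_map[location] = level
        location = choose_next(location, level)  # e.g. step along a gradient
    return hazard_map
```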
HERMIES-I: a mobile robot for navigation and manipulation experiments
DOE Office of Scientific and Technical Information (OSTI.GOV)
Weisbin, C.R.; Barhen, J.; de Saussure, G.
1985-01-01
The purpose of this paper is to report the current status of investigations ongoing at the Center for Engineering Systems Advanced Research (CESAR) in the areas of navigation and manipulation in unstructured environments. The HERMIES-I mobile robot, a prototype of a series which contains many of the major features needed for remote work in hazardous environments, is discussed. Initial experimental work at CESAR has begun in the area of navigation. The paper briefly reviews some of the ongoing research in autonomous navigation and describes initial research with HERMIES-I and associated graphic simulation. Since the HERMIES robots will generally be composed of a variety of asynchronously controlled hardware components (such as manipulator arms, digital image sensors, sonars, etc.), it seems appropriate to consider future development of the HERMIES brain as a hypercube ensemble machine with concurrent computation and associated message passing. The basic properties of such a hypercube architecture are presented. Decision-making under uncertainty eventually permeates all of our work. Following a survey of existing analytical approaches, it was decided that a stronger theoretical basis is required. As such, this paper presents the framework for a recently developed hybrid uncertainty theory. 21 refs., 2 figs.
Daunoraviciene, Kristina; Adomaviciene, Ausra; Svirskis, Donatas; Griškevičius, Julius; Juocevicius, Alvydas
2018-05-18
Integration of the verticalization robot, Erigo, with functional electric stimulation and passive leg movements in the postacute rehabilitation of neurological patients could reduce the risk of secondary complications and improve functional outcomes (i.e. orthostatic hypotension, postural control and walking ability). The aim of this study was to estimate and quantify changes in the postacute stage, mainly related to heart rate and blood pressure in functional recovery, postural parameters, walking ability and psychoemotional reactions, during training using the verticalization robot Erigo. Six patients [three suffering from a stroke (ST) and three with spinal cord injuries (SCI)] participated in 10 sessions of physical therapy with the verticalization robot during primary inpatient rehabilitation. Functional state changes were assessed using clinical tests before and after the treatment, and the loading tolerance during Erigo training was noted. In early rehabilitation, Erigo training was safe and effective at improving orthostatic tolerance, posture and positive emotional reactions in both the ST and SCI patients (P< 0.05). In addition, advanced technologies were more effective at boosting the orthostatic tolerance in SCI patients, while they were more effective at increasing the dynamic balance and walking ability in ST patients (P< 0.05).
NASA Technical Reports Server (NTRS)
Stevens, H. D.; Miles, E. S.; Rock, S. J.; Cannon, R. H.
1994-01-01
Expanding man's presence in space requires capable, dexterous robots that can be controlled from the Earth. Traditional 'hand-in-glove' control paradigms require the human operator to directly control virtually every aspect of the robot's operation. While the human provides excellent judgment and perception, human interaction is limited by low-bandwidth, delayed communications, which make 'hand-in-glove' operation from Earth impractical. In order to alleviate many of the problems inherent to remote operation, Stanford University's Aerospace Robotics Laboratory (ARL) has developed the Object-Based Task-Level Control architecture. Object-Based Task-Level Control (OBTLC) removes the burden of teleoperation from the human operator and enables execution of tasks not possible with current techniques. OBTLC is a hierarchical approach to control in which the human operator specifies high-level, object-related tasks through an intuitive graphical user interface. Infrequent task-level commands replace constant joystick operation, eliminating communications bandwidth and time-delay problems. The details of robot control and task execution are handled entirely by the robot and computer control system. The ARL has implemented the OBTLC architecture on a set of Free-Flying Space Robots. The capability of the OBTLC architecture has been demonstrated by controlling the ARL Free-Flying Space Robots from NASA Ames Research Center.
Human-like Compliance for Dexterous Robot Hands
NASA Technical Reports Server (NTRS)
Jau, Bruno M.
1995-01-01
This paper describes the Active Electromechanical Compliance (AEC) system that was developed for the Jau-JPL anthropomorphic robot. The AEC system imitates the functionality of the human muscle's secondary function, which is to control the joint's stiffness: AEC is implemented through servo controlling the joint drive train's stiffness. The control strategy, controlling compliant joints in teleoperation, is described. It enables automatic hybrid position and force control through utilizing sensory feedback from joint and compliance sensors. This compliant control strategy is adaptable for autonomous robot control as well. Active compliance enables dual arm manipulations, human-like soft grasping by the robot hand, and opens the way to many new robotics applications.
Upgrade of a GEP50 robot control system
NASA Astrophysics Data System (ADS)
Alounai, Ali T.; Gharsalli, Imed
2000-03-01
Recently the ASL at Tennessee Technological University was donated a GEP50 welder. The welding is done via off-line point-to-point teaching. A state-of-the-art robot was needed for research, but money was not available to purchase such an expensive item. It was therefore decided to upgrade the GEP50 control system to make the welder a multitasking robot. The robot's five degrees of freedom are sufficient for pursuing research in robotics control. The problem was that the control system of the welder was limited to point-to-point control using off-line teaching. To make the GEP50 a multitasking robot that can be controlled using different control strategies, the existing control system of the welder had to be replaced. The upgrade turned out to be a low-cost operation. This robot is currently in use to test different advanced control strategies in the ASL. This work discusses all the steps and tasks undertaken during the upgrade operation. The hardware and software required for the upgrade are described in this paper. The newly developed control system has been implemented and tested successfully.
Framework and Implications of Virtual Neurorobotics
Goodman, Philip H.; Zou, Quan; Dascalu, Sergiu-Mihai
2008-01-01
Despite decades of societal investment in artificial learning systems, truly “intelligent” systems have yet to be realized. These traditional models are based on input-output pattern optimization and/or cognitive production rule modeling. One response has been social robotics, using the interaction of human and robot to capture important cognitive dynamics such as cooperation and emotion; to date, these systems still incorporate traditional learning algorithms. More recently, investigators are focusing on the core assumptions of the brain “algorithm” itself—trying to replicate uniquely “neuromorphic” dynamics such as action potential spiking and synaptic learning. Only now are large-scale neuromorphic models becoming feasible, due to the availability of powerful supercomputers and an expanding supply of parameters derived from research into the brain's interdependent electrophysiological, metabolomic and genomic networks. Personal computer technology has also led to the acceptance of computer-generated humanoid images, or “avatars”, to represent intelligent actors in virtual realities. In a recent paper, we proposed a method of virtual neurorobotics (VNR) in which the approaches above (social-emotional robotics, neuromorphic brain architectures, and virtual reality projection) are hybridized to rapidly forward-engineer and develop increasingly complex, intrinsically intelligent systems. In this paper, we synthesize our research and related work in the field and provide a framework for VNR, with wider implications for research and practical applications. PMID:18982115
Visual control of navigation in insects and its relevance for robotics.
Srinivasan, Mandyam V
2011-08-01
Flying insects display remarkable agility, despite their diminutive eyes and brains. This review describes our growing understanding of how these creatures use visual information to stabilize flight, avoid collisions with objects, regulate flight speed, detect and intercept other flying insects such as mates or prey, navigate to a distant food source, and orchestrate flawless landings. It also outlines the ways in which these insights are now being used to develop novel, biologically inspired strategies for the guidance of autonomous, airborne vehicles. Copyright © 2011 Elsevier Ltd. All rights reserved.
Optimal control of 2-wheeled mobile robot at energy performance index
NASA Astrophysics Data System (ADS)
Kaliński, Krzysztof J.; Mazur, Michał
2016-03-01
The paper presents the application of an optimal control method with an energy performance index to motion control of a 2-wheeled mobile robot. With the proposed control method, the 2-wheeled mobile robot can effectively realise the desired trajectory. The energy-optimal motion control of mobile robots is often neglected, which limits the performance of high-level control tasks.
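An energy performance index of this kind is commonly handled with a discrete-time LQR whose R matrix penalizes control effort. The abstract gives no implementation details, so the following is a generic sketch on a toy double-integrator model of one drive axis, with invented weights; it is not the authors' formulation:

```python
import numpy as np

dt = 0.1
A = np.array([[1.0, dt], [0.0, 1.0]])   # [position, velocity] of one wheel axis
B = np.array([[0.0], [dt]])
Q = np.diag([1.0, 0.1])                 # state deviation cost
R = np.array([[10.0]])                  # heavy penalty on control energy

# Solve the discrete algebraic Riccati equation by fixed-point iteration.
P = Q.copy()
for _ in range(500):
    K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
    P = Q + A.T @ P @ (A - B @ K)

# Closed-loop run from x0 = [1, 0]: the energy-optimal gains drive the state
# to the origin while accumulating little control effort.
x = np.array([1.0, 0.0])
energy = 0.0
for _ in range(300):
    u = -(K @ x)                        # shape (1,)
    energy += float(u[0] ** 2) * dt
    x = A @ x + B[:, 0] * u[0]
final_pos = float(x[0])
```

Larger R trades slower convergence for lower accumulated control energy, which is exactly the trade-off an energy performance index expresses.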
Chinellato, Eris; Del Pobil, Angel P
2009-06-01
The topic of vision-based grasping is being widely studied in humans and in other primates using various techniques and with different goals. The fundamental related findings are reviewed in this paper, with the aim of providing researchers from different fields, including intelligent robotics and neural computation, a comprehensive but accessible view on the subject. A detailed description of the principal sensorimotor processes and the brain areas involved is provided following a functional perspective, in order to make this survey especially useful for computational modeling and bio-inspired robotic applications.
Systems and Algorithms for Automated Collaborative Observation Using Networked Robotic Cameras
ERIC Educational Resources Information Center
Xu, Yiliang
2011-01-01
The development of telerobotic systems has evolved from Single Operator Single Robot (SOSR) systems to Multiple Operator Multiple Robot (MOMR) systems. The relationship between human operators and robots follows the master-slave control architecture and the requests for controlling robot actuation are completely generated by human operators. …
Autonomous Systems, Robotics, and Computing Systems Capability Roadmap: NRC Dialogue
NASA Technical Reports Server (NTRS)
Zornetzer, Steve; Gage, Douglas
2005-01-01
Contents include the following: Introduction. Process, Mission Drivers, Deliverables, and Interfaces. Autonomy. Crew-Centered and Remote Operations. Integrated Systems Health Management. Autonomous Vehicle Control. Autonomous Process Control. Robotics. Robotics for Solar System Exploration. Robotics for Lunar and Planetary Habitation. Robotics for In-Space Operations. Computing Systems. Conclusion.
A Unified Approach to Motion Control of Mobile Robots
NASA Technical Reports Server (NTRS)
Seraji, H.
1994-01-01
This paper presents a simple on-line approach for motion control of mobile robots made up of a manipulator arm mounted on a mobile base. The proposed approach is equally applicable to nonholonomic mobile robots, such as rover-mounted manipulators and to holonomic mobile robots such as tracked robots or compound manipulators. The computational efficiency of the proposed control scheme makes it particularly suitable for real-time implementation.
NASA Astrophysics Data System (ADS)
Lee, Sam; Lucas, Nathan P.; Ellis, R. Darin; Pandya, Abhilash
2012-06-01
This paper presents a seamlessly controlled human multi-robot system comprised of ground and aerial robots of semiautonomous nature for source localization tasks. The system combines augmented reality interface capabilities with the human supervisor's ability to control multiple robots. The role of this human multi-robot interface is to allow an operator to control groups of heterogeneous robots in real time in a collaborative manner. It uses advanced path planning algorithms to ensure that obstacles are avoided and that the operators are free for higher-level tasks. Each robot knows the environment and obstacles and can automatically generate a collision-free path to any user-selected target. Sensor information from each individual robot is displayed directly on the robot in the video view. In addition, a sensor-data-fused AR view is displayed, which helps the users pinpoint source information or helps the operator with the goals of the mission. The paper presents a preliminary human factors evaluation of this system in which several interface conditions are tested for source detection tasks. Results show that the novel augmented reality multi-robot control (Point-and-Go and Path Planning) reduced mission completion times compared to traditional joystick control for target detection missions. Usability tests and operator workload analysis are also investigated.
The research on visual industrial robot which adopts fuzzy PID control algorithm
NASA Astrophysics Data System (ADS)
Feng, Yifei; Lu, Guoping; Yue, Lulin; Jiang, Weifeng; Zhang, Ye
2017-03-01
The control system of a six-degrees-of-freedom visual industrial robot based on the control mode of multi-axis motion control cards and a PC was researched. For the variable, non-linear characteristics of the industrial robot's servo system, an adaptive fuzzy PID controller was adopted, which achieved a better control effect. In the vision system, a CCD camera was used to acquire signals and send them to the video processing card. After processing, the PC controls the motion of the six joints through the motion control cards. In experiments, the manipulator can operate with the machine tool and vision system to realize grasping, processing and verification functions. This has implications for the manufacturing of industrial robots.
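Fuzzy PID controllers of the kind described adjust the PID gains online from a fuzzified error signal. The paper's rule base is not given, so this is a minimal illustrative sketch with an invented two-rule base on a toy double-integrator stand-in for one servo axis:

```python
import numpy as np

def fuzzy_gains(e, kp0=4.0, ki0=0.2, kd0=3.0):
    """Two-rule fuzzy gain schedule: 'large error' boosts Kp and suppresses Ki;
    'small error' restores Ki and adds derivative damping."""
    m_large = min(abs(e), 1.0)     # membership of "error is large" (saturates at |e| = 1)
    m_small = 1.0 - m_large
    kp = kp0 + 1.0 * m_large
    ki = ki0 * m_small             # integral acts only once the error is small
    kd = kd0 + 0.5 * m_small
    return kp, ki, kd

def run(setpoint=1.0, steps=1000, dt=0.01):
    """Track a step setpoint on a double-integrator plant (stand-in for one joint axis)."""
    x, v, integ, e_prev = 0.0, 0.0, 0.0, setpoint
    for _ in range(steps):
        e = setpoint - x
        kp, ki, kd = fuzzy_gains(e)
        integ += e * dt
        u = kp * e + ki * integ + kd * (e - e_prev) / dt
        v += u * dt                # semi-implicit Euler integration of the plant
        x += v * dt
        e_prev = e
    return x

final = run()
```

The gain blend is the essential fuzzy-PID idea: rule memberships weight gain offsets continuously instead of switching between fixed tunings.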
Cartesian control of redundant robots
NASA Technical Reports Server (NTRS)
Colbaugh, R.; Glass, K.
1989-01-01
A Cartesian-space position/force controller is presented for redundant robots. The proposed control structure partitions the control problem into a nonredundant position/force trajectory tracking problem and a redundant mapping problem between the Cartesian control input F ∈ R^m and the robot actuator torque T ∈ R^n (for redundant robots, m < n). The underdetermined nature of the F → T map is exploited so that the robot redundancy is utilized to improve the dynamic response of the robot. This dynamically optimal F → T map is implemented locally (in time) so that it is computationally efficient for on-line control; however, it is shown that the map possesses globally optimal characteristics. Additionally, it is demonstrated that the dynamically optimal F → T map can be modified so that the robot redundancy is used to simultaneously improve the dynamic response and realize any specified kinematic performance objective (e.g., manipulability maximization or obstacle avoidance). Computer simulation results are given for a four-degree-of-freedom planar redundant robot under Cartesian control, and demonstrate that position/force trajectory tracking and effective redundancy utilization can be achieved simultaneously with the proposed controller.
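The underdetermined F → T map can be sketched as a task-consistent torque plus a null-space component. This is the generic minimum-norm/null-space construction assuming unit joint-space inertia and an invented Jacobian, not the paper's dynamically optimal map:

```python
import numpy as np

np.random.seed(0)
n, m = 4, 2                        # 4-joint planar arm, 2-D task space (redundant: m < n)
J = np.random.randn(m, n)          # stand-in task Jacobian (full row rank a.s.)

F = np.array([1.0, -0.5])          # commanded Cartesian force
T_task = J.T @ F                   # statics: task-consistent joint torque

# Torques in the null space of J produce no task-space effect (for unit inertia),
# so a secondary objective torque T0 can be projected through N and added freely.
Jpinv = np.linalg.pinv(J)
N = np.eye(n) - Jpinv @ J          # null-space projector (symmetric for Moore-Penrose)
T0 = np.array([0.3, 0.0, -0.3, 0.1])
T = T_task + N @ T0                # one of infinitely many torques realizing F

# Recover the realized Cartesian force to confirm the null-space term is invisible.
F_check = np.linalg.pinv(J.T) @ T
```

The freedom captured by `T0` is what the paper optimizes dynamically (and reuses for manipulability maximization or obstacle avoidance).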
Human-Robot Interaction: Status and Challenges.
Sheridan, Thomas B
2016-06-01
The current status of human-robot interaction (HRI) is reviewed, and key current research challenges for the human factors community are described. Robots have evolved from continuous human-controlled master-slave servomechanisms for handling nuclear waste to a broad range of robots incorporating artificial intelligence for many applications and under human supervisory control. This mini-review describes HRI developments in four application areas and what are the challenges for human factors research. In addition to a plethora of research papers, evidence of success is manifest in live demonstrations of robot capability under various forms of human control. HRI is a rapidly evolving field. Specialized robots under human teleoperation have proven successful in hazardous environments and medical application, as have specialized telerobots under human supervisory control for space and repetitive industrial tasks. Research in areas of self-driving cars, intimate collaboration with humans in manipulation tasks, human control of humanoid robots for hazardous environments, and social interaction with robots is at initial stages. The efficacy of humanoid general-purpose robots has yet to be proven. HRI is now applied in almost all robot tasks, including manufacturing, space, aviation, undersea, surgery, rehabilitation, agriculture, education, package fetch and delivery, policing, and military operations. © 2016, Human Factors and Ergonomics Society.
Chen, Kai; Xiong, Bo; Ren, Yupeng; Dvorkin, Assaf Y; Gaebler-Spira, Deborah; Sisung, Charles E; Zhang, Li-Qun
2018-01-10
To evaluate the feasibility and effectiveness of a wearable robotic device in guiding isometric torque generation and passive-active movement training for ankle motor recovery in children with acute brain injury. Ten inpatient children with acute brain injury being treated in a rehabilitation hospital. Daily robot-guided ankle passive-active movement therapy for 15 sessions, including isometric torque generation under real-time feedback, stretching, and active movement training with motivating games using a wearable ankle rehabilitation robot. Outcome measures were ankle biomechanical improvements induced by each training session, including ankle range of motion (ROM) and muscle strength, and clinical (Fugl-Meyer Lower-Extremity (FMLE), Pediatric Balance Scale (PBS)) and biomechanical (ankle ROM and muscle strength) outcomes over the 15 training sessions. As training progressed, improvements in biomechanical performance measures followed logarithmic curves. Each training session increased median dorsiflexion active range of motion (AROM) 2.73° (standard deviation (SD) 1.14), dorsiflexion strength 0.87 Nm (SD 0.90), and plantarflexion strength 0.60 Nm (SD 1.19). After 15 training sessions the median FMLE score had increased from 14.0 (SD 10.11) to 23.0 (SD 11.4), PBS had increased from 33.0 (SD 19.99) to 50.0 (SD 23.13) (p < 0.05), median dorsiflexion and plantarflexion strength had improved from 0.21 Nm (SD 4.45) to 4.0 Nm (SD 7.63) and 8.33 Nm (SD 10.18) to 18.45 Nm (SD 14.41), respectively, median dorsiflexion AROM had improved from -10.45° (SD 12.01) to 11.87° (SD 20.69), and median dorsiflexion PROM increased from 20.0° (SD 9.04) to 25.0° (SD 8.03). Isometric torque generation with real-time feedback, stretching and active movement training helped promote neuroplasticity and improve motor performance in children with acute brain injury.
New Paradigms for Human-Robotic Collaboration During Human Planetary Exploration
NASA Astrophysics Data System (ADS)
Parrish, J. C.; Beaty, D. W.; Bleacher, J. E.
2017-02-01
Human exploration missions to other planetary bodies offer new paradigms for collaboration (control, interaction) between humans and robots beyond the methods currently used to control robots from Earth and robots in Earth orbit.
Dealing with the time-varying parameter problem of robot manipulators performing path tracking tasks
NASA Technical Reports Server (NTRS)
Song, Y. D.; Middleton, R. H.
1992-01-01
Many robotic applications involve time-varying payloads during the operation of the robot. It is therefore of interest to consider control schemes that deal with time-varying parameters. Using the properties of the element-by-element (or Hadamard) product of matrices, we obtain the robot dynamics in parameter-isolated form, from which a new control scheme is developed. The controller proposed yields zero asymptotic tracking errors when applied to robotic systems with time-varying parameters by using a switching-type control law. The results obtained are global in the initial state of the robot, and can be applied to rapidly varying systems.
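The parameter-isolated form rests on the element-by-element (Hadamard) product, under which each output entry depends on exactly one entry of each factor. A minimal numpy illustration of one standard Hadamard identity (an illustrative aside, not the paper's derivation):

```python
import numpy as np

a = np.array([2.0, 0.5, 1.0])        # stand-in time-varying parameters
c = np.array([1.0, 3.0, 0.2])
B = np.arange(9.0).reshape(3, 3)     # known regressor-like matrix

# Identity: diag(a) @ B @ diag(c) == (a c^T) ∘ B.
# The parameters factor out entry-by-entry instead of being mixed together
# as they would be under an ordinary matrix product.
lhs = np.diag(a) @ B @ np.diag(c)
rhs = np.outer(a, c) * B             # '*' on numpy arrays is the Hadamard product
```

This entry-wise isolation is what lets a controller bound each uncertain parameter's contribution separately.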
NASA Astrophysics Data System (ADS)
Yang, Juqing; Wang, Dayong; Fan, Baixing; Dong, Dengfeng; Zhou, Weihu
2017-03-01
In-situ intelligent manufacturing of large-volume equipment requires industrial robots with high-accuracy absolute positioning and orientation steering control. Conventional robots mainly employ offline calibration technology to identify and compensate key robotic parameters. However, the dynamic and static parameters of a robot change nonlinearly, so it is not possible to acquire a robot's actual parameters and control its absolute pose with high accuracy within a large workspace by offline calibration in real time. This study proposes a real-time online absolute pose steering control method for an industrial robot based on six-degrees-of-freedom laser tracking measurement, which adopts comprehensive compensation and correction of differential movement variables. First, the pose steering control system and robot kinematics error model are constructed, and then the pose error compensation mechanism and algorithm are introduced in detail. By accurately acquiring the position and orientation of the robot end-tool, mapping the error through the computed Jacobian matrix of the joint variables and correcting the joint variables, real-time online absolute pose compensation for an industrial robot is accurately implemented in simulations and experimental tests. The average positioning error is 0.048 mm and the orientation accuracy is better than 0.01 deg. The results demonstrate that the proposed method is feasible and that the online absolute accuracy of a robot is sufficiently enhanced.
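Online pose correction of this general kind iterates: measure the end-tool pose, form the error, and map it through a (damped) Jacobian pseudoinverse to joint corrections. A toy 2-link planar sketch of that loop, with invented link lengths and target; exact forward kinematics stands in for the paper's 6-DOF model and laser-tracker feedback:

```python
import numpy as np

L1, L2 = 0.5, 0.4   # link lengths of a toy 2-link planar arm (assumed values)

def fk(q):
    """Forward kinematics: end-tool position, playing the role of the tracker measurement."""
    return np.array([L1 * np.cos(q[0]) + L2 * np.cos(q[0] + q[1]),
                     L1 * np.sin(q[0]) + L2 * np.sin(q[0] + q[1])])

def jacobian(q):
    s1, c1 = np.sin(q[0]), np.cos(q[0])
    s12, c12 = np.sin(q[0] + q[1]), np.cos(q[0] + q[1])
    return np.array([[-L1 * s1 - L2 * s12, -L2 * s12],
                     [ L1 * c1 + L2 * c12,  L2 * c12]])

def servo_to(target, q0, iters=50, lam=1e-3):
    """Online correction loop: measure pose error, map it through a damped
    least-squares Jacobian inverse, update the joint variables."""
    q = q0.copy()
    for _ in range(iters):
        err = target - fk(q)
        J = jacobian(q)
        dq = J.T @ np.linalg.solve(J @ J.T + lam * np.eye(2), err)
        q += dq
    return q

target = np.array([0.6, 0.3])
q_final = servo_to(target, np.array([0.2, 0.5]))
residual = float(np.linalg.norm(fk(q_final) - target))
```

Because the correction is recomputed from measured pose each cycle, model errors that offline calibration cannot capture are continuously absorbed.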
Cao, Jinghui; Xie, Sheng Quan; Das, Raj; Zhu, Guo L
2014-12-01
A large number of gait rehabilitation robots, together with a variety of control strategies, have been developed and evaluated during the last decade. Initially, control strategies applied to rehabilitation robots were adapted from those applied to traditional industrial robots. However, these strategies cannot optimise the effectiveness of gait rehabilitation. As a result, researchers have been investigating control strategies tailored to the needs of rehabilitation. Among these, assist-as-needed (AAN) control is one of the most popular research topics in this field. AAN training strategies have gained theoretical and practical evidence-based backing from motor learning principles and clinical studies. Various approaches to AAN training have been proposed and investigated by research groups around the world. This article presents a review of control algorithms of gait rehabilitation robots to summarise related knowledge and investigate potential trends of development. There are existing review papers on control strategies of rehabilitation robots: the review by Marchal-Crespo and Reinkensmeyer (2009) broadly covered control strategies for all kinds of rehabilitation robots, while Hussain et al. (2011) focused specifically on treadmill gait training robots and covered a limited number of control implementations. This review article encompasses more detailed information on control strategies for robot-assisted gait rehabilitation, but is not limited to treadmill-based training. It also investigates the potential to further develop assist-as-needed gait training based on assessments of patients' ability. In this paper, control strategies are generally divided into trajectory tracking control and AAN control. The review covers these two basic categories, as well as other control algorithms and technologies derived from them, such as biofeedback control.
Assessments of human gait ability are also included to investigate how to further develop implementations based on the assist-as-needed concept. For the consideration of effectiveness, clinical studies on robotic gait rehabilitation are reviewed and analysed from the viewpoint of control algorithms. Copyright © 2014 IPEM. Published by Elsevier Ltd. All rights reserved.
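The assist-as-needed idea reviewed above can be illustrated by the simplest such force law: no assistance while the tracking error stays inside a deadband, and a saturated spring force outside it. The parameter values below are invented for illustration; published AAN controllers are considerably more elaborate:

```python
def aan_assist(error, deadband=0.02, k=50.0, f_max=30.0):
    """Assist-as-needed force law: zero inside the deadband (patient works alone),
    a stiffness-k spring outside it, saturated at f_max for safety."""
    e = abs(error) - deadband
    if e <= 0:
        return 0.0                    # error is tolerable: withhold assistance
    f = min(k * e, f_max)             # assist proportionally, capped
    return f if error > 0 else -f     # push back toward the reference trajectory

# Force response across a range of tracking errors (metres):
forces = [aan_assist(e) for e in (-0.5, -0.01, 0.0, 0.03, 0.5)]
```

The deadband is the clinically important piece: it leaves room for patient effort and error-driven motor learning instead of rigidly enforcing the trajectory.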
Method for neural network control of motion using real-time environmental feedback
NASA Technical Reports Server (NTRS)
Buckley, Theresa M. (Inventor)
1997-01-01
A method of motion control for robotics and other automatically controlled machinery using a neural network controller with real-time environmental feedback. The method is illustrated with a two-finger robotic hand having proximity sensors and force sensors that provide environmental feedback signals. The neural network controller is taught to control the robotic hand through training sets using backpropagation methods. The training sets are created by recording the control signals and the feedback signal as the robotic hand or a simulation of the robotic hand is moved through a representative grasping motion. The data recorded is divided into discrete increments of time and the feedback data is shifted out of phase with the control signal data so that the feedback signal data lag one time increment behind the control signal data. The modified data is presented to the neural network controller as a training set. The time lag introduced into the data allows the neural network controller to account for the temporal component of the robotic motion. Thus trained, the neural network controlled robotic hand is able to grasp a wide variety of different objects by generalizing from the training sets.
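The core data-preparation step described, shifting the feedback one time increment behind the control signals before forming training pairs, can be sketched directly. The function name and toy data below are illustrative, not from the patent:

```python
def make_training_set(controls, feedback, lag=1):
    """Pair each control frame with feedback shifted 'lag' increments into the
    past, so the network learns the temporal sensor-to-action relationship."""
    pairs = []
    for t in range(lag, len(controls)):
        pairs.append((feedback[t - lag], controls[t]))   # (input, target) pair
    return pairs

# Toy recording of a grasp: control command and one feedback channel per time step.
controls = [0.0, 0.2, 0.4, 0.6]
feedback = [1.0, 0.9, 0.7, 0.4]
training = make_training_set(controls, feedback)
```

Each pair asks the network to predict the next control action from what the sensors reported one increment earlier, which is how the temporal component enters an otherwise static backpropagation setup.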
Software development to support sensor control of robot arc welding
NASA Technical Reports Server (NTRS)
Silas, F. R., Jr.
1986-01-01
The development of software for a Digital Equipment Corporation MINC-23 Laboratory Computer to provide the functions of a workcell host computer for Space Shuttle Main Engine (SSME) robotic welding is documented. Routines were written to transfer robot programs between the MINC and an Advanced Robotic Cyro 750 welding robot. Other routines provide advanced program editing features, while additional software allows communication with a remote computer-aided design system. Access to special robot functions was provided to allow advanced control of weld seam tracking and process control for future development programs.
Grosmaire, Anne-Gaëlle; Duret, Christophe
2017-01-01
Repetitive, active movement-based training promotes brain plasticity and motor recovery after stroke. Robotic therapy provides highly repetitive therapy that reduces motor impairment. However, the effect of assist-as-needed algorithms on patient participation and movement quality is not known. The aim was to analyze patient participation and motor performance during highly repetitive assist-as-needed upper limb robotic therapy in a retrospective study. Sixteen patients with sub-acute stroke carried out a 16-session upper limb robotic training program combined with usual care. The Fugl-Meyer Assessment (FMA) score was evaluated pre- and post-training. Robotic assistance parameters and performance measures were compared within and across sessions. Robotic assistance did not change within sessions and decreased between sessions during the training program. Motor performance did not decrease within sessions and improved between sessions. Velocity-related assistance parameters improved more quickly than accuracy-related parameters. Assist-as-needed-based upper limb robotic training provided intense and repetitive rehabilitation and promoted patient participation and motor performance, facilitating motor recovery.
Kinematic control of robot with degenerate wrist
NASA Technical Reports Server (NTRS)
Barker, L. K.; Moore, M. C.
1984-01-01
Kinematic resolved rate equations allow an operator with visual feedback to dynamically control a robot hand. When the robot wrist is degenerate, the computed joint angle rates exceed operational limits, and unwanted hand movements can result. The generalized matrix inverse solution can also produce unwanted responses. A method is introduced to control the robot hand in the region of the degenerate robot wrist. The method uses a coordinated movement of the first and third joints of the robot wrist to locate the second wrist joint axis for movement of the robot hand in the commanded direction. The method does not entail infinite joint angle rates.
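For comparison, a common general-purpose remedy for rate blow-up near a degenerate wrist (distinct from the paper's coordinated first/third-joint method) is damped least-squares resolved-rate control, which bounds joint rates at the cost of a small task-space error. A sketch on an invented near-singular Jacobian:

```python
import numpy as np

def resolved_rate(J, xdot, lam=0.05):
    """Damped least-squares resolved-rate solution: qdot = J^T (J J^T + lam^2 I)^-1 xdot.
    The damping term keeps joint rates finite near singularities."""
    m = J.shape[0]
    return J.T @ np.linalg.solve(J @ J.T + lam**2 * np.eye(m), xdot)

# Near-degenerate 'wrist': two task rows are almost parallel, so the plain
# pseudoinverse demands enormous joint rates for some commanded hand motions.
J = np.array([[1.0, 0.0, 0.0],
              [1.0, 1e-6, 0.0]])
xdot = np.array([0.1, 0.2])          # commanded hand rates

qdot_plain = np.linalg.pinv(J) @ xdot
qdot_damped = resolved_rate(J, xdot)
```

The damped solution trades exact tracking of the commanded direction for joint rates that stay within operational limits, which is the same practical problem the paper's coordinated-wrist method addresses without that tracking error.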
Research on Snake-Like Robot with Controllable Scales
NASA Astrophysics Data System (ADS)
Chen, Kailin; Zhao, Yuting; Chen, Shuping
The purpose of this paper is to propose a new structure for a snake-like robot. This type of snake-like robot differs from conventional snake-like robots in that it has many controllable scales, which play a large role in assisting locomotion. In addition, a new form of robot gait, named the linear motion mode, is developed for the new mechanical structure based on theoretical analysis. Through simulation and analysis in the SimMechanics toolbox of MATLAB, we verified the validity of the theory behind this motion mode of the snake-like robot. The proposed machine construction and control method for the designed motion are verified experimentally on the independently developed snake robot.
TROTER's (Tiny Robotic Operation Team Experiment): A new concept of space robots
NASA Technical Reports Server (NTRS)
Su, Renjeng
1990-01-01
In view of the future need for automation and robotics in space and the existing approaches to the problem, we proposed a new concept of robots for space construction. The new concept is based on the basic idea of decentralization. Decentralization occurs, on the one hand, in using teams of many cooperating robots for construction tasks. Redundancy and modular design are explored to achieve high reliability for team robotic operations, greatly reducing the reliability requirements on individual robots. Another area of decentralization is manifested by the proposed control hierarchy, which eventually includes humans in the loop. The control strategy is constrained by various time delays and calls for different levels of abstraction of the task dynamics. Such technology is needed for remote control of robots in an uncertain environment, and it relaxes concerns about human safety around robots. This presentation also introduces the required technologies behind the new robotic concept.
2006-06-01
...maneuvers...Laboratory (ARL) to develop methodologies to evaluate robotic behavior algorithms that control the actions of individual robots or groups of robots acting as a team to perform a
A Simple Interface for 3D Position Estimation of a Mobile Robot with Single Camera
Chao, Chun-Tang; Chung, Ming-Hsuan; Chiou, Juing-Shian; Wang, Chi-Jo
2016-01-01
In recent years, there has been an increase in the number of mobile robots controlled by a smart phone or tablet. This paper proposes a visual control interface for a mobile robot with a single camera to easily control the robot actions and estimate the 3D position of a target. In this proposal, the mobile robot employed an Arduino Yun as the core processor and was remote-controlled by a tablet with an Android operating system. In addition, the robot was fitted with a three-axis robotic arm for grasping. Both the real-time control signal and video transmission are transmitted via Wi-Fi. We show that with a properly calibrated camera and the proposed prototype procedures, the users can click on a desired position or object on the touchscreen and estimate its 3D coordinates in the real world by simple analytic geometry instead of a complicated algorithm. The results of the measurement verification demonstrate that this approach has great potential for mobile robots. PMID:27023556
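For a target lying on the floor, the "simple analytic geometry" estimate described amounts to back-projecting the touched pixel through the pinhole model and intersecting the resulting ray with the ground plane. A sketch with assumed calibration values (not the paper's):

```python
import numpy as np

# Assumed pinhole intrinsics and camera mounting height (illustrative values).
fx, fy, cx, cy = 600.0, 600.0, 320.0, 240.0
cam_height = 0.25   # metres above the floor; camera axes: x right, y down, z forward

def pixel_to_ground(u, v):
    """Back-project pixel (u, v) to a viewing ray and intersect it with the
    floor plane y = cam_height (measured downward from the camera centre)."""
    ray = np.array([(u - cx) / fx, (v - cy) / fy, 1.0])
    if ray[1] <= 0:
        raise ValueError("pixel at or above the horizon; ray never meets the floor")
    t = cam_height / ray[1]        # scale so the ray descends exactly cam_height
    return t * ray                 # (x, y, z) in camera coordinates

# A pixel on the image centreline, below the principal point:
p = pixel_to_ground(320.0, 360.0)  # -> point on the floor straight ahead
```

A single intersection replaces stereo triangulation because the ground-plane constraint supplies the missing depth, which is why a single calibrated camera suffices here.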
NASA Astrophysics Data System (ADS)
Tamura, Sho; Maeyama, Shoichi
Rescue robots have been actively developed since the Hanshin-Awaji (Kobe) Earthquake. Recently, rescue robots to reduce the risk of secondary disasters in NBC terrorism and critical accidents have also been developed. Against this background, a development project for a mobile RT system for collapsed buildings was started, in which this research participates. Image pointing is useful as a control interface for a rescue robot because it allows the robot to be controlled with simple operations. However, the conventional method cannot work on rough terrain. In this research, we propose a system that controls the robot so that it arrives at the target position on rough terrain. It is constructed from methods that convert the destination into a vector and control the 3D-localized robot to follow that vector. Finally, the proposed system is evaluated through remote-control experiments with a mobile robot on a slope, and its feasibility is confirmed.
Statistical Signal Processing and the Motor Cortex
Brockwell, A.E.; Kass, R.E.; Schwartz, A.B.
2011-01-01
Over the past few decades, developments in technology have significantly improved the ability to measure activity in the brain. This has spurred a great deal of research into brain function and its relation to external stimuli, and has important implications in medicine and other fields. As a result of improved understanding of brain function, it is now possible to build devices that provide direct interfaces between the brain and the external world. We describe some of the current understanding of function of the motor cortex region. We then discuss a typical likelihood-based state-space model and filtering based approach to address the problems associated with building a motor cortical-controlled cursor or robotic prosthetic device. As a variation on previous work using this approach, we introduce the idea of using Markov chain Monte Carlo methods for parameter estimation in this context. By doing this instead of performing maximum likelihood estimation, it is possible to expand the range of possible models that can be explored, at a cost in terms of computational load. We demonstrate results obtained applying this methodology to experimental data gathered from a monkey. PMID:21765538
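In its simplest linear-Gaussian form, the likelihood-based state-space filtering approach mentioned above is a Kalman filter over the cursor state given neural features. A toy sketch with invented dynamics and tuning matrices; the paper's actual models and MCMC parameter estimation are beyond this snippet:

```python
import numpy as np

np.random.seed(1)
A = np.array([[1.0, 0.1], [0.0, 0.95]])   # cursor [position, velocity] dynamics (assumed)
H = np.array([[1.0, 0.2], [0.0, 0.8]])    # linear map from state to two 'neural' features
Q = 0.01 * np.eye(2)                       # process noise covariance
R = 0.05 * np.eye(2)                       # observation noise covariance

def kalman_step(x, P, y):
    """One predict/update cycle of the linear-Gaussian state-space decoder."""
    x_pred = A @ x
    P_pred = A @ P @ A.T + Q
    S = H @ P_pred @ H.T + R               # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)    # Kalman gain
    x_new = x_pred + K @ (y - H @ x_pred)
    P_new = (np.eye(2) - K @ H) @ P_pred
    return x_new, P_new

# Simulate a cursor trajectory and decode it from the noisy observations.
x_true = np.array([0.0, 1.0])
x_est, P = np.zeros(2), np.eye(2)
errs = []
for _ in range(100):
    x_true = A @ x_true + np.random.multivariate_normal([0, 0], Q)
    y = H @ x_true + np.random.multivariate_normal([0, 0], R)
    x_est, P = kalman_step(x_est, P, y)
    errs.append(float(np.linalg.norm(x_est - x_true)))
final_err = float(np.mean(errs[-20:]))
```

Replacing the fixed A, H, Q, R above with parameters sampled by MCMC is the generalization the authors describe: it widens the class of usable models at the price of computational load.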
Biologically-inspired hexapod robot design and simulation
NASA Technical Reports Server (NTRS)
Espenschied, Kenneth S.; Quinn, Roger D.
1994-01-01
The design and construction of a biologically-inspired hexapod robot is presented. A previously developed simulation is modified to include models of the DC drive motors, the motor driver circuits and the transmissions. The application of this simulation to the design and development of the robot is discussed. The mechanisms thought to be responsible for the leg coordination of the walking stick insect were previously applied to control the straight-line locomotion of a robot; we generalized these rules for a robot walking on a plane. This biologically-inspired control strategy is used to control the robot in simulation. Numerical results show that the general body motion and performance of the simulated robot are similar to those of the physical robot, based on our preliminary experimental results.
Beaming into the News: A System for and Case Study of Tele-Immersive Journalism.
Kishore, Sameer; Navarro, Xavi; Dominguez, Eva; de la Pena, Nonny; Slater, Mel
2016-05-25
We show how a combination of virtual reality and robotics can be used to beam a physical representation of a person to a distant location, and describe an application of this system in the context of journalism. Full body motion capture data of a person is streamed and mapped in real time, onto the limbs of a humanoid robot present at the remote location. A pair of cameras in the robot's 'eyes' stream stereoscopic video back to the HMD worn by the visitor, and a two-way audio connection allows the visitor to talk to people in the remote destination. By fusing the multisensory data of the visitor with the robot, the visitor's 'consciousness' is transformed to the robot's body. This system was used by a journalist to interview a neuroscientist and a chef 900 miles distant, about food for the brain, resulting in an article published in the popular press.
Configuration-Control Scheme Copes With Singularities
NASA Technical Reports Server (NTRS)
Seraji, Homayoun; Colbaugh, Richard D.
1993-01-01
Improved configuration-control scheme for robotic manipulator having redundant degrees of freedom suppresses large joint velocities near singularities, at expense of small trajectory errors. Provides means to enforce order of priority of tasks assigned to robot. Basic concept of configuration control of redundant robot described in "Increasing The Dexterity Of Redundant Robots" (NPO-17801).
Design, development, and evaluation of an MRI-guided SMA spring-actuated neurosurgical robot
Ho, Mingyen; Kim, Yeongjin; Cheng, Shing Shin; Gullapalli, Rao; Desai, Jaydev P.
2015-01-01
In this paper, we present our work on the development of a magnetic resonance imaging (MRI)-compatible Minimally Invasive Neurosurgical Intracranial Robot (MINIR) comprising shape memory alloy (SMA) spring actuators and a tendon-sheath mechanism. We present detailed modeling and analysis along with experimental results of the characterization of the SMA spring actuators. Furthermore, to demonstrate image-feedback control, we used images obtained from a camera to control the motion of the robot, so that continuous MR images can eventually be used for this purpose. Since the image tracking algorithm may fail in some situations, we also developed a temperature-feedback control scheme that serves as a backup controller for the robot. Experimental results demonstrated that both image feedback and temperature feedback can be used to control the motion of MINIR. A series of MRI compatibility tests were performed on the robot, and the experimental results demonstrated that the robot is MRI compatible and that no significant visual image distortion was observed in the MR images during robot operation. PMID:26622075
Application of dexterous space robotics technology to myoelectric prostheses
NASA Astrophysics Data System (ADS)
Hess, Clifford; Li, Larry C. H.; Farry, Kristin A.; Walker, Ian D.
1994-02-01
Future space missions will require robots equipped with highly dexterous robotic hands to perform a variety of tasks. A major technical challenge in making this possible is an improvement in the way these dexterous robotic hands are remotely controlled or teleoperated. NASA is currently investigating the feasibility of using myoelectric signals to teleoperate a dexterous robotic hand. In theory, myoelectric control of robotic hands will require little or no mechanical parts and will greatly reduce the bulk and weight usually found in dexterous robotic hand control devices. An improvement in myoelectric control of multifinger hands will also benefit prosthetics users. Therefore, as an effort to transfer dexterous space robotics technology to prosthetics applications and to benefit from existing myoelectric technology, NASA is collaborating with the Limbs of Love Foundation, the Institute for Rehabilitation and Research, and Rice University in developing improved myoelectric control multifinger hands and prostheses. In this paper, we will address the objectives and approaches of this collaborative effort and discuss the technical issues associated with myoelectric control of multifinger hands. We will also report our current progress and discuss plans for future work.
Design And Control Of Agricultural Robot For Tomato Plants Treatment And Harvesting
NASA Astrophysics Data System (ADS)
Sembiring, Arnes; Budiman, Arif; Lestari, Yuyun D.
2017-12-01
Although Indonesia is one of the biggest agricultural countries in the world, the implementation of robotic technology, automation, and efficiency enhancement in its agricultural processes has not yet been extensive. This research proposes a low-cost agricultural robot architecture. The robot can help farmers survey their farm area, treat tomato plants, and harvest ripe tomatoes. Communication between farmer and robot is carried over a wireless radio link to cover a wide area (120 m radius), combined with Bluetooth to simplify communication between the robot and the farmer's Android smartphone. The robot is equipped with a camera, so farmers can survey the farm situation in real time through a 7-inch monitor display. Farmers control the robot and arm movement through a user interface on the Android smartphone. The user interface contains control icons that allow farmers to drive the robot (forward, reverse, turn right, and turn left) and to cut the spotty leaves or harvest the ripe tomatoes.
NASA Astrophysics Data System (ADS)
Billard, Aude
2000-10-01
This paper summarizes a number of experiments in biologically inspired robotics. The common feature of all the experiments is the use of artificial neural networks as the building blocks for the controllers. The experiments speak in favor of using a connectionist approach for designing adaptive and flexible robot controllers, and for modeling neurological processes. I present 1) DRAMA, a novel connectionist architecture, which has general properties for learning time series and extracting spatio-temporal regularities in multi-modal and highly noisy data; 2) Robota, a doll-shaped robot, which imitates and learns a proto-language; 3) an experiment in collective robotics, where a group of 4 to 15 Khepera robots dynamically learns the topography of an environment whose features change frequently; 4) an abstract, computational model of the primate ability to learn by imitation; 5) a model for the control of locomotor gaits in a quadruped legged robot.
Review of control strategies for robotic movement training after neurologic injury.
Marchal-Crespo, Laura; Reinkensmeyer, David J
2009-06-16
There is increasing interest in using robotic devices to assist in movement training following neurologic injuries such as stroke and spinal cord injury. This paper reviews control strategies for robotic therapy devices. Several categories of strategies have been proposed, including assistive, challenge-based, haptic-simulation, and coaching strategies. The greatest amount of work has been done on developing assistive strategies, and thus the majority of this review summarizes techniques for implementing them, including impedance-, counterbalance-, and EMG-based controllers, as well as adaptive controllers that modify control parameters based on ongoing participant performance. Clinical evidence regarding the relative effectiveness of different types of robotic therapy controllers is limited, but there is initial evidence that some control strategies are more effective than others. It is also now apparent that there may be mechanisms by which some robotic control approaches actually decrease the recovery possible with comparable, non-robotic forms of training. In future research, there is a need for head-to-head comparison of control algorithms in randomized, controlled clinical trials, and for improved models of human motor recovery to provide a more rational framework for designing robotic therapy control strategies.
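As a concrete illustration of one assistive strategy mentioned above, here is a minimal deadband impedance controller: it applies no force while the limb stays near the reference trajectory and a proportional restoring force outside that band. The gains and deadband width are invented for illustration, not taken from the review.

```python
def assistive_force(x, x_ref, deadband=0.02, stiffness=200.0):
    """Return assisting force [N] toward the reference position [m]."""
    error = x_ref - x
    if abs(error) <= deadband:
        return 0.0  # inside the deadband: let the patient move freely
    # push proportionally to how far outside the deadband the limb is
    excess = error - deadband if error > 0 else error + deadband
    return stiffness * excess

print(assistive_force(0.0, 0.01))            # small error: no assistance
print(round(assistive_force(0.0, 0.10), 6))  # large error: restoring force
```

The deadband is what makes the strategy "assist-as-needed": the patient experiences no force while tracking well, addressing the concern in the review that over-assistance may reduce recovery.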
Some aspects of robotics calibration, design and control
NASA Technical Reports Server (NTRS)
Tawfik, Hazem
1990-01-01
The main objective is to introduce techniques in the areas of testing and calibration, design, and control of robotic systems. A statistical technique is described that analyzes a robot's performance and provides quantitative three-dimensional evaluation of its repeatability, accuracy, and linearity. Based on this analysis, a corrective action should be taken to compensate for any existing errors and enhance the robot's overall accuracy and performance. A comparison between robotics simulation software packages that were commercially available (SILMA, IGRIP) and that of Kennedy Space Center (ROBSIM) is also included. These computer codes simulate the kinematics and dynamics patterns of various robot arm geometries to help the design engineer in sizing and building the robot manipulator and control system. A brief discussion on an adaptive control algorithm is provided.
Mathematical model for adaptive control system of ASEA robot at Kennedy Space Center
NASA Technical Reports Server (NTRS)
Zia, Omar
1989-01-01
The dynamic properties and the mathematical model for the adaptive control of the robotic system presently under investigation at the Robotic Application and Development Laboratory at Kennedy Space Center are discussed. NASA is currently investigating the use of robotic manipulators for mating and demating fuel lines to the Space Shuttle vehicle prior to launch. The robotic system used as a testbed for this purpose is an ASEA IRB-90 industrial robot with adaptive control capabilities. The system was tested, and its performance with respect to stability was improved by using an analogue force controller. The objective of this research project is to determine the mathematical model of the system operating under force-feedback control with varying internal dynamic perturbation, in order to provide continuous stable operation under variable load conditions. A series of lumped-parameter models are developed. The models include some effects of robot structural dynamics, sensor compliance, and workpiece dynamics.
Design and real-time control of a robotic system for fracture manipulation.
Dagnino, G; Georgilas, I; Tarassoli, P; Atkins, R; Dogramadzi, S
2015-08-01
This paper presents the design, development, and control of a new robotic system for fracture manipulation. The objective is to improve the precision, ergonomics, and safety of the traditional surgical procedure to treat joint fractures. The achievements toward this direction are reported here and include the design, the real-time control architecture, and the evaluation of a new robotic manipulator system. The robotic manipulator is a 6-DOF parallel robot with struts developed as linear actuators. The high-level controller implements a host-target structure composed of a host computer (PC), a real-time controller, and an FPGA. A graphical user interface was designed to allow the surgeon to comfortably automate and monitor the robotic system. The real-time controller guarantees the determinism of the control algorithms, adding an extra level of safety for the robotic automation. The system's positioning accuracy and repeatability have been demonstrated, showing a maximum positioning RMSE of 1.18 ± 1.14 mm (translations) and 1.85 ± 1.54° (rotations).
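The accuracy figures above are RMSE statistics over repeated positioning moves. A minimal illustration of how such a translational RMSE is computed (the commanded and measured poses here are made up):

```python
import numpy as np

def translation_rmse(commanded, measured):
    """RMSE of Euclidean position error over repeated moves (same units as input)."""
    err = np.linalg.norm(np.asarray(measured) - np.asarray(commanded), axis=1)
    return float(np.sqrt(np.mean(err ** 2)))

cmd = [[0, 0, 0], [10, 0, 0], [10, 5, 0]]          # commanded positions, mm
meas = [[0.5, 0, 0], [10, 1.0, 0], [10, 5, 1.5]]   # measured positions, mm
print(round(translation_rmse(cmd, meas), 3))
```

With per-move errors of 0.5, 1.0, and 1.5 mm, the RMSE is sqrt(3.5/3) ≈ 1.08 mm; the ± value quoted in the abstract is the spread of the per-move errors around that figure.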
Modeling and controlling a robotic convoy using guidance laws strategies.
Belkhouche, Fethi; Belkhouche, Boumediene
2005-08-01
This paper deals with the problem of modeling and controlling a robotic convoy. Guidance-law techniques are used to provide a mathematical formulation of the problem; the guidance laws used for this purpose are the velocity pursuit, the deviated pursuit, and proportional navigation. The velocity pursuit equations model the robot's path under various sensor-based control laws, and a systematic study of the tracking problem based on this technique is undertaken. These guidance laws are applied to derive decentralized control laws for the angular and linear velocities. For the angular velocity, the control law is derived directly from the guidance laws after considering the relative kinematics equations between successive robots. The second control law keeps the distance between successive robots constant by controlling the linear velocity; it is derived by considering the kinematics equations between successive robots under the chosen guidance law. Properties of the method are discussed and proven. Simulation results confirm the validity of the approach, as well as the stated properties of the method.
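The convoy idea can be sketched with the simplest of the guidance laws named above, velocity pursuit: each follower steers its heading toward the line of sight to its predecessor (angular-velocity law) and adjusts its speed to hold the gap (linear-velocity law). The gains and scenario below are illustrative, not the paper's.

```python
import math

def follower_step(fx, fy, th, lx, ly, dt=0.05, d_ref=1.0, k_w=2.0, k_v=1.0):
    """One unicycle-model step of a follower pursuing a leader at (lx, ly)."""
    los = math.atan2(ly - fy, lx - fx)                         # line-of-sight angle
    err = math.atan2(math.sin(los - th), math.cos(los - th))   # wrapped heading error
    w = k_w * err                                              # velocity pursuit: turn toward LOS
    r = math.hypot(lx - fx, ly - fy)
    v = k_v * (r - d_ref)                                      # hold the gap at d_ref
    th += w * dt
    fx += v * math.cos(th) * dt
    fy += v * math.sin(th) * dt
    return fx, fy, th

# leader drives along +x at 0.5 m/s; follower starts behind and off-axis
fx, fy, th = -3.0, 1.0, 0.0
for k in range(2000):
    lx, ly = 0.5 * k * 0.05, 0.0
    fx, fy, th = follower_step(fx, fy, th, lx, ly)
gap = math.hypot(lx - fx, ly - fy)
print(round(gap, 2), round(fy, 3))
```

Note the proportional speed law trails a moving leader at d_ref + v_leader/k_v (here 1.5 m rather than 1.0 m); removing that steady-state offset is one reason the paper analyzes the richer guidance laws.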
Wang, Hesheng; Zhang, Runxi; Chen, Weidong; Wang, Xiaozhou; Pfeifer, Rolf
2017-08-01
Minimally invasive surgery attracts more and more attention because of the advantages of minimal trauma, less bleeding and pain, and a low complication rate. However, minimally invasive surgery on a beating heart is still a challenge. Our goal is to develop a soft-robot surgical system for single-port minimally invasive surgery on a beating heart. The soft robot described in this paper is inspired by the octopus arm: although the octopus arm is soft and has many degrees of freedom (DOFs), it can be controlled flexibly. The soft robot is driven by cables embedded in the soft manipulator, which control the direction of the end and middle of the manipulator, while forward, backward, and rotational movement is driven by a propulsion plant. The soft robot can move freely by properly controlling the cables and the propulsion plant, and the system can perform different thoracic operations by changing surgical instruments. To evaluate the flexibility, controllability, and reachability of the designed system, testing experiments were conducted in vivo on a swine. Through the subxiphoid, the soft manipulator entered the thoracic cavity and pericardial cavity smoothly and performed operations such as biopsy, ligation, and ablation. The operations were performed successfully and did not cause any damage to the surrounding soft tissues. These experiments verified the flexibility, controllability, and reachability of the soft-robot surgical system and showed that it can be used in the thoracic and pericardial cavities for different operations. Compared with other endoscopy robots, the soft-robot surgical system is safer, has more DOFs, and is more flexible to control. When performing operations on a beating heart, this system may be more suitable than traditional endoscopy robots.
Direct adaptive control of a PUMA 560 industrial robot
NASA Technical Reports Server (NTRS)
Seraji, Homayoun; Lee, Thomas; Delpech, Michel
1989-01-01
The implementation and experimental validation of a new direct adaptive control scheme on a PUMA 560 industrial robot is described. The testbed facility consists of a Unimation PUMA 560 six-jointed robot and controller, and a DEC MicroVAX II computer which hosts the Robot Control C Library software. The control algorithm is implemented on the MicroVAX which acts as a digital controller for the PUMA robot, and the Unimation controller is effectively bypassed and used merely as an I/O device to interface the MicroVAX to the joint motors. The control algorithm for each robot joint consists of an auxiliary signal generated by a constant-gain Proportional plus Integral plus Derivative (PID) controller, and an adaptive position-velocity (PD) feedback controller with adjustable gains. The adaptive independent joint controllers compensate for the inter-joint couplings and achieve accurate trajectory tracking without the need for the complex dynamic model and parameter values of the robot. Extensive experimental results on PUMA joint control are presented to confirm the feasibility of the proposed scheme, in spite of strong interactions between joint motions. Experimental results validate the capabilities of the proposed control scheme. The control scheme is extremely simple and computationally very fast for concurrent processing with high sampling rates.
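The per-joint structure described above, a fixed-gain PID auxiliary signal plus a PD term with adjustable gains, can be sketched as follows. The adaptation law here is a generic gradient-type update on a simulated one-joint plant, not the paper's actual scheme, and all gains are illustrative.

```python
class AdaptiveJointController:
    def __init__(self, kp0=5.0, kd0=1.0, ki=0.5, gamma=0.01, dt=0.01):
        self.kp, self.kd, self.ki = kp0, kd0, ki
        self.gamma, self.dt = gamma, dt
        self.integral = 0.0

    def torque(self, q_des, q, qd_des, qd):
        e, ed = q_des - q, qd_des - qd
        self.integral += e * self.dt
        # fixed-gain PID auxiliary signal
        aux = 5.0 * e + self.ki * self.integral + 1.0 * ed
        # gradient-type adaptation of the PD gains (illustrative law)
        self.kp += self.gamma * e * e
        self.kd += self.gamma * ed * ed
        return aux + self.kp * e + self.kd * ed

# simulate one joint (inertia J, viscous friction c) tracking a step to 1 rad;
# the controller needs no knowledge of J or c, mirroring the model-free claim
q, qd, J, c = 0.0, 0.0, 2.0, 0.5
ctrl = AdaptiveJointController()
for _ in range(6000):  # 60 s at dt = 0.01
    tau = ctrl.torque(1.0, q, 0.0, qd)
    qdd = (tau - c * qd) / J
    qd += qdd * ctrl.dt
    q += qd * ctrl.dt
print(round(q, 3))
```

The joint converges to the setpoint without a dynamic model of the plant, and the PD gains grow only while tracking error persists, which is the intuition behind running one such independent controller per joint.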
Pereira, José N; Silva, Porfírio; Lima, Pedro U; Martinoli, Alcherio
2014-01-01
The work described is part of a long term program of introducing institutional robotics, a novel framework for the coordination of robot teams that stems from institutional economics concepts. Under the framework, institutions are cumulative sets of persistent artificial modifications made to the environment or to the internal mechanisms of a subset of agents, thought to be functional for the collective order. In this article we introduce a formal model of institutional controllers based on Petri nets. We define executable Petri nets-an extension of Petri nets that takes into account robot actions and sensing-to design, program, and execute institutional controllers. We use a generalized stochastic Petri net view of the robot team controlled by the institutional controllers to model and analyze the stochastic performance of the resulting distributed robotic system. The ability of our formalism to replicate results obtained using other approaches is assessed through realistic simulations of up to 40 e-puck robots. In particular, we model a robot swarm and its institutional controller with the goal of maintaining wireless connectivity, and successfully compare our model predictions and simulation results with previously reported results, obtained by using finite state automaton models and controllers.
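A minimal sketch of the executable-Petri-net idea: places hold tokens, and each transition carries an optional guard (standing in for robot sensing) and an action (standing in for a robot command). The places, guards, and scenario below are invented for illustration, not the paper's institutional controllers.

```python
class PetriNet:
    def __init__(self, marking):
        self.marking = dict(marking)   # place -> token count
        self.transitions = []          # (inputs, outputs, guard, action)

    def add_transition(self, inputs, outputs, guard=None, action=None):
        self.transitions.append((inputs, outputs, guard, action))

    def step(self):
        """Fire the first enabled transition; return True if one fired."""
        for inputs, outputs, guard, action in self.transitions:
            enabled = all(self.marking.get(p, 0) >= 1 for p in inputs)
            if enabled and (guard is None or guard()):
                for p in inputs:
                    self.marking[p] -= 1
                for p in outputs:
                    self.marking[p] = self.marking.get(p, 0) + 1
                if action:
                    action()
                return True
        return False

log = []
net = PetriNet({"searching": 1})
net.add_transition(["searching"], ["connected"],
                   guard=lambda: True,  # stands in for "neighbour in range"
                   action=lambda: log.append("open wifi link"))
net.add_transition(["connected"], ["done"],
                   action=lambda: log.append("report position"))
while net.step():
    pass
print(net.marking, log)
```

Because the controller is a net rather than ad-hoc code, the same structure can be handed to a stochastic Petri-net analyzer to predict team-level performance, which is the modeling benefit the abstract emphasizes.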
Adaptive model-based assistive control for pneumatic direct driven soft rehabilitation robots.
Wilkening, Andre; Ivlev, Oleg
2013-06-01
Assistive behavior and inherent compliance are assumed to be the essential properties for effective robot-assisted therapy in neurological as well as in orthopedic rehabilitation. This paper presents two adaptive model-based assistive controllers for pneumatic direct driven soft rehabilitation robots that are based on separated models of the soft-robot and the patient's extremity, in order to take into account the individual patient's behavior, effort and ability during control, what is assumed to be essential to relearn lost motor functions in neurological and facilitate muscle reconstruction in orthopedic rehabilitation. The high inherent compliance of soft-actuators allows for a general human-robot interaction and provides the base for effective and dependable assistive control. An inverse model of the soft-robot with estimated parameters is used to achieve robot transparency during treatment and inverse adaptive models of the individual patient's extremity allow the controllers to learn on-line the individual patient's behavior and effort and react in a way that assist the patient only as much as needed. The effectiveness of the controllers is evaluated with unimpaired subjects using a first prototype of a soft-robot for elbow training. Advantages and disadvantages of both controllers are analyzed and discussed.
Fast attainment of computer cursor control with noninvasively acquired brain signals
NASA Astrophysics Data System (ADS)
Bradberry, Trent J.; Gentili, Rodolphe J.; Contreras-Vidal, José L.
2011-06-01
Brain-computer interface (BCI) systems are allowing humans and non-human primates to drive prosthetic devices such as computer cursors and artificial arms with just their thoughts. Invasive BCI systems acquire neural signals with intracranial or subdural electrodes, while noninvasive BCI systems typically acquire neural signals with scalp electroencephalography (EEG). Some drawbacks of invasive BCI systems are the inherent risks of surgery and gradual degradation of signal integrity. A limitation of noninvasive BCI systems for two-dimensional control of a cursor, in particular those based on sensorimotor rhythms, is the lengthy training time required by users to achieve satisfactory performance. Here we describe a novel approach to continuously decoding imagined movements from EEG signals in a BCI experiment with reduced training time. We demonstrate that, using our noninvasive BCI system and observational learning, subjects were able to accomplish two-dimensional control of a cursor with performance levels comparable to those of invasive BCI systems. Compared to other studies of noninvasive BCI systems, training time was substantially reduced, requiring only a single session of decoder calibration (~20 min) and subject practice (~20 min). In addition, we used standardized low-resolution brain electromagnetic tomography to reveal that the neural sources that encoded observed cursor movement may implicate a human mirror neuron system. These findings offer the potential to continuously control complex devices such as robotic arms with one's mind without lengthy training or surgery.
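The general flavor of continuous decoding can be shown with a linear (ridge-regression) decoder fitted from multichannel signals to cursor velocity. The data below are synthetic stand-ins; the paper's actual EEG preprocessing and decoder are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)
n, ch = 1000, 8
true_w = rng.standard_normal(ch)
X = rng.standard_normal((n, ch))               # stand-in "EEG" feature matrix
y = X @ true_w + 0.5 * rng.standard_normal(n)  # stand-in cursor velocity

# closed-form ridge regression: w = (X'X + lam*I)^-1 X'y
lam = 1.0
w = np.linalg.solve(X.T @ X + lam * np.eye(ch), X.T @ y)
r = float(np.corrcoef(X @ w, y)[0, 1])
print(round(r, 2))
```

The decoded-vs-true correlation is the kind of figure of merit such studies report; calibrating one decoder like this on a short block of data is what keeps the quoted training time to tens of minutes.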
NASA Astrophysics Data System (ADS)
Panfil, Wawrzyniec; Moczulski, Wojciech
2017-10-01
This paper presents a control system for a group of mobile robots intended for carrying out inspection missions. The main research problem was to define a control system that facilitates cooperation among the robots in realizing the committed inspection tasks. Many well-known control systems use auctions for task allocation, where the subject of an auction is a task to be allocated. For missions characterized by a much larger number of tasks than robots, however, it may be better if robots, rather than tasks, are the subjects of auctions. The second identified problem concerns one-sided robot-to-task fitness evaluation: assessing robot-to-task fitness and task attractiveness simultaneously should positively affect the overall effectiveness of the multi-robot system. The elaborated system assigns tasks to robots using various methods for evaluating fitness between robots and tasks, together with several task-allocation methods. A multi-criteria analysis method is proposed that combines two assessments: the robot's competitive position for a task among other robots, and the task's attractiveness for the robot among other tasks. Task-allocation methods applying this multi-criteria analysis are also proposed. Both the elaborated system and the proposed task-allocation methods were verified in simulated experiments, with the object under test being a group of inspection mobile robots acting as a virtual counterpart of a real mobile-robot group.
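The two-sided assessment described above can be sketched as a score combining each robot's competitive position for a task (its fitness relative to other robots) with the task's attractiveness for that robot (relative to its other options), followed by a greedy assignment. The fitness matrix and the exact scoring formula are invented for illustration.

```python
def allocate(fitness):
    """fitness[r][t] -> score in (0, 1]; returns {robot: task}, tasks unique."""
    n_r, n_t = len(fitness), len(fitness[0])
    pairs = []
    for r in range(n_r):
        for t in range(n_t):
            rivals = max(fitness[i][t] for i in range(n_r))   # best robot for t
            options = max(fitness[r][j] for j in range(n_t))  # best task for r
            # two-sided score: competitive position * attractiveness
            score = (fitness[r][t] / rivals) * (fitness[r][t] / options)
            pairs.append((score, r, t))
    assignment, used_r, used_t = {}, set(), set()
    for score, r, t in sorted(pairs, reverse=True):
        if r not in used_r and t not in used_t:
            assignment[r] = t
            used_r.add(r)
            used_t.add(t)
    return assignment

fit = [[0.9, 0.4, 0.2],
       [0.8, 0.9, 0.3],
       [0.1, 0.2, 0.8]]
print(allocate(fit))
```

Here robot 1 is nearly as good at task 0 as robot 0, but task 1 is more attractive to it, so the two-sided score steers each robot to its mutually best match rather than to its raw-fitness maximum.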
NASA Technical Reports Server (NTRS)
Rodriguez, Guillermo (Editor)
1990-01-01
Various papers on intelligent control and adaptive systems are presented. Individual topics addressed include: control architecture for a Mars walking vehicle, representation for error detection and recovery in robot task plans, real-time operating system for robots, execution monitoring of a mobile robot system, statistical mechanics models for motion and force planning, global kinematics for manipulator planning and control, exploration of unknown mechanical assemblies through manipulation, low-level representations for robot vision, harmonic functions for robot path construction, simulation of dual behavior of an autonomous system. Also discussed are: control framework for hand-arm coordination, neural network approach to multivehicle navigation, electronic neural networks for global optimization, neural network for L1 norm linear regression, planning for assembly with robot hands, neural networks in dynamical systems, control design with iterative learning, improved fuzzy process control of spacecraft autonomous rendezvous using a genetic algorithm.
Master-slave robotic system for needle indentation and insertion.
Shin, Jaehyun; Zhong, Yongmin; Gu, Chengfan
2017-12-01
Bilateral control of a master-slave robotic system is a challenging issue in robot-assisted minimally invasive surgery. It requires knowledge of the contact interaction between a surgical (slave) robot and soft tissues. This paper presents a master-slave robotic system for needle indentation and insertion that is able to characterize the contact interaction between the robotic needle and soft tissues. A bilateral controller is implemented using a linear motor for robotic needle indentation and insertion, and a new nonlinear state observer is developed to monitor the contact interaction with soft tissues online. Experimental results demonstrate the efficacy of the proposed master-slave robotic system for needle indentation and needle insertion.
Advanced Development for Space Robotics With Emphasis on Fault Tolerance Technology
NASA Technical Reports Server (NTRS)
Tesar, Delbert
1997-01-01
This report describes work developing fault tolerant redundant robotic architectures and adaptive control strategies for robotic manipulator systems which can dynamically accommodate drastic robot manipulator mechanism, sensor or control failures and maintain stable end-point trajectory control with minimum disturbance. Kinematic designs of redundant, modular, reconfigurable arms for fault tolerance were pursued at a fundamental level. The approach developed robotic testbeds to evaluate disturbance responses of fault tolerant concepts in robotic mechanisms and controllers. The development was implemented in various fault tolerant mechanism testbeds including duality in the joint servo motor modules, parallel and serial structural architectures, and dual arms. All have real-time adaptive controller technologies to react to mechanism or controller disturbances (failures) to perform real-time reconfiguration to continue the task operations. The developments fall into three main areas: hardware, software, and theoretical.
An analysis of value function learning with piecewise linear control
NASA Astrophysics Data System (ADS)
Tutsoy, Onder; Brown, Martin
2016-05-01
Reinforcement learning (RL) algorithms attempt to learn optimal control actions by iteratively estimating a long-term measure of system performance, the so-called value function. For example, RL algorithms have been applied to walking robots to examine the connection between robot motion and the brain, which is known as embodied cognition. In this paper, RL algorithms are analysed using an exemplar test problem. A closed form solution for the value function is calculated and this is represented in terms of a set of basis functions and parameters, which is used to investigate parameter convergence. The value function expression is shown to have a polynomial form where the polynomial terms depend on the plant's parameters and the value function's discount factor. It is shown that the temporal difference error introduces a null space for the differenced higher order basis associated with the effects of controller switching (saturated to linear control or terminating an experiment) apart from the time of the switch. This leads to slow convergence in the relevant subspace. It is also shown that badly conditioned learning problems can occur, and this is a function of the value function discount factor and the controller switching points. Finally, a comparison is performed between the residual gradient and TD(0) learning algorithms, and it is shown that the former has a faster rate of convergence for this test problem.
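A tiny illustration of the value-function idea the analysis above builds on: tabular TD(0) on a deterministic three-state chain (s0 → s1 → s2 → terminal, reward 1 per step), whose closed-form discounted values are 1 + g + g², 1 + g, and 1. This is a toy stand-in, not the paper's piecewise-linear-control test problem.

```python
g, alpha = 0.9, 0.1  # discount factor and learning rate
V = [0.0, 0.0, 0.0]
for _ in range(2000):
    for s in range(3):
        # TD(0) target: reward plus discounted value of the successor state
        target = 1.0 + (g * V[s + 1] if s < 2 else 0.0)
        V[s] += alpha * (target - V[s])
print([round(v, 2) for v in V])
```

The estimates converge to the closed-form values [2.71, 1.9, 1.0]; the paper's contribution is characterizing when such convergence becomes slow or badly conditioned once controller switching and function approximation enter the picture.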
Research and development of service robot platform based on artificial psychology
NASA Astrophysics Data System (ADS)
Zhang, Xueyuan; Wang, Zhiliang; Wang, Fenhua; Nagai, Masatake
2007-12-01
Some related work on the control architecture of robot systems is briefly summarized. Building on that discussion, this paper proposes a control architecture for a service robot based on artificial psychology. In this architecture, the robot obtains cognition of its environment through sensors; the input is processed by intelligent, affective, and learning models; and the robot finally expresses its reaction to outside stimulation through its behavior. To clarify the architecture, its hierarchical structure is also discussed: the control system of the robot is divided into five layers, namely the physical layer, the drives layer, the information-processing and behavior-programming layer, the application layer, and the system inspection and control layer. This paper shows how system integration is achieved across hardware modules, software interfaces, and fault diagnosis. The embedded system GENE-8310 is selected as the PC platform of the robot APROS-I, with a CF card as its primary storage medium. The arms and body of the robot are composed of 13 motors and connecting fittings; in addition, the robot has a head with emotional facial expressions, and the head has 13 DOFs. The emotional and intelligent model is one of the most important parts of human-machine interaction, so in order to better simulate human emotion, an emotional interaction model for the robot is proposed according to Maslow's theory of need levels and Simonov's theory of mood information. This architecture has already been used in our intelligent service robot.
Web Environment for Programming and Control of a Mobile Robot in a Remote Laboratory
ERIC Educational Resources Information Center
dos Santos Lopes, Maísa Soares; Gomes, Iago Pacheco; Trindade, Roque M. P.; da Silva, Alzira F.; de C. Lima, Antonio C.
2017-01-01
Remote robotics laboratories have been successfully used for engineering education. However, few of them use mobile robots to teach computer science. This article describes a mobile robot Control and Programming Environment (CPE) and its pedagogical applications. The system comprises a remote laboratory for robotics, an online programming tool,…
Effect of a human-type communication robot on cognitive function in elderly women living alone.
Tanaka, Masaaki; Ishii, Akira; Yamano, Emi; Ogikubo, Hiroki; Okazaki, Masatsugu; Kamimura, Kazuro; Konishi, Yasuharu; Emoto, Shigeru; Watanabe, Yasuyoshi
2012-09-01
Considering the high prevalence of dementia, it would be of great value to develop effective tools to improve cognitive function. We examined the effects of a human-type communication robot on cognitive function in elderly women living alone. In this study, 34 healthy elderly female volunteers living alone were randomized to living with either a communication robot or a control robot at home for 8 weeks. The shape, voice, and motion features of the communication robot resemble those of a 3-year-old boy, while the control robot was not designed to talk or nod. Before living with the robot and 4 and 8 weeks after living with the robot, experiments were conducted to evaluate a variety of cognitive functions as well as saliva cortisol, sleep, and subjective fatigue, motivation, and healing. The Mini-Mental State Examination score, judgement, and verbal memory function were improved after living with the communication robot; those functions were not altered with the control robot. In addition, the saliva cortisol level was decreased, nocturnal sleeping hours tended to increase, and difficulty in maintaining sleep tended to decrease with the communication robot, although alterations were not shown with the control. The proportions of the participants in whom effects on attenuation of fatigue, enhancement of motivation, and healing could be recognized were higher in the communication robot group relative to the control group. This study demonstrates that living with a human-type communication robot may be effective for improving cognitive functions in elderly women living alone.
Control of a Robot Dancer for Enhancing Haptic Human-Robot Interaction in Waltz.
Hongbo Wang; Kosuge, K
2012-01-01
Haptic interaction between a human leader and a robot follower in waltz is studied in this paper. An inverted pendulum model is used to approximate the human's body dynamics. With feedback from the force sensor and laser range finders, the robot is able to estimate the human leader's state using an extended Kalman filter (EKF). To reduce the interaction force, two robot controllers, namely an admittance with virtual force controller and an inverted pendulum controller, are proposed and evaluated in experiments. The former controller failed the experiment, and the reasons for the failure are explained, while the use of the latter controller is validated by the experimental results.
Efficient Control Law Simulation for Multiple Mobile Robots
DOE Office of Scientific and Technical Information (OSTI.GOV)
Driessen, B.J.; Feddema, J.T.; Kotulski, J.D.
1998-10-06
In this paper we consider the problem of simulating simple control laws involving large numbers of mobile robots. Such simulation can be computationally prohibitive if the number of robots is large enough, say 1 million, due to the O(N^2) cost of each time step. This work therefore uses hierarchical tree-based methods for calculating the control law. These tree-based approaches have O(N log N) cost per time step, thus allowing for efficient simulation involving a large number of robots. For concreteness, a decentralized control law which involves only the distance and bearing to the closest neighbor robot will be considered. The time to calculate the control law for each robot at each time step is demonstrated to be O(log N).
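The O(N^2) blow-up and its remedy can be sketched with a uniform-grid spatial hash, a simpler cousin of the tree methods in the abstract. This is a hypothetical sketch, not the authors' implementation: positions and cell size are invented, and it assumes each robot's nearest neighbour lies within one grid cell of it (i.e. the swarm is reasonably dense relative to the cell size).

```python
import math
from collections import defaultdict

def nearest_neighbours(positions, cell=1.0):
    """For each robot, find the closest other robot without an all-pairs scan."""
    grid = defaultdict(list)
    for i, (x, y) in enumerate(positions):
        grid[(int(x // cell), int(y // cell))].append(i)
    result = []
    for i, (x, y) in enumerate(positions):
        cx, cy = int(x // cell), int(y // cell)
        best, best_d = None, float("inf")
        for dx in (-1, 0, 1):                 # search only the 3x3 cell neighbourhood
            for dy in (-1, 0, 1):
                for j in grid[(cx + dx, cy + dy)]:
                    if j == i:
                        continue
                    d = math.hypot(x - positions[j][0], y - positions[j][1])
                    if d < best_d:
                        best, best_d = j, d
        result.append((best, best_d))          # neighbour index and distance
    return result
```

The decentralized control law in the abstract would then use each `(best, best_d)` pair (distance and bearing to the closest neighbour) as its input; a quadtree or k-d tree gives the O(N log N) bound stated above without the density assumption.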
Complete low-cost implementation of a teleoperated control system for a humanoid robot.
Cela, Andrés; Yebes, J Javier; Arroyo, Roberto; Bergasa, Luis M; Barea, Rafael; López, Elena
2013-01-24
Humanoid robotics is a field of great research interest nowadays. This work implements a low-cost teleoperated system to control a humanoid robot, as a first step for further development and study of human motion and walking. A human suit is built, consisting of 8 sensors: 6 resistive linear potentiometers on the lower extremities and 2 digital accelerometers for the arms. The goal is to replicate the suit movements in a small humanoid robot. The data from the sensors is wirelessly transmitted via two ZigBee RF configurable modules installed on each device: the robot and the suit. Replicating the suit movements requires a robot stability control module to prevent falling down while executing different actions involving knee flexion. This is carried out via a feedback control system with an accelerometer placed on the robot's back. The measurement from this sensor is filtered using a Kalman filter. In addition, a two-input fuzzy algorithm controlling five servo motors regulates the robot balance. The humanoid robot is controlled by a medium capacity processor and a low computational cost is achieved for executing the different algorithms. Both the hardware and software of the system are based on open platforms. The successful experiments carried out validate the implementation of the proposed teleoperated system.
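The accelerometer filtering step can be illustrated with a scalar Kalman filter tracking a roughly constant tilt angle. This is a minimal sketch, not the paper's filter: the process and measurement variances `q` and `r`, the noise level, and the "true" tilt are all hypothetical.

```python
import random

def kalman_1d(measurements, q=1e-4, r=0.5):
    """Scalar Kalman filter: smooth noisy readings of a slowly varying quantity."""
    x, p = 0.0, 1.0                 # state estimate and its variance
    out = []
    for z in measurements:
        p += q                      # predict: tilt assumed roughly constant
        k = p / (p + r)             # Kalman gain
        x += k * (z - x)            # correct with the measurement residual
        p *= (1 - k)
        out.append(x)
    return out

random.seed(0)
true_tilt = 5.0                     # degrees, hypothetical
noisy = [true_tilt + random.gauss(0, 0.7) for _ in range(200)]
filtered = kalman_1d(noisy)
```

With a small `q` relative to `r`, the filter trusts its prediction more than any single noisy sample, which is the behaviour wanted for a back-mounted accelerometer feeding a balance loop.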
Innovations in prosthetic interfaces for the upper extremity.
Kung, Theodore A; Bueno, Reuben A; Alkhalefah, Ghadah K; Langhals, Nicholas B; Urbanchek, Melanie G; Cederna, Paul S
2013-12-01
Advancements in modern robotic technology have led to the development of highly sophisticated upper extremity prosthetic limbs. High-fidelity volitional control of these devices is dependent on the critical interface between the patient and the mechanical prosthesis. Recent innovations in prosthetic interfaces have focused on several control strategies. Targeted muscle reinnervation is currently the most immediately applicable prosthetic control strategy and is particularly indicated in proximal upper extremity amputations. Investigation into various brain interfaces has allowed acquisition of neuroelectric signals directly or indirectly from the central nervous system for prosthetic control. Peripheral nerve interfaces permit signal transduction from both motor and sensory nerves with a higher degree of selectivity. This article reviews the current developments in each of these interface systems and discusses the potential of these approaches to facilitate motor control and sensory feedback in upper extremity neuroprosthetic devices.
What do we learn about development from baby robots?
Oudeyer, Pierre-Yves
2017-01-01
Understanding infant development is one of the great scientific challenges of contemporary science. In addressing this challenge, robots have proven useful as they allow experimenters to model the developing brain and body and understand the processes by which new patterns emerge in sensorimotor, cognitive, and social domains. Robotics also complements traditional experimental methods in psychology and neuroscience, where only a few variables can be studied at the same time. Moreover, work with robots has enabled researchers to systematically explore the role of the body in shaping the development of skill. All told, this work has shed new light on development as a complex dynamical system. WIREs Cogn Sci 2017, 8:e1395. doi: 10.1002/wcs.1395
How to make an autonomous robot as a partner with humans: design approach versus emergent approach.
Fujita, M
2007-01-15
In this paper, we discuss what factors are important to realize an autonomous robot as a partner with humans. We believe that it is important to interact with people without boring them, using verbal and non-verbal communication channels. We have already developed autonomous robots such as AIBO and QRIO, whose behaviours are manually programmed and designed. We realized, however, that this design approach has limitations; therefore we propose a new approach, intelligence dynamics, where interacting in a real-world environment using embodiment is considered very important. There are pioneering works related to this approach from brain science, cognitive science, robotics and artificial intelligence. We assert that it is important to study the emergence of entire sets of autonomous behaviours and present our approach towards this goal.
A Human Factors Analysis of Proactive Support in Human-Robot Teaming
2015-09-28
IEEE/RSJ Intl. Conference on Intelligent Robots and Systems, September 28, 2015. A human teammate is remotely controlling a robot while working with an intelligent robot teammate 'Mary'. Our main result shows that the subjects generally…
Development of 6-DOF painting robot control system
NASA Astrophysics Data System (ADS)
Huang, Junbiao; Liu, Jianqun; Gao, Weiqiang
2017-01-01
With the development of society, spraying technology in China's manufacturing industry has changed from manual operation to automatic spraying by 6-DOF (Degree Of Freedom) robots. A spray painting robot can not only take over work that is harmful to human beings, but also improve production efficiency and save labor costs. The control system is the most critical part of a 6-DOF robot; however, there is still a lack of relevant technology research in China. It is therefore necessary to study a control system for 6-DOF spray painting robots that is easy to operate and has high efficiency and stable performance. Using the Googol controller platform, this paper develops programs based on the Windows CE embedded system to control the robot to perform painting work. Software development is the core of the robot control system, including the direct teaching module, playback module, motion control module, setting module, man-machine interface, alarm module, log module, etc. All development work on the software system has been completed, and it has been verified that the software runs stably and efficiently.
A force-controllable macro-micro manipulator and its application to medical robots
NASA Technical Reports Server (NTRS)
Marzwell, Neville I.; Uecker, Darrin R.; Wang, Yulun
1994-01-01
This paper describes an 8-degrees-of-freedom macro-micro robot. This robot is capable of performing tasks that require accurate force control, such as polishing, finishing, grinding, deburring, and cleaning. The design of the macro-micro mechanism, the control algorithms, and the hardware/software implementation of the algorithms are described in this paper. Initial experimental results are reported. In addition, this paper includes a discussion of medical surgery and the role that force control may play. We introduce a new class of robotic systems collectively called Robotic Enhancement Technology (RET). RET systems introduce the combination of robotic manipulation with human control to perform manipulation tasks beyond the individual capability of either human or machine. The RET class of robotic systems offers new challenges in mechanism design, control-law development, and man/machine interface design. We believe force-controllable mechanisms such as the macro-micro structure we have developed are a necessary part of RET. Work in progress in the area of RET systems and their application to minimally invasive surgery is presented, along with future research directions.
Doroodgar, Barzin; Liu, Yugang; Nejat, Goldie
2014-12-01
Semi-autonomous control schemes can address the limitations of both teleoperation and fully autonomous robotic control of rescue robots in disaster environments by allowing a human operator to cooperate with a rescue robot and share tasks such as navigation, exploration, and victim identification. In this paper, we present a unique hierarchical reinforcement learning (HRL)-based semi-autonomous control architecture for rescue robots operating in cluttered and unknown urban search and rescue (USAR) environments. The aim of the controller is to enable a rescue robot to continuously learn from its own experiences in an environment in order to improve its overall performance in exploration of unknown disaster scenes. A direction-based exploration technique is integrated in the controller to expand the search area of the robot via the classification of regions and the rubble piles within these regions. Both simulations and physical experiments in USAR-like environments verify the robustness of the proposed HRL-based semi-autonomous controller to unknown cluttered scenes with different sizes and varying types of configurations.
Improving the transparency of a rehabilitation robot by exploiting the cyclic behaviour of walking.
van Dijk, W; van der Kooij, H; Koopman, B; van Asseldonk, E H F
2013-06-01
To promote active participation of neurological patients during robotic gait training, controllers, such as "assist as needed" or "cooperative control", are suggested. Apart from providing support, these controllers also require that the robot should be capable of resembling natural, unsupported, walking. This means that they should have a transparent mode, where the interaction forces between the human and the robot are minimal. Traditional feedback-control algorithms do not exploit the cyclic nature of walking to improve the transparency of the robot. The purpose of this study was to improve the transparent mode of robotic devices, by developing two controllers that use the rhythmic behavior of gait. Both controllers use adaptive frequency oscillators and kernel-based non-linear filters. Kernel-based non-linear filters can be used to estimate signals and their time derivatives, as a function of the gait phase. The first controller learns the motor angle, associated with a certain joint angle pattern, and acts as a feed-forward controller to improve the torque tracking (including the zero-torque mode). The second controller learns the state of the mechanical system and compensates for the dynamical effects (e.g. the acceleration of robot masses). Both controllers have been tested separately and in combination on a small subject population. Using the feed-forward controller resulted in an improved torque tracking of at least 52 percent at the hip joint, and 61 percent at the knee joint. When both controllers were active simultaneously, the interaction power between the robot and the human leg was reduced by at least 40 percent at the thigh, and 43 percent at the shank. These results indicate that if a robotic task is cyclic, the torque tracking and transparency can be improved by exploiting the predictions of adaptive frequency oscillators and kernel-based non-linear filters.
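The adaptive frequency oscillator at the core of both controllers can be sketched as a phase oscillator whose frequency entrains to a periodic input. This is a minimal sketch under hypothetical gains and frequencies, not the authors' implementation: a standard phase-adaptive oscillator driven by a cosine "gait" signal, whose adapted frequency converges to the input frequency.

```python
import math

def adaptive_frequency_oscillator(input_fn, dt=0.001, steps=30000,
                                  omega0=4.0, k=2.0):
    """Phase oscillator with frequency adaptation (Hopf/phase-oscillator style)."""
    phase, omega = 0.0, omega0
    trace = []
    for i in range(steps):
        f = input_fn(i * dt)                 # periodic teaching signal
        corr = k * f * math.sin(phase)       # coupling between input and phase
        phase += dt * (omega - corr)         # phase entrains to the input
        omega += dt * (-corr)                # frequency adapts toward the input's
        trace.append(omega)
    return trace

# hypothetical gait-like input at 5 rad/s; the oscillator starts at 4 rad/s
trace = adaptive_frequency_oscillator(lambda t: math.cos(5.0 * t))
```

Once `omega` has locked to the gait frequency, the oscillator's phase provides the gait-phase index that the kernel-based filters in the abstract use to predict signals and their derivatives.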
Tool actuation and force feedback on robot-assisted microsurgery system
NASA Technical Reports Server (NTRS)
Das, Hari (Inventor); Ohm, Tim R. (Inventor); Boswell, Curtis D. (Inventor); Steele, Robert D. (Inventor)
2002-01-01
An input control device with force sensors is configured to sense hand movements of a surgeon performing a robot-assisted microsurgery. The sensed hand movements actuate a mechanically decoupled robot manipulator. A microsurgical manipulator, attached to the robot manipulator, is activated to move small objects and perform microsurgical tasks. A force-feedback element coupled to the robot manipulator and the input control device provides the input control device with an amplified sense of touch in the microsurgical manipulator.
Comparison of two techniques of robot-aided upper limb exercise training after stroke.
Stein, Joel; Krebs, Hermano Igo; Frontera, Walter R; Fasoli, Susan E; Hughes, Richard; Hogan, Neville
2004-09-01
This study examined whether incorporating progressive resistive training into robot-aided exercise training provides incremental benefits over active-assisted robot-aided exercise for the upper limb after stroke. A total of 47 individuals at least 1 yr poststroke were enrolled in this 6-wk training protocol. Paretic upper limb motor abilities were evaluated using clinical measures and a robot-based assessment to determine eligibility for robot-aided progressive resistive training at study entry. Subjects capable of participating in resistance training were randomized to receive either active-assisted robot-aided exercises or robot-aided progressive resistance training. Subjects who were incapable of participating in resistance training underwent active-assisted robotic therapy and were again screened for eligibility after 3 wks of robotic therapy. Those subjects capable of participating in resistance training at 3 wks were then randomized to receive either robot-aided resistance training or to continue with robot-aided active-assisted training. One subject withdrew due to unrelated medical issues, and data for the remaining 46 subjects were analyzed. Subjects in all groups showed improvement in measures of motor control (mean increase in Fugl-Meyer of 3.3; 95% confidence interval, 2.2-4.4) and maximal force (mean increase in maximal force of 3.5 N, P = 0.027) over the course of robot-aided exercise training. No differences in outcome measures were observed between the resistance training groups and the matched active-assisted training groups. Subjects' ability to perform the robotic task at the time of group assignment predicted the magnitude of the gain in motor control. The incorporation of robot-aided progressive resistance exercises into a program of robot-aided exercise did not favorably or negatively affect the gains in motor control or strength associated with this training, though interpretation of these results is limited by sample size. 
Individuals with better motor control at baseline experienced greater increases in motor control with robotic training.
[Control of intelligent car based on electroencephalogram and neurofeedback].
Li, Song; Xiong, Xin; Fu, Yunfa
2018-02-01
To improve the performance of a brain-controlled intelligent car based on motor imagery (MI), a method for controlling the car based on neurofeedback (NF) with electroencephalogram (EEG) signals is proposed. A mental strategy of MI was adopted in which an energy bar diagram of EEG features related to the mental activity is presented to subjects as real-time visual feedback, training them to quickly master the skills of MI and regulate their EEG activity; multi-feature fusion of MI and multi-classifier decisions were then combined to control the intelligent car online. The average, maximum and minimum accuracy of identifying instructions achieved by the trained group (trained with the designed feedback system before the experiment) were 85.71%, 90.47% and 76.19%, respectively, and the corresponding accuracies achieved by the control group (untrained) were 73.32%, 80.95% and 66.67%, respectively. For the trained group, the average, longest and shortest times consumed were 92 s, 101 s and 85 s, respectively, while for the control group the corresponding times were 115.7 s, 120 s and 110 s, respectively. These results suggest that this study may provide a new idea for the follow-up development of brain-controlled intelligent robots using neurofeedback with EEG related to MI.
NASA Technical Reports Server (NTRS)
Erickson, Jon D. (Editor)
1992-01-01
The present volume on cooperative intelligent robotics in space discusses sensing and perception, Space Station Freedom robotics, cooperative human/intelligent robot teams, and intelligent space robotics. Attention is given to space robotics reasoning and control, ground-based space applications, intelligent space robotics architectures, free-flying orbital space robotics, and cooperative intelligent robotics in space exploration. Topics addressed include proportional proximity sensing for telerobots using coherent laser radar, ground operation of the mobile servicing system on Space Station Freedom, teleprogramming a cooperative space robotic workcell for space stations, and knowledge-based task planning for the special-purpose dextrous manipulator. Also discussed are dimensions of complexity in learning from interactive instruction, an overview of the dynamic predictive architecture for robotic assistants, recent developments at the Goddard engineering testbed, and parallel fault-tolerant robot control.
Lim, Hoon; Matsumoto, Nozomu; Cho, Byunghyun; Hong, Jaesung; Yamashita, Makoto; Hashizume, Makoto; Yi, Byung-Ju
2016-04-01
To develop an otological robot that can protect important organs from being injured, we developed a five degree-of-freedom robot for otological surgery. Unlike other robots reported previously, our robot does not replace the surgeon's procedures, but instead utilizes human-robot collaborative control. The robot basically releases all of the actuators so that the surgeon can manipulate the drill within the robot's working area with minimal restriction. When the drill reaches a forbidden area, the surgeon feels as if the drill has hit a wall. When an engineer performed a mastoidectomy using the robot for assistance, the facial nerve in the segmented region was always protected with a margin of more than 2.5 mm, which was almost the same as the pre-set safety margin of 3 mm. Semi-manual drilling using human-robot collaborative control was feasible, and may hold a realistic prospect of clinical use in the near future.
The Structure, Design, and Closed-Loop Motion Control of a Differential Drive Soft Robot.
Wu, Pang; Jiangbei, Wang; Yanqiong, Fei
2018-02-01
This article presents the structure, design, and motion control of an inchworm-inspired pneumatic soft robot, which can perform differential movement. This robot consists mainly of two columns of pneumatic multi-airbags (actuators), one sensor, one baseboard, front feet, and rear feet. By varying the inflation times of the left and right actuators, the robot can perform both linear and turning movements. The actuators of this robot are composed of multiple airbags, and the design of the airbags is analyzed. To deal with the nonlinear behavior of the soft robot, we use radial basis function (RBF) neural networks to train the turning ability of the robot on three different surfaces and create a mathematical model relating the coefficient of friction, deflection angle, and inflation time. We then establish a closed-loop automatic control model using a three-axis electronic compass sensor. Finally, the automatic control model is verified by linear and turning movement experiments, which show that the robot can complete linear and turning movements under the closed-loop control system.
Lyapunov vector function method in the motion stabilisation problem for nonholonomic mobile robot
NASA Astrophysics Data System (ADS)
Andreev, Aleksandr; Peregudova, Olga
2017-07-01
In this paper we propose a sampled-data control law for stabilising the nonstationary motion of a nonholonomic mobile robot. We assume that the robot moves on a horizontal surface without slipping. The dynamical model of the mobile robot is considered. The robot has one free front wheel and two rear wheels driven by two independent electric motors. We assume that the controls are piecewise-constant signals. The controller design relies on the backstepping procedure together with the Lyapunov vector-function method. The theoretical results are verified by numerical simulation.
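The sampled-data (piecewise-constant) control setting can be illustrated on unicycle kinematics with a simple polar-coordinates feedback law recomputed at each sampling instant and held constant between samples. This is a hypothetical sketch, not the paper's backstepping controller: the gains, sampling period, and goal are invented for illustration.

```python
import math

def drive_to_goal(x, y, theta, goal, T=0.1, dt=0.001, periods=300):
    """Zero-order-hold control of a unicycle: resample the feedback law every T s."""
    gx, gy = goal
    substeps = int(round(T / dt))
    for _ in range(periods):
        # sample the feedback law once per period (piecewise-constant controls)
        rho = math.hypot(gx - x, gy - y)
        alpha = math.atan2(gy - y, gx - x) - theta
        alpha = math.atan2(math.sin(alpha), math.cos(alpha))  # wrap to [-pi, pi]
        v = 1.0 * rho * math.cos(alpha)       # forward speed, held constant
        w = 4.0 * alpha                        # turning rate, held constant
        for _ in range(substeps):              # integrate kinematics under the hold
            x += dt * v * math.cos(theta)
            y += dt * v * math.sin(theta)
            theta += dt * w
    return x, y, theta
```

The key sampled-data feature is that stability must hold despite the controls being frozen between sampling instants; here the sampling period T is short relative to the closed-loop time constants, so the held controls still drive the robot to the goal.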
Manipulator control and mechanization: A telerobot subsystem
NASA Technical Reports Server (NTRS)
Hayati, S.; Wilcox, B.
1987-01-01
The short- and long-term autonomous robot control activities in the Robotics and Teleoperators Research Group at the Jet Propulsion Laboratory (JPL) are described. This group is one of several involved in robotics and is an integral part of a new NASA robotics initiative called Telerobot program. A description of the architecture, hardware and software, and the research direction in manipulator control is given.
Melidis, Christos; Iizuka, Hiroyuki; Marocco, Davide
2018-05-01
In this paper, we present a novel approach to human-robot control. Taking inspiration from behaviour-based robotics and self-organisation principles, we present an interfacing mechanism with the ability to adapt both towards the user and towards the robotic morphology. The aim is a transparent mechanism connecting user and robot, allowing for a seamless integration of control signals and robot behaviours. Instead of the user adapting to the interface and control paradigm, the proposed architecture allows the user to shape the control motifs in their way of preference, moving away from the case where the user has to read and understand an operation manual or learn to operate a specific device. Starting from a tabula rasa basis, the architecture is able to identify control patterns (behaviours) for the given robotic morphology and successfully merge them with control signals from the user, regardless of the input device used. The structural components of the interface are presented and assessed both individually and as a whole. Inherent properties of the architecture are presented and explained, and emergent properties are investigated. As a whole, this approach highlights the potential for a change in the paradigm of robotic control and a new level in the taxonomy of human-in-the-loop systems.
Comparison of human and humanoid robot control of upright stance.
Peterka, Robert J
2009-01-01
There is considerable recent interest in developing humanoid robots. An important substrate for many motor actions in both humans and biped robots is the ability to maintain a statically or dynamically stable posture. Given the success of the human design, one would expect there are lessons to be learned in formulating a postural control mechanism for robots. In this study we limit ourselves to considering the problem of maintaining upright stance. Human stance control is compared to a suggested method for robot stance control called zero moment point (ZMP) compensation. Results from experimental and modeling studies suggest there are two important subsystems that account for the low- and mid-frequency (DC to approximately 1Hz) dynamic characteristics of human stance control. These subsystems are (1) a "sensory integration" mechanism whereby orientation information from multiple sensory systems encoding body kinematics (i.e. position, velocity) is flexibly combined to provide an overall estimate of body orientation while allowing adjustments (sensory re-weighting) that compensate for changing environmental conditions and (2) an "effort control" mechanism that uses kinetic-related (i.e., force-related) sensory information to reduce the mean deviation of body orientation from upright. Functionally, ZMP compensation is directly analogous to how humans appear to use kinetic feedback to modify the main sensory integration feedback loop controlling body orientation. However, a flexible sensory integration mechanism is missing from robot control leaving the robot vulnerable to instability in conditions where humans are able to maintain stance. We suggest the addition of a simple form of sensory integration to improve robot stance control. We also investigate how the biological constraint of feedback time delay influences the human stance control design. 
The human system may serve as a guide for improved robot control, but should not be directly copied because the constraints on robot and human control are different.
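The two subsystems described here, sensory integration with re-weighting and kinetic feedback around an inverted-pendulum body, can be reduced to a toy model. Everything in this sketch is hypothetical (gains, delay, surface tilt, sensor models): proprioception is modelled as measuring body angle relative to a tilted support surface, the vestibular channel as measuring body angle relative to gravity, and a delayed PD-like torque stabilises the pendulum.

```python
import math
from collections import deque

def simulate(w_vest, surface_tilt=0.02, delay_s=0.1, dt=0.001, T=10.0):
    """Inverted-pendulum stance with weighted sensor fusion and feedback delay."""
    g_over_l, kp, kd = 9.81, 20.0, 4.0
    theta, dtheta = 0.05, 0.0                   # small initial lean (rad)
    lag = deque([0.0] * int(round(delay_s / dt)))  # neural time-delay buffer
    for _ in range(int(round(T / dt))):
        prop = theta - surface_tilt             # proprioception: body re. surface
        vest = theta                            # vestibular: body re. gravity
        est = (1 - w_vest) * prop + w_vest * vest   # sensory integration
        lag.append(est)
        torque = -kp * lag.popleft() - kd * dtheta  # delayed corrective torque
        dtheta += dt * (g_over_l * math.sin(theta) + torque)
        theta += dt * dtheta
    return theta                                # steady-state body lean
```

Re-weighting toward the vestibular channel shrinks the steady-state lean induced by the tilted surface, which is the kind of environment-dependent adjustment the abstract argues is missing from ZMP-style robot stance control.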
Positive position control of robotic manipulators
NASA Technical Reports Server (NTRS)
Baz, A.; Gumusel, L.
1989-01-01
The present, simple and accurate position-control algorithm, which is applicable to fast-moving and lightly damped robot arms, is based on the positive position feedback (PPF) strategy and relies solely on position sensors to monitor joint angles of robotic arms to furnish stable position control. The optimized tuned filters, in the form of a set of difference equations, manipulate position signals for robotic system performance. Attention is given to comparisons between this PPF-algorithm controller's experimentally ascertained performance characteristics and those of a conventional proportional controller.
Experiments in thrusterless robot locomotion control for space applications. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Jasper, Warren Joseph
1990-01-01
While performing complex assembly tasks or moving about in space, a space robot should minimize the amount of propellant consumed. A study is presented of space robot locomotion and orientation without the use of thrusters. The goal was to design a robot control paradigm that will perform thrusterless locomotion between two points on a structure, and to implement this paradigm on an experimental robot. A two arm free flying robot was constructed which floats on a cushion of air to simulate in 2-D the drag free, zero-g environment of space. The robot can impart momentum to itself by pushing off from an external structure in a coordinated two arm maneuver, and can then reorient itself by activating a momentum wheel. The controller design consists of two parts: a high level strategic controller and a low level dynamic controller. The control paradigm was verified experimentally by commanding the robot to push off from a structure with both arms, rotate 180 degs while translating freely, and then to catch itself on another structure. This method, based on the computed torque, provides a linear feedback law in momentum and its derivatives for a system of rigid bodies.
He, Yongtian; Nathan, Kevin; Venkatakrishnan, Anusha; Rovekamp, Roger; Beck, Christopher; Ozdemir, Recep; Francisco, Gerard E; Contreras-Vidal, Jose L
2014-01-01
Stroke remains a leading cause of disability, limiting independent ambulation in survivors, and consequently affecting quality of life (QOL). Recent technological advances in neural interfacing with robotic rehabilitation devices are promising in the context of gait rehabilitation. Here, the X1, NASA's powered robotic lower limb exoskeleton, is introduced as a potential diagnostic, assistive, and therapeutic tool for stroke rehabilitation. Additionally, the feasibility of decoding lower limb joint kinematics and kinetics during walking with the X1 from scalp electroencephalographic (EEG) signals--the first step towards the development of a brain-machine interface (BMI) system to the X1 exoskeleton--is demonstrated.
Comparison of tongue interface with keyboard for control of an assistive robotic arm.
Struijk, Lotte N S Andreasen; Lontis, Romulus
2017-07-01
This paper demonstrates how an assistive 6-DoF robotic arm with a gripper can be controlled manually using a tongue interface. The proposed method suggests that it is possible for a user to manipulate the surroundings with his or her tongue using the inductive tongue control system deployed in this study. The sensors of an inductive tongue-computer interface were mapped to Cartesian control of an assistive robotic arm. The resulting control system was tested in order to compare manual control of the robot using a standard keyboard with control using the tongue interface. Two healthy subjects controlled the robotic arm to precisely move a bottle of water from one location to another. The results show that the tongue interface was able to fully control the robotic arm in a manner similar to the standard keyboard, resulting in the same number of successful manipulations and an average increase in task duration of up to 30% compared with the standard keyboard.
Controlling legs for locomotion-insights from robotics and neurobiology.
Buschmann, Thomas; Ewald, Alexander; von Twickel, Arndt; Büschges, Ansgar
2015-06-29
Walking is the most common terrestrial form of locomotion in animals. Its great versatility and flexibility has led to many attempts at building walking machines with similar capabilities. The control of walking is an active research area both in neurobiology and robotics, with a large and growing body of work. This paper gives an overview of the current knowledge on the control of legged locomotion in animals and machines and attempts to give walking control researchers from biology and robotics an overview of the current knowledge in both fields. We try to summarize the knowledge on the neurobiological basis of walking control in animals, emphasizing common principles seen in different species. In a section on walking robots, we review common approaches to walking controller design with a slight emphasis on biped walking control. We show where parallels between robotic and neurobiological walking controllers exist and how robotics and biology may benefit from each other. Finally, we discuss where research in the two fields diverges and suggest ways to bridge these gaps.
Kinematic equations for resolved-rate control of an industrial robot arm
NASA Technical Reports Server (NTRS)
Barker, L. K.
1983-01-01
An operator can use kinematic, resolved-rate equations to dynamically control a robot arm by watching its response to commanded inputs. Known resolved-rate equations for the control of a particular six-degree-of-freedom industrial robot arm are derived and then simplified for faster computation. Methods for controlling the robot arm in regions which normally cause mathematical singularities in the resolved-rate equations are discussed.
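The core of resolved-rate control is mapping a commanded end-effector velocity to joint rates through the inverse Jacobian. As a sketch only, here it is for a planar two-link arm with hypothetical unit link lengths (the paper treats a six-degree-of-freedom industrial arm, where the singularity handling it discusses matters far more):

```python
# Minimal resolved-rate control sketch for a planar 2-link arm.
# Joint rates follow from inverting the Jacobian: qdot = J^{-1} * xdot.
import math

def jacobian(q1, q2, l1=1.0, l2=1.0):
    s1, c1 = math.sin(q1), math.cos(q1)
    s12, c12 = math.sin(q1 + q2), math.cos(q1 + q2)
    return [[-l1*s1 - l2*s12, -l2*s12],
            [ l1*c1 + l2*c12,  l2*c12]]

def resolved_rate(q1, q2, xdot, ydot):
    (a, b), (c, d) = jacobian(q1, q2)
    det = a*d - b*c
    if abs(det) < 1e-9:          # near a singularity (elbow fully extended/folded)
        raise ValueError("Jacobian singular; resolved-rate ill-conditioned")
    # 2x2 matrix inverse applied to the commanded Cartesian velocity
    qd1 = ( d*xdot - b*ydot) / det
    qd2 = (-c*xdot + a*ydot) / det
    return qd1, qd2

qd1, qd2 = resolved_rate(0.0, math.pi/2, 1.0, 0.0)
```

Near singular configurations the determinant vanishes and joint rates blow up, which is exactly the regime the paper's special-case methods address.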
Cooperative system and method using mobile robots for testing a cooperative search controller
Byrne, Raymond H.; Harrington, John J.; Eskridge, Steven E.; Hurtado, John E.
2002-01-01
A test system for testing a controller provides a way to use large numbers of miniature mobile robots to test a cooperative search controller in a test area, where each mobile robot has a sensor, a communication device, a processor, and a memory. A method of using a test system provides a way for testing a cooperative search controller using multiple robots sharing information and communicating over a communication network.
Workspace Safe Operation of a Force- or Impedance-Controlled Robot
NASA Technical Reports Server (NTRS)
Abdallah, Muhammad E. (Inventor); Hargrave, Brian (Inventor); Strawser, Philip A. (Inventor); Yamokoski, John D. (Inventor)
2013-01-01
A method of controlling a robotic manipulator of a force- or impedance-controlled robot within an unstructured workspace includes imposing a saturation limit on a static force applied by the manipulator to its surrounding environment, and may include determining a contact force between the manipulator and an object in the unstructured workspace, and executing a dynamic reflex when the contact force exceeds a threshold to thereby alleviate an inertial impulse not addressed by the saturation limited static force. The method may include calculating a required reflex torque to be imparted by a joint actuator to a robotic joint. A robotic system includes a robotic manipulator having an unstructured workspace and a controller that is electrically connected to the manipulator, and which controls the manipulator using force- or impedance-based commands. The controller, which is also disclosed herein, automatically imposes the saturation limit and may execute the dynamic reflex noted above.
Jiang, Zhongliang; Sun, Yu; Gao, Peng; Hu, Ying; Zhang, Jianwei
2016-01-01
Robots play increasingly important roles in daily life and bring us much convenience, but significant differences remain between human-human interaction and human-robot interaction. Our goal is to make robots behave more like humans. We design a controller which can sense a force acting on any point of a robot and ensure the robot moves in accordance with that force. First, a spring-mass-dashpot system is used to describe the physical model, with this second-order system forming the kernel of the controller. The state-space equations of the system are then established, and a particle swarm optimization algorithm is used to obtain the system parameters. To test the stability of the system, the root-locus diagram is presented in the paper. Finally, experiments were carried out on the robotic spinal surgery system developed by our team; the results show that the new controller performs better during human-robot interaction.
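The spring-mass-dashpot kernel described above is the classic admittance-control idea: a sensed force drives a virtual second-order system whose motion the robot then tracks. A toy one-dimensional sketch, with invented parameters (the paper tunes its parameters via particle swarm optimization):

```python
# Admittance sketch: sensed force drives M*x'' + B*x' + K*x = F,
# and the resulting x is the motion command. Parameters are hypothetical.

def admittance_step(x, v, force, dt, m=1.0, b=8.0, k=20.0):
    a = (force - b*v - k*x) / m     # Newton's law for the virtual system
    v_new = v + a*dt                # semi-implicit Euler integration
    x_new = x + v_new*dt
    return x_new, v_new

x, v = 0.0, 0.0
for _ in range(1000):               # constant 10 N push held for 1 s
    x, v = admittance_step(x, v, 10.0, 0.001)
# x settles toward F/k = 0.5
```

Stiffer k makes the robot resist the operator's push; larger b makes it feel more damped, which is the trade-off the optimized parameters balance.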
Serendipitous Offline Learning in a Neuromorphic Robot.
Stewart, Terrence C; Kleinhans, Ashley; Mundy, Andrew; Conradt, Jörg
2016-01-01
We demonstrate a hybrid neuromorphic learning paradigm that learns complex sensorimotor mappings based on a small set of hard-coded reflex behaviors. A mobile robot is first controlled by a basic set of reflexive hand-designed behaviors. All sensor data is provided via a spike-based silicon retina camera (eDVS), and all control is implemented via spiking neurons simulated on neuromorphic hardware (SpiNNaker). Given this control system, the robot is capable of simple obstacle avoidance and random exploration. To train the robot to perform more complex tasks, we observe the robot and find instances where the robot accidentally performs the desired action. Data recorded from the robot during these times is then used to update the neural control system, increasing the likelihood of the robot performing that task in the future, given a similar sensor state. As an example application of this general-purpose method of training, we demonstrate the robot learning to respond to novel sensory stimuli (a mirror) by turning right if it is present at an intersection, and otherwise turning left. In general, this system can learn arbitrary relations between sensory input and motor behavior.
Benefits and problems of health-care robots in aged care settings: A comparison trial.
Broadbent, Elizabeth; Kerse, Ngaire; Peri, Kathryn; Robinson, Hayley; Jayawardena, Chandimal; Kuo, Tony; Datta, Chandan; Stafford, Rebecca; Butler, Haley; Jawalkar, Pratyusha; Amor, Maddy; Robins, Ben; MacDonald, Bruce
2016-03-01
This study investigated whether multiple health-care robots could have any benefits or cause any problems in an aged care facility. Fifty-three residents and 53 staff participated in a non-randomised controlled trial over 12 weeks. Six robots provided entertainment, communication and health-monitoring functions in staff rooms and activity lounges. These settings were compared to control settings without robots. There were no significant differences between groups in resident or staff outcomes, except a significant increase in job satisfaction in the control group only. The intervention group perceived the robots had more agency and experience than the control group did. Perceived agency of the robots decreased over time in both groups. Overall, we received very mixed responses with positive, neutral and negative comments. The robots had no major benefits or problems. Future research could give robots stronger operational roles, use more specific outcome measures, and perform cost-benefit analyses. © 2015 AJA Inc.
NASA Astrophysics Data System (ADS)
Murata, Naoya; Katsura, Seiichiro
Acquisition of information about the environment around a mobile robot is important for purposes such as controlling the robot from a remote location or when the robot is running autonomously. Many studies use audiovisual information, but the acquisition of force-sensation information, which is also part of the environmental information, has not been well researched. The mobile-hapto, a remote control system with force information, has been proposed, but the robot used in that system can acquire only the horizontal component of forces. For this reason, in this research, a three-wheeled mobile robot consisting of seven actuators was developed and its control system constructed. It can obtain information on horizontal and vertical forces without using force sensors. With this robot, detailed information on the forces in the environment can be acquired, and the operability of the robot and its capability to adjust to the environment are expected to improve.
Autonomous stair-climbing with miniature jumping robots.
Stoeter, Sascha A; Papanikolopoulos, Nikolaos
2005-04-01
The problem of vision-guided control of miniature mobile robots is investigated. Untethered mobile robots with small physical dimensions of around 10 cm or less do not permit powerful onboard computers because of size and power constraints. These challenges have, in the past, reduced the functionality of such devices to that of a complex remote control vehicle with fancy sensors. With the help of a computationally more powerful entity such as a larger companion robot, the control loop can be closed. Using the miniature robot's video transmission or that of an observer to localize it in the world, control commands can be computed and relayed to the inept robot. The result is a system that exhibits autonomous capabilities. The framework presented here solves the problem of climbing stairs with the miniature Scout robot. The robot's unique locomotion mode, the jump, is employed to hop one step at a time. Methods for externally tracking the Scout are developed. A large number of real-world experiments are conducted and the results discussed.
Nonlinear disturbance observer based sliding mode control of a cable-driven rehabilitation robot.
Niu, Jie; Yang, Qianqian; Chen, Guangtao; Song, Rong
2017-07-01
This paper introduces a cable-driven robot for upper-limb rehabilitation. The kinematics and dynamics of this rehabilitation robot are analyzed. A sliding mode controller combined with a nonlinear disturbance observer is proposed to control the robot in the presence of disturbances. Simulation is carried out to prove the effectiveness of the proposed control scheme, and the results of the proposed controller are compared with a PID controller and a traditional sliding mode controller. Results show that the proposed controller effectively improves tracking performance compared with the other two controllers and exhibits less chattering than a traditional sliding mode controller.
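To make the structure concrete, here is a heavily simplified one-degree-of-freedom sketch of a sliding mode control step augmented with a disturbance estimate. Everything here is a generic illustration with invented gains; the paper's cable-driven dynamics, observer design, and stability proof are substantially more involved.

```python
# Generic 1-DOF sliding mode step with a first-order disturbance estimate.
# lam: sliding-surface slope, eta: switching gain, L: observer gain.
# All gains are hypothetical, not from the paper.
import math

def smc_ndo_step(e, edot, dhat, dt, lam=5.0, eta=2.0, L=50.0):
    s = edot + lam*e                             # sliding surface s = e' + lam*e
    # tanh smooths the sign() switching term to reduce chattering
    u = -lam*edot - eta*math.tanh(10*s) - dhat   # reaching law minus disturbance estimate
    dhat_new = dhat + L*s*dt                     # crude observer update driven by s
    return u, dhat_new

u, dhat = smc_ndo_step(e=0.1, edot=0.0, dhat=0.0, dt=0.001)
```

The point of the observer term is that the switching gain eta no longer has to dominate the full disturbance bound, which is why chattering drops relative to plain sliding mode control.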
Combined virtual and real robotic test-bed for single operator control of multiple robots
NASA Astrophysics Data System (ADS)
Lee, Sam Y.-S.; Hunt, Shawn; Cao, Alex; Pandya, Abhilash
2010-04-01
Teams of heterogeneous robots with different dynamics or capabilities could perform a variety of tasks such as multipoint surveillance, cooperative transport and exploration in hazardous environments. In this study, we work with heterogeneous teams of semi-autonomous ground and aerial robots for contaminant localization. We developed a human interface system which linked every real robot to its virtual counterpart. A novel virtual interface has been integrated with Augmented Reality that can monitor the position and sensory information from the video feed of ground and aerial robots in the 3D virtual environment, and improve user situational awareness. An operator can efficiently control the real multi-robots using the Drag-to-Move method on the virtual multi-robots. This enables an operator to control groups of heterogeneous robots in a collaborative way, allowing more contaminant sources to be pursued simultaneously. An advanced feature of the virtual interface system is guarded teleoperation, which can be used to prevent operators from accidentally driving multiple robots into walls and other objects. Moreover, the image guidance and tracking feature reduces operator workload.
Stiffness Control of Surgical Continuum Manipulators
Mahvash, Mohsen; Dupont, Pierre E.
2013-01-01
This paper introduces the first stiffness controller for continuum robots. The control law is based on an accurate approximation of a continuum robot’s coupled kinematic and static force model. To implement a desired tip stiffness, the controller drives the actuators to positions corresponding to a deflected robot configuration that produces the required tip force for the measured tip position. This approach provides several important advantages. First, it enables the use of robot deflection sensing as a means to both sense and control tip forces. Second, it enables stiffness control to be implemented by modification of existing continuum robot position controllers. The proposed controller is demonstrated experimentally in the context of a concentric tube robot. Results show that the stiffness controller achieves the desired stiffness in steady state, provides good dynamic performance, and exhibits stability during contact transitions. PMID:24273466
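The control law the abstract outlines can be sketched in a few lines: measure the tip position, compute the force a virtual spring of the desired stiffness would exert, then command the actuator configuration that the robot's force model says produces that tip force. The `model_inverse` function below is a hypothetical stand-in for the paper's coupled kinematic/static model of a concentric tube robot.

```python
# Conceptual stiffness-control sketch: position sensing in, actuator command out.

def stiffness_control(x_measured, x_desired, k_desired, model_inverse):
    f_tip = k_desired * (x_desired - x_measured)   # virtual spring force
    return model_inverse(x_measured, f_tip)        # actuator command that yields f_tip

# Toy 1-D "model": actuator displacement proportional to the requested force.
cmd = stiffness_control(x_measured=0.02, x_desired=0.0, k_desired=500.0,
                        model_inverse=lambda x, f: x + f / 1000.0)
```

Because force is inferred from deflection rather than measured directly, the same position sensing serves both estimation and control, which is the advantage the abstract emphasizes.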
Navigation of a care and welfare robot
NASA Astrophysics Data System (ADS)
Yukawa, Toshihiro; Hosoya, Osamu; Saito, Naoki; Okano, Hideharu
2005-12-01
In this paper, we propose the development of a robot that can perform nursing tasks in a hospital. In a narrow environment such as a sickroom or a hallway, the robot must be able to move freely in arbitrary directions. Therefore, the robot needs to have high controllability and the capability to make precise movements. Our robot can recognize a line by using cameras, and can be controlled in the reference directions by means of comparison with original cell map information; furthermore, it moves safely on the basis of an original center-line established permanently in the building. Correspondence between the robot and a centralized control center enables the robot's autonomous movement in the hospital. Through a navigation system using cell map information, the robot is able to perform nursing tasks smoothly by changing the camera angle.
Improving brain-machine interface performance by decoding intended future movements
NASA Astrophysics Data System (ADS)
Willett, Francis R.; Suminski, Aaron J.; Fagg, Andrew H.; Hatsopoulos, Nicholas G.
2013-04-01
Objective. A brain-machine interface (BMI) records neural signals in real time from a subject's brain, interprets them as motor commands, and reroutes them to a device such as a robotic arm, so as to restore lost motor function. Our objective here is to improve BMI performance by minimizing the deleterious effects of delay in the BMI control loop. We mitigate the effects of delay by decoding the subject's intended movements a short time lead in the future. Approach. We use the decoded, intended future movements of the subject as the control signal that drives the movement of our BMI. This should allow the user's intended trajectory to be implemented more quickly by the BMI, reducing the amount of delay in the system. In our experiment, a monkey (Macaca mulatta) uses a future prediction BMI to control a simulated arm to hit targets on a screen. Main Results. Results from experiments with BMIs possessing different system delays (100, 200 and 300 ms) show that the monkey can make significantly straighter, faster and smoother movements when the decoder predicts the user's future intent. We also characterize how BMI performance changes as a function of delay, and explore offline how the accuracy of future prediction decoders varies at different time leads. Significance. This study is the first to characterize the effects of control delays in a BMI and to show that decoding the user's future intent can compensate for the negative effect of control delay on BMI performance.
Robotic long-distance telementoring in neurosurgery.
Mendez, Ivar; Hill, Ron; Clarke, David; Kolyvas, George; Walling, Simon
2005-03-01
To test the feasibility of long-distance telementoring in neurosurgery by providing subspecialized expertise in real time to another neurosurgeon performing a surgical procedure in a remote location. A robotic telecollaboration system (Socrates; Computer Motion, Inc., Santa Barbara, CA) capable of controlling the movements of a robotic arm, of handling two-way video, and of audio communication as well as transmission of neuronavigational data from the remote operating room was used for the telementoring procedures. Four integrated services digital network lines with a total speed of transmission of 512 kilobytes per second provided telecommunications between a large academic center (Halifax, Nova Scotia) and a community-based center (Saint John, New Brunswick) located 400 km away. Long-distance telementoring was used in three craniotomies for brain tumors, a craniotomy for an arteriovenous malformation, a carotid endarterectomy, and a lumbar laminectomy. There were no surgical complications during the procedures, and all patients had uneventful outcomes. The neurosurgeons in the remote location believed that the input from the mentors was useful in all of the cases and was crucial in the removal of a mesial temporal lobe glioma and resection of an occipital arteriovenous malformation. Our initial experience with long-distance robotic-assisted telementoring in six cases indicates that telementoring is feasible, reliable, and safe. Although still in its infancy, telementoring has the potential to improve surgical care, to enhance neurosurgical training, and to have a major impact on the delivery of neurosurgical services throughout the world.
Automated generation of weld path trajectories.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sizemore, John M.; Hinman-Sweeney, Elaine Marie; Ames, Arlo Leroy
2003-06-01
AUTOmated GENeration of Control Programs for Robotic Welding of Ship Structure (AUTOGEN) is software that automates the planning and compiling of control programs for robotic welding of ship structure. The software works by evaluating computer representations of the ship design and the manufacturing plan. Based on this evaluation, AUTOGEN internally identifies and appropriately characterizes each weld. Then it constructs the robot motions necessary to accomplish the welds and determines for each the correct assignment of process control values. AUTOGEN generates these robot control programs completely without manual intervention or edits except to correct wrong or missing input data. Most ship structure assemblies are unique or at best manufactured only a few times. Accordingly, the high cost inherent in all previous methods of preparing complex control programs has made robot welding of ship structures economically unattractive to the U.S. shipbuilding industry. AUTOGEN eliminates the cost of creating robot control programs. With programming costs eliminated, capitalization of robots to weld ship structures becomes economically viable. Robot welding of ship structures will result in reduced ship costs, uniform product quality, and enhanced worker safety. Sandia National Laboratories and Northrop Grumman Ship Systems worked with the National Shipbuilding Research Program to develop a means of automated path and process generation for robotic welding. This effort resulted in the AUTOGEN program, which has successfully demonstrated automated path generation and robot control. Although the current implementation of AUTOGEN is optimized for welding applications, the path and process planning capability has applicability to a number of industrial applications, including painting, riveting, and adhesive delivery.
Conscious brain-to-brain communication in humans using non-invasive technologies.
Grau, Carles; Ginhoux, Romuald; Riera, Alejandro; Nguyen, Thanh Lam; Chauvat, Hubert; Berg, Michel; Amengual, Julià L; Pascual-Leone, Alvaro; Ruffini, Giulio
2014-01-01
Human sensory and motor systems provide the natural means for the exchange of information between individuals, and, hence, the basis for human civilization. The recent development of brain-computer interfaces (BCI) has provided an important element for the creation of brain-to-brain communication systems, and precise brain stimulation techniques are now available for the realization of non-invasive computer-brain interfaces (CBI). These technologies, BCI and CBI, can be combined to realize the vision of non-invasive, computer-mediated brain-to-brain (B2B) communication between subjects (hyperinteraction). Here we demonstrate the conscious transmission of information between human brains through the intact scalp and without intervention of motor or peripheral sensory systems. Pseudo-random binary streams encoding words were transmitted between the minds of emitter and receiver subjects separated by great distances, representing the realization of the first human brain-to-brain interface. In a series of experiments, we established internet-mediated B2B communication by combining a BCI based on voluntary motor imagery-controlled electroencephalographic (EEG) changes with a CBI inducing the conscious perception of phosphenes (light flashes) through neuronavigated, robotized transcranial magnetic stimulation (TMS), with special care taken to block sensory (tactile, visual or auditory) cues. Our results provide a critical proof-of-principle demonstration for the development of conscious B2B communication technologies. More fully developed, related implementations will open new research venues in cognitive, social and clinical neuroscience and the scientific study of consciousness. We envision that hyperinteraction technologies will eventually have a profound impact on the social structure of our civilization and raise important ethical issues.
Control Of A Serpentine Robot For Inspection Tasks
NASA Technical Reports Server (NTRS)
Seraji, Homayoun; Colbaugh, Richard D.; Glass, Kristin L.
1996-01-01
Efficient, robust kinematic control scheme developed to control serpentine robot designed to inspect complex structure. Takes full advantage of multiple redundant degrees of freedom of robot to provide considerable dexterity for maneuvering through workspace cluttered with stationary obstacles at initially unknown positions. Control scheme produces slithering motion.
Reactive navigation for autonomous guided vehicle using neuro-fuzzy techniques
NASA Astrophysics Data System (ADS)
Cao, Jin; Liao, Xiaoqun; Hall, Ernest L.
1999-08-01
A neuro-fuzzy control method for navigation of an autonomous guided vehicle robot is described. Robot navigation is defined as guiding a mobile robot to a desired destination or along a desired path in an environment characterized by terrain and a set of distinct objects, such as obstacles and landmarks. The autonomous navigation ability and road-following precision are mainly influenced by the control strategy and its real-time performance. Neural-network and fuzzy-logic control techniques can improve real-time control performance for mobile robots owing to their robustness and error tolerance. For a mobile robot to navigate automatically and rapidly, an important factor is identifying and classifying the robot's current perceptual environment. In this paper, a new approach to identifying and classifying features of the current perceptual environment, based on a classifying neural network and a neuro-fuzzy algorithm, is presented. The significance of this work lies in the development of a new method for mobile robot navigation.
Zygomalas, Apollon; Giokas, Konstantinos; Koutsouris, Dimitrios
2014-01-01
Aim. Modular mini-robots can be used in novel minimally invasive surgery techniques like natural orifice transluminal endoscopic surgery (NOTES) and laparoendoscopic single site (LESS) surgery. The control of these miniature assistants is complicated. The aim of this study is the in silico investigation of a remote controlling interface for modular miniature robots which can be used in minimally invasive surgery. Methods. The conceptual controlling system was developed, programmed, and simulated using professional robotics simulation software. Three different modes of control were programmed. The remote controlling surgical interface was virtually designed as a high scale representation of the respective modular mini-robot, therefore a modular controlling system itself. Results. With the proposed modular controlling system the user could easily identify the conformation of the modular mini-robot and adequately modify it as needed. The arrangement of each module was always known. The in silico investigation gave useful information regarding the controlling mode, the adequate speed of rearrangements, and the number of modules needed for efficient working tasks. Conclusions. The proposed conceptual model may promote the research and development of more sophisticated modular controlling systems. Modular surgical interfaces may improve the handling and the dexterity of modular miniature robots during minimally invasive procedures. PMID:25295187
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ravishankar, A. S.; Ghosal, A.
1999-01-01
The dynamics of a feedback-controlled rigid robot is most commonly described by a set of nonlinear ordinary differential equations. In this paper, the authors analyze these equations, representing the feedback-controlled motion of two- and three-degrees-of-freedom rigid robots with revolute (R) and prismatic (P) joints in the absence of compliance, friction, and potential energy, for the possibility of chaotic motions. The authors first study the unforced or inertial motions of the robots, and show that when the Gaussian or Riemannian curvature of the configuration space of a robot is negative, the robot equations can exhibit chaos. If the curvature is zero or positive, then the robot equations cannot exhibit chaos. The authors show that among the two-degrees-of-freedom robots, the PP and the PR robot have zero Gaussian curvature while the RP and RR robots have negative Gaussian curvatures. For the three-degrees-of-freedom robots, they analyze the two well-known RRP and RRR configurations of the Stanford arm and the PUMA manipulator, respectively, and derive the conditions for negative curvature and possible chaotic motions. The criteria of negative curvature cannot be used for the forced or feedback-controlled motions. For the forced motion, the authors resort to the well-known numerical techniques and compute chaos maps, Poincare maps, and bifurcation diagrams. Numerical results are presented for the two-degrees-of-freedom RP and RR robots, and the authors show that these robot equations can exhibit chaos for low controller gains and for large underestimated models. From the bifurcation diagrams, the route to chaos appears to be through period doubling.
Neuroprosthetic Decoder Training as Imitation Learning
Merel, Josh; Paninski, Liam; Cunningham, John P.
2016-01-01
Neuroprosthetic brain-computer interfaces function via an algorithm which decodes neural activity of the user into movements of an end effector, such as a cursor or robotic arm. In practice, the decoder is often learned by updating its parameters while the user performs a task. When the user’s intention is not directly observable, recent methods have demonstrated value in training the decoder against a surrogate for the user’s intended movement. Here we show that training a decoder in this way is a novel variant of an imitation learning problem, where an oracle or expert is employed for supervised training in lieu of direct observations, which are not available. Specifically, we describe how a generic imitation learning meta-algorithm, dataset aggregation (DAgger), can be adapted to train a generic brain-computer interface. By deriving existing learning algorithms for brain-computer interfaces in this framework, we provide a novel analysis of regret (an important metric of learning efficacy) for brain-computer interfaces. This analysis allows us to characterize the space of algorithmic variants and bounds on their regret rates. Existing approaches for decoder learning have been performed in the cursor control setting, but the available design principles for these decoders are such that it has been impossible to scale them to naturalistic settings. Leveraging our findings, we then offer an algorithm that combines imitation learning with optimal control, which should allow for training of arbitrary effectors for which optimal control can generate goal-oriented control. We demonstrate this novel and general BCI algorithm with simulated neuroprosthetic control of a 26 degree-of-freedom model of an arm, a sophisticated and realistic end effector. PMID:27191387
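A minimal sketch of the decoder-training-as-imitation-learning idea above. The linear neural encoding, dimensions, and rollout dynamics are illustrative assumptions, not the paper's actual setup: states are collected while rolling out the *current* decoder, but labeled by an oracle (intended velocity toward the target), and the decoder is refit on the aggregated dataset, DAgger-style.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed linear encoding: neural activity y = A @ v + noise, where v is the
# user's intended 2D cursor velocity (A is unknown to the decoder).
A = rng.normal(size=(20, 2))

def oracle(pos, target):
    """Surrogate for user intent: unit-speed velocity toward the target."""
    d = target - pos
    return d / (np.linalg.norm(d) + 1e-9)

D = np.zeros((2, 20))                    # decoder, initially untrained
data_Y, data_V = [], []                  # aggregated dataset (the DAgger part)

for _ in range(6):
    pos = np.zeros(2)
    target = rng.normal(size=2)          # a fresh reach target each rollout
    for _ in range(50):
        v_int = oracle(pos, target)
        y = A @ v_int + 0.05 * rng.normal(size=20)
        data_Y.append(y)                 # states visited under current decoder
        data_V.append(v_int)             # ...but labeled by the oracle
        pos = pos + 0.02 * (D @ y)       # roll out the *current* decoder
    Y, V = np.stack(data_Y), np.stack(data_V)
    D = np.linalg.lstsq(Y, V, rcond=None)[0].T   # refit on all aggregated data
```

Rolling out the current decoder (rather than the oracle) is the key DAgger ingredient: the training distribution matches the states the learned decoder will actually visit.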
Comparison of Human and Humanoid Robot Control of Upright Stance
Peterka, Robert J.
2009-01-01
There is considerable recent interest in developing humanoid robots. An important substrate for many motor actions in both humans and biped robots is the ability to maintain a statically or dynamically stable posture. Given the success of the human design, one would expect there are lessons to be learned in formulating a postural control mechanism for robots. In this study we limit ourselves to considering the problem of maintaining upright stance. Human stance control is compared to a suggested method for robot stance control called zero moment point (ZMP) compensation. Results from experimental and modeling studies suggest there are two important subsystems that account for the low- and mid-frequency (DC to ~1 Hz) dynamic characteristics of human stance control. These subsystems are 1) a “sensory integration” mechanism whereby orientation information from multiple sensory systems encoding body kinematics (i.e. position, velocity) is flexibly combined to provide an overall estimate of body orientation while allowing adjustments (sensory re-weighting) that compensate for changing environmental conditions, and 2) an “effort control” mechanism that uses kinetic-related (i.e., force-related) sensory information to reduce the mean deviation of body orientation from upright. Functionally, ZMP compensation is directly analogous to how humans appear to use kinetic feedback to modify the main sensory integration feedback loop controlling body orientation. However, a flexible sensory integration mechanism is missing from robot control, leaving the robot vulnerable to instability in conditions where humans are able to maintain stance. We suggest the addition of a simple form of sensory integration to improve robot stance control. We also investigate how the biological constraint of feedback time delay influences the human stance control design.
The human system may serve as a guide for improved robot control, but should not be directly copied because the constraints on robot and human control are different. PMID:19665564
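The sensory re-weighting mechanism described above can be sketched as a weighted combination of per-channel orientation estimates, where a channel's weight is lowered when its conditions degrade. The channel names, estimates, and weights below are illustrative assumptions, not the study's fitted model:

```python
def fuse_orientation(estimates, weights):
    """Weighted sensory integration: each channel's body-tilt estimate (deg)
    contributes in proportion to its (re-weightable) reliability."""
    total = sum(weights)
    return sum(w * e for w, e in zip(weights, estimates)) / total

# Channels: visual, proprioceptive, vestibular tilt estimates (assumed values).
# Nominal condition: trust all three channels roughly equally.
tilt_nominal = fuse_orientation([2.0, 1.0, 1.5], [1.0, 1.0, 1.0])
# Eyes-closed re-weighting: the visual channel is down-weighted to zero and
# the remaining channels automatically carry more weight.
tilt_dark = fuse_orientation([2.0, 1.0, 1.5], [0.0, 1.0, 1.0])
```

This is the flexibility the abstract says is missing from ZMP-only robot stance control: the fused estimate degrades gracefully as individual channels become unreliable.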
Fast Grasp Contact Computation for a Serial Robot
NASA Technical Reports Server (NTRS)
Hargrave, Brian (Inventor); Shi, Jianying (Inventor); Diftler, Myron A. (Inventor)
2015-01-01
A system includes a controller and a serial robot having links that are interconnected by a joint, wherein the robot can grasp a three-dimensional (3D) object in response to a commanded grasp pose. The controller receives input information, including the commanded grasp pose, a first set of information describing the kinematics of the robot, and a second set of information describing the position of the object to be grasped. The controller also calculates, in a two-dimensional (2D) plane, a set of contact points between the serial robot and a surface of the 3D object needed for the serial robot to achieve the commanded grasp pose. A required joint angle is then calculated in the 2D plane between the pair of links using the set of contact points. A control action is then executed with respect to the motion of the serial robot using the required joint angle.
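The joint-angle step above reduces to planar geometry: given a contact point in the 2D plane and the lengths of the two links meeting at the joint, the required included angle follows from the law of cosines. This is a generic sketch of that reduction, not the patented method's exact formulation:

```python
import math

def joint_angle_for_contact(contact, l1, l2):
    """Angle at the joint between two links of lengths l1, l2 such that the
    distal link's tip reaches the 2D contact point (law of cosines)."""
    x, y = contact
    c = (x * x + y * y - l1 * l1 - l2 * l2) / (2.0 * l1 * l2)
    return math.acos(max(-1.0, min(1.0, c)))   # clamp for numerical safety

# Fully extended: contact at distance l1 + l2 needs a straight (0 rad) joint.
straight = joint_angle_for_contact((2.0, 0.0), 1.0, 1.0)
# Contact at distance sqrt(2) with unit links needs a right-angle joint.
bent = joint_angle_for_contact((1.0, 1.0), 1.0, 1.0)
```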
Evolving self-assembly in autonomous homogeneous robots: experiments with two physical robots.
Ampatzis, Christos; Tuci, Elio; Trianni, Vito; Christensen, Anders Lyhne; Dorigo, Marco
2009-01-01
This research work illustrates an approach to the design of controllers for self-assembling robots in which the self-assembly is initiated and regulated by perceptual cues that are brought forth by the physical robots through their dynamical interactions. More specifically, we present a homogeneous control system that can achieve assembly between two modules (two fully autonomous robots) of a mobile self-reconfigurable system without a priori introduced behavioral or morphological heterogeneities. The controllers are dynamic neural networks evolved in simulation that directly control all the actuators of the two robots. The neurocontrollers cause the dynamic specialization of the robots by allocating roles between them based solely on their interaction. We show that the best evolved controller proves to be successful when tested on a real hardware platform, the swarm-bot. The performance achieved is similar to the one achieved by existing modular or behavior-based approaches, also due to the effect of an emergent recovery mechanism that was neither explicitly rewarded by the fitness function, nor observed during the evolutionary simulation. Our results suggest that direct access to the orientations or intentions of the other agents is not a necessary condition for robot coordination: Our robots coordinate without direct or explicit communication, contrary to what is assumed by most research works in collective robotics. This work also contributes to strengthening the evidence that evolutionary robotics is a design methodology that can tackle real-world tasks demanding fine sensory-motor coordination.
Grimm, Florian; Walter, Armin; Spüler, Martin; Naros, Georgios; Rosenstiel, Wolfgang; Gharabaghi, Alireza
2016-01-01
Brain-machine interface-controlled (BMI) neurofeedback training aims to modulate cortical physiology and is applied during neurorehabilitation to increase the responsiveness of the brain to subsequent physiotherapy. In a parallel line of research, robotic exoskeletons are used in goal-oriented rehabilitation exercises for patients with severe motor impairment to extend their range of motion (ROM) and the intensity of training. Furthermore, neuromuscular electrical stimulation (NMES) is applied in neurologically impaired patients to restore muscle strength by closing the sensorimotor loop. In this proof-of-principle study, we explored an integrated approach for providing assistance as needed to amplify the task-related ROM and the movement-related brain modulation during rehabilitation exercises of severely impaired patients. For this purpose, we combined these three approaches (BMI, NMES, and exoskeleton) in an integrated neuroprosthesis and studied the feasibility of this device in seven severely affected chronic stroke patients who performed wrist flexion and extension exercises while receiving feedback via a virtual environment. They were assisted by a gravity-compensating, seven degree-of-freedom exoskeleton which was attached to the paretic arm. NMES was applied to the wrist extensor and flexor muscles during the exercises and was controlled by a hybrid BMI based on both sensorimotor cortical desynchronization (ERD) and electromyography (EMG) activity. The stimulation intensity was individualized for each targeted muscle and remained subthreshold, i.e., induced no overt support. The hybrid BMI controlled the stimulation significantly better than the offline analyzed ERD (p = 0.028) or EMG (p = 0.021) modality alone. Neuromuscular stimulation could be well integrated into the exoskeleton-based training and amplified both the task-related ROM (p = 0.009) and the movement-related brain modulation (p = 0.019). 
Combining a hybrid BMI with neuromuscular stimulation and antigravity assistance augments upper limb function and brain activity during rehabilitation exercises and may thus provide a novel restorative framework for severely affected stroke patients. PMID:27555805
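One plausible gating rule for the hybrid (ERD + EMG) control described above can be sketched as a conjunction of per-modality thresholds. The thresholds and the AND-combination are assumptions for illustration; the study used a trained per-subject classifier, not this exact rule:

```python
def nmes_on(erd, emg, erd_th=0.3, emg_th=0.1):
    """Hypothetical hybrid gate: drive neuromuscular stimulation only when
    both the sensorimotor desynchronization (ERD) and the residual muscle
    activity (EMG) exceed their individualized thresholds."""
    return erd >= erd_th and emg >= emg_th

# Movement attempt with both cortical and muscular signatures: stimulate.
active = nmes_on(0.5, 0.2)
# EMG alone (e.g., spasticity) or ERD alone should not trigger stimulation.
emg_only = nmes_on(0.1, 0.2)
erd_only = nmes_on(0.5, 0.0)
```

Requiring both signals is one way to make the stimulation contingent on genuine movement intent rather than on artifacts in either modality alone.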
Embedded diagnostic, prognostic, and health management system and method for a humanoid robot
NASA Technical Reports Server (NTRS)
Barajas, Leandro G. (Inventor); Strawser, Philip A (Inventor); Sanders, Adam M (Inventor); Reiland, Matthew J (Inventor)
2013-01-01
A robotic system includes a humanoid robot with multiple compliant joints, each moveable using one or more of the actuators, and having sensors for measuring control and feedback data. A distributed controller controls the joints and other integrated system components over multiple high-speed communication networks. Diagnostic, prognostic, and health management (DPHM) modules are embedded within the robot at the various control levels. Each DPHM module measures, controls, and records DPHM data for the respective control level/connected device in a location that is accessible over the networks or via an external device. A method of controlling the robot includes embedding a plurality of the DPHM modules within multiple control levels of the distributed controller, using the DPHM modules to measure DPHM data within each of the control levels, and recording the DPHM data in a location that is accessible over at least one of the high-speed communication networks.
Learning for intelligent mobile robots
NASA Astrophysics Data System (ADS)
Hall, Ernest L.; Liao, Xiaoqun; Alhaj Ali, Souma M.
2003-10-01
Unlike intelligent industrial robots which often work in a structured factory setting, intelligent mobile robots must often operate in an unstructured environment cluttered with obstacles and with many possible action paths. However, such machines have many potential applications in medicine, defense, industry and even the home that make their study important. Sensors such as vision are needed. However, in many applications some form of learning is also required. The purpose of this paper is to present a discussion of recent technical advances in learning for intelligent mobile robots. During the past 20 years, the use of intelligent industrial robots that are equipped not only with motion control systems but also with sensors such as cameras, laser scanners, or tactile sensors that permit adaptation to a changing environment has increased dramatically. However, relatively little has been done concerning learning. Adaptive and robust control permits one to achieve point to point and controlled path operation in a changing environment. This problem can be solved with a learning control. In the unstructured environment, the terrain and consequently the load on the robot's motors are constantly changing. Learning the parameters of a proportional-integral-derivative (PID) controller and an artificial neural network provides adaptive and robust control. Learning may also be used for path following. Simulations that include learning may be conducted to see if a robot can learn its way through a cluttered array of obstacles. If a situation is performed repetitively, then learning can also be used in the actual application. To reach an even higher degree of autonomous operation, a new level of learning is required. Recently learning theories such as the adaptive critic have been proposed. In this type of learning a critic provides a grade to the controller of an action module such as a robot. A creative control process is used that goes "beyond the adaptive critic."
A mathematical model of the creative control process is presented that illustrates the use for mobile robots. Examples from a variety of intelligent mobile robot applications are also presented. The significance of this work is in providing a greater understanding of the applications of learning to mobile robots that could lead to many applications.
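The PID-parameter-learning idea above can be sketched as a controller whose proportional gain is adapted online while driving a plant with an unknown load. The plant model, gains, and the adaptation rule are all illustrative assumptions, not the paper's model:

```python
# Hedged sketch: adaptive PID for a first-order plant x' = -x + u - load,
# where the constant load stands in for changing terrain drag on a motor.
def run(steps=2000, dt=0.01, load=2.0):
    kp, ki, kd = 4.0, 2.0, 0.1
    lr = 0.2                       # gain-adaptation rate (assumption)
    x, integ, e_prev = 0.0, 0.0, 0.0
    for _ in range(steps):
        e = 1.0 - x                # setpoint is 1.0
        integ += e * dt
        u = kp * e + ki * integ + kd * (e - e_prev) / dt
        kp += lr * abs(e) * dt     # crude learning: grow gain while error persists
        e_prev = e
        x += dt * (-x + u - load)  # plant with unknown constant load
    return x, kp

x_final, kp_final = run()
```

The integral term rejects the unknown load; the learned proportional gain ends higher than its initial value because it is increased whenever tracking error persists.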
Vu, Dinh-Son; Allard, Ulysse Cote; Gosselin, Clement; Routhier, Francois; Gosselin, Benoit; Campeau-Lecours, Alexandre
2017-07-01
Robotic assistive devices enhance the autonomy of individuals living with physical disabilities in their day-to-day life. Although the first priority for such devices is safety, they must also be intuitive and efficient from an engineering point of view in order to be adopted by a broad range of users. This is especially true for assistive robotic arms, as they are used for the complex control tasks of daily living. One challenge in the control of such assistive robots is the management of the end-effector orientation which is not always intuitive for the human operator, especially for neophytes. This paper presents a novel orientation control algorithm designed for robotic arms in the context of human-robot interaction. This work aims at making the control of the robot's orientation easier and more intuitive for the user, in particular, individuals living with upper limb disabilities. The performance and intuitiveness of the proposed orientation control algorithm is assessed through two experiments with 25 able-bodied subjects and shown to significantly improve on both aspects.
Control of free-flying space robot manipulator systems
NASA Technical Reports Server (NTRS)
Cannon, Robert H., Jr.
1989-01-01
Control techniques for self-contained, autonomous free-flying space robots are being tested and developed. Free-flying space robots are envisioned as a key element of any successful long term presence in space. These robots must be capable of performing the assembly, maintenance, inspection, and repair tasks that currently require astronaut extra-vehicular activity (EVA). Use of robots will provide economic savings as well as improved astronaut safety by reducing, and in many cases eliminating, the need for human EVA. The focus of the work is to develop and carry out a set of research projects using laboratory models of satellite robots. These devices use air-cushion-vehicle (ACV) technology to simulate in two dimensions the drag-free, zero-g conditions of space. Current work is divided into six major projects or research areas. Fixed-base cooperative manipulation work represents our initial entry into multiple arm cooperation and high-level control with a sophisticated user interface. The floating-base cooperative manipulation project strives to transfer some of the technologies developed in the fixed-base work onto a floating base. The global control and navigation experiment seeks to demonstrate simultaneous control of the robot manipulators and the robot base position so that tasks can be accomplished while the base is undergoing a controlled motion. The multiple-vehicle cooperation project's goal is to demonstrate multiple free-floating robots working in teams to carry out tasks too difficult or complex for a single robot to perform. The Location Enhancement Arm Push-off (LEAP) activity's goal is to provide a viable alternative to expendable gas thrusters for vehicle propulsion wherein the robot uses its manipulators to throw itself from place to place. Because the successful execution of the LEAP technique requires an accurate model of the robot and payload mass properties, it was deemed an attractive testbed for adaptive control technology.
A Gradient Optimization Approach to Adaptive Multi-Robot Control
2009-09-01
implemented for deploying a group of three flying robots with downward facing cameras to monitor an environment on the ground. Thirdly, the multi-robot...theoretically proven, and implemented on multi-robot platforms. Thesis Supervisor: Daniela Rus Title: Professor of Electrical Engineering and Computer...often nonlinear, and they are coupled through a network which changes over time. Thirdly, implementing multi-robot controllers requires maintaining mul
Robotics research projects report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hsia, T.C.
The research results of the Robotics Research Laboratory are summarized. Areas of research include robotic control, a stand-alone vision system for industrial robots, and sensors other than vision that would be useful for image ranging, including ultrasonic and infra-red devices. One particular project involves RHINO, a 6-axis robotic arm that can be manipulated by serial transmission of ASCII command strings to its interfaced controller.
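Command-string control of the kind described above can be sketched as a small formatter that maps a joint index and step count to an ASCII string. The mnemonic and framing here are hypothetical, not the actual RHINO controller protocol:

```python
def arm_command(joint, steps):
    """Build an ASCII command string for a 6-axis arm controller.
    Hypothetical framing (NOT the real RHINO protocol): one letter per
    axis, a signed step count, and a carriage-return terminator."""
    if not 0 <= joint < 6:
        raise ValueError("6-axis arm: joint must be in 0..5")
    return f"{'ABCDEF'[joint]}{steps:+d}\r"

cmd = arm_command(2, 150)   # move axis C by +150 steps
```

In practice such a string would be written to the controller's serial port; the value of ASCII command protocols is that they can be exercised from any terminal for debugging.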
NASA Technical Reports Server (NTRS)
Ballhaus, W. L.; Alder, L. J.; Chen, V. W.; Dickson, W. C.; Ullman, M. A.; Wilson, E.
1993-01-01
Over the last ten years, the Stanford Aerospace Robotics Laboratory (ARL) has developed a hardware facility in which a number of space robotics issues have been, and continue to be addressed. This paper reviews two of the current ARL research areas: navigation and control of free flying space robots, and modeling and control of extremely flexible space structures.
Characteristics of Behavior of Robots with Emotion Model
NASA Astrophysics Data System (ADS)
Sato, Shigehiko; Nozawa, Akio; Ide, Hideto
A cooperative multi-robot system has many advantages over a single-robot system: it can adapt to various circumstances and is flexible with respect to variation in tasks. However, controlling each robot remains a problem, although methods for controlling multi-robot systems have been studied. Recently, robots have been entering real-world settings, and the emotion and sensitivity of robots have been widely studied. In this study, a human emotion model based on psychological interaction was applied to a multi-robot system to develop methods for organizing multiple robots. The behavioral characteristics of the multi-robot system, obtained through computer simulation, were analyzed. As a result, very complex and interesting behavior emerged even though the configuration is rather simple, and the system showed flexibility in various circumstances. An additional experiment with physical robots will be conducted based on the emotion model.
Differential-Drive Mobile Robot Control Design based-on Linear Feedback Control Law
NASA Astrophysics Data System (ADS)
Nurmaini, Siti; Dewi, Kemala; Tutuko, Bambang
2017-04-01
This paper deals with the problem of how to control a differential-drive mobile robot with a simple control law. When a mobile robot moves from one position to another to reach a destination, it always produces some error. Therefore, a mobile robot requires a control law that drives its movement to the destination with the smallest possible error. In this paper, in order to reduce position error, a linear feedback control with a pole placement approach is proposed to achieve the desired characteristic polynomial. The presented work leads to an improved understanding of the differential-drive mobile robot (DDMR) kinematic equations, which will assist in the design of suitable controllers for DDMR movement. The results show that, by using the linear feedback control method with the pole placement approach, the position error is reduced and fast convergence is achieved.
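The pole placement step can be sketched on the standard linearized lateral-error model of a differential-drive robot at constant forward speed v (small-angle approximation; the speed and desired poles below are assumptions): with e' = v·θ, θ' = ω, and ω = -k1·e - k2·θ, the characteristic polynomial is s² + k2·s + v·k1, which is matched coefficient-by-coefficient against the desired poles.

```python
import numpy as np

v = 0.5                      # forward speed in m/s (assumption)
p1, p2 = -2.0, -3.0          # desired closed-loop poles (assumption)

# Desired polynomial: s^2 - (p1 + p2) s + p1 p2. Match against s^2 + k2 s + v k1:
k2 = -(p1 + p2)              # s^1 coefficient
k1 = (p1 * p2) / v           # s^0 coefficient

# Verify: eigenvalues of the closed-loop state matrix land on the desired poles.
A = np.array([[0.0, v], [0.0, 0.0]])   # states: [lateral error, heading error]
B = np.array([[0.0], [1.0]])           # input: angular velocity command
K = np.array([[k1, k2]])
poles = np.linalg.eigvals(A - B @ K)
```

The same two-step pattern (write the closed-loop polynomial symbolically, then match coefficients) extends to higher-order error models.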
Adaptive Control Of Remote Manipulator
NASA Technical Reports Server (NTRS)
Seraji, Homayoun
1989-01-01
Robotic control system causes remote manipulator to follow closely reference trajectory in Cartesian reference frame in work space, without resort to computationally intensive mathematical model of robot dynamics and without knowledge of robot and load parameters. System, derived from linear multivariable theory, uses relatively simple feedforward and feedback controllers with model-reference adaptive control.
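The model-reference adaptive idea above, tracking a reference model without knowing the plant parameters, can be sketched in scalar form. The plant, reference model, adaptation gain, and the simplified MIT rule below are illustrative assumptions, not the controller derived in this work:

```python
# Plant (parameters "unknown" to the controller):  y' = -a*y + b*u
# Reference model:                                 ym' = -am*ym + am*r
# Control law u = th1*r - th2*y; gains adapted by a simplified MIT rule.
a, b = 1.0, 2.0
am = 3.0
gamma, dt = 1.0, 0.001       # adaptation gain and step (assumptions)
y = ym = th1 = th2 = 0.0
err = []
t = 0.0
while t < 30.0:
    r = 1.0 if (t % 10.0) < 5.0 else -1.0   # square-wave reference
    e = y - ym                               # model-following error
    err.append(abs(e))
    u = th1 * r - th2 * y
    th1 -= gamma * e * r * dt    # simplified MIT rule (sign(b) folded into gamma)
    th2 += gamma * e * y * dt
    y += dt * (-a * y + b * u)
    ym += dt * (-am * ym + am * r)
    t += dt

early = sum(err[:10000]) / 10000     # mean |e| over the first 10 s
late = sum(err[-10000:]) / 10000     # mean |e| over the last 10 s
```

The gains drift toward the matching values th1* = am/b and th2* = (am - a)/b, so the model-following error shrinks over time without the controller ever identifying a and b explicitly.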
Bilateral Impedance Control For Telemanipulators
NASA Technical Reports Server (NTRS)
Moore, Christopher L.
1993-01-01
Telemanipulator system includes master robot manipulated by human operator, and slave robot performing tasks at remote location. Two robots electronically coupled so slave robot moves in response to commands from master robot. Teleoperation greatly enhanced if forces acting on slave robot fed back to operator, giving operator feeling he or she manipulates remote environment directly. Main advantage of bilateral impedance control: enables arbitrary specification of desired performance characteristics for telemanipulator system. Relationship between force and position modulated at both ends of system to suit requirements of task.
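The modulated force/position relationship described above can be sketched as a programmable stiffness/damping coupling between master and slave, with the coupling force reflected back to the operator. Gains, masses, and the held master position are illustrative assumptions:

```python
# Bilateral impedance sketch: the slave is pulled toward the master's position
# through a virtual spring-damper, and the same interaction force is reflected
# to the operator's hand with opposite sign.
K, B, m = 50.0, 10.0, 1.0     # coupling stiffness, damping, slave mass (assumed)
dt = 0.001
xm = 0.1                      # master held at 0.1 m by the operator (assumed)
xs, vs = 0.0, 0.0             # slave position and velocity
for _ in range(5000):         # simulate 5 s
    f = K * (xm - xs) + B * (0.0 - vs)   # impedance coupling force on the slave
    f_operator = -f                      # force fed back to the master side
    vs += dt * f / m
    xs += dt * vs
```

Changing K and B retunes the "feel" of the telemanipulator for a given task, which is the arbitrary-performance-specification property the abstract highlights: stiff and heavily damped for precise placement, soft for delicate contact.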
Bruemmer, David J. (Idaho Falls, ID)
2009-11-17
A robot platform includes perceptors, locomotors, and a system controller. The system controller executes a robot intelligence kernel (RIK) that includes a multi-level architecture and a dynamic autonomy structure. The multi-level architecture includes a robot behavior level for defining robot behaviors that incorporate robot attributes, and a cognitive level for defining conduct modules that blend an adaptive interaction between predefined decision functions and the robot behaviors. The dynamic autonomy structure is configured for modifying a transaction capacity between an operator intervention and a robot initiative and may include multiple levels with at least a teleoperation mode configured to maximize the operator intervention and minimize the robot initiative and an autonomous mode configured to minimize the operator intervention and maximize the robot initiative. Within the RIK at least the cognitive level includes the dynamic autonomy structure.
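The trade-off between operator intervention and robot initiative can be sketched as a command-blending law parameterized by an autonomy level. The linear blend below is an assumed illustration of the concept, not the RIK's actual mechanism:

```python
def blend_command(operator_cmd, robot_cmd, autonomy):
    """Sketch of dynamic autonomy as a blend: autonomy=0.0 is pure
    teleoperation (operator command dominates), autonomy=1.0 is fully
    autonomous (robot initiative dominates)."""
    autonomy = min(1.0, max(0.0, autonomy))
    return (1.0 - autonomy) * operator_cmd + autonomy * robot_cmd

teleop = blend_command(2.0, 4.0, 0.0)    # operator command passes through
auto = blend_command(2.0, 4.0, 1.0)      # robot initiative passes through
shared = blend_command(2.0, 4.0, 0.5)    # shared control, midway
```

Intermediate autonomy levels correspond to the shared-control modes between full teleoperation and full autonomy.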
Virtual Sensors for Advanced Controllers in Rehabilitation Robotics.
Mancisidor, Aitziber; Zubizarreta, Asier; Cabanes, Itziar; Portillo, Eva; Jung, Je Hyung
2018-03-05
In order to properly control rehabilitation robotic devices, the measurement of interaction force and motion between patient and robot is essential. Usually, however, this is a complex task that requires accurate sensors which increase the cost and the complexity of the robotic device. In this work, we address the development of virtual sensors that can be used as an alternative to actual force and motion sensors for the Universal Haptic Pantograph (UHP) rehabilitation robot for upper limb training. These virtual sensors estimate the force and motion at the contact point where the patient interacts with the robot, using the mathematical model of the robotic device and measurements from low-cost position sensors. To demonstrate the performance of the proposed virtual sensors, they have been implemented in an advanced position/force controller of the UHP rehabilitation robot and experimentally evaluated. The experimental results reveal that the controller based on the virtual sensors has similar performance to the one using direct measurement (less than 0.005 m and 1.5 N difference in mean error). Hence, the developed virtual sensors for estimating interaction force and motion can be adopted to replace accurate but normally high-priced sensors, which are fundamental components for advanced control of rehabilitation robotic devices.
Control strategy for a dual-arm maneuverable space robot
NASA Technical Reports Server (NTRS)
Wang, P. K. C.
1987-01-01
A simple strategy for the attitude control and arm coordination of a maneuverable space robot with dual arms is proposed. The basic task for the robot consists of the placement of marked rigid solid objects with specified pairs of gripping points and a specified direction of approach for gripping. The strategy consists of three phases each of which involves only elementary rotational and translational collision-free maneuvers of the robot body. Control laws for these elementary maneuvers are derived by using a body-referenced dynamic model of the dual-arm robot.
Fiani, Brian; Quadri, Syed A; Farooqui, Mudassir; Cathel, Alessandra; Berman, Blake; Noel, Jerry; Siddiqi, Javed
2018-04-03
Whenever any new technology is introduced into the healthcare system, it should satisfy all three pillars of the iron triangle of health care: quality, cost-effectiveness, and accessibility. There has been considerable advancement in the field of spine surgery in the last two decades with the introduction of new technological modalities such as CAN and surgical robotic devices. MAZOR SpineAssist/Renaissance was the first robotic system to be approved for use in spine surgeries in the USA in 2004. In this review, the authors sought to determine whether the current literature supports this technology as cost-effective and accessible, and whether it improves the quality of care for individuals and populations by increasing the likelihood of desired health outcomes. Robotic-assisted surgery seems to provide improved surgical ergonomics and surgical dexterity, consequently improving patient outcomes. A large body of data exists on the accuracy, effectiveness, and safety of robotic-guided technology, reflecting remarkable improvements in quality of care and making its utility difficult to dispute. The technology has been claimed to be cost-effective, but there seems to be a lack of data in the literature to validate this claim. Apart from the outcome parameters alone, there is a pressing need for studies on real-time cost-efficacy, patient perspective, surgeon and resident learning curve, and their experience with this new technology. Furthermore, new studies looking into expanded uses of this technology, such as brain and spine tumor resection, deep brain stimulation procedures, and osteotomies in deformity surgery, might justify the cost of the equipment.
Control of a Serpentine Robot for Inspection Tasks
NASA Technical Reports Server (NTRS)
Colbaugh, R.; Glass, K.; Seraji, H.
1994-01-01
This paper presents a simple and robust kinematic control scheme for the JPL serpentine robot system. The proposed strategy is developed using the damped-least-squares/configuration control methodology, and permits the considerable dexterity of the JPL serpentine robot to be effectively utilized for maneuvering in the congested and uncertain workspaces often encountered in inspection tasks. Computer simulation results are given for the 20 degree-of-freedom (DOF) manipulator system obtained by mounting the twelve DOF serpentine robot at the end-effector of an eight DOF Robotics Research arm/lathe-bed system. These simulations demonstrate that the proposed approach provides an effective method of controlling this complex system.
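The damped-least-squares update at the heart of this class of kinematic controllers is dq = Jᵀ(JJᵀ + λ²I)⁻¹·err, where the damping λ trades tracking accuracy for robustness near singularities. Below is a generic sketch on a planar two-link arm; the link lengths, damping value, step scale, and target are illustrative assumptions, not the 20-DOF JPL system:

```python
import numpy as np

def dls_step(J, err, lam=0.1):
    """One damped-least-squares joint update: dq = J^T (J J^T + lam^2 I)^-1 err."""
    m = J.shape[0]
    return J.T @ np.linalg.solve(J @ J.T + lam ** 2 * np.eye(m), err)

def fk(q, l1=1.0, l2=1.0):
    """Forward kinematics of a planar two-link arm."""
    return np.array([l1 * np.cos(q[0]) + l2 * np.cos(q[0] + q[1]),
                     l1 * np.sin(q[0]) + l2 * np.sin(q[0] + q[1])])

def jac(q, l1=1.0, l2=1.0):
    """Analytic Jacobian of fk with respect to the joint angles."""
    s1, c1 = np.sin(q[0]), np.cos(q[0])
    s12, c12 = np.sin(q[0] + q[1]), np.cos(q[0] + q[1])
    return np.array([[-l1 * s1 - l2 * s12, -l2 * s12],
                     [ l1 * c1 + l2 * c12,  l2 * c12]])

q = np.array([0.3, 0.3])
target = np.array([1.2, 0.8])
for _ in range(300):                 # iterate the damped update to the target
    q = q + 0.5 * dls_step(jac(q), target - fk(q))
```

Unlike the plain pseudoinverse, the damped update stays bounded as the Jacobian loses rank, which is what makes it attractive for highly redundant serpentine arms.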
Initial experiments in thrusterless locomotion control of a free-flying robot
NASA Technical Reports Server (NTRS)
Jasper, W. J.; Cannon, R. H., Jr.
1990-01-01
A two-arm free-flying robot has been constructed to study thrusterless locomotion in space. This is accomplished by pushing off or landing on a large structure in a coordinated two-arm maneuver. A new control method, called system momentum control, allows the robot to follow desired momentum trajectories and thus leap or crawl from one structure to another. The robot floats on an air-cushion, simulating in two dimensions the drag-free zero-g environment of space. The control paradigm has been verified experimentally by commanding the robot to push off a bar with both arms, rotate 180 degrees, and catch itself on another bar.
SVR versus neural-fuzzy network controllers for the sagittal balance of a biped robot.
Ferreira, João P; Crisóstomo, Manuel M; Coimbra, A Paulo
2009-12-01
The real-time balance control of an eight-link biped robot using a zero moment point (ZMP) dynamic model is difficult due to the processing time of the corresponding equations. To overcome this limitation, two alternative intelligent computing control techniques were compared: one based on support vector regression (SVR) and another based on a first-order Takagi-Sugeno-Kang (TSK)-type neural-fuzzy (NF) network. Both methods use the ZMP error and its variation as inputs and the output is the correction of the robot's torso necessary for its sagittal balance. The SVR and the NF were trained based on simulation data and their performance was verified with a real biped robot. Two performance indexes are proposed to evaluate and compare the online performance of the two control methods. The ZMP is calculated by reading four force sensors placed under each robot's foot. The gait implemented in this biped is similar to a human gait that was acquired and adapted to the robot's size. Some experiments are presented and the results show that the implemented gait combined either with the SVR controller or with the TSK NF network controller can be used to control this biped robot. The SVR and the NF controllers exhibit similar stability, but the SVR controller runs about 50 times faster.
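The ZMP measurement described above, reading four force sensors under each foot, amounts to computing the pressure centroid of the readings. A minimal sketch (sensor coordinates in the foot frame are illustrative assumptions):

```python
def zmp(forces, positions):
    """ZMP (pressure centroid) from vertical foot-sensor forces.
    forces: sensor readings in N; positions: (x, y) of each sensor in m."""
    total = sum(forces)
    x = sum(f * p[0] for f, p in zip(forces, positions)) / total
    y = sum(f * p[1] for f, p in zip(forces, positions)) / total
    return x, y

# Assumed sensor layout: two under the toes, two under the heel.
corners = [(0.12, 0.05), (0.12, -0.05), (-0.08, 0.05), (-0.08, -0.05)]
balanced = zmp([10.0, 10.0, 10.0, 10.0], corners)    # evenly loaded foot
toe_heavy = zmp([20.0, 20.0, 5.0, 5.0], corners)     # weight shifted forward
```

The sagittal controller then acts on the error between this measured ZMP and its desired trajectory, which is exactly the input the SVR and neural-fuzzy controllers are trained on.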
Hierarchical Compliance Control of a Soft Ankle Rehabilitation Robot Actuated by Pneumatic Muscles.
Liu, Quan; Liu, Aiming; Meng, Wei; Ai, Qingsong; Xie, Sheng Q
2017-01-01
Traditional compliance control of a rehabilitation robot is implemented in task space by using impedance or admittance control algorithms. The soft robot actuated by pneumatic muscle actuators (PMAs) is becoming prominent for patients as it enables the compliance to be adjusted in each active link, which, however, has not been reported in the literature. This paper proposes a new compliance control method for a soft ankle rehabilitation robot that is driven by four PMAs configured in parallel to enable three degrees of freedom movement of the ankle joint. A new hierarchical compliance control structure, including a low-level compliance adjustment controller in joint space and a high-level admittance controller in task space, is designed. An adaptive compliance control paradigm is further developed by taking into account the patient's active contribution and movement ability during a previous period of time, in order to provide robot assistance only when it is necessarily required. Experiments on healthy and impaired human subjects were conducted to verify the adaptive hierarchical compliance control scheme. The results show that the robot's hierarchical compliance can be adjusted online according to the participant's assessment. The robot reduces its assistance output when participants contribute more and vice versa, thus providing a potentially feasible solution to the patient-in-loop cooperative training strategy.
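The high-level admittance controller mentioned above can be sketched as a virtual mass-damper that converts a measured interaction force into a reference motion, M·a + B·v = F. The virtual parameters and the constant patient force below are illustrative assumptions:

```python
# Admittance sketch: the harder the patient pushes, the faster the generated
# reference motion; raising the virtual damping Bd makes the robot "stiffer".
M, Bd, dt = 2.0, 8.0, 0.01    # virtual mass, virtual damping, time step (assumed)
v = x = 0.0                   # reference velocity and position
for _ in range(300):          # simulate 3 s
    F = 4.0                   # constant interaction force from the patient (assumed)
    a = (F - Bd * v) / M      # virtual dynamics: M*a + Bd*v = F
    v += a * dt
    x += v * dt
```

The steady-state velocity is F/Bd, so adapting Bd (and M) per joint is one concrete way the hierarchical scheme can grant more or less compliance as the patient's contribution changes.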
Allothetic and idiothetic sensor fusion in rat-inspired robot localization
NASA Astrophysics Data System (ADS)
Weitzenfeld, Alfredo; Fellous, Jean-Marc; Barrera, Alejandra; Tejera, Gonzalo
2012-06-01
We describe a spatial cognition model based on the rat's brain neurophysiology as a basis for new robotic navigation architectures. The model integrates allothetic (external visual landmarks) and idiothetic (internal kinesthetic information) cues to train either rat or robot to learn a path enabling it to reach a goal from multiple starting positions. It stands in contrast to most robotic architectures based on SLAM, where a map of the environment is built to provide probabilistic localization information computed from robot odometry and landmark perception. Allothetic cues suffer in general from perceptual ambiguity when trying to distinguish between places with equivalent visual patterns, while idiothetic cues suffer from imprecise motions and limited memory recalls. We experiment with both types of cues in different maze configurations by training rats and robots to find the goal starting from a fixed location, and then testing them to reach the same target from new starting locations. We show that the robot, after having pre-explored a maze, can find a goal with improved efficiency, and is able to (1) learn the correct route to reach the goal, (2) recognize places already visited, and (3) exploit allothetic and idiothetic cues to improve on its performance. We finally contrast our biologically-inspired approach to more traditional robotic approaches and discuss current work in progress.
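A generic way to combine allothetic and idiothetic position estimates, used here only as an illustration and not as the paper's spatial cognition model, is an inverse-variance weighted average: each cue contributes in proportion to its confidence, and the fused estimate is more certain than either cue alone.

```python
import numpy as np

# Inverse-variance fusion of an idiothetic (odometry) position estimate with
# an allothetic (landmark) fix. A generic textbook sketch, not the paper's
# rat-inspired model.

def fuse(odo_xy, odo_var, lm_xy, lm_var):
    """Return the fused 2D position and its variance."""
    w_odo, w_lm = 1.0 / odo_var, 1.0 / lm_var
    fused = (w_odo * np.asarray(odo_xy) + w_lm * np.asarray(lm_xy)) / (w_odo + w_lm)
    return fused, 1.0 / (w_odo + w_lm)
```

This captures the trade-off the abstract describes: when landmarks are ambiguous (large lm_var) the fusion leans on odometry, and when odometry drifts (large odo_var) it leans on the landmarks.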
Spectrally queued feature selection for robotic visual odometry
NASA Astrophysics Data System (ADS)
Pirozzo, David M.; Frederick, Philip A.; Hunt, Shawn; Theisen, Bernard; Del Rose, Mike
2011-01-01
Over the last two decades, research in Unmanned Vehicles (UV) has progressed rapidly and become more influenced by the biological sciences. Researchers have been investigating the mechanical aspects of various species to improve UV air and ground mobility, exploring the computational aspects of the brain for the development of pattern recognition and decision algorithms, and studying the perception capabilities of numerous animals and insects. This paper describes a 3-month exploratory applied research effort performed at the US Army Research, Development and Engineering Command's (RDECOM) Tank Automotive Research, Development and Engineering Center (TARDEC) in the area of biologically inspired spectrally augmented feature selection for robotic visual odometry. The motivation for this applied research was to develop a feasibility analysis of multi-spectrally queued feature selection, with improved temporal stability, for the purposes of visual odometry. The intended application is future semi-autonomous Unmanned Ground Vehicle (UGV) control, as the richness of the data sets required to enable human-like behavior in these systems has yet to be defined.
2010-01-01
Background Manual body weight supported treadmill training and robot-aided treadmill training are frequently used techniques for the gait rehabilitation of individuals after stroke and spinal cord injury. Current evidence suggests that robot-aided gait training may be improved by making robotic behavior more patient-cooperative. In this study, we have investigated the immediate effects of patient-cooperative versus non-cooperative robot-aided gait training on individuals with incomplete spinal cord injury (iSCI). Methods Eleven patients with iSCI participated in a single training session with the gait rehabilitation robot Lokomat. The patients were exposed to four different training modes in random order: During both non-cooperative position control and compliant impedance control, fixed timing of movements was provided. During two variants of the patient-cooperative path control approach, free timing of movements was enabled and the robot provided only spatial guidance. The two variants of the path control approach differed in the amount of additional support, which was either individually adjusted or exaggerated. Joint angles and torques of the robot as well as muscle activity and heart rate of the patients were recorded. Kinematic variability, interaction torques, heart rate and muscle activity were compared between the different conditions. Results Patients showed more spatial and temporal kinematic variability, reduced interaction torques, a higher increase of heart rate and more muscle activity in the patient-cooperative path control mode with individually adjusted support than in the non-cooperative position control mode. In the compliant impedance control mode, spatial kinematic variability was increased and interaction torques were reduced, but temporal kinematic variability, heart rate and muscle activity were not significantly higher than in the position control mode. 
Conclusions Patient-cooperative robot-aided gait training with free timing of movements made individuals with iSCI participate more actively and with larger kinematic variability than non-cooperative, position-controlled robot-aided gait training. PMID:20828422
Lee, Kit-Hang; Fu, Denny K.C.; Leong, Martin C.W.; Chow, Marco; Fu, Hing-Choi; Althoefer, Kaspar; Sze, Kam Yim; Yeung, Chung-Kwong; Kwok, Ka-Wai
2017-01-01
Bioinspired robotic structures comprising soft actuation units have attracted increasing research interest. Taking advantage of their inherent compliance, soft robots can assure safe interaction with external environments, provided that precise and effective manipulation can be achieved. Endoscopy is a typical application. However, previous model-based control approaches often require simplified geometric assumptions about the soft manipulator, which can be very inaccurate in the presence of unmodeled external interaction forces. In this study, we propose a generic control framework based on nonparametric, online, and local training to learn the inverse model directly, without prior knowledge of the robot's structural parameters. A detailed experimental evaluation was conducted on a soft robot prototype with control redundancy, performing trajectory tracking in dynamically constrained environments. An advanced element formulation of finite element analysis is employed to initialize the control policy, hence eliminating the need for random exploration of the robot's workspace. The proposed control framework enabled a soft fluid-driven continuum robot to follow a 3D trajectory precisely, even under dynamic external disturbance. Such enhanced control accuracy and adaptability would facilitate effective endoscopic navigation in complex and changing environments. PMID:29251567
Verification of hybrid control of a wheeled mobile robot and manipulator
NASA Astrophysics Data System (ADS)
Muszynska, Magdalena; Burghardt, Andrzej; Kurc, Krzysztof; Szybicki, Dariusz
2016-04-01
In this article, innovative approaches to tracking control of wheeled mobile robots and manipulators are presented. The concepts include the application of neural-fuzzy systems to compensate for the controlled system's nonlinearities in the tracking control task. The proposed control algorithms work online, contain structures that adapt to the changing working conditions of the controlled systems, and do not require preliminary learning. The algorithms were verified on real objects: a Scorbot-ER 4pc robotic manipulator and a Pioneer 2DX mobile robot.
Innovation in robotic surgery: the Indian scenario.
Deshpande, Suresh V
2015-01-01
Robotics is a science. In scientific terms, a "robot" is an electromechanical arm device with a computer interface, a combination of electrical, mechanical, and computer engineering. It is a mechanical arm that performs tasks in industry, space exploration, and science. One such idea was to make an automated arm, a robot, in laparoscopy to control the telescope-camera unit electromechanically and then with a computer interface using voice control. It took us 5 long years from 2004 to bring it to the level of obtaining a patent. That was the birth of the Swarup Robotic Arm (SWARM), the first and only Indian contribution in the field of robotics in laparoscopy: a fully voice-controlled camera-holding robotic arm developed without any support from industry or research institutes.
Six axis force feedback input device
NASA Technical Reports Server (NTRS)
Ohm, Timothy (Inventor)
1998-01-01
The present invention is a low friction, low inertia, six-axis force feedback input device comprising an arm with double-jointed, tendon-driven revolute joints, a decoupled tendon-driven wrist, and a base with encoders and motors. The input device functions as a master robot manipulator of a microsurgical teleoperated robot system including a slave robot manipulator coupled to an amplifier chassis, which is coupled to a control chassis, which is coupled to a workstation with a graphical user interface. The amplifier chassis is coupled to the motors of the master robot manipulator and the control chassis is coupled to the encoders of the master robot manipulator. A force feedback can be applied to the input device and can be generated from the slave robot to enable a user to operate the slave robot via the input device without physically viewing the slave robot. Also, the force feedback can be generated from the workstation to represent fictitious forces to constrain the input device's control of the slave robot to be within imaginary predetermined boundaries.
Autonomous Motion Learning for Intra-Vehicular Activity Space Robot
NASA Astrophysics Data System (ADS)
Watanabe, Yutaka; Yairi, Takehisa; Machida, Kazuo
Space robots will be needed in future space missions. So far, many types of space robots have been developed; in particular, Intra-Vehicular Activity (IVA) space robots that support human activities should be developed to reduce risks to humans in space. In this paper, we study a motion learning method for an IVA space robot with a multi-link mechanism. The advantage is that this space robot moves using the reaction forces of the multi-link mechanism and contact forces from the walls, like the spacewalk of an astronaut, rather than using propulsion. The control approach is based on reinforcement learning with the actor-critic algorithm. We demonstrate the effectiveness of this approach using a 5-link space robot model in simulation. First, we simulate the space robot learning motion control, including a contact phase, in the two-dimensional case. Next, we simulate the robot learning motion control while changing its base attitude in the three-dimensional case.
Özdem, Ceylan; Wiese, Eva; Wykowska, Agnieszka; Müller, Hermann; Brass, Marcel; Van Overwalle, Frank
2017-10-01
Attributing mind to interaction partners has been shown to increase the social relevance we ascribe to others' actions and to modulate the amount of attention dedicated to them. However, it remains unclear how the relationship between higher-order mind attribution and lower-level attention processes is established in the brain. In this neuroimaging study, participants saw images of an anthropomorphic robot that moved its eyes left- or rightwards to signal the appearance of an upcoming stimulus in the same (valid cue) or opposite location (invalid cue). Independently, participants' beliefs about the intentionality underlying the observed eye movements were manipulated by describing the eye movements as under human control or preprogrammed. As expected, we observed a validity effect behaviorally and neurologically (increased response times and activation in the invalid vs. valid condition). More importantly, we observed that this effect was more pronounced for the condition in which the robot's behavior was believed to be controlled by a human, as opposed to be preprogrammed. This interaction effect between cue validity and belief was, however, only found at the neural level and was manifested as a significant increase of activation in bilateral anterior temporoparietal junction.
Robotically-adjustable microstereotactic frames for image-guided neurosurgery
NASA Astrophysics Data System (ADS)
Kratchman, Louis B.; Fitzpatrick, J. Michael
2013-03-01
Stereotactic frames are a standard tool for neurosurgical targeting, but are uncomfortable for patients and obstruct the surgical field. Microstereotactic frames are more comfortable for patients, provide better access to the surgical site, and have grown in popularity as an alternative to traditional stereotactic devices. However, clinically available microstereotactic frames require either lengthy manufacturing delays or expensive image guidance systems. We introduce a robotically-adjusted, disposable microstereotactic frame for deep brain stimulation surgery that eliminates the drawbacks of existing microstereotactic frames. Our frame can be automatically adjusted in the operating room using a preoperative plan in less than five minutes. A validation study on phantoms shows that our approach provides a target positioning error of 0.14 mm, which exceeds the required accuracy for deep brain stimulation surgery.
Planar maneuvering control of underwater snake robots using virtual holonomic constraints.
Kohl, Anna M; Kelasidi, Eleni; Mohammadi, Alireza; Maggiore, Manfredi; Pettersen, Kristin Y
2016-11-24
This paper investigates the problem of planar maneuvering control for bio-inspired underwater snake robots that are exposed to unknown ocean currents. The control objective is to make a neutrally buoyant snake robot which is subject to hydrodynamic forces and ocean currents converge to a desired planar path and traverse the path with a desired velocity. The proposed feedback control strategy enforces virtual constraints which encode biologically inspired gaits on the snake robot configuration. The virtual constraints, parametrized by states of dynamic compensators, are used to regulate the orientation and forward speed of the snake robot. A two-state ocean current observer based on relative velocity sensors is proposed. It enables the robot to follow the path in the presence of unknown constant ocean currents. The efficacy of the proposed control algorithm for several biologically inspired gaits is verified both in simulations for different path geometries and in experiments.
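The biologically inspired gaits enforced by the virtual constraints are commonly parametrized as serpenoid references, phi_i(t) = alpha*sin(omega*t + i*delta) + phi_o, where the offset phi_o is used for steering. The sketch below uses arbitrary example values, not the gains of the paper's controller.

```python
import numpy as np

# Serpenoid virtual-constraint reference for an n-joint snake robot:
#   phi_i(t) = alpha*sin(omega*t + i*delta) + phi_o
# with phi_o an offset used for steering. Values are arbitrary examples.

def serpenoid_reference(t, n_joints=8, alpha=0.5, omega=2.0, delta=0.6, phi_o=0.0):
    """Reference joint angles (rad) encoding lateral undulation at time t."""
    i = np.arange(n_joints)
    return alpha * np.sin(omega * t + i * delta) + phi_o
```

A joint-level controller that drives each joint angle to this time-varying reference enforces the virtual constraint; the dynamic compensators in the paper effectively reshape omega and phi_o to regulate speed and heading.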
Planning and Control for Microassembly of Structures Composed of Stress-Engineered MEMS Microrobots
Donald, Bruce R.; Levey, Christopher G.; Paprotny, Igor; Rus, Daniela
2013-01-01
We present control strategies that implement planar microassembly using groups of stress-engineered MEMS microrobots (MicroStressBots) controlled through a single global control signal. The global control signal couples the motion of the devices, causing the system to be highly underactuated. In order for the robots to assemble into arbitrary planar shapes despite the high degree of underactuation, it is desirable that each robot be independently maneuverable (independently controllable). To achieve independent control, we fabricated robots that behave (move) differently from one another in response to the same global control signal. We harnessed this differentiation to develop assembly control strategies, where the assembly goal is a desired geometric shape that can be obtained by connecting the chassis of individual robots. We derived and experimentally tested assembly plans that command some of the robots to make progress toward the goal, while other robots are constrained to remain in small circular trajectories (closed-loop orbits) until it is their turn to move into the goal shape. Our control strategies were tested on systems of fabricated MicroStressBots. The robots are 240–280 μm × 60 μm × 7–20 μm in size and move simultaneously within a single operating environment. We demonstrated the feasibility of our control scheme by accurately assembling five different types of planar microstructures. PMID:23580796
Mamdani Fuzzy System for Indoor Autonomous Mobile Robot
NASA Astrophysics Data System (ADS)
Khan, M. K. A. Ahamed; Rashid, Razif; Elamvazuthi, I.
2011-06-01
Several control algorithms for autonomous mobile robot navigation have been proposed in the literature. Recently, the employment of non-analytical methods of computing, such as fuzzy logic, evolutionary computation, and neural networks, has demonstrated the utility and potential of these paradigms for intelligent control of mobile robot navigation. In this paper, a Mamdani fuzzy system for an autonomous mobile robot is developed. The paper begins with a discussion of the conventional controller, followed by a detailed description of the fuzzy logic controller.
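A Mamdani controller of the kind discussed can be sketched with one input and one output: triangular membership functions, min implication, max aggregation, and centroid defuzzification. The rule base and set parameters below are illustrative inventions, not those of the paper's navigation controller.

```python
import numpy as np

# One-input Mamdani sketch (obstacle distance in m -> normalized speed) with
# triangular sets, min implication, max aggregation and centroid
# defuzzification. Set shapes and rules are illustrative only.

def tri(x, a, b, c):
    """Triangular membership with feet a, c and peak b (a < b < c)."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

def mamdani_speed(distance):
    y = np.linspace(0.0, 1.0, 201)                     # candidate output speeds
    # Rule 1: IF distance is NEAR THEN speed is SLOW
    # Rule 2: IF distance is FAR  THEN speed is FAST
    w_near = tri(distance, -0.5, 0.0, 1.0)
    w_far = tri(distance, 0.5, 2.0, 3.5)
    slow = np.minimum(w_near, tri(y, -0.5, 0.0, 0.5))  # min implication
    fast = np.minimum(w_far, tri(y, 0.5, 1.0, 1.5))
    agg = np.maximum(slow, fast)                       # max aggregation
    if agg.sum() == 0.0:
        return 0.0
    return float((y * agg).sum() / agg.sum())          # centroid defuzzification
```

The controller interpolates smoothly between the rules, slowing the robot as obstacles get closer, which is the behavior a full two-input (distance, heading) rule base generalizes.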
Trajectory tracking control for a nonholonomic mobile robot under ROS
NASA Astrophysics Data System (ADS)
Lakhdar Besseghieur, Khadir; Trębiński, Radosław; Kaczmarek, Wojciech; Panasiuk, Jarosław
2018-05-01
In this paper, the implementation of a trajectory tracking control strategy on a ROS-based mobile robot is considered. Our test bench is the nonholonomic mobile robot TurtleBot. ROS considerably facilitates setting up a suitable environment to test the designed controller. Our aim is to develop a framework using ROS concepts so that a trajectory tracking controller can be implemented on any ROS-enabled mobile robot. Practical experiments with the TurtleBot are conducted to assess the framework's reliability.
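A common choice of tracking controller for such a ROS-based differential-drive robot is the kinematic law in the style of Kanayama et al.; the sketch below, with placeholder gains, is a generic example rather than the controller implemented in the paper, and on a real robot the returned (v, w) pair would be published as a velocity command.

```python
import math

# Kinematic trajectory-tracking law for a differential-drive robot,
# in the style of Kanayama et al. Gains are placeholder examples.

def tracking_control(pose, ref_pose, v_ref, w_ref, kx=1.0, ky=4.0, kth=2.0):
    """Return (v, w) commands driving pose = (x, y, th) to ref_pose."""
    x, y, th = pose
    xr, yr, thr = ref_pose
    # Tracking error expressed in the robot frame
    ex = math.cos(th) * (xr - x) + math.sin(th) * (yr - y)
    ey = -math.sin(th) * (xr - x) + math.cos(th) * (yr - y)
    eth = math.atan2(math.sin(thr - th), math.cos(thr - th))
    v = v_ref * math.cos(eth) + kx * ex
    w = w_ref + v_ref * (ky * ey + kth * math.sin(eth))
    return v, w
```

With zero tracking error the law feeds the reference velocities straight through; a longitudinal error increases the forward command, which is the structure that makes the closed loop converge to the reference trajectory.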
A simple highly efficient non invasive EMG-based HMI.
Vitiello, N; Olcese, U; Oddo, C M; Carpaneto, J; Micera, S; Carrozza, M C; Dario, P
2006-01-01
Muscle activity recorded non-invasively is sufficient to control a mobile robot if it is used in combination with an algorithm for its asynchronous analysis. In this paper, we show that several subjects can successfully control the movements of a robot in a structured environment made up of six rooms by contracting two different muscles, using a simple algorithm. After a short training period, subjects were able to control the robot with performance comparable to that achieved when controlling the robot manually.
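The asynchronous analysis of muscle activity can be illustrated by a toy detector: rectify the EMG, smooth it with a moving average to obtain an envelope, and flag a contraction when the envelope crosses a threshold. The window length and threshold here are invented; the paper's actual algorithm differs in detail.

```python
import numpy as np

# Toy asynchronous EMG contraction detector: rectification, moving-average
# envelope, and a fixed threshold. Window and threshold are invented values.

def contraction_detected(emg, window=50, threshold=0.3):
    """Return True if the smoothed rectified signal exceeds the threshold."""
    envelope = np.convolve(np.abs(emg), np.ones(window) / window, mode='valid')
    return bool(np.any(envelope > threshold))
```

Running two such detectors, one per recorded muscle, yields the kind of two-command interface the abstract describes, with each detected contraction mapped to a robot motion command.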
A Symbiotic Brain-Machine Interface through Value-Based Decision Making
Mahmoudi, Babak; Sanchez, Justin C.
2011-01-01
Background In the development of Brain Machine Interfaces (BMIs), there is a great need to enable users to interact with changing environments during the activities of daily life. It is expected that the number and scope of the learning tasks encountered during interaction with the environment as well as the pattern of brain activity will vary over time. These conditions, in addition to neural reorganization, pose a challenge to decoding neural commands for BMIs. We have developed a new BMI framework in which a computational agent symbiotically decoded users' intended actions by utilizing both motor commands and goal information directly from the brain through a continuous Perception-Action-Reward Cycle (PARC). Methodology The control architecture designed was based on Actor-Critic learning, which is a PARC-based reinforcement learning method. Our neurophysiology studies in rat models suggested that Nucleus Accumbens (NAcc) contained a rich representation of goal information in terms of predicting the probability of earning reward and it could be translated into an evaluative feedback for adaptation of the decoder with high precision. Simulated neural control experiments showed that the system was able to maintain high performance in decoding neural motor commands during novel tasks or in the presence of reorganization in the neural input. We then implanted a dual micro-wire array in the primary motor cortex (M1) and the NAcc of rat brain and implemented a full closed-loop system in which robot actions were decoded from the single unit activity in M1 based on an evaluative feedback that was estimated from NAcc. Conclusions Our results suggest that adapting the BMI decoder with an evaluative feedback that is directly extracted from the brain is a possible solution to the problem of operating BMIs in changing environments with dynamic neural signals. 
During closed-loop control, the agent was able to solve a reaching task by capturing the action and reward interdependency in the brain. PMID:21423797
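The Actor-Critic core of the PARC framework can be sketched as a gradient-bandit learner: the critic tracks the expected reward, and its TD error (standing in here for the NAcc-derived evaluative feedback) adapts the actor's action preferences. The four-action task, learning rates, and rewarded action below are invented for illustration.

```python
import numpy as np

# Gradient-bandit Actor-Critic sketch of the Perception-Action-Reward Cycle.
# The critic's TD error plays the role of the evaluative feedback decoded
# from NAcc; the 4-action task and learning rates are invented.

rng = np.random.default_rng(0)
n_actions = 4
prefs = np.zeros(n_actions)          # actor: action preferences
value = 0.0                          # critic: expected-reward estimate
alpha_actor, alpha_critic = 0.2, 0.1

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

for _ in range(2000):
    p = softmax(prefs)
    a = rng.choice(n_actions, p=p)
    reward = 1.0 if a == 2 else 0.0  # action 2 stands in for a correct reach
    td_error = reward - value        # evaluative feedback signal
    value += alpha_critic * td_error
    prefs -= alpha_actor * td_error * p      # gradient-bandit actor update
    prefs[a] += alpha_actor * td_error

policy = softmax(prefs)
```

Because the actor is updated from the critic's error signal rather than from an externally supplied reward, the same loop keeps adapting when the reward contingencies or the neural input statistics change, which is the property the paper exploits for BMI decoding.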
Software and electronic developments for TUG - T60 robotic telescope
NASA Astrophysics Data System (ADS)
Parmaksizoglu, M.; Dindar, M.; Kirbiyik, H.; Helhel, S.
2014-12-01
A robotic telescope is a telescope that can make observations without hands-on human control. Its low-level behavior is automatic and computer-controlled. Robotic telescopes usually run under the control of a scheduler, which provides high-level control by selecting astronomical targets for observation. The TUBITAK National Observatory (TUG) T60 robotic telescope is controlled by the open-source OCAAS software, later renamed TALON. This study introduces improvements to the TALON software together with new electronic and mechanical designs. The design and software improvements were implemented in the T60 telescope control software and tested successfully on the real system.